Probability distributions

In both classical and Bayesian approaches, a probability distribution function is the central quantity, which captures all of the information about the relationship between variables in the presence of uncertainty. A probability distribution assigns a probability value to each measurable subset of outcomes of a random experiment. The variable involved could be discrete or continuous, and univariate or multivariate. Although people use slightly different terminologies, the commonly used probability distributions for the different types of random variables are as follows:

  • Probability mass function (pmf) for discrete numerical random variables
  • Categorical distribution for categorical random variables
  • Probability density function (pdf) for continuous random variables
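As a quick illustration using base R's built-in distribution functions (a minimal sketch; the particular distributions are arbitrary choices), a pmf assigns a probability to each individual outcome, while a pdf gives a density that must be integrated over an interval to yield a probability:

    # pmf of a discrete variable: P(X = 3) for X ~ Binomial(size = 10, p = 0.5)
    dbinom(3, size = 10, prob = 0.5)   # 0.1171875

    # pdf of a continuous variable: density of X ~ Normal(0, 1) at x = 0
    dnorm(0, mean = 0, sd = 1)         # 0.3989423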

One of the well-known distribution functions is the normal or Gaussian distribution, which is named after Carl Friedrich Gauss, a famous German mathematician and physicist. It is also known as the bell curve because of its shape. The mathematical form of this distribution is given by:

$$
P(x \mid \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)
$$

Here, $\mu$ is the mean or location parameter and $\sigma$ is the standard deviation or scale parameter ($\sigma^2$ is called the variance). The following graphs show what the distribution looks like for different values of the location and scale parameters:

[Figure: Normal distribution curves for different values of the location parameter $\mu$ and the scale parameter $\sigma$]

One can see that as the mean changes, the location of the peak of the distribution changes. Similarly, when the standard deviation changes, the width of the distribution also changes.
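
A plot like the preceding one can be reproduced in base R using the built-in dnorm function. The following is a minimal sketch; the particular values of the mean and standard deviation are chosen only for illustration:

    # Normal pdf for different location (mean) and scale (sd) parameters
    x <- seq(-10, 10, length.out = 500)
    plot(x, dnorm(x, mean = 0, sd = 1), type = "l", ylab = "density",
         main = "Normal distributions")
    lines(x, dnorm(x, mean = 2, sd = 1), lty = 2)  # shifted peak: different mean
    lines(x, dnorm(x, mean = 0, sd = 3), lty = 3)  # wider curve: larger sd
    legend("topright", lty = 1:3,
           legend = c("mean = 0, sd = 1", "mean = 2, sd = 1", "mean = 0, sd = 3"))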

Many natural datasets follow the normal distribution because, according to the central limit theorem, any random variable that can be expressed as the mean of a large number of independent random variables, all drawn from the same distribution with finite mean and variance, is approximately normally distributed, irrespective of the form of that original distribution. The normal distribution is also very popular among data scientists because, in many statistical inference problems, theoretical results can be derived when the underlying distribution is normal.
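
This behavior is easy to check by simulation. The following sketch averages draws from a uniform distribution, which itself looks nothing like a bell curve, and compares the histogram of the sample means with the normal density predicted by the central limit theorem (the sample sizes here are arbitrary choices for illustration):

    # Central limit theorem by simulation: means of 50 Uniform(0, 1) draws
    set.seed(42)
    sample_means <- replicate(10000, mean(runif(50)))
    hist(sample_means, breaks = 50, freq = FALSE,
         main = "Means of uniform samples", xlab = "sample mean")
    # CLT prediction: mean 0.5, sd = sqrt(Var of Uniform(0,1) / n) = sqrt(1/12/50)
    curve(dnorm(x, mean = 0.5, sd = sqrt(1 / 12 / 50)), add = TRUE, lwd = 2)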

Now, let us look at the multidimensional version of the normal distribution. If the random variable is an N-dimensional vector $\mathbf{x}$, denoted by:

$$
\mathbf{x} = (x_1, x_2, \ldots, x_N)^T
$$

Then, the corresponding normal distribution is given by:

$$
P(\mathbf{x} \mid \boldsymbol{\mu}, \Sigma) = \frac{1}{(2\pi)^{N/2} |\Sigma|^{1/2}} \exp\left( -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right)
$$

Here, $\boldsymbol{\mu}$ corresponds to the mean (also called location) and $\Sigma$ is an $N \times N$ covariance matrix (also called scale).
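
In R, the multivariate normal density can be evaluated with, for example, the mvtnorm package. The following is a minimal sketch, assuming mvtnorm is installed; the mean vector and covariance matrix are illustrative values only:

    library(mvtnorm)

    mu    <- c(0, 0)                        # mean (location) vector
    Sigma <- matrix(c(1.0, 0.8,
                      0.8, 1.0), nrow = 2)  # covariance (scale) matrix

    dmvnorm(c(0.5, -0.5), mean = mu, sigma = Sigma)  # density at one point
    rmvnorm(3, mean = mu, sigma = Sigma)             # three random draws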

To get a better understanding of the multidimensional normal distribution, let us take the case of two dimensions. In this case, $\mathbf{x} = (x, y)^T$ and the covariance matrix is given by:

$$
\Sigma = \begin{pmatrix} \sigma_x^2 & \rho \sigma_x \sigma_y \\ \rho \sigma_x \sigma_y & \sigma_y^2 \end{pmatrix}
$$

Here, $\sigma_x^2$ and $\sigma_y^2$ are the variances along the $x$ and $y$ directions, and $\rho$ is the correlation between $x$ and $y$. A plot of the two-dimensional normal distribution for particular values of the location parameters, scale parameters, and $\rho$ is shown in the following image:

[Figure: Surface plot of a two-dimensional normal distribution]

If $\rho = 0$, the two-dimensional normal distribution reduces to the product of two one-dimensional normal distributions, since $\Sigma$ becomes diagonal in this case. The following 2D projections of the normal distribution, for the same values of the location and scale parameters but with a high $\rho$ and with $\rho = 0$, illustrate this case:

[Figure: 2D projections of the normal distribution with high correlation and with zero correlation]

The high correlation between $x$ and $y$ in the first case forces most of the data points along the 45-degree line and makes the distribution anisotropic, whereas in the second case, when the correlation is zero, the distribution is more isotropic.
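
The effect of the correlation parameter can be reproduced by sampling from the two covariance matrices and plotting the draws. This sketch uses the mvrnorm function from the MASS package (assumed installed); $\rho = 0.9$ is an arbitrary stand-in for "high" correlation:

    library(MASS)
    set.seed(1)

    Sigma_corr <- matrix(c(1, 0.9, 0.9, 1), nrow = 2)  # high correlation
    Sigma_iso  <- diag(2)                              # zero correlation

    xy_corr <- mvrnorm(2000, mu = c(0, 0), Sigma = Sigma_corr)
    xy_iso  <- mvrnorm(2000, mu = c(0, 0), Sigma = Sigma_iso)

    par(mfrow = c(1, 2))
    plot(xy_corr, asp = 1, pch = ".", main = "rho = 0.9 (anisotropic)",
         xlab = "x", ylab = "y")
    plot(xy_iso,  asp = 1, pch = ".", main = "rho = 0 (isotropic)",
         xlab = "x", ylab = "y")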

We will briefly review some of the other well-known distributions used in Bayesian inference here.
