Restricted Boltzmann Machine (RBM)

1. Background

Learning restricted Boltzmann machines (RBMs) is not easy for people who don’t have much background in statistics. Even for people who have learned basic machine learning/data mining, it is not very straightforward. The students in my Neural Network and Deep Learning class struggled quite a bit in understanding the material.

2. Resources

There are many excellent online resources that introduce RBM. Among them, I found the following two to be the most useful for preparing my lecture:

Fischer and Igel’s tutorial on RBM:

http://image.diku.dk/igel/paper/AItRBM-proof.pdf

It is very thorough and provides the necessary background.

Ghodsi’s lecture video and slides:

https://uwaterloo.ca/data-science/sites/ca.data-science/files/uploads/files/dbn2.pdf

The lecture slides provide a clean and concise summary of the necessary equations. The lecture video is very clear. However, several equations were not sufficiently explained due to time constraints.

To me, these are the best resources I could find for learning RBM.

3. Big picture

The purpose of an RBM is to model your training data’s probability distribution. For high-dimensional data, a histogram-like non-parametric representation is not tractable, so we need a parameterized generative model to represent the data’s probability distribution. People who are familiar with machine learning may think of the expectation-maximization (EM) algorithm, which deals with data that can be assumed to come from a mixture of distributions (Gaussian in most cases). People who are familiar with basic statistics may think of maximum-likelihood estimation (MLE). When you learn it the first time in college, the application is probably estimating the mean and variance of a Gaussian distribution given a set of data. However, MLE is much more general. The idea is to search for the set of distribution parameters \theta that maximizes the likelihood function p((x_1, x_2, ..., x_n)|\theta) (the probability of the observed data given the parameter values). The distribution with the maximizing parameters is taken as the distribution of the data, since it agrees the most with the data. An RBM is one way to carry out MLE, especially when you don’t know the formula of the underlying distribution.
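To make the MLE idea concrete, here is a minimal numerical sketch (my own illustration, not part of any RBM code): it estimates the mean of a Gaussian with fixed variance by scanning candidate parameter values and keeping the one that maximizes the log likelihood of the observed data. The data values and the grid of candidates are made up for the example.

# Minimal MLE sketch (illustrative only): find the Gaussian mean that
# maximizes the log likelihood of a handful of observed data points.
import numpy as np

x = np.array([1.2, 0.8, 1.5, 0.9, 1.1])       # observed data points (made up)
candidate_means = np.linspace(0.0, 2.0, 201)   # candidate values of theta

def gaussian_log_likelihood(mu, data, sigma=1.0):
    # log p(data | mu) for an i.i.d. Gaussian model with fixed sigma
    return np.sum(-0.5 * ((data - mu) / sigma) ** 2
                  - np.log(sigma * np.sqrt(2 * np.pi)))

log_liks = [gaussian_log_likelihood(mu, x) for mu in candidate_means]
mle_mean = candidate_means[np.argmax(log_liks)]
print(mle_mean)   # close to x.mean(), the closed-form MLE of the mean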

4. Probability distribution representation

Well, if you don’t know the formula of the distribution model, but you don’t want to do bean counting in a high-dimensional space, RBM is your tool. First, you conjure a few hidden/latent variables out of thin air; then you say the probability distribution of your data is a function of those hidden variables. Since they are latent variables, they can be anything, so your claim is always true. For an RBM, the hidden variables are tied to the data through the model, so they can be learned from the data. Now you have a parameterized representation of the probability distribution (at least potentially).

5. Construct RBM

For now, we only deal with binary data. Each data point \bf{v}_i in a set of training data \bf{V} lies in a D-dimensional space, and the value in each dimension can only be 0 or 1. For example, we may have a dataset \bf{V} that has four three-dimensional data points \bf{v}_1=[0, 0, 1], \bf{v}_2=[1, 0, 1], \bf{v}_3=[0, 1, 1], and \bf{v}_4=[1, 1, 0]. To construct an RBM to estimate the probability density function of the dataset \bf{V}, we need three visible nodes corresponding to the three dimensions. Assume we have two hidden/latent variables; then we need two hidden nodes. The nodes of the visible layer and the hidden layer are fully connected to form a graph called an RBM. It is a typical complete bipartite graph, as shown below.

[Figure: the RBM as a complete bipartite graph with three visible nodes and two hidden nodes]

To be more precise, there are weights (\bf{W}) associated with the edges, and biases (\bf{b} and \bf{c}) associated with the visible and hidden nodes, respectively. The weights and the biases are the parameters of the probability density model p((\bf{v}_1, \bf{v}_2, ..., \bf{v}_n)|\bf{\theta}), where \bf{\theta} = [\bf{W}, \bf{b}, \bf{c}].
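As a running example, here is a minimal sketch (my own illustration in Python/NumPy) of the toy RBM above: the four training points, a 2x3 weight matrix \bf{W}, and the bias vectors \bf{b} and \bf{c}. The random initialization is an arbitrary choice for illustration.

# Toy RBM setup: 3 visible nodes, 2 hidden nodes (illustrative sketch)
import numpy as np

V = np.array([[0, 0, 1],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]])                 # the four training points v_1..v_4

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 3))    # W[j, i]: edge between h_j and v_i
b = np.zeros(3)                           # visible biases
c = np.zeros(2)                           # hidden biases
theta = (W, b, c)                         # the model parameters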

The workflow of this structure is bidirectional. Given \bf{v}, the RBM computes the conditional probability of \bf{h}. For example, given \bf{v} = [0, 1, 0], the RBM computes the probabilities p(h_1=1) and p(h_2=1). In the other direction, given \bf{h}, the RBM computes the conditional probability of \bf{v}. For example, given \bf{h} = [0, 1], the RBM computes the probabilities p(v_1=1), p(v_2=1), and p(v_3=1).
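Continuing the sketch above, the two directions of inference can be written as follows. The sigmoid form of these conditionals is the standard RBM result (it follows from the energy function introduced in Section 6, although the derivation is not shown in this post); the function names are my own.

# The two directions of inference (standard RBM conditionals),
# reusing W, b, c from the sketch above
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def p_hidden_given_visible(v, W, c):
    # p(h_j = 1 | v) for each hidden node j
    return sigmoid(c + W @ v)

def p_visible_given_hidden(h, W, b):
    # p(v_i = 1 | h) for each visible node i
    return sigmoid(b + W.T @ h)

print(p_hidden_given_visible(np.array([0, 1, 0]), W, c))  # p(h_1=1), p(h_2=1)
print(p_visible_given_hidden(np.array([0, 1]), W, b))     # p(v_1=1), p(v_2=1), p(v_3=1)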

There are many ways of using a trained RBM. People stack them together to form a deep network called a deep belief network (DBN). An RBM can also be used by itself to remove noise, obtain features, deal with missing data, or generate new data points. For some applications it is very similar to an autoencoder.

6. Train RBM

To train an RBM, as with any other optimization problem, the routine is to define a loss or goal function, compute its gradient, and then use gradient descent/ascent to find the right coefficients.

Formulate the goal function
Since the goal is to obtain the training data’s probability distribution, the natural formulation is to maximize the likelihood over all data points:

\max\prod\limits_{\bf{v}^t\in\bf{V}} p(\bf{v}^t),

where \bf{V} represents the training data set and \bf{v}^t is one data point in the data set.

So the goal function is \prod\limits_{\bf{v}^t\in\bf{V}} p(\bf{v}^t). Taking the log likelihood, it becomes \sum\limits_{\bf{v}^t\in\bf{V}} \log p(\bf{v}^t). p(\bf{v}^t) can be computed by marginalizing the joint distribution p(\bf{v}^t, \bf{h}) over \bf{h}:

p(\bf{v}^t) = \sum\limits_{\bf{h}}p(\bf{v}^t, \bf{h})

Since an RBM is a probabilistic network, the joint probability p(\bf{v}, \bf{h}) is defined as

p(\bf{v}, \bf{h}) = \frac{1}{Z} \exp(-E(\bf{v}, \bf{h})), 

where E(\bf{v}, \bf{h}) is the network’s global energy function and Z is the partition function that normalizes the total probability to 1.

From Hopfield nets, the global energy function is defined as

E(\bf{v}, \bf{h}) = - \bf{b}'\bf{v} - \bf{c}'\bf{h} - \bf{h}'\bf{W}\bf{v}, 

where \bf{b} and \bf{c} are the biases and \bf{W} are the weights of the network.
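Continuing the running sketch, the energy function, the partition function Z, and the marginal p(\bf{v}) can be computed exactly for the toy model: with only three visible and two hidden binary units we can enumerate all configurations (brute-force enumeration of Z is intractable for realistically sized RBMs).

# Exact probabilities for the toy RBM, reusing W, b, c from the earlier sketch
from itertools import product

def energy(v, h, W, b, c):
    # E(v, h) = -b'v - c'h - h'Wv
    return -(b @ v) - (c @ h) - (h @ W @ v)

all_v = [np.array(bits) for bits in product([0, 1], repeat=3)]
all_h = [np.array(bits) for bits in product([0, 1], repeat=2)]

# Z = sum over all (v, h) configurations of exp(-E(v, h))
Z = sum(np.exp(-energy(v, h, W, b, c)) for v in all_v for h in all_h)

def p_v(v, W, b, c):
    # marginal p(v) = (1/Z) * sum_h exp(-E(v, h))
    return sum(np.exp(-energy(v, h, W, b, c)) for h in all_h) / Z

print(p_v(np.array([0, 0, 1]), W, b, c))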

Now, the log likelihood to be maximized can be written as follows,


(1)   \begin{equation*} \begin{split} g(\bf{W}, \bf{b}, \bf{c}) & = \sum\limits_{\bf{v}^t\in\bf{V}} \log p(\bf{v}^t) \\ & = \sum\limits_{\bf{v}^t\in\bf{V}} \log \sum\limits_{\bf{h}} p(\bf{v}^t, \bf{h}) \\ & = \sum\limits_{\bf{v}^t\in\bf{V}} \log \sum\limits_{\bf{h}} \exp\{-E(\bf{v}^t, \bf{h})\} - n \log Z \end{split} \end{equation*}

where n is the number of data points in \bf{V}.
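Continuing the sketch, the log likelihood g(\bf{W}, \bf{b}, \bf{c}) can be evaluated both directly as \sum_t \log p(\bf{v}^t) and in the expanded form of Eq. (1); the two agree up to floating-point error, which is a quick sanity check on the algebra.

# Log likelihood of the toy training set V, reusing V, energy, all_h, Z, p_v
n = len(V)
g_direct = sum(np.log(p_v(v, W, b, c)) for v in V)
g_expanded = sum(np.log(sum(np.exp(-energy(v, h, W, b, c)) for h in all_h))
                 for v in V) - n * np.log(Z)
print(g_direct, g_expanded)   # identical up to floating-point error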

Compute the derivative of the goal function
