Cardinality Restricted Boltzmann Machines

Overview

A Restricted Boltzmann Machine (RBM) is a probabilistic model that is usually represented as a graphical model. An RBM encodes a probability distribution and can therefore be used for density estimation. It can also be viewed as a generative stochastic neural network with one hidden layer. Although interesting in its own right, the RBM gained popularity when Hinton and others used it as the building block of Deep Belief Networks (DBNs). From a computational point of view, a key property of the RBM is that the posterior over the hidden variables is factorizable.

At least two notions of sparsity can be considered for graphical models. The first is sparsity of the graph itself (i.e., something related to the number of edges). In this sense, a sparse model is one whose structure encodes many independencies. However, this is not the focus of this paper; in fact, the structure of the RBM is fixed.

The other notion of sparsity is the one commonly used in other approaches. In the neural network literature, it means that we do not want all neurons to be activated at the same time. For the RBM, we want the values of the hidden variables to be sparse for (almost) all inputs.

Motivations and Challenges

Sparsity is usually regarded as a good property for modeling complex phenomena. From a theoretical point of view, statisticians sometimes relate it to the idea of shrinkage. Empirical analyses also show that it is useful in practice; in particular, it has been suggested that sparse deep architectures can be very efficient. Finally, sparsity increases the interpretability of the learned representation.

The problem for the RBM is that if we add sparsity constraints, the hidden variables become coupled. The posterior is then no longer factorizable, which makes the sparse RBM intractable.

RBM Formulation

Assume [math]\displaystyle{ v \in \{0,1\}^{N_v} }[/math] and [math]\displaystyle{ h \in \{0,1\}^{N_h} }[/math] are vectors of binary-valued variables, corresponding to the visible and hidden units respectively. For an RBM, the joint probability over these variables is defined as follows:

[math]\displaystyle{ P(v,h) = \frac{1}{Z} \exp(v^{T}Wh+v^{T}b_{v}+h^{T}b_{h}) }[/math]

where [math]\displaystyle{ Z }[/math] is a normalization constant (the partition function) and [math]\displaystyle{ W,b_{v},b_{h} }[/math] are the model parameters.
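The following is a minimal NumPy sketch (not from the paper) of the quantities above. The joint probability is only known up to the partition function [math]\displaystyle{ Z }[/math], but the conditionals factorize, which is what makes inference in an RBM cheap. All names and parameter values here are illustrative assumptions.

<pre>
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def unnormalized_log_prob(v, h, W, b_v, b_h):
    """Log of the numerator exp(v^T W h + v^T b_v + h^T b_h)."""
    return v @ W @ h + v @ b_v + h @ b_h

def posterior_hidden(v, W, b_h):
    """Factorized posterior: P(h_j = 1 | v) = sigmoid((W^T v + b_h)_j)."""
    return sigmoid(W.T @ v + b_h)

def conditional_visible(h, W, b_v):
    """By symmetry, P(v_i = 1 | h) = sigmoid((W h + b_v)_i)."""
    return sigmoid(W @ h + b_v)

# Example with random parameters (purely illustrative values).
rng = np.random.default_rng(0)
Nv, Nh = 6, 4
W = rng.normal(size=(Nv, Nh))
b_v, b_h = rng.normal(size=Nv), rng.normal(size=Nh)
v = rng.integers(0, 2, size=Nv).astype(float)
print(posterior_hidden(v, W, b_h))  # Nh independent Bernoulli probabilities
</pre>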

One way to learn the parameters is to maximize the likelihood. However, computing the exact gradient is intractable because of the partition function [math]\displaystyle{ Z }[/math]. Therefore, an approximation of the gradient, called "contrastive divergence", is used to learn the parameters.
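Below is a hedged sketch of a single CD-1 update (contrastive divergence with one Gibbs step), reusing the helper functions from the previous sketch. It approximates the likelihood gradient by contrasting statistics of the data with statistics after one step of Gibbs sampling; it is not the exact maximum-likelihood update, and the learning rate is an assumed hyperparameter.

<pre>
def cd1_update(v_data, W, b_v, b_h, lr=0.01, rng=None):
    """One contrastive-divergence (CD-1) parameter update for a single example."""
    rng = np.random.default_rng() if rng is None else rng

    # Positive phase: hidden probabilities given the data.
    p_h_data = posterior_hidden(v_data, W, b_h)
    h_sample = (rng.random(p_h_data.shape) < p_h_data).astype(float)

    # Negative phase: one Gibbs step back to the visibles and up again.
    p_v_model = conditional_visible(h_sample, W, b_v)
    v_model = (rng.random(p_v_model.shape) < p_v_model).astype(float)
    p_h_model = posterior_hidden(v_model, W, b_h)

    # Approximate gradient: positive statistics minus negative statistics.
    W += lr * (np.outer(v_data, p_h_data) - np.outer(v_model, p_h_model))
    b_v += lr * (v_data - v_model)
    b_h += lr * (p_h_data - p_h_model)
    return W, b_v, b_h
</pre>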

The Sparse RBM (SpRBM)

The idea of the SpRBM is simple: we add a penalty term to the log-likelihood objective in order to encourage sparsity. The penalty term used in the SpRBM is

[math]\displaystyle{ \lambda (\rho \log{q_{j}} + (1 - \rho) \log{(1-q_{j})}) }[/math]

where [math]\displaystyle{ q_{j} = \frac{1}{N}\sum_{n} P(h_{j}=1|v_{n}) }[/math] is the marginal of hidden unit [math]\displaystyle{ j }[/math] averaged over the [math]\displaystyle{ N }[/math] training cases. Here [math]\displaystyle{ \rho }[/math] is the target sparsity level; the penalty is maximized when [math]\displaystyle{ q_{j} = \rho }[/math].
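The short sketch below (under the same notation, reusing the <code>sigmoid</code> helper above) computes this penalty for all hidden units on a mini-batch. The target sparsity <code>rho</code>, the weight <code>lam</code>, and the clipping constant <code>eps</code> are assumed hyperparameters, not values from the paper.

<pre>
def sparsity_penalty(V, W, b_h, rho=0.1, lam=1.0, eps=1e-8):
    """SpRBM penalty summed over hidden units.

    V: (N, Nv) batch of visible vectors.
    Returns lam * sum_j (rho*log q_j + (1 - rho)*log(1 - q_j)).
    """
    # q_j = (1/N) * sum_n P(h_j = 1 | v_n), computed for all j at once.
    q = sigmoid(V @ W + b_h).mean(axis=0)
    q = np.clip(q, eps, 1.0 - eps)  # guard the logarithms
    return lam * np.sum(rho * np.log(q) + (1.0 - rho) * np.log(1.0 - q))
</pre>

In practice the gradient of this penalty with respect to the parameters is simply added to the (approximate) likelihood gradient during learning.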

The Cardinality RBM (CaRBM)

Experiments