The Indian Buffet Process: An Introduction and Review

From statwiki

The Indian Buffet Process (IBP) is a Bayesian nonparametric model: a prior measure on infinite binary matrices. Unlike the Dirichlet process (DP), whose atom weights are negatively correlated because they must sum to one, the IBP treats each atom independently.


The Indian buffet process (IBP) is a stochastic process defining a probability distribution over equivalence classes of sparse binary matrices with a finite number of rows and an unbounded number of columns.

The IBP is often used in factor analysis as a prior over an unbounded number of latent factors.

The IBP can be viewed as an extension of the Dirichlet process (DP) in which we drop the constraint [math] \sum_{i=1}^{\infty}{\pi_i}=1 [/math].

Because this constraint is dropped, the IBP cannot be used directly as a prior for mixture models.

The Indian buffet process can also be used to define a prior distribution in any setting where the latent structure in the data can be expressed as a binary matrix with a finite number of rows and an unbounded number of columns.


Like the DP, the IBP has several equivalent representations.

The limit of a finite distribution on sparse binary feature matrices

We have N data points and K features, and the possession of feature k by data point i is indicated by a binary variable [math] z_{ik} [/math].

The generative process of the binary feature matrix is defined as below:

  • for each feature k
    • [math]\pi_k[/math] ~ [math] Beta(\frac{\alpha}{K},1) [/math]
    • for each data point i
      • [math]z_{ik}[/math] ~ [math] Bernoulli(\pi_k) [/math]

Note that [math]\pi_k[/math] is drawn once per feature and shared across all data points.

where [math] \alpha [/math] is a hyperparameter playing a role similar to the concentration parameter of the DP.

As [math]K \rightarrow \infty[/math], this generative process converges to the IBP.
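As a sanity check, the finite beta-Bernoulli model above can be sketched in a few lines of NumPy (the function name and chosen parameter values are illustrative, not from the paper):

```python
import numpy as np

def sample_finite_feature_matrix(N, K, alpha, seed=None):
    """Sample an N x K binary feature matrix from the finite
    beta-Bernoulli model that converges to the IBP as K -> infinity."""
    rng = np.random.default_rng(seed)
    # One possession probability per feature (column), shared by all rows.
    pi = rng.beta(alpha / K, 1.0, size=K)
    # Each entry z_ik is an independent Bernoulli(pi_k) draw.
    Z = (rng.random((N, K)) < pi).astype(int)
    return Z

Z = sample_finite_feature_matrix(N=20, K=1000, alpha=5.0, seed=0)
```

With K large and alpha fixed, most columns have [math]\pi_k[/math] near zero, so the matrix is sparse with roughly [math]N\alpha[/math] non-zero entries in total.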

Stick-breaking construction

  • For each feature k
    • [math]\mu_k [/math] ~ [math] Beta(\alpha,1) [/math]
    • [math]\pi_{k}=\prod_{l=1}^{k}(\mu_l) [/math]
    • For each data point i
      • [math]z_{ik}[/math] ~ [math] Bernoulli(\pi_k) [/math]
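The stick-breaking construction above can be simulated directly by truncating at a finite number of features; this truncated sketch (names are illustrative) makes the decay of the [math]\pi_k[/math] explicit:

```python
import numpy as np

def stick_breaking_ibp(N, K_trunc, alpha, seed=None):
    """Truncated stick-breaking sample from the IBP: feature
    probabilities pi_k = prod_{l<=k} mu_l with mu_l ~ Beta(alpha, 1),
    so they decrease stochastically toward zero."""
    rng = np.random.default_rng(seed)
    mu = rng.beta(alpha, 1.0, size=K_trunc)
    pi = np.cumprod(mu)              # pi_k = mu_1 * mu_2 * ... * mu_k
    Z = (rng.random((N, K_trunc)) < pi).astype(int)
    return pi, Z

pi, Z = stick_breaking_ibp(N=10, K_trunc=50, alpha=3.0, seed=1)
```

Because each [math]\mu_l \in (0,1)[/math], the sequence [math]\pi_k[/math] is non-increasing, so late columns are almost always empty and the truncation error is controllable.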

The Indian buffet metaphor

N customers enter a restaurant one after another. Each customer encounters a buffet consisting of infinitely many dishes arranged in a line. The first customer starts at the left of the buffet and takes a serving from each dish, stopping after a [math]Poisson(\alpha)[/math] number of dishes as his plate becomes overburdened.

The ith customer moves along the buffet, sampling dishes in proportion to their popularity, serving himself dish k with probability [math] \frac{m_k}{i} [/math], where [math]m_k[/math] is the number of previous customers who have sampled that dish. Having reached the end of all previously sampled dishes, the ith customer then tries a [math]Poisson(\frac{\alpha}{i})[/math] number of new dishes.
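The buffet metaphor translates almost line by line into a simulator; a minimal sketch (function name is illustrative):

```python
import numpy as np

def indian_buffet(N, alpha, seed=None):
    """Simulate the Indian buffet metaphor and return the resulting
    N x K binary matrix (customers x dishes)."""
    rng = np.random.default_rng(seed)
    counts = []                          # m_k: customers who took dish k
    rows = []
    for i in range(1, N + 1):
        row = [0] * len(counts)
        # Sample each previously tried dish with probability m_k / i.
        for k, m_k in enumerate(counts):
            if rng.random() < m_k / i:
                row[k] = 1
                counts[k] += 1
        # Then try Poisson(alpha / i) brand-new dishes.
        new = rng.poisson(alpha / i)
        counts.extend([1] * new)
        row.extend([1] * new)
        rows.append(row)
    K = len(counts)
    # Pad earlier rows with zeros for dishes introduced later.
    return np.array([r + [0] * (K - len(r)) for r in rows])

Z = indian_buffet(N=30, alpha=10.0, seed=2)
```

Every column of the returned matrix is non-empty by construction, since a dish only exists once some customer has sampled it.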

Properties of the distribution

  • The effective dimension of the distribution, i.e. the number of columns with at least one non-zero entry, follows a [math]Poisson(\alpha H_N)[/math] distribution, where [math]H_N=\sum_{j=1}^N \frac{1}{j}[/math] is the Nth harmonic number.
  • The number of features possessed by each object follows a [math]Poisson(\alpha)[/math] distribution. This follows from the exchangeability of the IBP.
  • The binary matrix generated from the IBP remains sparse as [math]K\rightarrow \infty[/math]: in fact, [math]\lim_{K\rightarrow \infty}E[\mathbf{1}^T Z \mathbf{1}]=N\alpha[/math].


Like the DP, the IBP admits MCMC sampling schemes, based on the stick-breaking construction and the Indian buffet metaphor, to simulate random samples.

We can use Gibbs sampling to generate samples from the IBP.

Choosing an ordering on the data points such that the ith data point corresponds to the last customer to visit the buffet, we obtain [math] p(z_{ik}=1|z_{-ik})=\frac{m_{-i,k}}{N} [/math] for any feature k with [math]m_{-i,k}\gt 0[/math], where [math] z_{-ik} [/math] denotes the assignments of feature k for all data points other than i, and [math] m_{-i,k} [/math] is the number of data points other than i possessing feature k. The number of new features associated with data point i is then drawn from a [math] Poisson(\frac{\alpha}{N}) [/math] distribution.
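A sweep of this Gibbs sampler, targeting the IBP prior alone, might look as follows (a sketch; in a real latent-feature model each probability below would be multiplied by the data likelihood before sampling):

```python
import numpy as np

def gibbs_sweep_prior(Z, alpha, rng):
    """One Gibbs sweep over the feature assignments under the IBP prior."""
    N = Z.shape[0]
    for i in range(N):
        m_minus = Z.sum(axis=0) - Z[i, :]    # m_{-i,k} for every column
        p = m_minus / N                      # p(z_ik = 1 | z_{-i,k})
        Z[i, :] = (rng.random(Z.shape[1]) < p).astype(int)
        # Features held by no one else get p = 0 above and die out;
        # drop the resulting all-zero columns.
        Z = Z[:, Z.sum(axis=0) > 0]
        # Data point i then proposes Poisson(alpha / N) new features.
        k_new = rng.poisson(alpha / N)
        if k_new > 0:
            new_cols = np.zeros((N, k_new), dtype=int)
            new_cols[i, :] = 1
            Z = np.hstack([Z, new_cols])
    return Z

rng = np.random.default_rng(3)
Z = np.zeros((5, 0), dtype=int)              # start from an empty matrix
for _ in range(100):
    Z = gibbs_sweep_prior(Z, alpha=2.0, rng=rng)
```

Because the IBP is a distribution over matrices with an unbounded number of columns, the sampler grows and shrinks Z as it runs rather than fixing K in advance.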

A binary matrix generated by the Indian buffet process with [math]\alpha = 10[/math].



  • Generalize to the two-parameter IBP: add an extra "feature repulsion" parameter.
  • Extend binary to non-binary latent features: a simple way is to take the elementwise (Hadamard) product of the binary matrix with an independent matrix of random variables.
  • Extend to the Markov Indian buffet process and time series: for time-series data, we want latent factors to turn on and off in a manner that depends on time.


Because each atom in the IBP is independent, the IBP can be used as a building block to construct hierarchical mixture models, with the help of an additional normalized measure such as the DP.

In the Indian Buffet Process compound Dirichlet process model (IBPCDP), the authors propose using the IBP to select weights independently and then using the DP to normalize them. The model is closely related to the hierarchical Dirichlet process (HDP), in which both the top and bottom levels are drawn from DPs.


The IBP is often used as a sparse prior to model quantities that can be represented as a matrix, as in factor analysis.

The IBP can be extended via the beta process, which can capture power-law phenomena.

Unlike finite models, the IBP can adapt its complexity dynamically when input data keep arriving.


Within the Bayesian nonparametric framework, the IBP serves as a new building block for latent feature modeling.

However, whether Bayesian nonparametric models are superior to large parametric models remains an open question.

Although the IBP cannot be used directly in probabilistic mixture models, it can be introduced into them to combine different models (features) with the help of an extra normalized measure such as the DP.


Williamson, Sinead, et al. "The IBP compound Dirichlet process and its application to focused topic modeling." (2010).

Griffiths, Thomas L., and Zoubin Ghahramani. "The Indian Buffet Process: An Introduction and Review." Journal of Machine Learning Research 12 (2011): 1185-1224.

Zoubin Ghahramani. "The Indian Buffet Process and Extensions." Bayesian Nonparametrics Workshop, Moncalieri, Italy (2009).