The Indian Buffet Process: An Introduction and Review

The Indian Buffet Process (IBP) is a Bayesian nonparametric model that defines a prior measure on infinite binary matrices. Unlike the Dirichlet process (DP), whose atom weights are negatively correlated because they must sum to one, the IBP assumes the atoms are independent.


==Introduction==

The IBP is often used in factor analysis as a prior over an infinite number of latent features. It can be viewed as an extension of the DP in which we drop the constraint <math>\sum_{i=1}^{\infty}\pi_i = 1</math>. Because this constraint is dropped, the IBP is not naturally used as a prior for mixture models: a data point may possess several features rather than belonging to exactly one component.
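A short worked calculation (not in the original article) illustrates why the normalization disappears. Under the finite model of the next section, where <math>\pi_k \sim \mathrm{Beta}(\tfrac{\alpha}{K}, 1)</math> independently for <math>k = 1, \dots, K</math>,

<math>\mathbb{E}\left[\sum_{k=1}^{K}\pi_k\right] = \sum_{k=1}^{K}\frac{\alpha/K}{\alpha/K + 1} = \frac{K\alpha}{\alpha + K},</math>

which tends to <math>\alpha</math> as <math>K \to \infty</math>. The expected total weight is therefore not fixed at one, whereas the weights of a DP draw always sum to exactly one.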

==Representations==

Like the DP, the IBP has several equivalent representations.

===The limit of a finite distribution on sparse binary feature matrices===

We have <math>N</math> data points and <math>K</math> features, and the possession of feature <math>k</math> by data point <math>i</math> is indicated by a binary variable <math>z_{ik}</math>. The generative process for the binary feature matrix is defined as follows (a small simulation sketch is given after the list):

* for each feature <math>k</math>
** <math>\pi_k \sim \mathrm{Beta}\left(\frac{\alpha}{K}, 1\right)</math>
** for each data point <math>i</math>
*** <math>z_{ik} \sim \mathrm{Bernoulli}(\pi_k)</math>
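The following is a minimal simulation sketch (not part of the original article) of this finite Beta-Bernoulli model using numpy; the values of <math>N</math>, <math>K</math>, and <math>\alpha</math> are illustrative assumptions.

<pre>
# A minimal sketch of the finite Beta-Bernoulli feature model above;
# N, K, and alpha are illustrative values, not from the article.
import numpy as np

def sample_finite_feature_matrix(N, K, alpha, rng):
    # One inclusion probability per feature: pi_k ~ Beta(alpha/K, 1).
    pi = rng.beta(alpha / K, 1.0, size=K)
    # Each data point i possesses feature k independently:
    # z_ik ~ Bernoulli(pi_k), giving an N x K binary matrix Z.
    Z = rng.binomial(1, pi, size=(N, K))
    return Z

rng = np.random.default_rng(0)
Z = sample_finite_feature_matrix(N=10, K=1000, alpha=2.0, rng=rng)
# With alpha fixed and K large, each row has on average about alpha
# non-zero entries (K*alpha/(alpha+K) -> alpha), so Z stays sparse.
print(Z.sum(axis=1))
</pre>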

===Stick-breaking construction===

===The Indian buffet metaphor===

===Comparison===

==Inference==

==Conclusion==