Hierarchical Dirichlet Processes
If we place a prior on random partitions and a likelihood on the data points, the Bayesian framework lets us learn the number of latent components from the data; this is the main idea of the Dirichlet process mixture model. When clustering grouped data, we usually assume some information is shared between groups, so our model should be able to capture this sharing. A natural proposal for this hierarchical clustering problem is to model each group j with its own Dirichlet process mixture model, [math]\displaystyle{ G_j }[/math] ~ [math]\displaystyle{ DP(\alpha, G_0) }[/math]. However, if [math]\displaystyle{ G_0 }[/math] is continuous, the groups share no atoms with probability one, so this proposal generally cannot model information shared between groups. One remedy is to make [math]\displaystyle{ G_0 }[/math] discrete by restricting its choice. The main idea of this paper is instead to draw [math]\displaystyle{ G_0 }[/math] from another Dirichlet process [math]\displaystyle{ DP(\lambda, H) }[/math], where H is an arbitrary base measure. Since a draw from a Dirichlet process is discrete with probability one, [math]\displaystyle{ G_0 }[/math] is discrete, and its atoms can be shared across the group-level measures [math]\displaystyle{ G_j }[/math].
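To make the hierarchy concrete, the following is a minimal Python sketch of the resulting two-level generative process. It approximates [math]\displaystyle{ G_0 }[/math] by truncated stick-breaking and uses the fact that, because [math]\displaystyle{ G_0 }[/math] is discrete with atom weights beta, each [math]\displaystyle{ G_j }[/math] re-weights the same atoms with weights distributed as Dirichlet(alpha * beta). The base measure H = N(0, 1), the Gaussian likelihood, and all numeric settings are illustrative assumptions rather than anything specified by the paper.
<pre>
import numpy as np

rng = np.random.default_rng(0)
lam, alpha = 1.0, 1.0   # concentration parameters lambda and alpha (assumed values)
T = 50                  # truncation level for G_0 (assumption)

# G_0 ~ DP(lambda, H): truncated stick-breaking weights beta over atoms drawn from H = N(0, 1)
sticks = rng.beta(1.0, lam, size=T)
sticks[-1] = 1.0        # close the last stick so that beta sums to one
beta = sticks * np.concatenate([[1.0], np.cumprod(1.0 - sticks[:-1])])
atoms = rng.normal(0.0, 1.0, size=T)

# Each group j: G_j ~ DP(alpha, G_0). Because G_0 is discrete, every G_j puts its
# mass on the same atoms, with weights pi_j ~ Dirichlet(alpha * beta); this is how
# mixture components are shared across groups.
for j in range(3):
    pi_j = rng.dirichlet(alpha * beta)
    theta_ji = rng.choice(atoms, size=5, p=pi_j)   # theta_ji ~ G_j
    x_ji = rng.normal(theta_ji, 0.1)               # x_ji ~ F(theta_ji), assumed Gaussian
    print(f"group {j}: observations {np.round(x_ji, 2)}")
</pre>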
Introduction
It is common practice to tune the latent dimension K to get the best performance out of a model. One weakness of this practice is that it assumes a static, unchanging corpus, so it is generally difficult to do inference on new, unseen data points; in that case we must either re-train the model or apply some algebraic or heuristic fold-in technique. If instead we place a prior on the latent dimension and a likelihood on the data points, we can learn K on the fly from the corpus within the Bayesian framework. This is an important property for online data stream mining.
Hierarchical clustering
A recurring theme in statistics is the need to separate observations into groups, and yet allow the groups to remain linked, that is, to "share statistical strength". In the Bayesian formalism such sharing is achieved naturally via hierarchical modeling: parameters are shared among groups, and the randomness of the parameters induces dependencies among the groups. Estimates based on the posterior distribution exhibit "shrinkage": the estimates for different groups are pulled toward one another.
Dirichlet process
[math]\displaystyle{ G }[/math] ~ [math]\displaystyle{ DP(\alpha,G_0) }[/math]
- for each data point [math]\displaystyle{ x_i }[/math]
- [math]\displaystyle{ \theta_i }[/math] ~ [math]\displaystyle{ G }[/math]
- [math]\displaystyle{ x_i }[/math] ~ [math]\displaystyle{ \theta_i }[/math]
Stick-breaking construction
[math]\displaystyle{ G }[/math] ~ [math]\displaystyle{ DP(\alpha,G_0) }[/math] is equivalent to the following construction.
For each component k = 1, 2, ...
- [math]\displaystyle{ \theta_k }[/math] ~ [math]\displaystyle{ G_0 }[/math]
- [math]\displaystyle{ \pi_k' }[/math] ~ [math]\displaystyle{ Beta(1,\alpha) }[/math]
- [math]\displaystyle{ \pi_k = \pi_k'\prod_{l=1}^{k-1}(1-\pi_l') }[/math]
[math]\displaystyle{ G = \sum_{k=1}^{\infty}\pi_k\delta_{\theta_k} }[/math]
- for each data point [math]\displaystyle{ x_i }[/math]
- [math]\displaystyle{ \theta_i }[/math] ~ [math]\displaystyle{ G }[/math]
- [math]\displaystyle{ x_i }[/math] ~ [math]\displaystyle{ \theta_i }[/math]
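Below is a minimal Python sketch of this construction, truncating the infinite collection of sticks and assuming, purely for illustration, that [math]\displaystyle{ G_0 }[/math] = N(0, 1) and that each [math]\displaystyle{ x_i }[/math] is Gaussian with mean [math]\displaystyle{ \theta_i }[/math]; the numeric settings are assumptions as well.
<pre>
import numpy as np

rng = np.random.default_rng(1)
alpha = 1.0   # concentration parameter (assumed value)
T = 100       # truncation level for the infinite sum (assumption)

# theta_k ~ G_0, with G_0 assumed to be N(0, 1)
theta = rng.normal(0.0, 1.0, size=T)

# pi_k' ~ Beta(1, alpha) and pi_k = pi_k' * prod_{l<k} (1 - pi_l')
pi_prime = rng.beta(1.0, alpha, size=T)
pi_prime[-1] = 1.0   # close the last stick so that the truncated weights sum to one
pi = pi_prime * np.concatenate([[1.0], np.cumprod(1.0 - pi_prime[:-1])])

# G = sum_k pi_k * delta_{theta_k}: draw theta_i ~ G, then x_i ~ N(theta_i, 0.1^2)
n = 20
theta_i = rng.choice(theta, size=n, p=pi)
x = rng.normal(theta_i, 0.1)
print("distinct atoms used by", n, "data points:", np.unique(theta_i).size)
</pre>
Note that although the truncated G has T atoms, only a handful receive appreciable weight, which is why repeated draws from G cluster onto a small number of distinct values.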
Chinese restaurant process
for each data point [math]\displaystyle{ x_i }[/math]
- [math]\displaystyle{ \theta_i }[/math] ~ [math]\displaystyle{ \sum_{n=1}^{i-1}\frac{1}{i-1+\alpha}\delta_{\theta_n} + \frac{\alpha}{i-1+\alpha}G_0 }[/math]
- [math]\displaystyle{ x_i }[/math] ~ [math]\displaystyle{ \theta_i }[/math]
Another representation, based on the Chinese restaurant process, is given below.
The table assignment [math]\displaystyle{ t_i }[/math] is distributed as
[math]\displaystyle{ p(t_i=k)=\frac{m_k}{i-1+\alpha} }[/math]
[math]\displaystyle{ p(t_i=k_{new})=\frac{\alpha}{i-1+\alpha} }[/math]
[math]\displaystyle{ \theta_{k_{new}} }[/math] ~ [math]\displaystyle{ G_0 }[/math]
where [math]\displaystyle{ m_k=\sum_{n=1}^{i-1}I(t_n=k) }[/math] and [math]\displaystyle{ I() }[/math] is the indicator function.
Note that [math]\displaystyle{ \sum_{k}m_k=i-1 }[/math]
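A short Python sketch of this sequential table-assignment scheme follows; the concentration value, the number of customers, and the N(0, 1) choice for [math]\displaystyle{ G_0 }[/math] are illustrative assumptions.
<pre>
import numpy as np

rng = np.random.default_rng(2)
alpha = 1.0   # concentration parameter (assumed value)
n = 30        # number of data points (assumption)

table_counts = []   # m_k: number of earlier customers seated at table k
table_params = []   # theta_k for each table, drawn from G_0 = N(0, 1) (assumption)
assignments = []    # t_i for each data point

for i in range(1, n + 1):
    # p(t_i = k) = m_k / (i - 1 + alpha),  p(t_i = k_new) = alpha / (i - 1 + alpha)
    probs = np.array(table_counts + [alpha]) / (i - 1 + alpha)
    k = rng.choice(len(probs), p=probs)
    if k == len(table_counts):   # a new table is opened
        table_counts.append(0)
        table_params.append(rng.normal(0.0, 1.0))   # theta_{k_new} ~ G_0
    table_counts[k] += 1
    assignments.append(k)

print("number of occupied tables:", len(table_counts))
print("table sizes m_k:", table_counts)
</pre>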
The infinite limit of finite mixture models
[math]\displaystyle{ \pi }[/math] ~ [math]\displaystyle{ Dir(\frac{\alpha}{K},\dots,\frac{\alpha}{K}) }[/math]
For each mixture component (topic) k
- [math]\displaystyle{ \theta_{k} }[/math] ~ [math]\displaystyle{ G_0 }[/math]
For each data point [math]\displaystyle{ x_i }[/math]
- [math]\displaystyle{ z_i }[/math] ~ [math]\displaystyle{ Multi(\pi) }[/math]
- [math]\displaystyle{ x_i }[/math] ~ [math]\displaystyle{ \theta_{z_i} }[/math]
where Dir() is the K-dimensional Dirichlet distribution and Multi() is the K-dimensional multinomial distribution.
As K goes to infinity, this construction can be shown to converge to a Dirichlet process mixture model.
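As an empirical check of this limit, the Python sketch below draws mixing weights from Dir(alpha/K, ..., alpha/K) for increasing K and counts how many components a fixed number of data points actually occupy; the occupied-component count stabilizes as K grows, matching the Chinese-restaurant-process behavior. All numbers used are illustrative assumptions.
<pre>
import numpy as np

rng = np.random.default_rng(3)
alpha = 1.0   # concentration parameter (assumed value)
n = 100       # number of data points (assumption)

for K in [5, 50, 500, 5000]:
    occupied = []
    for _ in range(200):   # average over repeated draws from the prior
        pi = rng.dirichlet(np.full(K, alpha / K))   # pi ~ Dir(alpha/K, ..., alpha/K)
        z = rng.choice(K, size=n, p=pi)             # z_i ~ Multi(pi)
        occupied.append(np.unique(z).size)
    print(f"K = {K:5d}: average number of occupied components = {np.mean(occupied):.2f}")
</pre>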