Graphical models for structured classification, with an application to interpreting images of protein subcellular location patterns
Background
In structured classification problems, there is a direct conflict between expressive models and efficient inference: while graphical models such as factor graphs can represent arbitrary dependences among instance labels, the cost of inference via belief propagation in these models grows rapidly as the graph structure becomes more complicated. One important source of complexity in belief propagation is the need to marginalize large factors to compute messages. This operation takes time exponential in the number of variables in the factor, and can limit the expressiveness of the models used. The paper proposes a new class of potential functions, called decomposable k-way potentials, and provides efficient algorithms for computing messages from these potentials during belief propagation. These new potentials provide a good balance between expressive power and efficient inference in practical structured classification problems. Three instances of decomposable potentials are discussed: the associative Markov network potential, the nested junction tree, and the voting potential. The new representation and algorithm lead to substantial improvements in both inference speed and classification accuracy.
Belief Propagation
To stick with the notation of the authors, let's assume that [math]\displaystyle{ \phi_i^{loc}(x_i) }[/math] is the one-argument factor that represents the local evidence on [math]\displaystyle{ x_i }[/math]. Moreover, Figure 1 shows the notation they use in graphs: the small squares denote potential functions, and, as usual, the shaded and unshaded circles represent observed and unobserved variables respectively.
Using this notation, the message sent from a variable [math]\displaystyle{ x_i }[/math] to a potential function [math]\displaystyle{ \phi_k }[/math] is computed as:
[math]\displaystyle{ m_{i \rightarrow k}(x_i)=\phi_i^{loc}(x_i)\prod_{j=1}^{k-1}m_{j \rightarrow i}(x_i) \qquad (1) }[/math]
Similarly, a message from a potential function [math]\displaystyle{ \phi_j }[/math] to [math]\displaystyle{ x_k }[/math] can be computed as:
[math]\displaystyle{ m_{j \rightarrow k}(x_k)=\sum_{x_1}\sum_{x_2}\cdots\sum_{x_{k-1}}\phi_j(x_1,\ldots,x_k)\prod_{i=1}^{k-1}m_{i \rightarrow j}(x_i) \qquad (2) }[/math]
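To make equations (1) and (2) concrete, here is a minimal Python sketch (not code from the paper; function and variable names are my own) of the two message types for discrete variables, with each message stored as a NumPy vector over labels. The brute-force loop over joint assignments in the factor-to-variable message is exactly the exponential cost discussed later.
<pre>
import numpy as np
from itertools import product

def msg_var_to_factor(local_evidence, incoming_factor_msgs):
    # Equation (1): message from variable x_i to a factor, a vector over labels.
    # local_evidence is phi_i^loc(x_i); incoming_factor_msgs are the messages
    # from the variable's *other* neighboring factors.
    msg = local_evidence.copy()
    for m in incoming_factor_msgs:
        msg = msg * m
    return msg

def msg_factor_to_var(phi, incoming_var_msgs):
    # Equation (2): message from a k-way factor to its last argument x_k.
    # phi is the potential as a table of shape (n,)*k; incoming_var_msgs are
    # the k-1 messages from the other variables.
    k = phi.ndim
    n = phi.shape[0]
    out = np.zeros(n)
    for assignment in product(range(n), repeat=k - 1):   # all joint settings of x_1..x_{k-1}
        weight = np.prod([m[a] for m, a in zip(incoming_var_msgs, assignment)])
        out += weight * phi[assignment]                   # phi[assignment] is a vector over x_k
    return out
</pre>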
General graphs
The above is easily applied when the graph is tree-shaped. For graphs with loops, there are generally two alternatives: the first is to collapse groups of variable nodes together into combined nodes, which can turn the graph into a tree and make it feasible to run Belief Propagation (BP); the second is to run an approximate inference algorithm that doesn't require a tree-shaped graph. A further option is to combine both techniques. As an example, the graph shown in Figure 1 can be turned into the tree-shaped graph shown in Figure 2 by combining variables [math]\displaystyle{ x_1 }[/math] and [math]\displaystyle{ x_2 }[/math] into a single node.
Loopy Belief Propagation (LBP)
If a graph is collapsed all the way to a tree, inference can be done with the exact version of BP as above. If there are still some loops left, LBP should be used instead. In LBP (as in BP), an arbitrary node is chosen to be the root and formulas 1 & 2 are used. However, each message may have to be updated repeatedly before the marginals converge. Inference with LBP is approximate because it can double-count evidence: messages to a node [math]\displaystyle{ i }[/math] from two nodes [math]\displaystyle{ j }[/math] and [math]\displaystyle{ k }[/math] can both contain information from a common neighbor [math]\displaystyle{ l }[/math] of [math]\displaystyle{ j }[/math] and [math]\displaystyle{ k }[/math]. If LBP oscillates between some steady states and does not converge, the process can be stopped after some number of iterations. Oscillations can be avoided by using momentum, which replaces the messages that would be sent at time [math]\displaystyle{ t }[/math] with a weighted average of the messages at times [math]\displaystyle{ t }[/math] and [math]\displaystyle{ t-1 }[/math]. For either exact or loopy BP, the run time of each pass over the factor graph is exponential in the number of distinct original variables included in the largest factor. Therefore, inference can become prohibitively expensive if the factors are too large.
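As a rough illustration (not from the paper), the sketch below shows the outer loop of LBP with the momentum idea: each newly computed message is blended with the previously sent one. The damping weight alpha, the convergence tolerance, and the dictionary representation of messages are all assumptions made for this example.
<pre>
import numpy as np

def run_lbp(update_all_messages, init_msgs, max_iters=100, tol=1e-6, alpha=0.5):
    # Schematic LBP loop with momentum (damping).
    # update_all_messages(msgs) recomputes every message once using eqs (1)-(2);
    # msgs maps a (sender, receiver) pair to a message vector.
    msgs = init_msgs
    for _ in range(max_iters):
        fresh = update_all_messages(msgs)
        # momentum: send a weighted average of the messages at times t and t-1
        damped = {key: alpha * fresh[key] + (1.0 - alpha) * msgs[key] for key in msgs}
        delta = max(np.max(np.abs(damped[key] - msgs[key])) for key in msgs)
        msgs = damped
        if delta < tol:          # messages (and hence marginals) have converged
            break
    return msgs                  # if oscillating, we simply stop after max_iters
</pre>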
Constructing factor graphs for structured classification
To construct factor graphs that encode "likely" label vectors, two steps are performed. First, domain-specific heuristics are used to identify pairs of examples whose labels are likely to be the same, and a similarity graph is built with an edge between each such pair. The second step is to use this similarity graph to decide which potentials to add to the factor graph. Given the similarity graph of the protein subcellular location pattern classification problem, factor graphs built using different types of potentials are compared, as we will see in the following sections.
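The following sketch illustrates the first step under a purely hypothetical heuristic (cells imaged in the same well are assumed to share a location pattern); the paper's actual heuristics are domain specific, and the 'well' field and function names here are illustrative assumptions.
<pre>
from collections import defaultdict
from itertools import combinations

def build_similarity_graph(cells):
    # cells: list of dicts, each with an 'id' and a 'well' field (hypothetical schema).
    # Assumed heuristic: cells from the same well probably share a location pattern,
    # so connect every pair of cells within a well by a similarity edge.
    by_well = defaultdict(list)
    for cell in cells:
        by_well[cell['well']].append(cell['id'])
    edges = set()
    for members in by_well.values():
        edges.update(combinations(sorted(members), 2))
    # step two attaches a potential (Potts, voting, AMN, ...) to each edge or neighborhood
    return edges
</pre>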
The Potts potential
The Potts potential is a two-argument factor which encourages two nodes [math]\displaystyle{ x_i }[/math] and [math]\displaystyle{ x_j }[/math] to have the same label:
[math]\displaystyle{ \phi(x_i,x_j)= \begin{cases} \omega & \text{if } x_i=x_j\\ 1 & \text{otherwise} \end{cases} \qquad (3) }[/math]
where [math]\displaystyle{ \omega\gt 1 }[/math] is an arbitrary parameter expressing how strongly [math]\displaystyle{ x_i }[/math] and [math]\displaystyle{ x_j }[/math] are believed to have the same label. If the Potts potential is used for each edge in the similarity graph, the overall probability of a vector of labels [math]\displaystyle{ x }[/math] is as follows:
[math]\displaystyle{ P(x)=\frac{1}{Z}\prod_{\text{nodes } i}P(x_i)\prod_{\text{edges } i,j}\phi(x_i,x_j) \qquad (4) }[/math]
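As a small illustration (function names and the value omega = 2.0 are my own choices), equation (3) as a table and the unnormalized score of equation (4) can be written as:
<pre>
import numpy as np

def potts_table(n_labels, omega=2.0):
    # Equation (3): omega on the diagonal (x_i == x_j), 1 everywhere else.
    phi = np.ones((n_labels, n_labels))
    np.fill_diagonal(phi, omega)
    return phi

def unnormalized_potts_score(labels, node_probs, edges, omega=2.0):
    # Numerator of equation (4) for one label vector x; the constant Z is omitted.
    # node_probs[i] is the local class distribution P(x_i) for node i.
    score = np.prod([node_probs[i][labels[i]] for i in range(len(labels))])
    for i, j in edges:
        score *= omega if labels[i] == labels[j] else 1.0
    return score
</pre>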
The Voting potential
Assuming that [math]\displaystyle{ N(j) }[/math] is the set of similarity graph neighbors of cell [math]\displaystyle{ j }[/math], let's write the group of cells [math]\displaystyle{ V(j)=\{j\}\cup N(j) }[/math]. The voting potential is then defined as follows:
[math]\displaystyle{ \phi_j(X_{V(j)})=\frac{\lambda/n+\sum_{i\in N(j)}I(x_i,x_j)}{|N(j)|+\lambda} \qquad (5) }[/math]
where [math]\displaystyle{ n }[/math] is the number of classes, [math]\displaystyle{ \lambda }[/math] is a smoothing parameter and [math]\displaystyle{ I }[/math] is an indicator function:
[math]\displaystyle{ I(x_i,x_j)= \begin{cases} 1 & \text{if } x_i=x_j\\ 0 & \text{otherwise} \end{cases} }[/math]
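A direct transcription of equation (5) into Python (the names and the default value of lambda are my own choices for illustration):
<pre>
def voting_potential(x_j, neighbor_labels, n_classes, lam=1.0):
    # Equation (5): the smoothed fraction of similarity-graph neighbors of cell j
    # that vote for j's own label x_j; lam is the smoothing parameter lambda.
    votes = sum(1 for x_i in neighbor_labels if x_i == x_j)
    return (lam / n_classes + votes) / (len(neighbor_labels) + lam)
</pre>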
The AMN (Associative Markov Network) potential
The AMN potential is defined as:
[math]\displaystyle{ \phi(x_1,\ldots,x_k)=1+\sum_{y=1}^n(\omega_y-1)I(x_1=x_2=\ldots=x_k=y) \qquad (6) }[/math]
for parameters [math]\displaystyle{ \omega_y\gt 1 }[/math], where [math]\displaystyle{ I(\text{predicate}) }[/math] is defined to be [math]\displaystyle{ 1 }[/math] if the predicate is true and [math]\displaystyle{ 0 }[/math] if it is false. Therefore, the AMN potential is constant unless all the variables [math]\displaystyle{ x_1,\ldots,x_k }[/math] are assigned to the same class [math]\displaystyle{ y }[/math].
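Equation (6) in code form (a sketch; omega is assumed here to be an array of per-class weights greater than 1):
<pre>
def amn_potential(labels, omega):
    # Equation (6): the value is 1 unless every variable takes the same label y,
    # in which case it is omega[y] (> 1).
    first = labels[0]
    if all(x == first for x in labels):
        return omega[first]
    return 1.0
</pre>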
The proposed Decomposable potentials
While k-way factors can lead to more accurate inference, they can also slow down belief propagation: computing a message from a general k-way factor takes time exponential in k. For specific k-way potentials, though, it is possible to take advantage of special structure to design a fast inference algorithm. In particular, for many potential functions, it is possible to write down an algorithm which efficiently performs sums of the form required for message computation:
[math]\displaystyle{ \sum_{x_1}\sum_{x_2}\cdots\sum_{x_{k-1}}\phi_j^*(x_1,\ldots,x_k) \qquad (7) }[/math]
where
[math]\displaystyle{ \phi_j^*(x_1,\ldots,x_k)=m_1(x_1)m_2(x_2)\cdots m_{k-1}(x_{k-1})\phi_j(x_1,\ldots,x_k) }[/math]
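To illustrate why such structure helps, here is a sketch (my own derivation, not necessarily the authors' exact algorithm) of how the sum in equation (7) collapses for the AMN potential: because the potential equals 1 except when all arguments agree, the message to [math]\displaystyle{ x_k }[/math] splits into a constant term plus a per-class correction, costing O(nk) instead of the O(n^(k-1)) brute-force sum.
<pre>
import numpy as np

def amn_message(incoming_msgs, omega):
    # incoming_msgs: list of k-1 message vectors m_i, each of shape (n_classes,)
    # omega: per-class weights omega_y > 1, shape (n_classes,)
    #
    # msg(x_k) = prod_i sum_x m_i(x)  +  (omega[x_k] - 1) * prod_i m_i(x_k)
    # The first term covers all assignments (phi contributes 1); the second adds the
    # extra (omega_y - 1) weight from assignments where every x_i equals x_k.
    msgs = np.vstack(incoming_msgs)            # shape (k-1, n_classes)
    constant = np.prod(msgs.sum(axis=1))       # prod_i sum_x m_i(x)
    all_agree = np.prod(msgs, axis=0)          # prod_i m_i(y), one entry per class y
    return constant + (omega - 1.0) * all_agree
</pre>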