graphical models for structured classification, with an application to interpreting images of protein subcellular location patterns


==Background==

In structured classification problems, there is a direct conflict between expressive models and efficient inference: while graphical models such as factor graphs can represent arbitrary dependencies among instance labels, the cost of inference via belief propagation in these models grows rapidly as the graph structure becomes more complicated. One important source of complexity in belief propagation is the need to marginalize large factors to compute messages. This operation takes time exponential in the number of variables in the factor, and can limit the expressiveness of the models used. The authors propose a new class of potential functions, called decomposable <math>k</math>-way potentials, and provide efficient algorithms for computing messages from these potentials during belief propagation. These potentials strike a good balance between expressive power and efficient inference in practical structured classification problems. Three instances of decomposable potentials are discussed: the associative Markov network potential, the nested junction tree, and the voting potential. The new representation and algorithms lead to substantial improvements in both inference speed and classification accuracy.
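To make the cost concrete: a dense potential over <math>k</math> variables with <math>d</math> states each is a table of <math>d^k</math> entries, and computing one outgoing message must touch every entry. Below is a minimal sketch of this cost (not from the paper; the array contents and sizes are made up for exposition):

<syntaxhighlight lang="python">
import numpy as np

d, k = 4, 6                       # d states per variable, k variables in the factor
phi = np.random.rand(*([d] * k))  # a dense k-way potential: d**k = 4096 entries

# Message to the last variable: sum out the other k-1 variables.
# Every one of the d**k entries is touched, so the cost is exponential in k.
message = phi.sum(axis=tuple(range(k - 1)))  # shape (d,)
</syntaxhighlight>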

==Belief Propagation==

To stick with the authors' notation, let <math>\phi_i^{loc}(x_i)</math> be the one-argument factor that represents the local evidence on <math>x_i</math>. Figure 1 shows the notation they use in graphs: the small squares denote potential functions and, as usual, the shaded and unshaded circles represent observed and unobserved variables, respectively.

[[File:sum_fig1.JPG|center|frame|Fig.1: A probability distribution represented as a factor graph]]

Using this notation, the message sent from a variable <math>x_i</math> to a potential function <math>\phi_k</math> is:

<math>m_{i \rightarrow k}(x_i)=\phi_i^{loc}(x_i)\prod_{j=1}^{k-1}m_{j \rightarrow i}(x_i) \qquad (1)</math>

Similarly, a message from a potential function <math>\phi_j</math> to <math>x_k</math> can be computed as:

<math>m_{j \rightarrow k}(x_k)=\sum_{x_1}\sum_{x_2}\cdots\sum_{x_{k-1}}\phi_j(x_1,\ldots,x_k)\prod_{i=1}^{k-1}m_{i \rightarrow j}(x_i) \qquad (2)</math>
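As a concrete (and deliberately simplified) sketch of equations (1) and (2) for discrete variables, the two message updates can be written with NumPy arrays. The function names and the explicit lists of incoming messages are illustrative assumptions, not the authors' implementation:

<syntaxhighlight lang="python">
import numpy as np

def variable_to_factor(local_evidence, incoming_factor_messages):
    """Eq. (1): product of the local evidence phi_i^loc with the messages
    from all neighbouring factors other than the recipient."""
    msg = local_evidence.copy()
    for m in incoming_factor_messages:
        msg = msg * m
    return msg / msg.sum()  # normalize for numerical stability

def factor_to_variable(phi, incoming_variable_messages):
    """Eq. (2): multiply the k-way table phi (one ndarray axis per argument)
    by the messages on its first k-1 axes, then sum those axes out,
    leaving a message over the last variable x_k."""
    k = phi.ndim
    result = phi.copy()
    for axis, m in enumerate(incoming_variable_messages):
        shape = [1] * k
        shape[axis] = m.size
        result = result * m.reshape(shape)  # broadcast message onto its axis
    msg = result.sum(axis=tuple(range(k - 1)))
    return msg / msg.sum()

# Example: a pairwise potential phi(x_1, x_2) and a message from x_1.
phi = np.array([[0.9, 0.1], [0.1, 0.9]])
m1 = np.array([0.7, 0.3])
print(factor_to_variable(phi, [m1]))  # message to x_2
</syntaxhighlight>

The factor-to-variable update makes the exponential cost visible: both the product and the sum run over the full <math>d^k</math> table of the factor.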

==Working with general graphs==

The above is easily applied when the graph is tree-shaped. For graphs with loops, there are generally two alternatives: the first is to collapse groups of variable nodes into combined nodes, which can turn the graph into a tree and make it feasible to run Belief Propagation (BP); the second is to run an approximate inference algorithm that does not require a tree-shaped graph. A further option is to combine both techniques. For example, a tree-shaped graph can be derived from the graph shown in Figure 1 by combining the variables <math>x_1</math> and <math>x_2</math> into a single node, yielding the graph in Figure 2 (a sketch of this merging step follows the figure).

[[File:sum_fig1.JPG|center|frame|Fig.2: A tree-shaped factor graph representing the graph in Fig.1]]
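Merging two variables means replacing them with one combined variable whose state space is the Cartesian product of theirs, and re-indexing every factor that touched either one. Below is a minimal sketch of that re-indexing for table potentials; the binary variables and the function name are assumptions made for illustration:

<syntaxhighlight lang="python">
import numpy as np

def merge_first_two_variables(phi):
    """Collapse the first two axes of a potential table into one combined
    variable whose states enumerate all (x1, x2) pairs, so a factor over
    (x1, x2, x3) becomes a factor over (x12, x3)."""
    d1, d2 = phi.shape[0], phi.shape[1]
    return phi.reshape(d1 * d2, *phi.shape[2:])

# A 3-way potential over binary x1, x2, x3 ...
phi = np.random.rand(2, 2, 2)
# ... becomes a pairwise potential over x12 (4 states) and x3.
print(merge_first_two_variables(phi).shape)  # (4, 2)
</syntaxhighlight>

The price of this transformation is the enlarged state space: merging <math>n</math> binary variables yields a node with <math>2^n</math> states, which is why clustering alone does not scale and is often combined with approximate inference.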
