# stat946f11

## Introduction

### Motivation

Graphical probabilistic models provide a concise representation of the probability distributions that arise in many real-world applications, including medical diagnosis, computer vision, language processing, and the analysis of gene expression data. A problem related to medical diagnosis is detecting and quantifying the causes of a disease. This question can be addressed through a graphical representation of the relationships between the various random variables (both observed and hidden), which is an efficient way of representing a joint probability distribution.

Graphical models are excellent tools for reducing the computational load of probabilistic models. Suppose we want to model a binary image. If the image is 256 by 256 pixels, then the distribution function has $2^{256\times 256}=2^{65536}$ outcomes. Even very simple tasks, such as marginalizing such a distribution over some of its variables, can be computationally intractable, and the load grows exponentially with the number of variables. In practice, real-world applications usually exhibit some kind of dependency or relation between the variables, and exploiting this structure can simplify the calculations. For the same problem, if all the image pixels could be assumed independent, marginalization could be done easily. Graphs are a natural tool for depicting such relations: using a few rules we can represent a probability distribution uniquely by a graph, and it is then easier to study the graph than the probability distribution function (PDF) itself. We can also take advantage of tools from graph theory to design algorithms. Though it may seem simple, this approach simplifies the computations and, as mentioned, helps us solve problems in many different research areas.

### Notation

We will begin with a short section about the notation used in these notes. Capital letters denote random variables and lower-case letters denote observations of those random variables:

• $\{X_1,\ X_2,\ \dots,\ X_n\}$ random variables
• $\{x_1,\ x_2,\ \dots,\ x_n\}$ observations of the random variables

The joint probability mass function can be written as:

$P( X_1 = x_1, X_2 = x_2, \dots, X_n = x_n )$

or as shorthand, we can write this as $p( x_1, x_2, \dots, x_n )$. In these notes both types of notation will be used. We can also define a set of random variables $X_Q$ where $Q$ represents a set of subscripts.

### Example

Let $A = \{1,4\}$, so $X_A = \{X_1, X_4\}$; $A$ is the set of indices for the r.v. $X_A$.
Also let $B = \{2\},\ X_B = \{X_2\}$ so we can write

$P( X_A | X_B ) = P( X_1 = x_1, X_4 = x_4 | X_2 = x_2 ).\,\!$

### Graphical Models

Graphical models provide a compact representation of the joint distribution, where the vertices (nodes) $V$ represent random variables and the edges $E$ represent dependencies between those variables. There are two forms of graphical model: directed and undirected. A directed graphical model (Figure 1), also called a Bayesian network or belief network, consists of nodes and arcs, where the arcs indicate causality between the connected variables. An undirected graphical model (Figure 2), also called a Markov random field (MRF) or Markov network, is based on the assumption that two nodes (or two sets of nodes) are conditionally independent given their neighbours.

Fig.1 A directed graph.
Fig.2 An undirected graph.

We will use graphs in this course to represent the relationship between different random variables.

Note: both Bayesian networks and Markov networks existed before the introduction of graphical models, but graphical models provide a unified theory for both cases and for more general distributions.

#### Directed graphical models (Bayesian networks)

In the case of directed graphs, the direction of the arrow indicates "causation". For example:
$A \longrightarrow B$: $A\,\!$ "causes" $B\,\!$.

In this case we must assume that our directed graphs are acyclic. If our causation graph contains a cycle then it would mean that for example:

• $A$ causes $B$
• $B$ causes $C$
• $C$ causes $A$, again.

Clearly, this would confuse the order of the events. An example of a graph with a cycle can be seen in Figure 3; such a graph could not be used to represent causation. The graph in Figure 4 does not have a cycle, and we can say that node $X_1$ causes, or affects, $X_2$ and $X_3$, while they in turn cause $X_4$.

Fig.3 A cyclic graph.
Fig.4 An acyclic graph.

We will consider a one-to-one map between our graph's vertices and a set of random variables. Consider the following example, which uses Boolean random variables. It is important to note that the variables need not be Boolean; they can be discrete over a range or even continuous.

Speaking about random variables, we can now refer to the relationship between random variables in terms of dependence. Therefore, the direction of the arrow indicates "conditional dependence". For example:
$A \longrightarrow B$: $B\,\!$ "is dependent on" $A\,\!$.

#### Example

In this example we will consider the possible causes for wet grass.

The wet grass could be caused by rain or by a sprinkler. Rain can be caused by clouds. On the other hand, one cannot say that clouds cause the use of a sprinkler; nevertheless, a dependence exists, because the presence of clouds affects whether or not a sprinkler will be used. If there are more clouds, there is a smaller probability that one will rely on a sprinkler to water the grass. As this example shows, the relationship between two variables can also act like a negative correlation. The corresponding graphical model is shown in Figure 5.

Fig.5 The wet grass example.

This directed graph shows the relation between the 4 random variables. If we have the joint probability $P(C,R,S,W)$, then we can answer many queries about this system.

This all seems very simple at first, but we must consider the fact that in the discrete case the joint probability function grows exponentially with the number of variables. If we consider the wet grass example once more, we can see that we need to define $2^4 = 16$ different probabilities for this simple example. The table below, which contains all of the probabilities and their corresponding Boolean values for each random variable, is called an interaction table.

Example:

| $C$ | $R$ | $S$ | $W$ | $P(C,R,S,W)$ |
|---|---|---|---|---|
| 0 | 0 | 0 | 0 | $p_1$ |
| 0 | 0 | 0 | 1 | $p_2$ |
| 0 | 0 | 1 | 0 | $p_3$ |
| $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ |
| 1 | 1 | 1 | 1 | $p_{16}$ |

Now consider an example where there are not 4 such random variables but 400. The interaction table would become too large to manage. In fact, it would require $2^{400}$ rows! The purpose of the graph is to help avoid this intractability by considering only the variables that are directly related. In the wet grass example Sprinkler (S) and Rain (R) are not directly related.

To solve the intractability problem we need to consider the way those relationships are represented in the graph. Let us define the following parameters. For each vertex $i \in V$,

• $\pi_i$: the set of parents of node $i$
• ex. $\pi_R = \{C\}$ (the parent of $R$ is $C$)
• $f_i(x_i, x_{\pi_i})$: a function of $x_i$ and $x_{\pi_i}$ for which it is true that:
• $f_i$ is nonnegative for all $i$
• $\displaystyle\sum_{x_i} f_i(x_i, x_{\pi_i}) = 1$

Claim: $P(X_V) = \prod_{i=1}^n f_i(x_i, x_{\pi_i})$ defines a family of probability functions; each such function is nonnegative, and

$\sum_{x_1}\sum_{x_2}\cdots\sum_{x_n} P(X_V) = 1$

To show the power of this claim, we can verify the normalization for our wet grass example:

$\begin{matrix} P(X_V) &=& P(C,R,S,W) \\ &=& f(C) f(R,C) f(S,C) f(W,S,R) \end{matrix}$

We want to show that

$\begin{matrix} \sum_C\sum_R\sum_S\sum_W P(C,R,S,W) & = &\\ \sum_C\sum_R\sum_S\sum_W f(C) f(R,C) f(S,C) f(W,S,R) & = & 1. \end{matrix}$

Consider factors $f(C)$, $f(R,C)$, $f(S,C)$: they do not depend on $W$, so we can write this all as

$\begin{matrix} & & \sum_C\sum_R\sum_S f(C) f(R,C) f(S,C) \cancelto{1}{\sum_W f(W,S,R)} \\ & = & \sum_C\sum_R f(C) f(R,C) \cancelto{1}{\sum_S f(S,C)} \\ & = & \cancelto{1}{\sum_C f(C)} \cancelto{1}{\sum_R f(R,C)} \\ & = & 1 \end{matrix}$

since we had already set $\displaystyle \sum_{x_i} f_i(x_i, x_{\pi_i}) = 1$.
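This normalization argument can also be checked numerically. The sketch below builds the wet grass network from randomly generated tables (all values are invented for illustration, not taken from these notes); each $f_i$ is nonnegative and sums to 1 over its first argument, so the product sums to 1 over all configurations.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_conditional(n_states, n_parent_states):
    """Random table f(x_i, x_pi) with sum over x_i equal to 1 for every parent setting."""
    t = rng.random((n_states, n_parent_states))
    return t / t.sum(axis=0, keepdims=True)

# Wet grass network: C -> R, C -> S, (R, S) -> W, all variables binary.
fC = rng.random(2); fC /= fC.sum()              # f(C)
fR = random_conditional(2, 2)                   # f(R, C)
fS = random_conditional(2, 2)                   # f(S, C)
fW = random_conditional(2, 4).reshape(2, 2, 2)  # f(W, S, R)

total = sum(
    fC[c] * fR[r, c] * fS[s, c] * fW[w, s, r]
    for c in range(2) for r in range(2) for s in range(2) for w in range(2)
)
print(round(total, 10))  # 1.0
```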

Let us consider another example with a different directed graph.
Example:
Consider the simple directed graph in Figure 6.

Fig.6 Simple 4 node graph.

Assume that we would like to calculate the following: $p(x_3|x_2)$. We know that we can write the joint probability as:

$p(x_1,x_2,x_3,x_4) = f(x_1) f(x_2,x_1) f(x_3,x_2) f(x_4,x_3) \,\!$

We can also make use of Bayes' Rule here:

$p(x_3|x_2) = \frac{p(x_2,x_3)}{ p(x_2)}$
$\begin{matrix} p(x_2,x_3) & = & \sum_{x_1} \sum_{x_4} p(x_1,x_2,x_3,x_4) ~~~~ \hbox{(marginalization)} \\ & = & \sum_{x_1} \sum_{x_4} f(x_1) f(x_2,x_1) f(x_3,x_2) f(x_4,x_3) \\ & = & \sum_{x_1} f(x_1) f(x_2,x_1) f(x_3,x_2) \cancelto{1}{\sum_{x_4}f(x_4,x_3)} \\ & = & f(x_3,x_2) \sum_{x_1} f(x_1) f(x_2,x_1). \end{matrix}$

We also need

$\begin{matrix} p(x_2) & = & \sum_{x_1}\sum_{x_3}\sum_{x_4} f(x_1) f(x_2,x_1) f(x_3,x_2) f(x_4,x_3) \\ & = & \sum_{x_1}\sum_{x_3} f(x_1) f(x_2,x_1) f(x_3,x_2) \\ & = & \sum_{x_1} f(x_1) f(x_2,x_1). \end{matrix}$

Thus,

$\begin{matrix} p(x_3|x_2) & = & \frac{ f(x_3,x_2) \sum_{x_1} f(x_1) f(x_2,x_1)}{ \sum_{x_1} f(x_1) f(x_2,x_1)} \\ & = & f(x_3,x_2). \end{matrix}$
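The same cancellation can be reproduced numerically for the four-node chain. The sketch below uses arbitrary randomly generated tables (invented for illustration), builds the joint, and checks that the marginalization indeed yields $p(x_3|x_2) = f(x_3,x_2)$.

```python
import numpy as np

rng = np.random.default_rng(1)

def cond(k=2):
    """Random table f(x_i, x_parent), normalized over the first index."""
    t = rng.random((k, k))
    return t / t.sum(axis=0, keepdims=True)

f1 = rng.random(2); f1 /= f1.sum()   # f(x1)
f2, f3, f4 = cond(), cond(), cond()  # f(x2,x1), f(x3,x2), f(x4,x3)

# joint[x1, x2, x3, x4] = f(x1) f(x2,x1) f(x3,x2) f(x4,x3)
joint = np.einsum('a,ba,cb,dc->abcd', f1, f2, f3, f4)

p23 = joint.sum(axis=(0, 3))       # p(x2, x3), indexed [x2, x3]
p2 = p23.sum(axis=1)               # p(x2)
p3_given_2 = p23 / p2[:, None]     # p(x3 | x2), indexed [x2, x3]

print(np.allclose(p3_given_2, f3.T))  # True: p(x3|x2) = f(x3,x2)
```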

Theorem 1.

$f_i(x_i,x_{\pi_i}) = p(x_i|x_{\pi_i}).\,\!$
$\therefore \ P(X_V) = \prod_{i=1}^n p(x_i|x_{\pi_i}).\,\!$

In our simple graph, the joint probability can be written as

$p(x_1,x_2,x_3,x_4) = p(x_1)p(x_2|x_1) p(x_3|x_2) p(x_4|x_3).\,\!$

Instead, had we used the chain rule we would have obtained a far more complex equation:

$p(x_1,x_2,x_3,x_4) = p(x_1) p(x_2|x_1)p(x_3|x_2,x_1) p(x_4|x_3,x_2,x_1).\,\!$

The Markov property, or memoryless property, states that if the variable $X_i$ is affected only by $X_j$, then the random variable $X_i$ given $X_j$ is independent of every other random variable. In our example the history of $x_4$ is completely determined by $x_3$.
By simply applying the Markov Property to the chain-rule formula we would also have obtained the same result.

Now let us consider the joint probability of the following six-node example found in Figure 7.

Fig.7 Six node example.

If we use Theorem 1 it can be seen that the joint probability density function for Figure 7 can be written as follows:

$P(X_1,X_2,X_3,X_4,X_5,X_6) = P(X_1)P(X_2|X_1)P(X_3|X_1)P(X_4|X_2)P(X_5|X_3)P(X_6|X_5,X_2) \,\!$

Once again, we can apply the Chain Rule and then the Markov Property and arrive at the same result.

$\begin{matrix} && P(X_1,X_2,X_3,X_4,X_5,X_6) \\ && = P(X_1)P(X_2|X_1)P(X_3|X_2,X_1)P(X_4|X_3,X_2,X_1)P(X_5|X_4,X_3,X_2,X_1)P(X_6|X_5,X_4,X_3,X_2,X_1) \\ && = P(X_1)P(X_2|X_1)P(X_3|X_1)P(X_4|X_2)P(X_5|X_3)P(X_6|X_5,X_2) \end{matrix}$

### Independence

#### Marginal independence

We can say that $X_A$ is marginally independent of $X_B$ if:

$\begin{matrix} X_A \perp X_B : & & \\ P(X_A,X_B) & = & P(X_A)P(X_B) \\ P(X_A|X_B) & = & P(X_A) \end{matrix}$

#### Conditional independence

We can say that $X_A$ is conditionally independent of $X_B$ given $X_C$ if:

$\begin{matrix} X_A \perp X_B | X_C : & & \\ P(X_A,X_B | X_C) & = & P(X_A|X_C)P(X_B|X_C) \\ P(X_A|X_B,X_C) & = & P(X_A|X_C) \end{matrix}$

Note: the two equations above are equivalent.

Aside: before we move on further, we first define the following terms:

1. $I$ is defined as an ordering of the nodes in graph $G$.
2. For each $i \in V$, $V_i$ is defined as the set of all nodes that appear earlier than $i$ in $I$, excluding its parents $\pi_i$.

Let us consider the example of the six node figure given above (Figure 7). We can define $I$ as follows:

$I = \{1,2,3,4,5,6\} \,\!$

We can then easily compute $V_i$ for say $i=3,6$.

$V_3 = \{2\}, V_6 = \{1,3,4\}\,\!$

while $\pi_i$ for $i=3,6$ will be.

$\pi_3 = \{1\}, \pi_6 = \{2,5\}\,\!$

We would be interested in finding the conditional independences between the random variables in this graph. We know $X_i \perp X_{V_i} | X_{\pi_i}$ for each $i$; in other words, given its parents, a node is independent of all earlier nodes. So:
$X_1 \perp \phi | \phi$,
$X_2 \perp \phi | X_1$,
$X_3 \perp X_2 | X_1$,
$X_4 \perp \{X_1,X_3\} | X_2$,
$X_5 \perp \{X_1,X_2,X_4\} | X_3$,
$X_6 \perp \{X_1,X_3,X_4\} | \{X_2,X_5\}$
To illustrate why this is true we can take a simple example. Show that:

$P(X_4|X_1,X_2,X_3) = P(X_4|X_2)\,\!$

Proof: first, we know $P(X_1,X_2,X_3,X_4,X_5,X_6) = P(X_1)P(X_2|X_1)P(X_3|X_1)P(X_4|X_2)P(X_5|X_3)P(X_6|X_5,X_2)\,\!$

then

$\begin{matrix} P(X_4|X_1,X_2,X_3) & = & \frac{P(X_1,X_2,X_3,X_4)}{P(X_1,X_2,X_3)}\\ & = & \frac{ \sum_{X_5} \sum_{X_6} P(X_1,X_2,X_3,X_4,X_5,X_6)}{ \sum_{X_4} \sum_{X_5} \sum_{X_6}P(X_1,X_2,X_3,X_4,X_5,X_6)}\\ & = & \frac{P(X_1)P(X_2|X_1)P(X_3|X_1)P(X_4|X_2)}{P(X_1)P(X_2|X_1)P(X_3|X_1)}\\ & = & P(X_4|X_2) \end{matrix}$

The other conditional independences can be proven through a similar process.
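This kind of proof can also be checked numerically. The sketch below builds the six-node joint of Figure 7 from arbitrary random conditional tables (invented for illustration) and verifies that $P(X_4|X_1,X_2,X_3)$ does not depend on $x_1$ or $x_3$.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

def cpt(shape):
    """Random conditional table, normalized over the first axis."""
    t = rng.random(shape)
    return t / t.sum(axis=0, keepdims=True)

p1 = cpt((2,))                                      # P(X1)
p2, p3, p4, p5 = cpt((2, 2)), cpt((2, 2)), cpt((2, 2)), cpt((2, 2))
p6 = cpt((2, 2, 2))                                 # P(X6 | X5, X2)

# Full joint, indexed [x1, x2, x3, x4, x5, x6].
joint = np.zeros((2,) * 6)
for x in product(range(2), repeat=6):
    x1, x2, x3, x4, x5, x6 = x
    joint[x] = (p1[x1] * p2[x2, x1] * p3[x3, x1]
                * p4[x4, x2] * p5[x5, x3] * p6[x6, x5, x2])

p1234 = joint.sum(axis=(4, 5))            # P(X1, X2, X3, X4)
p123 = p1234.sum(axis=3)                  # P(X1, X2, X3)
p4_given_123 = p1234 / p123[..., None]    # P(X4 | X1, X2, X3)

# The result should equal P(X4|X2) for every value of x1 and x3.
for x1 in range(2):
    for x3 in range(2):
        assert np.allclose(p4_given_123[x1, :, x3, :], p4.T)
print("X4 independent of {X1, X3} given X2: confirmed numerically")
```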

#### Sampling

Even though graphical models greatly facilitate obtaining the joint probability, exact inference is not always feasible. Exact inference is feasible only in small to medium-sized networks; in large networks it takes too long. We therefore resort to approximate inference techniques, which are much faster and usually give quite good results.

In sampling, random samples are generated, and the values of interest are computed from those samples rather than from the original distribution.

As input we have a Bayesian network with a set of nodes $X\,\!$. A sample may include all variables (except the evidence $E$) or a subset of them. Sampling schemes dictate how to generate the samples (tuples). Ideally, samples are distributed according to $P(X|E)\,\!$.

Some sampling algorithms:

• Forward Sampling
• Likelihood weighting
• Gibbs Sampling (MCMC)
• Blocking
• Rao-Blackwellised
• Importance Sampling
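As a simple illustration of the first scheme, forward sampling draws each node in topological order from its conditional distribution given its already-sampled parents; evidence can then be handled by discarding inconsistent samples. The sketch below uses the wet grass network with made-up conditional probabilities (all numbers are invented for illustration).

```python
import random

random.seed(0)

def forward_sample():
    """Sample (C, R, S, W) in topological order: each node after its parents."""
    c = random.random() < 0.5                           # P(C=1)
    r = random.random() < (0.8 if c else 0.1)           # P(R=1|C)
    s = random.random() < (0.1 if c else 0.5)           # P(S=1|C)
    w = random.random() < (0.99 if (r or s) else 0.01)  # P(W=1|R,S)
    return c, r, s, w

samples = [forward_sample() for _ in range(100_000)]
# Estimate P(R=1 | W=1) by keeping only samples consistent with the evidence.
consistent = [smp for smp in samples if smp[3]]
est = sum(smp[1] for smp in consistent) / len(consistent)
print(f"P(R=1 | W=1) is approximately {est:.3f}")
```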

## Bayes Ball

The Bayes Ball algorithm can be used to determine whether two random variables represented in a graph are independent. The algorithm can show either that two nodes in a graph are independent or that they are not necessarily independent; it cannot show that two nodes are dependent. The algorithm will be discussed further in later parts of this section.

### Canonical Graphs

In order to understand the Bayes Ball algorithm we need to first introduce 3 canonical graphs.

#### Markov Chain (also called serial connection)

In the following graph (Figure 8), X is independent of Z given Y.

We say that: $X$ $\perp$ $Z$ $|$ $Y$

Fig.8 Markov chain.

We can prove this independence:

$\begin{matrix} P(Z|X,Y) & = & \frac{P(X,Y,Z)}{P(X,Y)}\\ & = & \frac{P(X)P(Y|X)P(Z|Y)}{P(X)P(Y|X)}\\ & = & P(Z|Y) \end{matrix}$

Where

$\begin{matrix} P(X,Y) & = & \displaystyle \sum_Z P(X,Y,Z) \\ & = & \displaystyle \sum_Z P(X)P(Y|X)P(Z|Y) \\ & = & P(X)P(Y | X) \displaystyle \sum_Z P(Z|Y) \\ & = & P(X)P(Y | X)\\ \end{matrix}$

#### Hidden Cause (diverging connection)

In the Hidden Cause case we can say that X is independent of Z given Y. In this case Y is the hidden cause and if it is known then Z and X are considered independent.

We say that: $X$ $\perp$ $Z$ $|$ $Y$

Fig.9 Hidden cause graph.

The proof of the independence:

$\begin{matrix} P(Z|X,Y) & = & \frac{P(X,Y,Z)}{P(X,Y)}\\ & = & \frac{P(Y)P(X|Y)P(Z|Y)}{P(Y)P(X|Y)}\\ & = & P(Z|Y) \end{matrix}$

The Hidden Cause case is best illustrated with an example:

Fig.10 Hidden cause example.

In Figure 10 it can be seen that both "Shoe size" and "Grey hair" depend on the age of a person. The variables "Shoe size" and "Grey hair" are dependent in some sense if "Age" is not in the picture: without the age information we must conclude that those with a large shoe size also have a greater chance of having grey hair. However, when "Age" is observed, there is no dependence between "Shoe size" and "Grey hair", because both can be deduced from the "Age" variable alone.

#### Explaining-Away (converging connection)

Finally, we look at the third type of canonical graph: the Explaining-Away graph. This type of graph arises when a phenomenon has multiple explanations. Here, the conditional independence statement is actually a statement of marginal independence: $X \perp Z$. This type of graph is also called a "V-structure" or "V-shape" because of its illustration (Fig. 11).

Fig.11 The missing edge between node X and node Z implies that there is a marginal independence between the two: $X \perp Z$.

In these types of scenarios, variables X and Z are independent. However, once the third variable Y is observed, X and Z become dependent (Fig. 11).

To clarify these concepts, suppose Bob and Mary are supposed to meet for a noontime lunch. Consider the following events:

$late =\begin{cases} 1, & \hbox{if Mary is late}, \\ 0, & \hbox{otherwise}. \end{cases}$
$aliens =\begin{cases} 1, & \hbox{if aliens kidnapped Mary}, \\ 0, & \hbox{otherwise}. \end{cases}$
$watch =\begin{cases} 1, & \hbox{if Bob's watch is incorrect}, \\ 0, & \hbox{otherwise}. \end{cases}$

If Mary is late, then she could have been kidnapped by aliens. Alternatively, Bob may have forgotten to adjust his watch for daylight savings time, making him early. Clearly, both of these events are independent. Now, consider the following probabilities:

$\begin{matrix} P( late = 1 ) \\ P( aliens = 1 ~|~ late = 1 ) \\ P( aliens = 1 ~|~ late = 1, watch = 0 ) \end{matrix}$

We expect $P( aliens = 1 ) \lt P( aliens = 1 ~|~ late = 1 )$, since observing that Mary is late raises the probability of every possible explanation. Similarly, we expect $P( aliens = 1 ~|~ late = 1 ) \lt P( aliens = 1 ~|~ late = 1, watch = 0 )$, since learning that Bob's watch is correct rules out the alternative explanation for Mary appearing late. Since $P( aliens = 1 ~|~ late = 1 ) \neq P( aliens = 1 ~|~ late = 1, watch = 0 )$, aliens and watch are not independent given late. To summarize,

• If we do not observe late, then aliens $~\perp~ watch$ ($X~\perp~ Z$)
• If we do observe late, then aliens $~\cancel{\perp}~ watch ~|~ late$ ($X ~\cancel{\perp}~ Z ~|~ Y$)
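The explaining-away effect can be illustrated numerically. In the sketch below, all prior and conditional probabilities are invented for illustration; it computes $P(aliens=1 \mid late=1)$ with and without also observing that Bob's watch is fine, and confirms that ruling out the watch raises the probability of the alien explanation.

```python
# Hypothetical priors and likelihoods (all numbers invented for illustration).
p_aliens = 0.001
p_watch = 0.1                      # probability Bob's watch is incorrect
# P(late=1 | aliens, watch): Mary appears late if kidnapped or Bob is early.
p_late = {(0, 0): 0.01, (0, 1): 0.9, (1, 0): 0.99, (1, 1): 0.99}

def joint(a, w, l):
    """Joint probability of (aliens=a, watch=w, late=l)."""
    pa = p_aliens if a else 1 - p_aliens
    pw = p_watch if w else 1 - p_watch
    pl = p_late[(a, w)] if l else 1 - p_late[(a, w)]
    return pa * pw * pl

# P(aliens=1 | late=1)
num = sum(joint(1, w, 1) for w in (0, 1))
den = sum(joint(a, w, 1) for a in (0, 1) for w in (0, 1))
p_a_given_l = num / den

# P(aliens=1 | late=1, watch=0): observing the watch is fine removes one explanation.
p_a_given_lw = joint(1, 0, 1) / (joint(0, 0, 1) + joint(1, 0, 1))

print(p_a_given_l < p_a_given_lw)  # True: ruling out the watch raises P(aliens)
```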

### Bayes Ball Algorithm

Goal: We wish to determine whether a given conditional statement such as $X_{A} ~\perp~ X_{B} ~|~ X_{C}$ is true given a directed graph.

The algorithm is as follows:

1. Shade nodes, $~X_{C}~$, that are conditioned on.
2. The initial position of the ball is $~X_{A}~$.
3. If the ball cannot reach $~X_{B}~$, then the nodes $~X_{A}~$ and $~X_{B}~$ must be conditionally independent.
4. If the ball can reach $~X_{B}~$, then the nodes $~X_{A}~$ and $~X_{B}~$ are not necessarily independent.

The biggest challenge in the Bayes Ball Algorithm is to determine what happens to a ball going from node X to node Z as it passes through node Y. The ball could continue its route to Z or it could be blocked. It is important to note that the balls are allowed to travel in any direction, independent of the direction of the edges in the graph.

We use the canonical graphs previously studied to determine the route of a ball traveling through a graph. Using these three graphs we establish base rules which can be extended upon for more general graphs.

#### Markov Chain (serial connection)

Fig.12 (a) When the middle node is shaded, the ball is blocked. (b) When the middle node is not shaded, the ball passes through Y.

A ball traveling from X to Z or from Z to X will be blocked at node Y if this node is shaded. Alternatively, if Y is unshaded, the ball will pass through.

In (Fig. 12(a)), X and Z are conditionally independent ( $X ~\perp~ Z ~|~ Y$ ) while in (Fig.12(b)) X and Z are not necessarily independent.

#### Hidden Cause (diverging connection)

Fig.13 (a) When the middle node is shaded, the ball is blocked. (b) When the middle node is not shaded, the ball passes through Y.

A ball traveling through Y will be blocked at Y if it is shaded. If Y is unshaded, then the ball passes through.

(Fig. 13(a)) demonstrates that X and Z are conditionally independent when Y is shaded.

#### Explaining-Away (converging connection)

A ball traveling through Y is blocked when Y is unshaded. If Y is shaded, then the ball passes through. Hence, X and Z are conditionally independent when Y is unshaded.

Fig.14 (a) When the middle node is shaded, the ball passes through Y. (b) When the middle node is unshaded, the ball is blocked.
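The three base rules above can be combined into a reachability procedure. The sketch below follows Shachter-style Bayes-ball rules (a sketch, not the course's reference implementation): the search state is a node together with the direction the ball is travelling, so serial, diverging, and converging connections are each handled correctly, including the bounce at an observed collider.

```python
from collections import deque

def bayes_ball_reachable(parents, start, observed):
    """Return all unobserved nodes a ball starting at `start` can reach,
    given the shaded set `observed`. `parents` maps node -> list of parents."""
    children = {v: set() for v in parents}
    for v, ps in parents.items():
        for p in ps:
            children[p].add(v)
    # State: (node, direction). 'down' = arrived from a parent, 'up' = from a child.
    queue = deque([(start, 'up')])  # pretend we arrived from a child
    visited, reachable = set(), set()
    while queue:
        node, direction = queue.popleft()
        if (node, direction) in visited:
            continue
        visited.add((node, direction))
        if node not in observed:
            reachable.add(node)
        if direction == 'up' and node not in observed:
            # Unshaded, from a child: pass to all parents and all children.
            for p in parents[node]:
                queue.append((p, 'up'))
            for c in children[node]:
                queue.append((c, 'down'))
        elif direction == 'down':
            if node not in observed:
                # Unshaded, from a parent: pass down to children only.
                for c in children[node]:
                    queue.append((c, 'down'))
            else:
                # Shaded, from a parent: explaining-away bounce back to parents.
                for p in parents[node]:
                    queue.append((p, 'up'))
    reachable.discard(start)
    return reachable

# Six-node graph of Fig.7: parents of each node.
parents = {1: [], 2: [1], 3: [1], 4: [2], 5: [3], 6: [2, 5]}
# Given X2, nothing is reachable from X4, so X4 is independent of the rest.
print(bayes_ball_reachable(parents, 4, observed={2}))  # set()
```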

### Bayes Ball Examples

#### Example 1

In this first example, we wish to identify the behavior of a ball going from X to Y in two-node graphs.

Fig.15 (a) The ball is blocked at Y. (b) The ball passes through Y. (c) The ball passes through Y. (d) The ball is blocked at Y.

The four graphs in (Fig. 15) show different scenarios. In (a), the ball is blocked at Y. In (b), the ball passes through Y. In both of these cases, we use the rules of the Explaining-Away canonical graph (refer to Fig. 14). Finally, for the last two graphs, we use the rules of the Hidden Cause canonical graph (Fig. 13). In (c), the ball passes through Y, while in (d), the ball is blocked at Y.

#### Example 2

Suppose your home is equipped with an alarm system. There are two possible causes for the alarm to ring:

• Your house is being burglarized
• There is an earthquake

Hence, we define the following events:

$burglary =\begin{cases} 1, & \hbox{if your house is being burglarized}, \\ 0, & \hbox{if your house is not being burglarized}. \end{cases}$
$earthquake =\begin{cases} 1, & \hbox{if there is an earthquake}, \\ 0, & \hbox{if there is no earthquake}. \end{cases}$
$alarm =\begin{cases} 1, & \hbox{if your alarm is ringing}, \\ 0, & \hbox{if your alarm is off}. \end{cases}$
$report =\begin{cases} 1, & \hbox{if a police report has been written}, \\ 0, & \hbox{if no police report has been written}. \end{cases}$

The burglary and earthquake events are independent if the alarm does not ring. However, if the alarm does ring, then the burglary and the earthquake events are not necessarily independent. Also, if the alarm rings then it is possible for a police report to be issued.

We can use the Bayes Ball Algorithm to deduce conditional independence properties from the graph. Firstly, consider Fig. 16(a) and assume we are trying to determine whether the burglary and earthquake events are conditionally independent. In Fig. 16(a), a ball starting at the burglary event is blocked at the alarm node.

Fig.16 If we only consider the events burglary, earthquake, and alarm, we find that a ball traveling from burglary to earthquake would be blocked at the alarm node. However, if we also consider the report node, we can find a path between burglary and earthquake.

Nonetheless, this does not prove that the burglary and earthquake events are independent. Indeed, (Fig. 16(b)) disproves this, as we have found an alternate path from burglary to earthquake passing through report. It follows that $burglary ~\cancel{\perp}~ earthquake ~|~ report$.

#### Example 3

Referring to Fig. 17, we wish to determine whether the following conditional independence statements are true:

$\begin{matrix} X_{1} ~\perp~ X_{3} ~|~ X_{2} & (1) \\ X_{1} ~\perp~ X_{5} ~|~ \{X_{3},X_{4}\} & (2) \end{matrix}$
Fig.17 Simple Markov Chain graph.

To determine whether statement (1) is true, we shade node $X_{2}$. This blocks balls traveling from $X_{1}$ to $X_{3}$ and proves that statement (1) is valid.

After shading nodes $X_{3}$ and $X_{4}$ and applying the Bayes Ball Algorithm, we find that a ball travelling from $X_{1}$ to $X_{5}$ is blocked at $X_{3}$. Similarly, a ball going from $X_{5}$ to $X_{1}$ is blocked at $X_{4}$. This proves that statement (2) also holds.

#### Example 4

Fig.18 Directed graph.

Consider Fig. 18. Using the Bayes Ball Algorithm we wish to determine whether each of the following statements is valid:

$\begin{matrix} X_{4} ~\perp~ \{X_{1},X_{3}\} ~|~ X_{2} & (1) \\ X_{1} ~\perp~ X_{6} ~|~ \{X_{2},X_{3}\} & (2) \\ X_{2} ~\perp~ X_{3} ~|~ \{X_{1},X_{6}\} & (3) \end{matrix}$
Fig.19 (a) A ball cannot pass through $X_{2}$ or $X_{6}$. (b) A ball cannot pass through $X_{2}$ or $X_{3}$. (c) A ball can pass from $X_{2}$ to $X_{3}$.

To check statement (1), we try to find a path from $X_{4}$ to $X_{1}$ or $X_{3}$ when $X_{2}$ is shaded (refer to Fig. 19(a)). Since there is no such route, we conclude that statement (1) is true.

Similarly, we can show that there is no path between $X_{1}$ and $X_{6}$ when $X_{2}$ and $X_{3}$ are shaded (refer to Fig. 19(b)). Hence, statement (2) is true.

Finally, Fig. 19(c) shows that there is a route from $X_{2}$ to $X_{3}$ when $X_{1}$ and $X_{6}$ are shaded. This proves that statement (3) is false.

Theorem 2.
Let $D_{1} = \{ p(x_{V}) : p(x_{V}) = \prod_{i=1}^{n}{p(x_{i} ~|~ x_{\pi_{i}})}\}$ be the set of distributions that factor as a product of local conditional probabilities according to a directed graph.
Let $D_{2} = \{ p(x_{V}) : p(x_{V})$ satisfies all conditional independence statements associated with the graph$\}$.
Then $D_{1} = D_{2}$.

#### Example 5

Given the Bayesian network shown above (Fig. 18), determine whether the following statements are true or false.

a.) $X_4 \perp \{X_1, X_3\} ~|~ X_2$

Ans. True

b.) $X_1 \perp X_6 ~|~ \{X_2, X_3\}$

Ans. True

c.) $X_2 \perp X_3 ~|~ \{X_1, X_6\}$

Ans. False


## Undirected Graphical Model

Generally, graphical models are divided into two major classes: directed graphs and undirected graphs. Directed graphs and their characteristics were described previously. In this section we discuss undirected graphical models, also known as Markov random fields. We can define an undirected graphical model by a graph $G = (V, E)$, where $V$ is a set of vertices corresponding to a set of random variables and $E$ is a set of undirected edges, as shown in (Fig. 20).

#### Conditional independence

For directed graphs, the Bayes ball method was defined to determine the conditional independence properties of a given graph. We can also employ the Bayes ball algorithm to examine the conditional independence of undirected graphs; here the Bayes ball rules are simpler and more intuitive. Considering (Fig. 21), a ball can be thrown either from x to z or from z to x if y is not observed: if y is not observed, a ball thrown from x can reach z and vice versa. On the contrary, given a shaded y, the node blocks the ball and makes x and z conditionally independent. With this definition one can state that in an undirected graph, a node is conditionally independent of its non-neighbours given its neighbours. Technically speaking, $X_A$ is independent of $X_C$ given $X_B$ if the set of nodes $X_B$ separates the nodes $X_A$ from the nodes $X_C$. Hence, if every path from a node in $X_A$ to a node in $X_C$ includes at least one node in $X_B$, then we claim that $X_A \perp X_C | X_B$.

#### Question

Is it possible to convert undirected models to directed models or vice versa?

In order to answer this question, consider (Fig. 22), which illustrates an undirected graph with four nodes: $X$, $Y$, $Z$ and $W$. We can establish two facts using the Bayes ball method:

$\begin{matrix} X \perp Y | \{W,Z\} & & \\ W \perp Z | \{X,Y\} \\ \end{matrix}$
Fig.22 There is no directed equivalent to this graph.

It is simple to see that no directed graph satisfies both conditional independence properties. Recalling that directed graphs are acyclic, converting this undirected graph to a directed one results in at least one node with two inward-pointing arrows (a v-structure). Without loss of generality we can assume that node $Z$ has two inward-pointing arrows. By the conditional independence semantics of directed graphs, we would then have $X \perp Y|W$, yet the $X \perp Y|\{W,Z\}$ property would not hold. On the other hand, (Fig. 23) depicts a directed graph that is characterized by the singleton independence statement $X \perp Y$; no undirected graph on three nodes can be characterized by this singleton statement. In general, if we consider the set of all distributions over $n$ random variables, one subset can be represented by directed graphical models and another subset by undirected graphical models. There is a narrow intersection region between these two subsets, in which a probabilistic graphical model may be represented by either a directed or an undirected graph.

Fig.23 There is no undirected equivalent to this graph.

#### Parameterization

For undirected graphical models, we would like to obtain a "local" parameterization, as we did in the case of directed graphical models. For directed graphical models, "local" had the interpretation of a node together with its parents, $\{i, \pi_i\}$, and the joint probability and the marginals were defined as products of such local conditional probabilities, inspired by the chain rule of probability theory. In undirected graphical models, "local" functions cannot be represented using conditional probabilities, and we must abandon conditional probabilities altogether. The factors therefore no longer have a probabilistic interpretation, and we may choose the "local" functions arbitrarily. However, any "local" function for an undirected graphical model should satisfy the following condition: if $X_i$ and $X_j$ are not linked, they are conditionally independent given all other nodes, so the "local" functions should factor the joint probability in such a way that $X_i$ and $X_j$ are placed in different factors.

Before defining the "local" functions, we have to introduce a new term from graph theory: the clique. A clique is a subset of fully connected nodes in a graph G: every node in the clique C is directly connected to every other node in C. A maximal clique is a clique to which no other node of G can be added without the set ceasing to be a clique. Considering the undirected graph shown in (Fig. 24), we can list all the cliques as follows:

Fig.24 Undirected graph.

• $\{X_1, X_3\}$
• $\{X_1, X_2\}$
• $\{X_3, X_5\}$
• $\{X_2, X_4\}$
• $\{X_5, X_6\}$
• $\{X_2, X_5\}$
• $\{X_2, X_5, X_6\}$

According to the definition, $\{X_2,X_5\}$ is not a maximal clique, since we can add one more node, $X_6$, and still have a clique. Let $C$ be the set of all maximal cliques in $G(V, E)$:

$C = \{c_1, c_2,..., c_n\}$

where in the aforementioned example $c_1$ would be $\{X_1, X_3\}$, and so on. We define the joint probability over all nodes as:

$P(x_{V}) = \frac{1}{Z} \prod_{c_i \in C} \psi_{c_i} (x_{c_i})$

where $\psi_{c_i} (x_{c_i})$ is an arbitrary function with some restrictions: it must be nonnegative and real-valued, but it need not be a probability. It is defined over each clique and is usually called a potential function. $Z$ is a normalization factor, determined by:

$Z = \sum_{X_V} { \prod_{c_i \in C} \psi_{c_i} (x_{c_i})}$

In fact, the normalization factor $Z$ is often unimportant, since most of the time it cancels out during computation. For instance, to calculate the conditional probability $P(X_A | X_B)$, $Z$ cancels between the numerator $P(X_A, X_B)$ and the denominator $P(X_B)$.

As was mentioned above, sum-product of the potential functions determines the joint probability over all nodes. Because of the fact that potential functions are arbitrarily defined, assuming exponential functions for $\psi_{c_i} (x_{c_i})$ simplifies and reduces the computations. Let potential function be:

$\psi_{c_i} (x_{c_i}) = \exp (- H_{c_i}(x_{c_i}))$

the joint probability is given by:

$P(x_{V}) = \frac{1}{Z} \prod_{c_i \in C} \exp(-H_{c_i}(x_{c_i})) = \frac{1}{Z} \exp \left(- \sum_{c_i \in C} {H_{c_i} (x_{c_i})}\right)$
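As a concrete sketch, the factorization and the computation of $Z$ can be written in a few lines of Python. The clique set below is the one from Fig.24 (nodes indexed 0 to 5), while the potential function is a hypothetical stand-in: any non-negative, real-valued function of the clique's configuration is allowed by the definition.

```python
from itertools import product

# Maximal cliques of the six-node graph of Fig.24, nodes indexed 0..5
cliques = [(0, 1), (0, 2), (1, 3), (2, 4), (1, 4, 5)]

# Hypothetical potential: any non-negative, real-valued function will do
def psi(clique, values):
    return 1.0 + sum(values)

# Unnormalized product of clique potentials for one configuration x
def unnormalized(x):
    p = 1.0
    for c in cliques:
        p *= psi(c, tuple(x[i] for i in c))
    return p

# Normalization factor Z: sum the product of potentials over all configurations
states = list(product([0, 1], repeat=6))
Z = sum(unnormalized(x) for x in states)

def joint(x):
    return unnormalized(x) / Z
```

Note that computing $Z$ this way enumerates all $2^6$ configurations; this is exactly the exponential blow-up that the inference algorithms below are designed to avoid.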


There is a lot of information contained in the joint probability distribution $P(x_{V})$. We define six tasks, listed below, that we would like to accomplish with various algorithms for a given distribution $P(x_{V})$.

• Marginalization

Given $P(x_{V})$ find $P(x_{A})$.
e.g. Given $P(x_1, x_2, \ldots , x_6)$ find $P(x_2, x_6)$

• Conditioning

Given $P(x_V)$ find $P(x_A|x_B) = \frac{P(x_A, x_B)}{P(x_B)}$ .

• Evaluation

Evaluate the probability for a certain configuration.

• Completion

Compute the most probable configuration. In other words, find the configuration for which $P(x_A|x_B)$ is largest for a specific combination of $A$ and $B$.

• Simulation

Generate a random configuration for $P(x_V)$ .

• Learning

We would like to find parameters for $P(x_V)$ .
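For intuition, the first two tasks can be carried out by brute force on a small joint distribution. The three-variable table below is made up purely for illustration; for large graphs this enumeration is exactly what becomes intractable.

```python
from itertools import product

# A hypothetical joint distribution over three binary variables (x1, x2, x3),
# stored as a table; the weights are arbitrary and normalized to sum to 1
weights = {x: 1.0 + x[0] + 2 * x[1] * x[2] for x in product([0, 1], repeat=3)}
total = sum(weights.values())
P = {x: w / total for x, w in weights.items()}

# Marginalization: P(x2) = sum over x1 and x3 of P(x1, x2, x3)
def marginal_x2(v):
    return sum(p for x, p in P.items() if x[1] == v)

# Conditioning: P(x1 | x3 = 1) = P(x1, x3 = 1) / P(x3 = 1)
def conditional_x1_given_x3eq1(v1):
    num = sum(p for x, p in P.items() if x[0] == v1 and x[2] == 1)
    den = sum(p for x, p in P.items() if x[2] == 1)
    return num / den
```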

### Exact Algorithms

We will be looking at four exact algorithms. An exact algorithm is an algorithm that finds the exact answer to one of the above tasks. The main disadvantage of exact algorithms is that for graphs with a large number of nodes they take a long time to produce a result. When this occurs we can use inexact (approximate) algorithms to find a useful estimate more efficiently.

• Elimination
• Sum-Product
• Max-Product
• Junction Tree

# Elimination Algorithm

## Elimination Algorithm on Directed Graphs

Given a graph G =(V,E), an evidence set E, and a query node F, we first choose an elimination ordering I such that F appears last in this ordering. The following figure shows the steps required to perform the elimination algorithm for probabilistic inference on directed graphs:

    ELIMINATE(G, E, F)
        INITIALIZE(G, F)
        EVIDENCE(E)
        UPDATE(G)
        NORMALIZE(F)

    INITIALIZE(G, F)
        Choose an ordering $I$ such that $F$ appears last
        For each node $X_i$ in $V$
            Place $p(x_i|x_{\pi_i})$ on the active list
        End

    EVIDENCE(E)
        For each $i$ in $E$
            Place $\delta(x_i,\overline{x_i})$ on the active list
        End

    UPDATE(G)
        For each $i$ in $I$
            Find all potentials on the active list that reference $x_i$ and remove them from the active list
            Let $\phi_i(x_{T_i})$ denote the product of these potentials
            Let $m_i(x_{S_i})=\sum_{x_i}\phi_i(x_{T_i})$
            Place $m_i(x_{S_i})$ on the active list
        End

    NORMALIZE(F)
        $p(x_F|\overline{x_E}) \leftarrow \phi_F(x_F)/\sum_{x_F}\phi_F(x_F)$

Example:
For the graph in figure 21, $G =(V,E)$. Consider once again that node $x_1$ is the query node and $x_6$ is the evidence node.
$I = \left\{6,5,4,3,2,1\right\}$ (1 should be the last node, ordering is crucial)

Fig.21 Six node example.

We must now create an active list. There are two rules that must be followed in order to create this list.

1. For $i\in{V}$ place $p(x_i|x_{\pi_i})$ in the active list.
2. For $i\in{E}$ place $\delta(x_i,\overline{x_i})$ in the active list.

Here, our active list is: $p(x_1), p(x_2|x_1), p(x_3|x_1), p(x_4|x_2), p(x_5|x_3),\underbrace{p(x_6|x_2, x_5)\delta{(x_6,\overline{x_6})}}_{\phi_6(x_2,x_5, x_6),\ \sum_{x_6}{\phi_6}=m_{6}(x_2,x_5) }$

We first eliminate node $X_6$. We place $m_{6}(x_2,x_5)$ on the active list, having removed $X_6$. We now eliminate $X_5$.

$\underbrace{p(x_5|x_3)*m_6(x_2,x_5)}_{m_5(x_2,x_3)}$

Likewise, we can eliminate $X_4$, $X_3$, and $X_2$ (which yields the unnormalized conditional probability $p(x_1|\overline{x_6})$), and finally $X_1$. This yields $m_1 = \sum_{x_1}{\phi_1(x_1)}$, which is the normalization factor, $p(\overline{x_6})$.
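The UPDATE step above can be sketched in code, with each factor on the active list stored as a table. The helper below is a simplified, hypothetical implementation for binary variables, not the notation-faithful algorithm; it multiplies together every potential referencing the eliminated variable, sums that variable out, and places the resulting message back on the active list.

```python
from itertools import product

# Each factor is a pair (variables, table), where the table maps a tuple of
# values (in the order of `variables`) to a non-negative number.
def eliminate(factors, order, domain=(0, 1)):
    active = list(factors)  # the "active list"
    for var in order:
        # take every potential that references `var` off the active list
        touching = [f for f in active if var in f[0]]
        active = [f for f in active if var not in f[0]]
        keep = sorted(set(v for vs, _ in touching for v in vs) - {var})
        table = {}
        for vals in product(domain, repeat=len(keep)):
            assign = dict(zip(keep, vals))
            total = 0.0
            for x in domain:  # sum the product of the potentials over `var`
                assign[var] = x
                prod = 1.0
                for vs, t in touching:
                    prod *= t[tuple(assign[v] for v in vs)]
                total += prod
            table[vals] = total
        active.append((keep, table))  # place the message m on the active list
    return active
```

For the two-node chain $p(x_1)p(x_2|x_1)$, eliminating $x_2$ leaves a factor list whose product is just the marginal $p(x_1)$.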

## Elimination Algorithm on Undirected Graphs

File:graph.png
Fig.22 Undirected graph G'

The first task is to find the maximal cliques and their associated potential functions.
maximal cliques: $\left\{x_1, x_2\right\}$, $\left\{x_1, x_3\right\}$, $\left\{x_2, x_4\right\}$, $\left\{x_3, x_5\right\}$, $\left\{x_2,x_5,x_6\right\}$
potential functions: $\varphi{(x_1,x_2)},\varphi{(x_1,x_3)},\varphi{(x_2,x_4)}, \varphi{(x_3,x_5)}$ and $\varphi{(x_2,x_5,x_6)}$

$p(x_1|\overline{x_6})=p(x_1,\overline{x_6})/p(\overline{x_6}) \qquad (*)$

$p(x_1,\overline{x_6})=\frac{1}{Z}\sum_{x_2,x_3,x_4,x_5,x_6}\varphi{(x_1,x_2)}\varphi{(x_1,x_3)}\varphi{(x_2,x_4)}\varphi{(x_3,x_5)}\varphi{(x_2,x_5,x_6)}\delta{(x_6,\overline{x_6})}$

The $\frac{1}{Z}$ looks crucial, but in fact it has no effect because in (*) both the numerator and the denominator contain the $\frac{1}{Z}$ term, so it cancels.
The general rule for elimination in an undirected graph is that we can remove a node as long as we connect all of the neighbours of that node together. Effectively, we form a clique out of the neighbours of that node. The algorithm used to eliminate nodes in an undirected graph is:

 

    UndirectedGraphElimination(G, I)
        For each node $X_i$ in $I$
            Connect all of the remaining neighbours of $X_i$
            Remove $X_i$ from the graph
        End

 

Example:
For the graph $G$ in figure 24:
when we remove $x_1$, $G$ becomes as in figure 25,
while if we remove $x_2$, $G$ becomes as in figure 26.

An interesting thing to point out is that the order of elimination matters a great deal. Consider the two results above: removing one node reduces the graph's complexity only slightly, while removing the other increases it significantly. We care about the complexity of the graph because it determines the number of calculations required to answer queries about the graph. If we had a huge graph with thousands of nodes, the node-removal order would be key to the running time of the algorithm. Unfortunately, there is no efficient algorithm that can produce the optimal node-removal order such that the elimination algorithm runs quickly.
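The effect of the ordering can be quantified by counting "fill-in" edges, the edges added when a node's remaining neighbours are connected before the node is removed. The helper below is a hypothetical sketch; the usage in the assertions uses the edge set of the graph in Fig.24, where eliminating $x_1$ first adds a single edge but eliminating $x_2$ first adds five.

```python
# Count the fill-in edges created by UndirectedGraphElimination for a
# given ordering; fewer fill-ins generally means cheaper inference.
def fill_in_count(edges, order):
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    added = 0
    for v in order:
        nbrs = list(adj.get(v, ()))
        # connect all remaining neighbours of v pairwise
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if b not in adj[a]:
                    adj[a].add(b)
                    adj[b].add(a)
                    added += 1
        # remove v from the graph
        for u in nbrs:
            adj[u].discard(v)
        adj.pop(v, None)
    return added
```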

## Moralization

So far we have shown how to use elimination to successively remove nodes from an undirected graph. We know that this is useful in the process of marginalization. We can now turn to the question of what will happen when we have a directed graph. It would be nice if we could somehow reduce the directed graph to an undirected form and then apply the previous elimination algorithm. This reduction is called moralization and the graph that is produced is called a moral graph.

To moralize a graph we first need to connect the parents of each node together. This makes sense intuitively because the parents of a node need to be considered together in the undirected graph and this is only done if they form a type of clique. By connecting them together we create this clique.

After the parents are connected together we can just drop the orientation on the edges in the directed graph. By removing the directions we force the graph to become undirected.

The previous elimination algorithm can now be applied to the new moral graph. We do this by treating the conditional probabilities $P(x_i|x_{\pi_i})$ of the directed graph as the potential functions $\psi_{c_i}(x_{c_i})$ of the undirected graph.

Example:
I = $\left\{x_6,x_5,x_4,x_3,x_2,x_1\right\}$
When we moralize the directed graph in figure 27, we obtain the undirected graph in figure 28.

File:moral.png
Fig.27 Original Directed Graph
File:moral3.png
Fig.28 Moral Undirected Graph
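Moralization itself is mechanical enough to sketch in a few lines: marry the parents of each node, then drop the edge orientations. The parent map in the assertions is that of the six-node example of Fig.21, where $x_6$ has parents $x_2$ and $x_5$, so moralization adds exactly the edge $x_2$–$x_5$.

```python
# Sketch of moralization: connect the parents of each node pairwise,
# then drop the direction on every edge.
# `parents` maps node -> list of its parents in the directed graph.
def moralize(parents):
    edges = set()
    for child, ps in parents.items():
        for p in ps:                      # drop orientation on each edge
            edges.add(frozenset((p, child)))
        for i in range(len(ps)):          # marry the parents pairwise
            for j in range(i + 1, len(ps)):
                edges.add(frozenset((ps[i], ps[j])))
    return edges
```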

# Elimination Algorithm on Trees

Definition of a tree:
A tree is an undirected graph in which any two vertices are connected by exactly one simple path. In other words, any connected graph without cycles is a tree.

If we have a directed graph then we must moralize it first. If the moral graph is a tree then the directed graph is also considered a tree.

## Belief Propagation Algorithm (Sum Product Algorithm)

One of the main disadvantages of the elimination algorithm is that the ordering of the nodes determines the number of calculations required to produce a result. The optimal ordering is difficult to find, and without a decent ordering the algorithm may be very slow. In response to this we can introduce the sum-product algorithm. It has one major advantage over the elimination algorithm: it is faster. The sum-product algorithm has the same complexity when computing the probability of one node as it does when computing the probabilities of all the nodes in the graph. Unfortunately, the sum-product algorithm also has one disadvantage: unlike the elimination algorithm, it cannot be used on any graph. The sum-product algorithm works only on trees.

For undirected graphs, if there is only one path between any pair of nodes then the graph is a tree (Fig.29). If we have a directed graph then we must moralize it first; if the moral graph is a tree then the directed graph is also considered a tree (Fig.30).

Fig.29 Undirected tree
Fig.30 Directed tree

For the undirected graph $G(V, E)$ (Fig.29) we can write the joint probability distribution function in the following way.

$P(x_v) = \frac{1}{Z(\psi)}\prod_{i \in V}\psi(x_i)\prod_{(i,j) \in E}\psi(x_i, x_j)$

We know that in general we can not convert a directed graph into an undirected graph. There is however an exception to this rule when it comes to trees. In the case of a directed tree there is an algorithm that allows us to convert it to an undirected tree with the same properties.
Take the above example (Fig.30) of a directed tree. We can write the joint probability distribution function as:

$P(x_v) = P(x_1)P(x_2|x_1)P(x_3|x_1)P(x_4|x_2)P(x_5|x_2)$

If we want to convert this graph to the undirected form shown in (Fig.29) then we can use the following set of rules.

• If $\gamma$ is the root then: $\psi(x_\gamma) = P(x_\gamma)$.
• If $\gamma$ is NOT the root then: $\psi(x_\gamma) = 1$.
• If $\left\lbrace i \right\rbrace$ = $\pi_j$ then: $\psi(x_i, x_j) = P(x_j | x_i)$.

So now we can rewrite the above equation for (Fig.30) as:

$P(x_v) = \frac{1}{Z(\psi)}\psi(x_1)...\psi(x_5)\psi(x_1, x_2)\psi(x_1, x_3)\psi(x_2, x_4)\psi(x_2, x_5)$
$= \frac{1}{Z(\psi)}P(x_1)P(x_2|x_1)P(x_3|x_1)P(x_4|x_2)P(x_5|x_2)$

## Elimination Algorithm on a Tree

Fig.31 Message-passing in Elimination Algorithm

We will derive the Sum-Product algorithm from the point of view of the Eliminate algorithm. To compute the marginal of $x_1$ in Fig.31,

$\begin{matrix} p(x_1)&=&\sum_{x_2}\sum_{x_3}\sum_{x_4}\sum_{x_5}p(x_1)p(x_2|x_1)p(x_3|x_2)p(x_4|x_2)p(x_5|x_3) \\ &=&p(x_1)\sum_{x_2}p(x_2|x_1)\sum_{x_3}p(x_3|x_2)\sum_{x_4}p(x_4|x_2)\underbrace{\sum_{x_5}p(x_5|x_3)} \\ &=&p(x_1)\sum_{x_2}p(x_2|x_1)\underbrace{\sum_{x_3}p(x_3|x_2)m_5(x_3)}\underbrace{\sum_{x_4}p(x_4|x_2)} \\ &=&p(x_1)\underbrace{\sum_{x_2}m_3(x_2)m_4(x_2)} \\ &=&p(x_1)m_2(x_1) \end{matrix}$

where,

$\begin{matrix} m_5(x_3)=\sum_{x_5}p(x_5|x_3)=\sum_{x_5}\psi(x_5)\psi(x_5,x_3)=\mathbf{m_{53}(x_3)} \\ m_4(x_2)=\sum_{x_4}p(x_4|x_2)=\sum_{x_4}\psi(x_4)\psi(x_4,x_2)=\mathbf{m_{42}(x_2)} \\ m_3(x_2)=\sum_{x_3}p(x_3|x_2)m_5(x_3)=\sum_{x_3}\psi(x_3)\psi(x_3,x_2)m_5(x_3)=\mathbf{m_{32}(x_2)}, \end{matrix}$

which is essentially (potential of the node)$\times$(potential of the edge)$\times$(message from the child).

The term "$m_{ji}(x_i)$" represents the intermediate factor between the eliminated variable, $j$, and its remaining neighbour, $i$. Thus, in the above case, we will use $m_{53}(x_3)$ to denote $m_5(x_3)$, $m_{42}(x_2)$ to denote $m_4(x_2)$, and $m_{32}(x_2)$ to denote $m_3(x_2)$. We refer to the intermediate factor $m_{ji}(x_i)$ as a "message" that $j$ sends to $i$ (Fig.31).

In general,
$\begin{matrix} m_{ji}(x_i)=\sum_{x_j}\left( \psi(x_j)\psi(x_j,x_i)\prod_{k\in{\mathcal{N}(j)\setminus i}}m_{kj}(x_j)\right) \end{matrix}$

## Elimination To Sum Product Algorithm

Fig.32 All of the messages needed to compute all singleton marginals

The Sum-Product algorithm allows us to compute all marginals in the tree by passing messages inward from the leaves of the tree to an (arbitrary) root, and then passing it outward from the root to the leaves, again using the above equation at each step. The net effect is that a single message will flow in both directions along each edge. (See Fig.32) Once all such messages have been computed using the above equation, we can compute desired marginals.

As shown in Fig.32, to compute the marginal of $X_1$ using elimination, we eliminate $X_5$, which involves computing a message $m_{53}(x_3)$, then eliminate $X_4$ and $X_3$ which involves messages $m_{32}(x_2)$ and $m_{42}(x_2)$. We subsequently eliminate $X_2$, which creates a message $m_{21}(x_1)$.

Suppose that we want to compute the marginal of $X_2$. As shown in Fig.33, we first eliminate $X_5$, which creates $m_{53}(x_3)$, and then eliminate $X_3$, $X_4$, and $X_1$, passing messages $m_{32}(x_2)$, $m_{42}(x_2)$ and $m_{12}(x_2)$ to $X_2$.

Fig.33 The messages formed when computing the marginal of $X_2$

Since the messages can be "reused", the marginals over all nodes can be obtained by computing all possible messages, of which there are only two per edge — far fewer than the number of possible elimination orderings.

The Sum-Product algorithm is not only based on the above equation, but also Message-Passing Protocol. Message-Passing Protocol tells us that a node can send a message to a neighboring node when (and only when) it has received messages from all of its other neighbors.
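A compact sketch of the sum-product algorithm on a tree: messages are computed recursively and memoized, so each directed edge's message is computed exactly once, which is how the protocol above is respected. The potentials in the assertions come from the directed tree of Fig.30 converted with the rules above ($\psi(x_1)=P(x_1)$, the other node potentials equal to 1, and edge potentials equal to the conditionals); the numeric tables themselves are made up.

```python
# Sum-product on a tree. nbrs maps node -> set of neighbours;
# node_pot(i, xi) and edge_pot(i, j, xi, xj) are the potentials psi.
def sum_product(nodes, nbrs, node_pot, edge_pot, domain=(0, 1)):
    msgs = {}  # (j, i) -> message m_ji, stored as a dict x_i -> value

    def message(j, i):
        if (j, i) not in msgs:
            msgs[(j, i)] = {
                xi: sum(node_pot(j, xj) * edge_pot(j, i, xj, xi)
                        * incoming(j, i, xj)
                        for xj in domain)
                for xi in domain}
        return msgs[(j, i)]

    # product of messages arriving at j from all neighbours except `exclude`
    def incoming(j, exclude, xj):
        p = 1.0
        for k in nbrs[j]:
            if k != exclude:
                p *= message(k, j)[xj]
        return p

    marginals = {}
    for i in nodes:
        unnorm = {xi: node_pot(i, xi) * incoming(i, None, xi) for xi in domain}
        Z = sum(unnorm.values())
        marginals[i] = {xi: v / Z for xi, v in unnorm.items()}
    return marginals
```

Because the messages are cached in `msgs`, asking for every singleton marginal costs no more than one inward and one outward pass over the edges.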

### For Directed Graph

Previously we stated that:

$p(x_F,\bar{x}_E)=\sum_{x_E}p(x_F,x_E)\delta(x_E,\bar{x}_E),$

Using the above equation, we find the marginal of $\bar{x}_E$:

$\begin{matrix} p(\bar{x}_E)&=&\sum_{x_F}\sum_{x_E}p(x_F,x_E)\delta(x_E,\bar{x}_E) \\ &=&\sum_{x_v}p(x_F,x_E)\delta (x_E,\bar{x}_E) \end{matrix}$

Now we denote:

$p^E(x_v) = p(x_v) \delta (x_E,\bar{x}_E)$

Since the sets $F$ and $E$ together make up $\mathcal{V}$, $p(x_v)$ is equal to $p(x_F,x_E)$. Thus we can substitute this definition into the two equations above, which become:

$\begin{matrix} p(x_F,\bar{x}_E) = \sum_{x_E} p^E(x_v), \\ p(\bar{x}_E) = \sum_{x_v}p^E(x_v) \end{matrix}$

We are interested in finding the conditional probability, so we substitute the previous results into the conditional probability equation:

$\begin{matrix} p(x_F|\bar{x}_E)&=&\frac{p(x_F,\bar{x}_E)}{p(\bar{x}_E)} \\ &=&\frac{\sum_{x_E}p^E(x_v)}{\sum_{x_v}p^E(x_v)} \end{matrix}$

$p^E(x_v)$ is an unnormalized version of conditional probability, $p(x_F|\bar{x}_E)$.

### For Undirected Graphs

We denote $\psi^E$ to be:

$\psi^E(x_i) = \begin{cases} \psi(x_i)\delta(x_i,\bar{x}_i), & \text{if } i\in{E} \\ \psi(x_i), & \text{otherwise} \end{cases}$

## Max-Product

Because multiplication distributes over max as well as over sum (for $a \geq 0$):

$\max(ab,ac) = a \max(b,c)$

Formally, both the sum-product and max-product are commutative semirings.

We would like to find the Maximum probability that can be achieved by some set of random variables given a set of configurations. The algorithm is similar to the sum product except we replace the sum with max.

File:suks.png
Fig.33 Max Product Example
$\begin{matrix} \max_{x_V}{P(x_V)} & = & \max_{x_1}\max_{x_2}\max_{x_3}\max_{x_4}\max_{x_5}{P(x_1)P(x_2|x_1)P(x_3|x_2)P(x_4|x_2)P(x_5|x_3)} \\ & = & \max_{x_1}{P(x_1)}\max_{x_2}{P(x_2|x_1)}\max_{x_3}{P(x_3|x_2)}\max_{x_4}{P(x_4|x_2)}\max_{x_5}{P(x_5|x_3)} \end{matrix}$


$m_{ji}(x_i)=\sum_{x_j}{\psi^{E}{(x_j)}\psi{(x_i,x_j)}\prod_{k\in{N(j)\backslash{i}}}m_{kj}(x_j)}$
$m^{max}_{ji}(x_i)=\max_{x_j}{\psi^{E}{(x_j)}\psi{(x_i,x_j)}\prod_{k\in{N(j)\backslash{i}}}m^{max}_{kj}(x_j)}$

Example: Consider the graph in Figure.33.

$m^{max}_{53}(x_3)=\max_{x_5}{\psi^{E}{(x_5)}\psi{(x_3,x_5)}}$
$m^{max}_{32}(x_2)=\max_{x_3}{\psi^{E}{(x_3)}\psi{(x_2,x_3)}m^{max}_{53}(x_3)}$

## Maximum configuration

We would also like to find the value of the $x_i$s which produces the largest value for the given expression. To do this we replace the max from the previous section with argmax.
$m^{argmax}_{53}(x_3)= argmax_{x_5}\psi{(x_5)}\psi{(x_5,x_3)}$
$\log{m^{max}_{ji}(x_i)}=\max_{x_j}\left[\log{\psi^{E}{(x_j)}}+\log{\psi{(x_i,x_j)}}+\sum_{k\in{N(j)\backslash{i}}}\log{m^{max}_{kj}{(x_j)}}\right]$
In many cases we want to use the log of this expression because the product of many probabilities tends to be very small, which causes numerical underflow. Also, it is important to note that this also works in the continuous case, where we replace the summation sign with an integral.
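A minimal max-product sketch on a chain, recording argmaxes during the pass and backtracking to recover the maximizing configuration. The function and its potential tables are illustrative, not the notes' notation; a general tree version would pass messages exactly as sum-product does, with max in place of sum.

```python
# Max-product on a chain x1 - x2 - ... - xn with node potentials
# node_pot[j][xj] and edge potentials edge_pot[(j-1, j)][x_{j-1}][x_j].
def max_product_chain(node_pot, edge_pot, n, domain=(0, 1)):
    msg = {x: 1.0 for x in domain}   # message flowing from node n toward node 1
    back = [None] * (n + 1)          # back[j][x_{j-1}] = maximizing value of x_j
    for j in range(n, 1, -1):
        new, arg = {}, {}
        for xi in domain:
            best, best_x = -1.0, None
            for xj in domain:
                v = node_pot[j][xj] * edge_pot[(j - 1, j)][xi][xj] * msg[xj]
                if v > best:
                    best, best_x = v, xj
            new[xi], arg[xi] = best, best_x
        msg, back[j] = new, arg
    # maximize at the first node, then backtrack through the recorded argmaxes
    x1 = max(domain, key=lambda x: node_pot[1][x] * msg[x])
    config = {1: x1}
    for j in range(2, n + 1):
        config[j] = back[j][config[j - 1]]
    return config
```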

# Parameter Learning

The goal of graphical models is to build a useful representation of the input data in order to understand and design learning algorithms. A graphical model provides a representation of the joint probability distribution over the nodes (random variables). One of the most important features of a graphical model is that it represents the conditional independences between the graph nodes. This is achieved using local functions which are combined to form a factorization of the joint probability distribution, and hence capture the conditional independences in that distribution. However, that does not mean the graphical model represents all the necessary independence assumptions.

## Basic Statistical Problems

In statistics there are a number of different 'standard' problems that always appear in one form or another. They are as follows:

• Regression
• Classification
• Clustering
• Density Estimation

### Regression

In regression we have a set of data points $(x_i, y_i)$ for $i = 1...n$ and we would like to determine the way that the variables x and y are related. In certain cases such as (Fig.34) we try to fit a line (or other type of function) through the points in such a way that it describes the relationship between the two variables.

File:regression.png
Fig.34 Regression

Once the relationship has been determined we can give a functional value to the following expression. In this way we can determine the value (or distribution) of y if we have the value for x. $P(y|x)=\frac{P(y,x)}{P(x)} = \frac{P(y,x)}{\int_{y}{P(y,x)dy}}$

### Classification

In classification we also have a set of $n$ data points, each containing a set of features $(x_1, x_2, \ldots, x_d)$, and we would like to assign each data point to one of a given number of classes $y$. Consider the example in (Fig.35) where two sets of features have been divided into the set + and - by a line. The purpose of classification is to find this line and then place any new points into one group or the other.

Fig.35 Classify Points into Two Sets

We would like to obtain the probability distribution of the following equation, where $c$ is the class and $x$ and $y$ are the data point's coordinates. In simple terms, we would like to find the probability that a point belongs to class $c$ given that its coordinates are $(x, y)$.

$P(c|x,y)=\frac{P(c,x,y)}{P(x,y)} = \frac{P(c,x,y)}{\sum_{c}{P(c,x,y)}}$

### Clustering

Clustering is an unsupervised learning method that assigns data points to groups (clusters) based on the similarity between the data points. Clustering is similar to classification except that we do not know the groups before we gather and examine the data. We would like to find the probability distribution of the following equation without knowing the value of $c$.

$P(c|x)=\frac{P(c,x)}{P(x)}\ \ c\ unknown$

### Density Estimation

Density Estimation is the problem of modeling a probability density function p(x), given a finite number of data points drawn from that density function.

$P(y|x)=\frac{P(y,x)}{P(x)} \ \ x\ unknown$

We can use graphs to represent the four types of statistical problems that have been introduced so far. The first graph (Fig.36(a)) can be used to represent either the Regression or the Classification problem because both the X and the Y variables are known. The second graph (Fig.36(b)) we see that the value of the Y variable is unknown and so we can tell that this graph represents the Clustering and Density Estimation situation.

Fig.36(a) Regression or classification (b) Clustering or Density Estimation

## Likelihood Function

Recall that the probability model $p(x|\theta)$ has the intuitive interpretation of assigning probability to X for each fixed value of $\theta$. In the Bayesian approach this intuition is formalized by treating $p(x|\theta)$ as a conditional probability distribution. In the Frequentist approach, however, we treat $p(x|\theta)$ as a function of $\theta$ for fixed x, and refer to $p(x|\theta)$ as the likelihood function.

$L(\theta;x)= p(x|\theta)$

where $p(x|\theta)$ is the likelihood L($\theta, x$)

$l(\theta;x)=\log(p(x|\theta))$

where $\log(p(x|\theta))$ is the log likelihood $l(\theta; x)$

Since $p(x)$ in the denominator of Bayes Rule is independent of $\theta$ we can consider it as a constant and we can draw the conclusion that:

$p(\theta|x) \propto p(x|\theta)p(\theta)$

Symbolically, we can interpret this as follows:

$Posterior \propto likelihood \times prior$

where we see that in the Bayesian approach the likelihood can be viewed as a data-dependent operator that transforms between the prior probability and the posterior probability.
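A tiny numerical illustration of posterior $\propto$ likelihood $\times$ prior: the discrete prior over three candidate values of $\theta$ and the single observed head ($P(\text{head}|\theta)=\theta$) are made up for the sketch, but the normalization by $P(x)$, which does not depend on $\theta$, is exactly the step the proportionality hides.

```python
# Discrete parameter theta in {0.2, 0.5, 0.8} with a uniform prior;
# one head is observed, so the likelihood of theta is just theta.
prior = {0.2: 1 / 3, 0.5: 1 / 3, 0.8: 1 / 3}
likelihood = {t: t for t in prior}          # P(head | theta) = theta
unnorm = {t: likelihood[t] * prior[t] for t in prior}
Z = sum(unnorm.values())                    # this is P(x), independent of theta
posterior = {t: v / Z for t, v in unnorm.items()}
```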

### Maximum likelihood

The idea of maximum likelihood estimation is to find the optimum values for the parameters by maximizing a likelihood function formed from the training data. Suppose in particular that we force the Bayesian to choose a particular value of $\theta$; that is, to reduce the posterior distribution $p(\theta|x)$ to a point estimate. Various possibilities present themselves; in particular one could choose the mean of the posterior distribution or perhaps the mode.

(i) the mean of the posterior (expectation):

$\hat{\theta}_{Bayes}=\int \theta p(\theta|x)\,d\theta$

is called Bayes estimate.

OR

(ii) the mode of posterior:

$\begin{matrix} \hat{\theta}_{MAP}&=&argmax_{\theta} p(\theta|x) \\ &=&argmax_{\theta}p(x|\theta)p(\theta) \end{matrix}$

Note that MAP stands for maximum a posteriori.

When the prior probability $p(\theta)$ is taken to be uniform on $\theta$, the MAP estimate reduces to the maximum likelihood estimate, $\hat{\theta}_{ML}$:

$\hat{\theta}_{MAP} = argmax_{\theta}\, p(x|\theta) p(\theta) = argmax_{\theta}\, p(x|\theta) = \hat{\theta}_{ML}$

When the prior is not taken to be uniform, the MAP estimate maximizes the posterior itself; the fact that the logarithm is a monotonic function implies that taking logs does not alter the optimizing value.

Thus, one has:

$\hat{\theta}_{MAP}=argmax_{\theta} \{ log p(x|\theta) + log p(\theta) \}$

as an alternative expression for the MAP estimate.

Here, $log (p(x|\theta))$ is log likelihood and the "penalty" is the additive term $log(p(\theta))$. Penalized log likelihoods are widely used in Frequentist statistics to improve on maximum likelihood estimates in small sample settings.

### Example : Bernoulli trials

Consider the simple experiment where a biased coin is tossed four times. Suppose now that we also have some data $D$:
e.g. $D = \left\lbrace h,h,h,t\right\rbrace$. We want to use this data to estimate $\theta$. The probability of observing head is $p(H)= \theta$ and the probability of observing a tail is $p(T)= 1-\theta$.

where, for a single toss $x_i \in \{0, 1\}$ (1 for heads), the conditional probability is
$P(x_i|\theta) = \theta^{x_i}(1-\theta)^{(1-x_i)}$

We would now like to use the ML technique. Since all of the variables are i.i.d. there are no dependencies between them, and so the graphical model has no edges from one node to another.

How do we find the joint probability distribution function for these variables? Well since they are all independent we can just multiply the marginal probabilities and we get the joint probability.

$L(\theta;x) = \prod_{i=1}^n P(x_i|\theta)$

This is in fact the likelihood that we want to work with. Now let us try to maximise it:

$\begin{matrix} l(\theta;x) & = & log(\prod_{i=1}^n P(x_i|\theta)) \\ & = & \sum_{i=1}^n log(P(x_i|\theta)) \\ & = & \sum_{i=1}^n log(\theta^{x_i}(1-\theta)^{1-x_i}) \\ & = & \sum_{i=1}^n x_ilog(\theta) + \sum_{i=1}^n (1-x_i)log(1-\theta) \\ \end{matrix}$

Take the derivative and set it to zero:

$\frac{\partial l}{\partial\theta} = 0$
$\frac{\partial l}{\partial\theta} = \sum_{i=1}^{n}\frac{x_i}{\theta} - \sum_{i=1}^{n}\frac{1-x_i}{1-\theta} = 0$
$\Rightarrow \frac{\sum_{i=1}^{n}x_i}{\theta} = \frac{\sum_{i=1}^{n}(1-x_i)}{1-\theta}$
$\frac{NH}{\theta} = \frac{NT}{1-\theta}$

Where:

NH = the number of observed heads
NT = the number of observed tails
Hence, $NH + NT = n$.


And now we can solve for $\theta$:

$\begin{matrix} \theta & = & \frac{(1-\theta)NH}{NT} \\ \theta + \theta\frac{NH}{NT} & = & \frac{NH}{NT} \\ \theta(\frac{NT+NH}{NT}) & = & \frac{NH}{NT} \\ \theta & = & \frac{\frac{NH}{NT}}{\frac{n}{NT}} = \frac{NH}{n} \end{matrix}$
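The closed form $\hat{\theta} = NH/n$ can be checked numerically for the data $D = \{h,h,h,t\}$ from the example; the grid search below is only a sanity check that the log-likelihood really peaks at $3/4$.

```python
import math

# Data D = {h, h, h, t}, encoded as 1 = head, 0 = tail
data = [1, 1, 1, 0]

# Bernoulli log-likelihood, as derived above
def log_likelihood(theta, xs):
    return sum(x * math.log(theta) + (1 - x) * math.log(1 - theta) for x in xs)

n, nh = len(data), sum(data)
theta_ml = nh / n  # closed-form MLE: NH / n = 3/4

# crude grid search over (0, 1) to confirm the maximizer
grid = [i / 1000 for i in range(1, 1000)]
theta_grid = max(grid, key=lambda t: log_likelihood(t, data))
```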

### Example : Multinomial trials

Recall from the previous example that a Bernoulli trial has only two outcomes (e.g. Head/Tail, Failure/Success,…). A Multinomial trial is a multivariate generalization of the Bernoulli trial with K number of possible outcomes, where K > 2. Let $p(k) = \theta_k$ be the probability of outcome k. All the $\theta_k$ parameters must be:

$0 \leq \theta_k \leq 1$

and

$\sum_k \theta_k = 1$

Consider the example of rolling a die M times and recording how many times each of the die's six faces is observed. Let $N_k$ be the number of times that face k was observed.

Let $[x^m = k]$ be a binary indicator that equals one if $x^m = k$ and zero otherwise. The log likelihood function for the multinomial distribution is:

$l(\theta; D) = log( p(D|\theta) )$

$= log(\prod_m \theta_{x^m})$

$= log(\prod_m \theta_{1}^{[x^m = 1]} ... \theta_{k}^{[x^m = k]})$

$= \sum_k log(\theta_k) \sum_m [x^m = k]$

$= \sum_k N_k log(\theta_k)$

Take the derivative and set it to zero. The maximization is subject to the constraint $\sum_k \theta_k = 1$; introducing a Lagrange multiplier $\lambda$ and maximizing $l(\theta;D) - \lambda(\sum_k \theta_k - 1)$ gives $\lambda = M$, so:

$\frac{\partial l}{\partial\theta_k} = \frac{N_k}{\theta_k} - M = 0$

$\Rightarrow \theta_k = \frac{N_k}{M}$
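In code the estimate $\hat{\theta}_k = N_k/M$ is just a normalized count; the rolls below are made-up observations for illustration.

```python
from collections import Counter

# Hypothetical die rolls (faces 1..6), M = 10 observations
rolls = [1, 3, 3, 6, 2, 3, 5, 1, 6, 4]
M = len(rolls)
counts = Counter(rolls)  # N_k: number of times face k was observed

# MLE for each face: theta_k = N_k / M
theta_hat = {k: counts[k] / M for k in range(1, 7)}
```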

### Example: Univariate Normal

Now let us assume that the observed values come from a normal distribution. Our new model looks like:

$P(x_i|\theta) = \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{1}{2}(\frac{x_i-\mu}{\sigma})^{2}}$

Now to find the likelihood we once again multiply the independent marginal probabilities to obtain the joint probability and the likelihood function.

$L(\theta;x) = \prod_{i=1}^{n}\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{1}{2}(\frac{x_i-\mu}{\sigma})^{2}}$
$\max_{\theta}l(\theta;x) = \max_{\theta}\sum_{i=1}^{n}\left(-\frac{1}{2}\left(\frac{x_i-\mu}{\sigma}\right)^{2}+\log\frac{1}{\sqrt{2\pi}\sigma}\right)$

Now, since our parameter theta is in fact a set of two parameters,

$\theta = (\mu, \sigma)$

we must estimate each of the parameters separately.

$\frac{\partial l}{\partial \mu} = \sum_{i=1}^{n} \frac{x_i - \mu}{\sigma^2} = 0 \Rightarrow \hat{\mu} = \frac{1}{n}\sum_{i=1}^{n}x_i$
$\frac{\partial l}{\partial \sigma^2} = \frac{1}{2\sigma ^4} \sum _{i=1}^{n}(x_i-\mu)^2 - \frac{n}{2\sigma^2} = 0$
$\Rightarrow \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \hat{\mu})^2$
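The Gaussian MLEs are simply the sample mean and the biased ($1/n$) variance; the data below is made up to illustrate the closed forms.

```python
# Hypothetical observations assumed to come from a normal distribution
data = [2.1, 1.9, 3.0, 2.4, 2.6]
n = len(data)

# MLE of the mean: the sample average
mu_hat = sum(data) / n

# MLE of the variance: note the 1/n factor, not the unbiased 1/(n-1)
sigma2_hat = sum((x - mu_hat) ** 2 for x in data) / n
```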

# Appendix: Graph Drawing Tools

## Graphviz

"Graphviz is open source graph visualization software. Graph visualization is a way of representing structural information as diagrams of abstract graphs and networks. It has important applications in networking, bioinformatics, software engineering, database and web design, machine learning, and in visual interfaces for other technical domains." <ref>http://www.graphviz.org/</ref>

## AISee

AISee is a commercial graph visualization software. The free trial version has almost all the features of the full version except that it should not be used for commercial purposes.

## TikZ

"TikZ and PGF are TeX packages for creating graphics programmatically. TikZ is build on top of PGF and allows you to create sophisticated graphics in a rather intuitive and easy manner." <ref> http://www.texample.net/tikz/ </ref>