stat341 / CM 361: Difference between revisions

From statwiki
Jump to navigation Jump to search
m (Conversion script moved page Stat341 / CM 361 to stat341 / CM 361: Converting page titles to lowercase)
 
(234 intermediate revisions by 16 users not shown)
Line 9: Line 9:
===[[Generating Random Numbers]] - May 12, 2009===
===[[Generating Random Numbers]] - May 12, 2009===


Generating random numbers in a computational setting presents challenges. A good way to generate random numbers in computational statistics involves analyzing various distributions using computational methods. As a result, the probability distribution of each possible number appears to be uniform (pseudo-random). Outside a computational setting, presenting a uniform distribution is fairly easy (for example, rolling a fair die repetitively to produce a series of random numbers from 1 to 6).
Generating random numbers in a computational setting presents challenges. A good way to generate random numbers in computational statistics involves analyzing various distributions using computational methods. <s>As a result, the probability distribution of each possible number appears to be uniform</s> (pseudo-random). Outside a computational setting, presenting a uniform distribution is fairly easy (for example, rolling a fair die repetitively to produce a series of random numbers from 1 to 6).


We begin by considering the simplest case: the uniform distribution.
We begin by considering the simplest case: the uniform distribution.
Line 76: Line 76:
'''Proof''':
'''Proof''':


Recall that, if ''f'' is the pdf corresponding to F, then,
Recall that, if ''f'' is the pdf corresponding to F where f is defined as 0 outside of its domain, then,


:<math>F(x) = P(X \leq x) = \int_{-\infty}^x f(x)</math>
:<math>F(x) = P(X \leq x) = \int_{-\infty}^x f(x)</math>
Line 113: Line 113:
:<math> F^{-1}(x) = \frac{-\log(1-y)}{\theta} = \frac{-\log(u)}{\theta} </math>
:<math> F^{-1}(x) = \frac{-\log(1-y)}{\theta} = \frac{-\log(u)}{\theta} </math>


Now we can generate our random sample <math>i=1\dots n</math> from <math>f(x)</math> by:
Now we can generate our random sample <math>u_1\dots u_n</math> from <math>F(x)</math> by:


:<math>1)\ u_i \sim Unif[0, 1]</math>
:<math>1)\ u_i \sim Unif[0, 1]</math>
Line 126: Line 126:
  x=-log(1-u)/1;
  x=-log(1-u)/1;
  hist(x)
  hist(x)
[[Image:HistRandNum.jpg|center|300px|"Histogram showing the expected exponentional distribution" ]]


The histogram shows exponential pattern as expected.


[[File:HistRandNum.jpg]]
   
   
The major problem with this approach is that we have to find <math>F^{-1}</math> and for many distributions it is too difficult (or impossible) to find the inverse of <math>F(x)</math>. Further, for some distributions it is not even possible to find <math>F(x)</math> (i.e. a closed form expression for the distribution function, or otherwise; even if the closed form expression exists, it's usually difficult to find <math>F^{-1}</math>).
The major problem with this approach is that we have to find <math>F^{-1}</math> and for many distributions it is too difficult (or impossible) to find the inverse of <math>F(x)</math>. Further, for some distributions it is not even possible to find <math>F(x)</math> (i.e. a closed form expression for the distribution function, or otherwise; even if the closed form expression exists, it's usually difficult to find <math>F^{-1}</math>).
Line 200: Line 199:


====Acceptance/Rejection Method====
====Acceptance/Rejection Method====
[[Image:edit.JPG|thumb|left|500px|Ali: Some statements are incorrect, inaccurate or misleading. Acceptance-Rejection Method needs to be motivated in more details. ]]<br /><br /><br /><br /><br /><br />


Suppose we wish to sample from a target distribution <math>f(x)</math> that is difficult or impossible to sample from directly. Suppose also that we have a proposal distribution <math>g(x)</math> from which we have a reasonable method of sampling (e.g. the uniform distribution). Then, if there is a constant <math>c \ |\ c \cdot g(x) \geq f(x)\  \forall x</math>, accepting samples drawn in successions from <math> c \cdot g(x)</math> with ratio <math> \frac {f(x)}{c \cdot g(x)} </math> close to 1 will yield a sample that follows the target distribution <math>f(x)</math>; on the other hand we would reject the samples if the ratio is not close to 1.


The following graph shows the pdf of <math>f(x)</math> (target distribution) and <math> c \cdot g(x)</math> (proposal distribution)


[[File:fxcgx.JPG]]
Suppose we wish to sample from a target distribution <math>f(x)</math> that is difficult to sample from directly.


At x=7; sampling from <math> c \cdot g(x)</math> will yield a sample that follows the target distribution <math>f(x)</math>
Let <math>g(x)</math> be a distribution that is easy to sample from and satisfies the condition: <br /><br />


At x=9; we will reject samples according to the ratio <math> \frac {f(x)}{c \cdot g(x)} </math> after sampling from <math> c \cdot g(x)</math>
<math>\forall x: f(x) \leq c \cdot g(x)\ </math>, where  <math> c \in \Re^+</math>
<br /><br />
[[Image:fxcgx.JPG|thumb|right|300px|"Graph of the pdf of <math>f(x)</math> (target distribution) and <math> c g(x)</math> (proposal distribution)"]]
Since c*g(x) > f(x) for all x, it is possible to obtain samples that follows f(x) by rejecting a proportion of samples drawn from c*g(x).<br /><br />
 
This proportion depends on how different f(x) and g(x) are and may vary at different values of x.
 
That is, if <math> f(x) \approx g(x) \text { at } x = x_1  \text { and } f(x) \ll g(x) \text { at } x = x_2 </math>, we will need to reject more samples drawn at <math> \,x_2 </math> than at <math> \,x_1 </math>.
 
Overall, it can shown that by accepting samples drawn from g(x) with probability <math> \frac {f(x)}{c \cdot g(x)} </math>, we can obtain samples that follows f(x)
<br /><br />
 
Consider the example in the graph,<br />
Sampling y = 7 from <math> cg(x)</math> will yield a sample that follows the target distribution <math>f(x)</math> and will y be accepted w/p 1.
 
Sampling y = 9 from <math> cg(x)</math> will yield a point that is distant from <math>f(x)</math> and will be accepted with a low probability.


'''Proof'''
'''Proof'''
[[Image:edit.JPG|thumb|left|500px|Ali: Proof of what? . ]]<br /><br /><br /><br /><br /><br />


Note the following:
Show that if points are sampled according to the Acceptance/Rejection method then they follow the target distribution.<br /><br />
*<math> Pr(X|accept) = \frac{Pr(accept|X) \times Pr(X)}{Pr(accept)} </math>  (Bayes' theorem)
*<math> Pr(accept|X) = \frac{f(x)}{c \cdot g(x)} </math>
*<math> Pr(X) = g(x)\frac{}{}</math>


So,
<math> P(X=x|accept) = \frac{P(accept|X=x)P(X=x)}{P(accept)}</math><br /> 
<math> Pr(accept) = \int^{}_x Pr(accept|X) \times Pr(X) dx </math>
by Bayes' theorem
<math>           = \int^{}_x \frac{f(x)}{c \cdot g(x)} g(x) dx </math>
<br /><br />
<math>          = \frac{1}{c} \int^{}_x f(x) dx </math>
<math>\begin{align} &P(accept|X=x) = \frac{f(x)}{c \cdot g(x)}\\ &Pr(X=x) = g(x)\frac{}{} \end{align}</math><br /><br />by hypothesis.<br /><br />
<math>           = \frac{1}{c} </math>


Therefore,
Then,<br />
<math> Pr(X|accept) = \frac{\frac{f(x)}{c\ \cdot g(x)} \times g(x)}{\frac{1}{c}} = f(x) </math> as required.
<math>\begin{align} P(accept) &= \int^{}_x P(accept|X=x)P(X=x) dx \\
          &= \int^{}_x \frac{f(x)}{c \cdot g(x)} g(x) dx\\
          &= \frac{1}{c} \int^{}_x f(x) dx\\
          &= \frac{1}{c} \end{align} </math>
<br /><br />
Therefore,<br />
<math> P(X=x|accept) = \frac{\frac{f(x)}{c\ \cdot g(x)}g(x)}{\frac{1}{c}} = f(x) </math> as required.


'''Procedure (Continuous Case)'''
'''Procedure (Continuous Case)'''
Line 323: Line 339:
: <math> \Sigma_{i=1}^t X_i \sim~ Gamma (t, \lambda) </math>
: <math> \Sigma_{i=1}^t X_i \sim~ Gamma (t, \lambda) </math>


We can use the Inverse Transform Method and sample from independent uniform distributions seen before to generate a sample following a Gamma distribution.
Using this property, we can use the Inverse Transform Method to generate samples from an exponential distribution with appropriate variables, and use these to generate a sample following a Gamma distribution.




Line 381: Line 397:
It can also be shown that the joint distribution of r and θ is given by,
It can also be shown that the joint distribution of r and θ is given by,


:<math>\begin{matrix}  f(r,\theta) = \frac{1}{2}e^{-\frac{d}{2}}*\frac{1}{2\pi},\quad  d = r^2 \end{matrix} </math>
:<math>\begin{matrix}  f(r,\theta) = \frac{1}{2}e^{-\frac{d}{2}}\times\frac{1}{2\pi},\quad  d = r^2 \end{matrix} </math>
Note that <math> \begin{matrix}f(r,\theta)\end{matrix}</math> consists of two density functions, Exponential and Uniform, so assuming that r and <math>\theta</math> are independent
Note that <math> \begin{matrix}f(r,\theta)\end{matrix}</math> consists of two density functions, Exponential and Uniform, so assuming that r and <math>\theta</math> are independent
<math> \begin{matrix} \Rightarrow d \sim~ Exp(1/2),  \theta \sim~ Unif[0,2\pi] \end{matrix} </math>
<math> \begin{matrix} \Rightarrow d \sim~ Exp(1/2),  \theta \sim~ Unif[0,2\pi] \end{matrix} </math>
Line 428: Line 444:
[[File:BothMay19.jpg]]
[[File:BothMay19.jpg]]


=====Useful Properties (Single and Multivariate)=====
=====Useful Properties (Single and Multivariate Normal)=====


Box-Muller can be used to sample a standard normal distribution. However, there are many properties of Normal distributions that allow us to use the samples from Box-Muller method to sample any Normal distribution in general.
The Box-Muller method as described above samples only from the standard normal distribution. However, both singlevariate and multivariate normal distributions have properties that allow us to use samples generated by the Box-Muller method to sample any normal distribution in general.




Line 456: Line 472:
[[File:MultiVariateMay19.jpg]]
[[File:MultiVariateMay19.jpg]]


Note: In the example above, we had to generate the square root of <math>\Sigma</math> using the Cholesky decomposition,  
Note: In the example above, we generated the square root of <math>\Sigma</math> using the Cholesky decomposition,  
   
   
  ss = chol(sigma);
  ss = chol(sigma);
Line 467: Line 483:


===[[Bayesian and Frequentist Schools of Thought]] - May 21, 2009===
===[[Bayesian and Frequentist Schools of Thought]] - May 21, 2009===
In this lecture we will continue to discuss sampling from specific distributions , introduce '''Monte Carlo Integration''', and also talk about the differences between the Bayesian and Frequentist views on probability, along with references to '''Bayesian Inference'''.


==[[Monte Carlo Integration]] - May 26, 2009==
====Binomial Distribution====
Today's lecture completes the discussion on the Frequentists and Bayesian schools of thought and introduces '''Basic Monte Carlo Integration'''.<br><br>
A Binomial distribution <math>X \sim~ Bin(n,p) </math> is the sum of <math>n</math> independent Bernoulli trials, each with probability of success <math>p</math> <math>(0 \leq p \leq 1)</math>. For each trial we generate an independent uniform random variable: <math>U_1, \ldots, U_n \sim~ Unif(0,1)</math>. Then X is the number of times that <math>U_i \leq p</math>. In this case if n is large enough, by the central limit theorem, the Normal distribution can be used to approximate a Binomial distribution.
 
Sampling from Binomial distribution in Matlab is done using the following code:
n=3;
p=0.5;
trials=1000;
X=sum((rand(trials,n))'<=p);
hist(X)
 
Where the histogram is a Binomial distribution, and for higher <math>n</math>, it would resemble a Normal distribution.


====Frequentist vs Bayesian Example - Estimating Parameters====
====Monte Carlo Integration====


Estimating parameters of a univariate Gaussian:
Monte Carlo Integration is a numerical method of approximating the evaluation of integrals by simulation. In this course we will mainly look at three methods for approximating integrals:
# Basic Monte Carlo Integration
# Importance Sampling
# Markov Chain Monte Carlo (MCMC)


Frequentist: <math>f(x|\theta)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{1}{2}*(\frac{x-\mu}{\sigma})^2}</math><br>
====Bayesian VS Frequentists====
Bayesian: <math>f(\theta|x)=\frac{f(x|\theta)f(\theta)}{f(x)}</math>


=====Frequentist Approach=====
During the history of statistics, two major schools of thought emerged along the way and have been locked in an on-going struggle in trying to determine which one has the correct view on probability. These two schools are known as the Bayesian and Frequentist schools of thought. Both the Bayesians and the Frequentists holds a different philosophical view on what defines probability. Below are some fundamental differences between the Bayesian and Frequentist schools of thought:


Let <math>X^N</math> denote <math>(x_1, x_2, ..., x_n)</math>. Using the Maximum Likelihood Estimation approach for estimating parameters we get:<br>
'''Frequentist'''
:<math>L(X^N; \theta) = \prod_{i=1}^N \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{1}{2}(\frac{x_i- \mu} {\sigma})^2}</math>
*Probability is '''objective''' and refers to the limit of an event's relative frequency in a large number of trials. For example, a coin with a 50% probability of heads will turn up heads 50% of the time.
:<math>l(X^N; \theta) = \sum_{i=1}^N -\frac{1}{2}log (2\pi) - log(\sigma) - \frac{1}{2} \left(\frac{x_i- \mu}{\sigma}\right)^2 </math>
*Parameters are all fixed and unknown constants.
:<math>\frac{dl}{d\mu} = \displaystyle\sum_{i=1}^N(x_i-\mu)</math>
*Any statistical process only has interpretations based on limited frequencies. For example, a 95% C.I. of a given parameter will contain the true value of the parameter 95% of the time.
Setting <math>\frac{dl}{d\mu} = 0</math> we get
:<math>\displaystyle\sum_{i=1}^Nx_i = \displaystyle\sum_{i=1}^N\mu</math>
:<math>\displaystyle\sum_{i=1}^Nx_i = N\mu \rightarrow \mu = \frac{1}{N}\displaystyle\sum_{i=1}^Nx_i</math><br>


=====Bayesian Approach=====
'''Bayesian'''
*Probability is '''subjective''' and can be applied to single events based on degree of confidence or beliefs. For example, Bayesian can refer to tomorrow's weather as having 50% of rain, whereas this would not make sense to a Frequentist because tomorrow is just one unique event, and cannot be referred to as a relative frequency in a large number of trials.
*Parameters are random variables that has a given distribution, and other probability statements can be made about them.
*Probability has a distribution over the parameters, and point estimates are usually done by either taking the mode or the mean of the distribution.


Assuming the prior is Gaussian:
====Bayesian Inference====
:<math>P(\theta) = \frac{1}{\sqrt{2\pi}\tau}e^{-\frac{1}{2}(\frac{x-\mu_0}{\tau})^2}</math>
:<math>f(\theta|x) \propto \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{1}{2}(\frac{x_i-\mu}{\sigma})^2} * \frac{1}{\sqrt{2\pi}\tau}e^{-\frac{1}{2}(\frac{x-\mu_0}{\tau})^2}</math>
By completing the square we conclude that the posterior is Gaussian as well:
:<math>f(\theta|x)=\frac{1}{\sqrt{2\pi}\tilde{\sigma}}e^{-\frac{1}{2}(\frac{x-\tilde{\mu}}{\tilde{\sigma}})^2}</math>
Where
:<math>\tilde{\mu} = \frac{\frac{N}{\sigma^2}}{{\frac{N}{\sigma^2}}+\frac{1}{\tau^2}}\bar{x} + \frac{\frac{1}{\tau^2}}{{\frac{N}{\sigma^2}}+\frac{1}{\tau^2}}\mu_0</math>
The expectation from the posterior is different from the MLE method.
Note that <math>\displaystyle\lim_{N\to\infty}\tilde{\mu} = \bar{x}</math>. Also note that when <math>N = 0</math> we get <math>\tilde{\mu} = \mu_0</math>.


====Basic Monte Carlo Integration====
'''Example''':
If we have a screen that only displays single digits from 0 to 9, and this screen is split into a 4x5 matrix of pixels, then all together the 20 pixels that make up the screen can be referred to as <math>\vec{X}</math>, which is our data, and the parameter of the data for this case, which we will refer to as <math> \theta </math>, would be a discrete random variable that can take on the values of 0 to 9. In this example, a Bayesian would be interested in finding <math> Pr(\theta=a|\vec{X}=\vec{x})</math>, whereas a Frequentist would be more interested in finding <math> Pr(\vec{X}=\vec{x}|\theta=a)</math>


Although it is almost impossible to find a precise definition of "Monte Carlo Method", the method is widely used and has numerous descriptions in articles and monographs. As an interesting fact, the term '''Monte Carlo''' is claimed to have been first used by Ulam and von Neumann as a Los Alamos code word for the stochastic simulations they applied to building better atomic bombs. ''Stochastic simulation'' refers to a random process in which its future evolution is described by probability distributions (counterpart to a deterministic process), and these simulation methods are known as ''Monte Carlo methods''. [Stochastic process, Wikipedia]. The following example (external link) illustrates a Monte Carlo Calculation of Pi: [http://www.chem.unl.edu/zeng/joy/mclab/mcintro.html]
=====Bayes' Rule=====


:<math>f(\theta|X) = \frac{f(X | \theta)\, f(\theta)}{f(X)}.</math>


<!-- EDITING, BACK OFF -->
Note: In this case <math>f (\theta|X)</math> is referred to as '''posterior''', <math>f (X | \theta)</math> as '''likelihood''', <math>f (\theta)</math> as '''prior''', and <math>f (X)</math> as the '''marginal''', where <math>\theta</math> is the parameter and <math>X</math> is the observed variable.
We start with a simple example:
:<math>I = \displaystyle\int_a^b h(x)\,dx</math>
::<math> = \displaystyle\int_a^b w(x)f(x)\,dx</math>
where
:<math>\displaystyle w(x) = h(x)(b-a)</math>
:<math>f(x) = \frac{1}{b-a} \rightarrow</math> the p.d.f. is Unif<math>(a,b)</math>
:<math>\hat{I} = E_f[w(x)] = \frac{1}{N}\displaystyle\sum_{i=1}^Nw(x_i)</math>
If <math>x_i \sim~ Unif(a,b)</math> then by the '''Law of Large Numbers''' <math>\frac{1}{N}\displaystyle\sum_{i=1}^Nw(x_i) \rightarrow \displaystyle\int w(x)f(x)\,dx = E_f[w(x)]</math>


=====Process=====
'''Procedure in Bayesian Inference'''
Given <math>I = \displaystyle\int^b_ah(x)\,dx</math>
*First choose a probability distribution as the prior, which represents our beliefs about the parameters.
# <math>\begin{matrix} w(x) = h(x)(b-a)\end{matrix}</math>
*Then choose a probability distribution for the likelihood, which represents our beliefs about the data.
# <math> \begin{matrix} x_1, x_2, ..., x_n \sim UNIF(a,b)\end{matrix}</math>
*Lastly compute the posterior, which represents an update of our beliefs about the parameters after having observed the data.
# <math>\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nw(x_i)</math>


From this we can compute other statistics, such as
As mentioned before, for a Bayesian, finding point estimates usually involves finding the mode or the mean of the parameter's distribution.
# <math> SE=\frac{s}{\sqrt{N}}</math> where  <math>s^2=\frac{\sum_{i=1}^{N}(Y_i-\hat{I})^2 }{N-1} </math> with <math> \begin{matrix}Y_i=w(x_i)\end{matrix}</math>
# <math>\begin{matrix} 1-\alpha \end{matrix}</math> CI's can be estimated as <math> \hat{I}\pm Z_\frac{\alpha}{2}*SE</math>


'''Example 1'''
'''Methods'''
*Mode: <math>\theta = \arg\max_{\theta} f(\theta|X) \gets</math>  value of <math>\theta</math> that maximizes <math>f(\theta|X)</math>
*Mean: <math> \bar\theta = \int^{}_\theta \theta \cdot f(\theta|X)d\theta</math>


Find <math> E[\sqrt{x}]</math> for <math>\begin{matrix} f(x) = e^{-x}\end{matrix} </math>
If it is the case that <math>\theta</math> is high-dimensional, and we are only interested in one of the components of <math>\theta</math>, for example, we want <math>\theta_1</math> from <math> \vec{\theta}=(\theta_1,\dots,\theta_n)</math>, then we would have to calculate the integral: <math>\int^{} \int^{} \dots \int^{}f(\theta|X)d\theta_2d\theta_3 \dots d\theta_n </math>


# We need to draw from <math>\begin{matrix} f(x) = e^{-x}\end{matrix} </math>
This sort of calculation is usually very difficult or not feasible to compute, and thus we would need to do it by simulation.
# <math>\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nw(x_i)</math>


This example can be illustrated in Matlab using the code below:
'''Note''':
#<math>f(x)=\int^{}_\theta f(X | \theta)f(\theta) d\theta</math> is not a function of <math>\theta</math>, and is called the '''Normalization Factor'''
#Therefore, since f(x) is like a constant, the posterior is proportional to the likelihood times the prior: <math>f(\theta|X)\propto f(X | \theta)f(\theta)</math>


u=rand(100,1)
==[[Monte Carlo Integration]] - May 26, 2009==
x=-log(u)
Today's lecture completes the discussion on the Frequentists and Bayesian schools of thought and introduces '''Basic Monte Carlo Integration'''.<br><br>
h= x.* .5
mean(h)
%The value obtained using the Monte Carlo method
F = @ (x) sqrt (x). * exp(-x)
quad(F,0,50)
%The value of the real function using Matlab


'''Example 2'''
====Frequentist vs Bayesian Example - Estimating Parameters====
Find <math> I = \displaystyle\int^1_0h(x)\,dx, h(x) = x^3 </math>
# <math> \displaystyle I = x^4/4 = 1/4 </math>
# <math>\displaystyle  W(x) = x^3*(1-0)</math>
# <math> Xi \sim~Unif(0,1)</math>
# <math>\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^N(x_i^3)</math>


This example can be illustrated in Matlab using the code below:
Estimating parameters of a univariate Gaussian:
x = rand (1000)
mean(x^3)


'''Example 3'''
Frequentist: <math>f(x|\theta)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{1}{2}*(\frac{x-\mu}{\sigma})^2}</math><br>
To estimate an infinite integral
Bayesian: <math>f(\theta|x)=\frac{f(x|\theta)f(\theta)}{f(x)}</math>
such as <math> I = \displaystyle\int^\infty_5 h(x)\,dx, h(x) = 3e^{-x} </math>
# Substitute in <math> y=\frac{1}{x-5+1} => dy=-\frac{1}{(x-4)^2}dx => dy=-y^2dx </math>
# <math> I = \displaystyle\int^1_0 \frac{3e^{-(\frac{1}{y}+4)}}{-y^2}\,dy </math>
# <math> w(y) = \frac{3e^{-(\frac{1}{y}+4)}}{-y^2}(1-0)</math>
# <math> Y_i \sim~Unif(0,1)</math>
# <math>\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^N(\frac{3e^{-(\frac{1}{y_i}+4)}}{-y_i^2})</math>


==[[Importance Sampling and Monte Carlo Simulation]] - May 28, 2009==
=====Frequentist Approach=====
<!-- UNDER CONSTRUCTION! -->


During this lecture we covered two more examples of Monte Carlo simulation, finishing that topic, and begun talking about Importance Sampling.
Let <math>X^N</math> denote <math>(x_1, x_2, ..., x_n)</math>. Using the Maximum Likelihood Estimation approach for estimating parameters we get:<br>
:<math>L(X^N; \theta) = \prod_{i=1}^N \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{1}{2}(\frac{x_i- \mu} {\sigma})^2}</math>
:<math>l(X^N; \theta) = \sum_{i=1}^N -\frac{1}{2}log (2\pi) - log(\sigma) - \frac{1}{2} \left(\frac{x_i- \mu}{\sigma}\right)^2 </math>
:<math>\frac{dl}{d\mu} = \displaystyle\sum_{i=1}^N(x_i-\mu)</math>
Setting <math>\frac{dl}{d\mu} = 0</math> we get
:<math>\displaystyle\sum_{i=1}^Nx_i = \displaystyle\sum_{i=1}^N\mu</math>
:<math>\displaystyle\sum_{i=1}^Nx_i = N\mu \rightarrow \mu = \frac{1}{N}\displaystyle\sum_{i=1}^Nx_i</math><br>


====Binomial Probability Monte Carlo Simulations====
=====Bayesian Approach=====


=====Example 1:=====
Assuming the prior is Gaussian:
You are given two independent Binomial distributions with probabilities <math>\displaystyle p_1\text{, }p_2</math>. Using a Monte Carlo simulation, approximate the value of <math>\displaystyle \delta</math>, where <math>\displaystyle \delta = p_1 - p_2</math>.<br>
:<math>P(\theta) = \frac{1}{\sqrt{2\pi}\tau}e^{-\frac{1}{2}(\frac{x-\mu_0}{\tau})^2}</math>
:<math>\displaystyle X \sim BIN(n, p_1)</math>; <math>\displaystyle Y \sim BIN(n, p_2)</math>; <math>\displaystyle \delta = p_1 - p_2</math><br><br>
:<math>f(\theta|x) \propto \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{1}{2}(\frac{x_i-\mu}{\sigma})^2} * \frac{1}{\sqrt{2\pi}\tau}e^{-\frac{1}{2}(\frac{x-\mu_0}{\tau})^2}</math>
By completing the square we conclude that the posterior is Gaussian as well:
:<math>f(\theta|x)=\frac{1}{\sqrt{2\pi}\tilde{\sigma}}e^{-\frac{1}{2}(\frac{x-\tilde{\mu}}{\tilde{\sigma}})^2}</math>
Where
:<math>\tilde{\mu} = \frac{\frac{N}{\sigma^2}}{{\frac{N}{\sigma^2}}+\frac{1}{\tau^2}}\bar{x} + \frac{\frac{1}{\tau^2}}{{\frac{N}{\sigma^2}}+\frac{1}{\tau^2}}\mu_0</math>
The expectation from the posterior is different from the MLE method.
Note that <math>\displaystyle\lim_{N\to\infty}\tilde{\mu} = \bar{x}</math>. Also note that when <math>N = 0</math> we get <math>\tilde{\mu} = \mu_0</math>.


So <math>\displaystyle f(p_1, p_2 | x,y) = \frac{f(x, y|p_1, p_2)*f(p_1,p_2)}{f(x,y)}</math> where <math>\displaystyle f(x,y)</math> is a flat distribution and the expected value of <math>\displaystyle \delta</math> is as follows:<br>
====Basic Monte Carlo Integration====
:<math>\displaystyle \hat{\delta} = \int\int\delta f(p_1,p_2|X,Y)\,dp_1dp_2</math><br><br>


Since X, Y are independent, we can split the conditional probability distribution:<br>
Although it is almost impossible to find a precise definition of "Monte Carlo Method", the method is widely used and has numerous descriptions in articles and monographs. As an interesting fact, the term '''Monte Carlo''' is claimed to have been first used by Ulam and von Neumann as a Los Alamos code word for the stochastic simulations they applied to building better atomic bombs. ''Stochastic simulation'' refers to a random process in which its future evolution is described by probability distributions (counterpart to a deterministic process), and these simulation methods are known as ''Monte Carlo methods''. [Stochastic process, Wikipedia]. The following example (external link) illustrates a Monte Carlo Calculation of Pi: [http://www.chem.unl.edu/zeng/joy/mclab/mcintro.html]
:<math>\displaystyle f(p_1,p_2|X,Y) \propto f(p_1|X)f(p_2|Y)</math><br><br>


We need to find conditional distribution functions for <math>\displaystyle p_1, p_2</math> to draw samples from. In order to get a distribution for the probability 'p' of a Binomial, we have to divide the Binomial distribution by n. This new distribution has the same shape as the original, but is scaled. A Beta distribution is a suitable approximation. Let<br>
:<math>\displaystyle f(p_1 | X) \sim \text{Beta}(x+1, n-x+1)</math> and <math>\displaystyle f(p_2 | Y) \sim \text{Beta}(y+1, n-y+1)</math>, where<br>
:<math>\displaystyle \text{Beta}(\alpha,\beta) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}p^{\alpha-1}(1-p)^{\beta-1}</math><br><br>


'''Process:'''
<!-- EDITING, BACK OFF -->
# Draw samples for <math>\displaystyle p_1</math> and <math>\displaystyle p_2</math>: <math>\displaystyle (p_1,p_2)^{(1)}</math>, <math>\displaystyle (p_1,p_2)^{(2)}</math>, ..., <math>\displaystyle (p_1,p_2)^{(n)}</math>;
We start with a simple example:
# Compute <math>\displaystyle \delta = p_1 - p_2</math> in order to get n values for <math>\displaystyle \delta</math>;
:<math>\begin{align}I &= \displaystyle\int_a^b h(x)\,dx
# <math>\displaystyle \hat{\delta}=\frac{\displaystyle\sum_{\forall i}\delta^{(i)}}{N}</math>.<br><br>
    &= \displaystyle\int_a^b w(x)f(x)\,dx \end{align}</math>
where
:<math>\displaystyle w(x) = h(x)(b-a)</math>
:<math>f(x) = \frac{1}{b-a} \rightarrow</math> the p.d.f. is Unif<math>(a,b)</math>
:<math>\hat{I} = E_f[w(x)] = \frac{1}{N}\displaystyle\sum_{i=1}^Nw(x_i)</math>
If <math>x_i \sim~ Unif(a,b)</math> then by the '''Law of Large Numbers''' <math>\frac{1}{N}\displaystyle\sum_{i=1}^Nw(x_i) \rightarrow \displaystyle\int w(x)f(x)\,dx = E_f[w(x)]</math>


'''Matlab Code:'''<br>
=====Process=====
:The Matlab code for recreating the above example is as follows:
Given <math>I = \displaystyle\int^b_ah(x)\,dx</math>
n=100; %number of trials for X
# <math>\begin{matrix} w(x) = h(x)(b-a)\end{matrix}</math>
m=100; %number of trials for Y
# <math> \begin{matrix} x_1, x_2, ..., x_n \sim UNIF(a,b)\end{matrix}</math>
x=80; %number of successes for X trials
# <math>\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nw(x_i)</math>
y=60; %number of successes for y trials
p1=betarnd(x+1, n-x+1, 1, 1000);
p2=betarnd(y+1, m-y+1, 1, 1000);
delta=p1-p2;
mean(delta);


The mean in this example is given by 0.1938.
From this we can compute other statistics, such as
# <math> SE=\frac{s}{\sqrt{N}}</math> where  <math>s^2=\frac{\sum_{i=1}^{N}(Y_i-\hat{I})^2 }{N-1} </math> with <math> \begin{matrix}Y_i=w(x_i)\end{matrix}</math>
# <math>\begin{matrix} 1-\alpha \end{matrix}</math> CI's can be estimated as <math> \hat{I}\pm Z_\frac{\alpha}{2}\times SE</math>


A 95% confidence interval for <math>\delta</math> is represented by the interval between the 2.5% and 97.5% quantiles which covers 95% of the probability distribution. In Matlab, this can be calculated as follows:
'''Example 1'''
q1=quantile(delta,0.025);
q2=quantile(delta,0.975);


The interval is approximately <math> 95% CI \approx (0.06606, 0.32204) </math>
Find <math> E[\sqrt{x}]</math> for  <math>\begin{matrix} f(x) = e^{-x}\end{matrix} </math>


The histogram of delta is:<br>
# We need to draw from <math>\begin{matrix} f(x) = e^{-x}\end{matrix} </math>
[[File:Delta_hist.jpg‎]]
# <math>\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nw(x_i)</math>


Note: In this case, we can also find <math>E(\delta)</math> analytically since
This example can be illustrated in Matlab using the code below:
<math>E(\delta) = E(p_1 - p_2) = E(p_1) - E(p_2) = \frac{x+1}{n+2} - \frac{y+1}{m+2} \approx 0.1961 </math>. Compare this with the maximum likelihood estimate for <math>\delta</math>: <math>\frac{x}{n} - \frac{y}{m} = 0.2</math>.


=====Example 2:=====
u=rand(100,1)
Bayesian Inference for Dose Response
x=-log(u)
h= x.^ .5
mean(h)
%The value obtained using the Monte Carlo method
F = @ (x) sqrt (x). * exp(-x)
quad(F,0,50)
%The value of the real function using Matlab


We conduct an experiment by giving rats one of ten possible doses of a drug, where each subsequent dose is more lethal than the previous one:
'''Example 2'''
:<math>\displaystyle x_1<x_2<...<x_{10}</math><br>
Find <math> I = \displaystyle\int^1_0h(x)\,dx, h(x) = x^3 </math>
For each dose <math>\displaystyle x_i</math> we test n rats and observe <math>\displaystyle Y_i</math>, the number of rats that survive. Therefore,<br>
# <math> \displaystyle I = x^4/4 = 1/4 </math>
:<math>\displaystyle Y_i \sim~ BIN(n, p_i)</math><br>.
# <math>\displaystyle W(x) = x^3*(1-0)</math>
We can assume that the probability of death grows with the concentration of drug given, i.e. <math>\displaystyle p_1<p_2<...<p_{10}</math>. Estimate the dose at which the animals have at least 50% chance of dying.<br>
# <math> Xi \sim~Unif(0,1)</math>
:Let <math>\displaystyle \delta=x_j</math> where <math>\displaystyle j=min\{i|p_i\geq0.5\}</math>
# <math>\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^N(x_i^3)</math>
:We are interested in <math>\displaystyle \delta</math> since any higher concentrations are known to have a higher death rate.<br><br>


'''Solving this analytically is difficult:'''
This example can be illustrated in Matlab using the code below:
:<math>\displaystyle \delta = g(p_1, p_2, ..., p_{10})</math> where g is an unknown function
x = rand (1000)
:<math>\displaystyle \hat{\delta} = \int \int..\int_A \delta f(p_1,p_2,...,p_{10}|Y_1,Y_2,...,Y_{10})\,dp_1dp_2...dp_{10}</math><br>
mean(x^3)
:: where <math>\displaystyle A=\{(p_1,p_2,...,p_{10})|p_1\leq p_2\leq ...\leq p_{10} \}</math><br><br>


'''Process: Monte Carlo'''<br>
'''Example 3'''
We assume that
To estimate an infinite integral
# Draw <math>\displaystyle p_i \sim~ BETA(y_i+1, n-y_i+1)</math>
such as <math> I = \displaystyle\int^\infty_5 h(x)\,dx, h(x) = 3e^{-x} </math>
# Keep sample only if it satisfies <math>\displaystyle p_1\leq p_2\leq ...\leq p_{10}</math>, otherwise discard and try again.
# Substitute in <math> y=\frac{1}{x-5+1} => dy=-\frac{1}{(x-4)^2}dx => dy=-y^2dx </math>
# Compute <math>\displaystyle \delta</math> by finding the first <math>\displaystyle p_i</math> sample with over 50% deaths.
# <math> I = \displaystyle\int^1_0 \frac{3e^{-(\frac{1}{y}+4)}}{-y^2}\,dy </math>
# Repeat process n times to get n estimates for <math>\displaystyle \delta_1, \delta_2, ..., \delta_N </math>.
# <math> w(y) = \frac{3e^{-(\frac{1}{y}+4)}}{-y^2}(1-0)</math>
# <math>\displaystyle \bar{\delta} = \frac{\displaystyle\sum_{\forall i} \delta_i}{N}</math>.
# <math> Y_i \sim~Unif(0,1)</math>
# <math>\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^N(\frac{3e^{-(\frac{1}{y_i}+4)}}{-y_i^2})</math>


For instance, for each dose level <math>X_i</math>, for <math>1<=i<=10</math>, 10 rats are used and it is observed that the numbers that are dying is <math>Y_i</math>, where <math>Y_1 = 4, Y_2 = 3, </math>etc.
==[[Importance Sampling and Monte Carlo Simulation]] - May 28, 2009==
<!-- UNDER CONSTRUCTION! -->


====Importance Sampling====
During this lecture we covered two more examples of Monte Carlo simulation, finishing that topic, and begun talking about Importance Sampling.


In statistics, Importance Sampling helps estimating the properties of a particular distribution. As in the case with the Acceptance/Rejection method, we choose a good distribution from which to simulate the given random variables. The main difference in importance sampling however, is that certain values of the input random variables in a simulation have more impact on the parameter being estimated than others. [Importance Sampling, Wikipedia] The following diagram illustrates a Monte Carlo approximation for g(x):
====Binomial Probability Monte Carlo Simulations====
<br>
<br>
[[File:ImpSampling.PNG]]


As the figure above shows, the uniform distribution <math>U\sim~Unif[0,1]</math> is a proposal distribution to sample from and g(x) is the target distribution. Here we cast the integral <math>\int_{0}^1 g(x)dx</math>, as the expectation with respect to U such that <math>\int_{0}^1 g(x)= E(g(U))</math>. Hence we can approximate by <math>\frac{1}{n}\displaystyle\sum_{i=1}^{n} g(u_i)</math>. <br>
=====Example 1:=====
[Source: Monte Carlo Methods and Importance Sampling, Eric C. Anderson (1999). Retrieved June 9th from URL: http://ib.berkeley.edu/labs/slatkin/eriq/classes/guest_lect/mc_lecture_notes.pdf]
You are given two independent Binomial distributions with probabilities <math>\displaystyle p_1\text{, }p_2</math>. Using a Monte Carlo simulation, approximate the value of <math>\displaystyle \delta</math>, where <math>\displaystyle \delta = p_1 - p_2</math>.<br>
<br>
:<math>\displaystyle X \sim BIN(n, p_1)</math>; <math>\displaystyle Y \sim BIN(m, p_2)</math>; <math>\displaystyle \delta = p_1 - p_2</math><br><br>
<br>
In <math>I = \displaystyle\int h(x)f(x)\,dx</math>, Monte Carlo simulation can be used only if it easy to sample from f(x). Otherwise, another method must be applied. If sampling from f(x) is difficult but there exists a probability distribution function g(x) which is easy to sample from, then <math>I</math> can be written as<br>
:: <math>I = \displaystyle\int h(x)f(x)\,dx </math>
:: <math>= \displaystyle\int \frac{h(x)f(x)}{g(x)}g(x)\,dx</math>
:: <math>= \displaystyle E_g(w(x)) \rightarrow</math>the expectation of w(x) with respect to g(x) and therefore <math>\displaystyle x_1,x_2,...,x_N \sim~ g</math>
:: <math>= \frac{\displaystyle\sum_{i=1}^{N} w(x_i)}{N}</math> where <math>\displaystyle w(x) = \frac{h(x)f(x)}{g(x)}</math><br><br>


'''Process'''<br>
So <math>\displaystyle f(p_1, p_2 | x,y) = \frac{f(x, y|p_1, p_2)f(p_1,p_2)}{f(x,y)}</math> where <math>\displaystyle f(x,y)</math> is a flat distribution and the expected value of <math>\displaystyle \delta</math> is as follows:<br>
# Choose <math>\displaystyle g(x)</math> such that it's easy to sample from.
:<math>\displaystyle E(\delta) = \int\int\delta f(p_1,p_2|X,Y)\,dp_1dp_2</math><br><br>
# Compute <math>\displaystyle w(x)=\frac{h(x)f(x)}{g(x)}</math>
# <math>\displaystyle \hat{I} = \frac{\displaystyle\sum_{i=1}^{N} w(x_i)}{N}</math><br><br>


Since X, Y are independent, we can split the conditional probability distribution:<br>
:<math>\displaystyle f(p_1,p_2|X,Y) \propto f(p_1|X)f(p_2|Y)</math><br><br>


Note: By the law of large number, we can say that <math>\hat{I}</math> converges in probability to <math>I </math>.
We need to find conditional distribution functions for <math>\displaystyle p_1, p_2</math> to draw samples from. In order to get a distribution for the probability 'p' of a Binomial, we have to divide the Binomial distribution by n. This new distribution has the same shape as the original, but is scaled. A Beta distribution is a suitable approximation. Let<br>
:<math>\displaystyle f(p_1 | X) \sim \text{Beta}(x+1, n-x+1)</math> and <math>\displaystyle f(p_2 | Y) \sim \text{Beta}(y+1, m-y+1)</math>, where<br>
:<math>\displaystyle \text{Beta}(\alpha,\beta) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}p^{\alpha-1}(1-p)^{\beta-1}</math><br><br>


'''"Weighted" average'''<br>
'''Process:'''
:The term "importance sampling" is used to describe this method because a higher 'importance' or 'weighting' is given to the values sampled from <math>\displaystyle g(x)</math> that are closer to the original distribution <math>\displaystyle f(x)</math>, which we would ideally like to sample from (but cannot because it is too difficult).<br>
# Draw samples for <math>\displaystyle p_1</math> and <math>\displaystyle p_2</math>: <math>\displaystyle (p_1,p_2)^{(1)}</math>, <math>\displaystyle (p_1,p_2)^{(2)}</math>, ..., <math>\displaystyle (p_1,p_2)^{(n)}</math>;
:<math>\displaystyle I = \int\frac{h(x)f(x)}{g(x)}g(x)\,dx</math>
# Compute <math>\displaystyle \delta = p_1 - p_2</math> in order to get n values for <math>\displaystyle \delta</math>;
:<math>=\displaystyle \int \frac{f(x)}{g(x)}h(x)g(x)\,dx</math>
# <math>\displaystyle \hat{\delta}=\frac{1}{N}\sum_{\forall i}\delta^{(i)}</math>.<br><br>
:<math>=\displaystyle \int \frac{f(x)}{g(x)}E_g(h(x))\,dx</math> which is the same thing as saying that we are applying a regular Monte Carlo Simulation method to <math>\displaystyle\int h(x)g(x)\,dx </math>, taking each result from this process and weighting the more accurate ones (i.e. the ones for which <math>\displaystyle \frac{f(x)}{g(x)}</math> is high) higher.


One can view <math> \frac{f(x)}{g(x)}\ = B(x)</math> as a weight.
'''Matlab Code:'''<br>
:The Matlab code for recreating the above example is as follows:
n=100; %number of trials for X
m=100; %number of trials for Y
x=80; %number of successes for X trials
y=60; %number of successes for y trials
p1=betarnd(x+1, n-x+1, 1, 1000);
p2=betarnd(y+1, m-y+1, 1, 1000);
delta=p1-p2;
mean(delta);


Then <math>\displaystyle \hat{I} = \frac{\displaystyle\sum_{i=1}^{N} w(x_i)}{N} = \frac{\displaystyle\sum_{i=1}^{N} B(x_i)*h(x_i)}{N}</math><br><br>
The mean in this example is given by 0.1938.


i.e. we are computing a weighted sum of <math> h(x_i) </math> instead of a sum
A 95% confidence interval for <math>\delta</math> is represented by the interval between the 2.5% and 97.5% quantiles which covers 95% of the probability distribution. In Matlab, this can be calculated as follows:
q1=quantile(delta,0.025);
q2=quantile(delta,0.975);


===[[A Deeper Look into Importance Sampling]] - June 2, 2009 ===
The interval is approximately <math> 95%  CI \approx (0.06606, 0.32204) </math>
From last class, we have determined that an integral can be written in the form <math>I = \displaystyle\int h(x)f(x)\,dx </math> <math>= \displaystyle\int \frac{h(x)f(x)}{g(x)}g(x)\,dx</math> We continue our discussion of Importance Sampling here.


====Importance Sampling====
The histogram of delta is:<br>
[[File:Delta_hist.jpg‎]]


We can see that the integral <math>\displaystyle\int \frac{h(x)f(x)}{g(x)}g(x)\,dx = \int \frac{f(x)}{g(x)}h(x) g(x)\,dx</math> is just <math> \displaystyle E_g(h(x)) \rightarrow</math>the expectation of h(x) with respect to g(x), where <math>\displaystyle \frac{f(x)}{g(x)} </math> is a weight <math>\displaystyle\beta(x)</math>. In the case where <math>\displaystyle f > g</math>, a greater weight for <math>\displaystyle\beta(x)</math> will be assigned. Thus, the points with more weight are deemed more important, hence "importance sampling". This can be seen as a variance reduction technique.
Note: In this case, we can also find <math>E(\delta)</math> analytically since
<math>E(\delta) = E(p_1 - p_2) = E(p_1) - E(p_2) = \frac{x+1}{n+2} - \frac{y+1}{m+2} \approx 0.1961 </math>. Compare this with the maximum likelihood estimate for <math>\delta</math>: <math>\frac{x}{n} - \frac{y}{m} = 0.2</math>.


=====Problem=====
=====Example 2:=====
The method of Importance Sampling is simple but can lead to some problems. The <math> \displaystyle \hat I </math> estimated by Importance Sampling could have infinite standard error.
Bayesian Inference for Dose Response


Given <math>\displaystyle I= \int w(x) g(x) dx </math>
We conduct an experiment by giving rats one of ten possible doses of a drug, where each subsequent dose is more lethal than the previous one:
<math>= \displaystyle E_g(w(x)) </math>
:<math>\displaystyle x_1<x_2<...<x_{10}</math><br>
<math>= \displaystyle \frac{1}{N}\sum_{i=1}^{N} w(x_i) </math>
For each dose <math>\displaystyle x_i</math> we test n rats and observe <math>\displaystyle Y_i</math>, the number of rats that die. Therefore,<br>
where <math>\displaystyle w(x)=\frac{f(x)h(x)}{g(x)} </math>.
:<math>\displaystyle Y_i \sim~ BIN(n, p_i)</math><br>.
We can assume that the probability of death grows with the concentration of drug given, i.e. <math>\displaystyle p_1<p_2<...<p_{10}</math>. Estimate the dose at which the animals have at least 50% chance of dying.<br>
:Let <math>\displaystyle \delta=x_j</math> where <math>\displaystyle j=min\{i|p_i\geq0.5\}</math>
:We are interested in <math>\displaystyle \delta</math> since any higher concentrations are known to have a higher death rate.<br><br>


Obtaining the second moment,
'''Solving this analytically is difficult:'''
::<math>\displaystyle E[(w(x))^2] </math>
:<math>\displaystyle \delta = g(p_1, p_2, ..., p_{10})</math> where g is an unknown function
::<math>\displaystyle = \int (\frac{h(x)f(x)}{g(x)})^2 g(x) dx</math>
:<math>\displaystyle \hat{\delta} = \int \int..\int_A \delta f(p_1,p_2,...,p_{10}|Y_1,Y_2,...,Y_{10})\,dp_1dp_2...dp_{10}</math><br>
::<math>\displaystyle = \int \frac{h^2(x) f^2(x)}{g^2(x)} g(x) dx </math>
:: where <math>\displaystyle A=\{(p_1,p_2,...,p_{10})|p_1\leq p_2\leq ...\leq p_{10} \}</math><br><br>
::<math>\displaystyle = \int \frac{h^2(x)f^2(x)}{g(x)} dx </math>
 
'''Process: Monte Carlo'''<br>
We assume that
# Draw <math>\displaystyle p_i \sim~ BETA(y_i+1, n-y_i+1)</math>
# Keep sample only if it satisfies <math>\displaystyle p_1\leq p_2\leq ...\leq p_{10}</math>, otherwise discard and try again.
# Compute <math>\displaystyle \delta</math> by finding the first <math>\displaystyle p_i</math> sample with over 50% deaths.
# Repeat process n times to get n estimates for <math>\displaystyle \delta_1, \delta_2, ..., \delta_N </math>.
# <math>\displaystyle \bar{\delta} = \frac{\displaystyle\sum_{\forall i} \delta_i}{N}</math>.


We can see that if <math>\displaystyle g(x) \rightarrow 0 </math>, then <math>\displaystyle E[(w(x))^2] \rightarrow \infty </math>. This occurs if <math>\displaystyle g </math> has a thinner tail than <math>\displaystyle f </math> then <math>\frac{h^2(x)f^2(x)}{g(x)} </math> could be infinitely large. The general idea here is that <math>\frac{f(x)}{g(x)} </math> should not be large.
For instance, for each dose level <math>X_i</math>, for <math>1<=i<=10</math>, 10 rats are used and it is observed that the numbers that are dying is <math>Y_i</math>, where <math>Y_1 = 4, Y_2 = 3, </math>etc.


=====Remark 1=====
====Importance Sampling====
It is evident that <math>\displaystyle g(x) </math> should be chosen such that it has a thicker tail than <math>\displaystyle f(x) </math>.
[[Image:ImpSampling.PNG|thumb|right|300px|'A diagram of a Monte Carlo Approximation of g(x)[Importance Sampling, Wikipedia]]]
If <math>\displaystyle f</math> is large over set <math>\displaystyle A</math> but <math>\displaystyle g</math> is small, then <math>\displaystyle \frac{f}{g} </math> would be large and it would result in a large variance.
In statistics, Importance Sampling helps estimating the properties of a particular distribution. As in the case with the Acceptance/Rejection method, we choose a good distribution from which to simulate the given random variables. The main difference in importance sampling however, is that certain values of the input random variables in a simulation have more impact on the parameter being estimated than others. As shown in the figure, the uniform distribution <math>U\sim~Unif[0,1]</math> is a proposal distribution to sample from and g(x) is the target distribution.


=====Remark 2=====
<br />Here we cast the integral <math>\int_{0}^1 g(x)dx</math>, as the expectation of <math>g(x)</math> with respect to U such that,<br /><br />
It is useful if we can choose <math>\displaystyle g </math> to be similar to <math>\displaystyle f</math> in terms of shape. Ideally, the optimal <math>\displaystyle g </math> should be similar to <math>\displaystyle \left| h(x) \right|f(x)</math>, and have a thicker tail. It's important to take the absolute value of <math>\displaystyle h(x)</math>, since a variance can't be negative. Analytically, we can show that the best <math>\displaystyle g</math> is the one that would result in a variance that is minimized.
<math>I = \int_{0}^1 g(x)= E_U(g(x))</math>. <br /><br />Hence we can approximate this integral by,<br /><br /> <math>\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^{N} g(u_i)</math>. <br><br />
[Source: Monte Carlo Methods and Importance Sampling, Eric C. Anderson (1999). Retrieved June 9th from URL: http://ib.berkeley.edu/labs/slatkin/eriq/classes/guest_lect/mc_lecture_notes.pdf]
<br>
<br>
In <math>I = \displaystyle\int h(x)f(x)\,dx</math>, Monte Carlo simulation can be used only if it easy to sample from f(x). Otherwise, another method must be applied. If sampling from f(x) is difficult but there exists a probability distribution function g(x) which is easy to sample from, then <math>\ I</math> can be written as<br>


=====Remark 3=====
:: <math>\begin{align}I &= \int h(x)f(x)\,dx = \int \frac{h(x)f(x)}{g(x)}g(x)\,dx \\
Choose <math>\displaystyle g </math> such that it is similar to <math>\displaystyle \left| h(x) \right| f(x) </math> in terms of shape. That is, we want <math>\displaystyle g \propto \displaystyle \left| h(x) \right| f(x)  </math>
&= E_g(w(x)) \end{align}</math><br /><br />the expectation of <math>\displaystyle w(x) = \frac{h(x)f(x)}{g(x)}</math> with respect to g(x) and therefore <math>\displaystyle x_1,x_2,...,x_N \sim~ g</math>
:: <math> \hat{I}= \frac{1}{N}\sum_{i=1}^{N} w(x_i)</math> <br><br>


'''Process'''<br>
# Choose <math>\displaystyle g(x)</math> such that it's easy to sample from and draw <math>x_i \sim~ g(x)</math>
# Compute <math> w(x_i)=\frac{h(x_i)f(x_i)}{g(x_i)}</math>
# <math> \hat{I} = \frac{1}{N}\sum_{i=1}^{N} w(x_i)</math><br><br>


====Theorem (Minimum Variance Choice of <math>\displaystyle g</math>) ====
The choice of <math>\displaystyle g</math> that minimizes variance of <math>\hat I</math> is <math>\displaystyle g^*(x)=\frac{\left| h(x) \right| f(x)}{\int \left| h(s) \right| f(s) ds}</math>.


=====Proof:=====
Note: By the law of large numbers, we can say that <math>\hat{I}</math> converges in probability to <math>I </math>.
We know that <math>\displaystyle w(x)=\frac{f(x)h(x)}{g(x)} </math>


The variance of <math>\displaystyle w(x) </math> is
'''"Weighted" average'''<br>
:: <math>\displaystyle Var[w(x)] </math>
:The term "importance sampling" is used to describe this method because a higher 'importance' or 'weighting' is given to the values sampled from <math>\displaystyle g(x)</math> that are closer to the original distribution <math>\displaystyle f(x)</math>, which we would ideally like to sample from (but cannot because it is too difficult).<br>
:: <math>\displaystyle = E[(w(x)^2)] - [E[w(x)]]^2 </math>
:<math>\displaystyle I = \int\frac{h(x)f(x)}{g(x)}g(x)\,dx</math>
:: <math>\displaystyle = \int \left(\frac{f(x)h(x)}{g(x)} \right)^2 g(x) dx - \left[\int \frac{f(x)h(x)}{g(x)}g(x)dx \right]^2 </math>
:<math>=\displaystyle \int \frac{f(x)}{g(x)}h(x)g(x)\,dx</math><br /><br />
:: <math>\displaystyle = \int \left(\frac{f(x)h(x)}{g(x)} \right)^2 g(x) dx - \left[\int f(x)h(x) \right]^2 </math>


As we can see, the second term does not depend on <math>\displaystyle g(x) </math>. Therefore to minimize <math>\displaystyle Var[w(x)] </math> we only need to minimize the first term. In doing so we will use '''Jensen's Inequality'''.
:<math>\displaystyle \int \frac{f(x)}{g(x)}E_g(h(x))\,dx</math> which is like saying that we are applying a regular Monte Carlo Simulation method to <math>\displaystyle\int h(x)g(x)\,dx </math>, and taking each result from this process and weighting the more accurate ones (i.e. the ones for which <math>\displaystyle \frac{f(x)}{g(x)}</math> is high) higher.


One can view <math> \frac{f(x)}{g(x)}\ = B(x)</math> as a weight.


::::::::::<math>\displaystyle ======Aside: Jensen's Inequality====== </math>
Then <math>\displaystyle \hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^{N} w(x_i) = \frac{1}{N}\displaystyle\sum_{i=1}^{N} B(x_i)h(x_i)</math><br><br>
::
If <math>\displaystyle g </math> is a convex function ( twice differentiable and <math>\displaystyle g''(x) \geq 0 </math> ) then <math>\displaystyle g(\alpha x_1 + (1-\alpha)x_2) \leq \alpha g(x_1) + (1-\alpha) g(x_2)</math><br />
Essentially the definition of convexity implies that the line segment between two points on a curve lies above the curve, which can then be generalized to higher dimensions:
::<math>\displaystyle g(\alpha_1 x_1 + \alpha_2 x_2 + ... + \alpha_n x_n) \leq \alpha_1 g(x_1) + \alpha_2 g(x_2) + ... + \alpha_n g(x_n) </math> where <math>\displaystyle \alpha_1 + \alpha_2 + ... + \alpha_n = 1 </math>
::::::::::=======================================================


=====Proof (cont)=====
i.e. we are computing a weighted sum of <math> h(x_i) </math> instead of a sum
Using Jensen's Inequality, <br />
::<math>\displaystyle g(E[x]) \leq E[g(x)] </math> as <math>\displaystyle g(E[x]) = g(p_1 x_1 + ... p_n x_n) \leq p_1 g(x_1) + ... + p_n g(x_n) = E[g(x)] </math>
Therefore
::<math>\displaystyle E[(w(x))^2] \geq (E[\left| w(x) \right|])^2 </math>
::<math>\displaystyle E[(w(x))^2] \geq \left(\int \left| \frac{f(x)h(x)}{g(x)} \right| g(x) dx \right)^2 </math> <br />
and
::<math>\displaystyle \left(\int \left| \frac{f(x)h(x)}{g(x)} \right| g(x) dx \right)^2 </math>
::<math>\displaystyle = \left(\int \frac{f(x)\left| h(x) \right|}{g(x)} g(x) dx \right)^2 </math>
::<math>\displaystyle = \left(\int \left| h(x) \right| f(x) dx \right)^2 </math> since <math>\displaystyle f </math> and <math>\displaystyle g</math> are density functions, <math>\displaystyle f, g </math> cannot be negative.  <br />


Thus, this is a lower bound on <math>\displaystyle E[(w(x))^2]</math>. If we replace <math>\displaystyle g^*(x) </math> into <math>\displaystyle E[g^*(x)]</math>, we can see that the result is as we require. Details omitted.<br />
===[[A Deeper Look into Importance Sampling]] - June 2, 2009 ===
From last class, we have determined that an integral can be written in the form <math>I = \displaystyle\int h(x)f(x)\,dx </math> <math>= \displaystyle\int \frac{h(x)f(x)}{g(x)}g(x)\,dx</math> We continue our discussion of Importance Sampling here.


However, this is mostly of theoritical interest. In practice, it is impossible or very difficult to compute <math>\displaystyle g^*</math>.
====Importance Sampling====


Note: Jensen's inequality is actually unnecessary here. We just use it to get <math>E[(w(x))^2] \geq (E[|w(x)|])^2</math>, which could be derived using variance properties: <math>0 \leq Var[|w(x)|] = E[|w(x)|^2] - (E[|w(x)|])^2 = E[(w(x))^2] - (E[|w(x)|])^2</math>.
We can see that the integral <math>\displaystyle\int \frac{h(x)f(x)}{g(x)}g(x)\,dx = \int \frac{f(x)}{g(x)}h(x) g(x)\,dx</math> is just <math> \displaystyle E_g(h(x)) \rightarrow</math>the expectation of h(x) with respect to g(x), where <math>\displaystyle \frac{f(x)}{g(x)} </math> is a weight <math>\displaystyle\beta(x)</math>. In the case where <math>\displaystyle f > g</math>, a greater weight for <math>\displaystyle\beta(x)</math> will be assigned. Thus, the points with more weight are deemed more important, hence "importance sampling". This can be seen as a variance reduction technique.


===[[Importance Sampling and Markov Chain Monte Carlo (MCMC)]] - June 4, 2009 ===
=====Problem=====
Remark 4:
The method of Importance Sampling is simple but can lead to some problems. The <math> \displaystyle \hat I </math> estimated by Importance Sampling could have infinite standard error.
<math> I = \displaystyle\int^\ h(x)f(x)\,dx </math>
:: <math>= \displaystyle\int \ h(x)\frac{f(x)}{g(x)}g(x)\,dx</math>
:: <math>= \displaystyle\sum_{i=1}^{N} h(x_i)b(x_i)</math> where <math>\displaystyle b(x_i) = \frac{f(x_i)}{g(x_i)}</math>
:: <math>=\displaystyle \frac{\int\ h(x)f(x)\,dx}{\int f(x) dx}</math>
Apply the idea of importance sampling to both the numerator and denominator:
:: <math>=\displaystyle \frac{\int\ h(x)\frac{f(x)}{g(x)}g(x)\,dx}{\int\frac{f(x)}{g(x)}g(x) dx}</math>
:: <math>= \displaystyle\frac{\sum_{i=1}^{N} h(x_i)b(x_i)}{\sum_{1=1}^{N} b(x_i)}</math>
:: <math>= \displaystyle\sum_{i=1}^{N} h(x_i)b'(x_i)</math> where <math>\displaystyle b'(x_i) = \frac{b(x_i)}{\sum_{i=1}^{N} b(x_i)}</math>
The above results in the following form of Importance Sampling:
::<math> \hat{I} = \displaystyle\sum_{i=1}^{N} b'(x_i)h(x_i) </math> where <math>\displaystyle b'(x_i) = \frac{b(x_i)}{\sum_{i=1}^{N} b(x_i)}</math>
This is very important and useful especially when f is known only up to a proportionality constant. Often, this is the case in the Bayesian approach when f is a posterior density function.
==== Example of Importance Sampling ====
Estimate <math> I = \displaystyle\ Pr (Z>3) </math> when <math>Z \sim~ N(0,1) </math>
::<math> I = \displaystyle\int^\infty_3 f(x)\,dx \approx 0.0013 </math>
::<math>  = \displaystyle\int^\infty_3 h(x)f(x)\,dx </math>
:Define <math>
h(x) = \begin{cases}
0, & \text{if } x < 3 \\
1, & \text{if } 3 \leq x
\end{cases}</math>


<br>'''Approach I: Monte Carlo'''<br>
Given <math>\displaystyle I= \int w(x) g(x) dx </math>
:<math>\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nh(x_i)</math> where <math>X \sim~ N(0,1) </math>
<math>= \displaystyle E_g(w(x)) </math>
The idea here is to sample from normal distribution and to count number of observations that is greater than 3.
<math>= \displaystyle \frac{1}{N}\sum_{i=1}^{N} w(x_i) </math>
where <math>\displaystyle w(x)=\frac{f(x)h(x)}{g(x)} </math>.


The variability will be high in this case if using Monte Carlo since this is considered a low probability event (a tail event), and different runs may give significantly different values. For example: the first run may give only 3 occurences (i.e if we generate 1000 samples, thus the probability will be .003), the second run may give 5 occurences (probability .005), etc.
Obtaining the second moment,
 
::<math>\displaystyle E[(w(x))^2] </math>
This example can be illustrated in Matlab using the code below (we will be generating 100 samples in this case):
::<math>\displaystyle = \int (\frac{h(x)f(x)}{g(x)})^2 g(x) dx</math>
::<math>\displaystyle = \int \frac{h^2(x) f^2(x)}{g^2(x)} g(x) dx </math>
::<math>\displaystyle = \int \frac{h^2(x)f^2(x)}{g(x)} dx </math>
 
We can see that if <math>\displaystyle g(x) \rightarrow 0 </math>, then <math>\displaystyle E[(w(x))^2] \rightarrow \infty </math>. This occurs if <math>\displaystyle g </math> has a thinner tail than <math>\displaystyle f </math> then <math>\frac{h^2(x)f^2(x)}{g(x)} </math> could be infinitely large. The general idea here is that <math>\frac{f(x)}{g(x)} </math> should not be large.


format long
=====Remark 1=====
for i = 1:100
It is evident that <math>\displaystyle g(x) </math> should be chosen such that it has a thicker tail than <math>\displaystyle f(x) </math>.
    a(i) = sum(randn(100,1)>=3)/100;
If <math>\displaystyle f</math> is large over set <math>\displaystyle A</math> but <math>\displaystyle g</math> is small, then <math>\displaystyle \frac{f}{g} </math> would be large and it would result in a large variance.
end
meanMC  = mean(a)
varMC  = var(a)


On running this, we get <math> meanMC = 0.0005 </math> and <math> varMC \approx 1.31313 * 10^{-5} </math>
=====Remark 2=====
It is useful if we can choose <math>\displaystyle g </math> to be similar to <math>\displaystyle f</math> in terms of shape. Ideally, the optimal <math>\displaystyle g </math> should be similar to <math>\displaystyle \left| h(x) \right|f(x)</math>, and have a thicker tail. It's important to take the absolute value of <math>\displaystyle h(x)</math>, since a variance can't be negative. Analytically, we can show that the best <math>\displaystyle g</math> is the one that would result in a variance that is minimized.


<br>'''Approach II: Importance Sampling'''<br>
=====Remark 3=====
Choose <math>\displaystyle g </math> such that it is similar to <math>\displaystyle \left| h(x) \right| f(x) </math> in terms of shape. That is, we want <math>\displaystyle g \propto \displaystyle \left| h(x) \right| f(x)  </math>


:<math>\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nh(x_i)\frac{f(x_i)}{g(x_i)}</math> where <math>f(x)</math> is standard normal and <math>g(x)</math> needs to be chosen wisely so that it is similar to the target distribution.


:Let <math>g(x) \sim~ N(4,1) </math>
====Theorem (Minimum Variance Choice of <math>\displaystyle g</math>) ====
:<math>b(x) = \frac{f(x)}{g(x)} = e^{(8-4x)}</math>
The choice of <math>\displaystyle g</math> that minimizes variance of <math>\hat I</math> is <math>\displaystyle g^*(x)=\frac{\left| h(x) \right| f(x)}{\int \left| h(s) \right| f(s) ds}</math>.
:<math>\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nb(x_i)h(x_i)</math>


This example can be illustrated in Matlab using the code below:
=====Proof:=====
for j = 1:100
We know that <math>\displaystyle w(x)=\frac{f(x)h(x)}{g(x)} </math>
    N = 100;
    x = randn (N,1) + 4;
      for ii = 1:N
          h = x(ii)>=3;
          b = exp(8-4*x(ii));
          w(ii) = h*b;
      end
  I(j) = sum(w)/N;
end
MEAN = mean(I)
VAR = var(I)


Running the above code gave us <math> MEAN \approx 0.001353 </math> and <math> VAR \approx 9.666 * 10^{-8} </math> which is very close to 0, and is much less than the variability observed when using Monte Carlo
The variance of <math>\displaystyle w(x) </math> is
:: <math>\displaystyle Var[w(x)] </math>
:: <math>\displaystyle = E[(w(x)^2)] - [E[w(x)]]^2 </math>
:: <math>\displaystyle = \int \left(\frac{f(x)h(x)}{g(x)} \right)^2 g(x) dx - \left[\int \frac{f(x)h(x)}{g(x)}g(x)dx \right]^2 </math>
:: <math>\displaystyle = \int \left(\frac{f(x)h(x)}{g(x)} \right)^2 g(x) dx - \left[\int f(x)h(x) \right]^2 </math>


==== Markov Chain Monte Carlo (MCMC) ====
As we can see, the second term does not depend on <math>\displaystyle g(x) </math>. Therefore to minimize <math>\displaystyle Var[w(x)] </math> we only need to minimize the first term. In doing so we will use '''Jensen's Inequality'''.
Before we tackle Markov chain Monte Carlo methods, which essentially are a 'class of algorithms for sampling from probability distributions based on constructing a Markov chain' [MCMC, Wikipedia], we will first give a formal definition of Markov Chain.  


Consider the same integral:
<math> I = \displaystyle\int^\ h(x)f(x)\,dx </math>


Idea: If <math>\displaystyle X_1, X_2,...X_N</math> is a Markov Chain with stationary distribution f(x), then under some conditions
::::::::::<math>\displaystyle ======Aside: Jensen's Inequality====== </math>
::
If <math>\displaystyle g </math> is a convex function ( twice differentiable and <math>\displaystyle g''(x) \geq 0 </math> ) then <math>\displaystyle g(\alpha x_1 + (1-\alpha)x_2) \leq \alpha g(x_1) + (1-\alpha) g(x_2)</math><br />
Essentially the definition of convexity implies that the line segment between two points on a curve lies above the curve, which can then be generalized to higher dimensions:
::<math>\displaystyle g(\alpha_1 x_1 + \alpha_2 x_2 + ... + \alpha_n x_n) \leq \alpha_1 g(x_1) + \alpha_2 g(x_2) + ... + \alpha_n g(x_n) </math> where <math>\displaystyle \alpha_1 + \alpha_2 + ... + \alpha_n = 1 </math>
::::::::::=======================================================


:<math>\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nh(x_i)\xrightarrow{P}\int^\ h(x)f(x)\,dx = I</math>
=====Proof (cont)=====
<br>'''Stochastic Process:'''<br>
Using Jensen's Inequality, <br />
A Stochastic Process is a collection of random variables <math>\displaystyle \{ X_t : t \in T \}</math>
::<math>\displaystyle g(E[x]) \leq E[g(x)] </math> as <math>\displaystyle g(E[x]) = g(p_1 x_1 + ... p_n x_n) \leq p_1 g(x_1) + ... + p_n g(x_n) = E[g(x)] </math>
*'''State Space Set:'''<math>\displaystyle X </math>is the set that random variables <math>\displaystyle X_t</math> takes values from.
Therefore
*'''Indexed Set:'''<math>\displaystyle T </math>is the set that t takes values from, which could be discrete or continuous in general, but we are only interested in discrete case in this course.
::<math>\displaystyle E[(w(x))^2] \geq (E[\left| w(x) \right|])^2 </math>
::<math>\displaystyle E[(w(x))^2] \geq \left(\int \left| \frac{f(x)h(x)}{g(x)} \right| g(x) dx \right)^2 </math> <br />
and
::<math>\begin{align} \displaystyle \left(\int \left| \frac{f(x)h(x)}{g(x)} \right| g(x) dx \right)^2 &= \left(\int \frac{f(x)\left| h(x) \right|}{g(x)} g(x) dx \right)^2 \\ &= \left(\int \left| h(x) \right| f(x) dx \right)^2 \end{align}</math><br /><br /> since <math>\displaystyle f </math> and <math>\displaystyle g</math> are density functions, <math>\displaystyle f, g </math> are non negative. <br />


<br>'''Example 1'''<br>
Thus, this is a lower bound on <math>\displaystyle E[(w(x))^2]</math>. If we replace <math>\displaystyle g^*(x) </math> into <math>\displaystyle E[g^*(x)]</math>, we can see that the result is as we require. Details omitted.<br />
i.i.d random variables
:<math> \{ X_t : t \in T \}, X_t \in X </math>
:<math> X = \{0, 1, 2, 3, 4, 5, 6, 7, 8\} \rightarrow</math>'''State Space'''
:<math> T = \{1, 2, 3, 4, 5\} \rightarrow</math>'''Indexed Set'''


<br>'''Example 2'''<br>
However, this is mostly of theoritical interest. In practice, it is impossible or very difficult to compute <math>\displaystyle g^*</math>.
:<math>\displaystyle X_t</math>: price of a stock
:<math>\displaystyle t</math>: opening date of the market
::
'''Basic Fact:''' In general, if we have random variables <math>\displaystyle X_1,...X_n</math>
:<math>\displaystyle f(X_1,...X_n)= f(X_1)f(X_2|X_1)f(X_3|X_2,X_1)...f(X_n|X_n-1,...,X_1)</math>
:<math>\displaystyle f(X_1,...X_n)= \prod_{i = 1}^n f(X_i|Past_i)</math> where <math>\displaystyle Past_i = (X_{i-1}, X_{i-2},...,X_1)</math>
<br>'''Markov Chain:'''<br>
A Markov Chain is a special form of stochastic process in which <math>\displaystyle X_t</math> depends only on <math> X_t-1</math>.


For example,
Note: Jensen's inequality is actually unnecessary here. We just use it to get <math>E[(w(x))^2] \geq (E[|w(x)|])^2</math>, which could be derived using variance properties: <math>0 \leq Var[|w(x)|] = E[|w(x)|^2] - (E[|w(x)|])^2 = E[(w(x))^2] - (E[|w(x)|])^2</math>.
:<math>\displaystyle f(X_1,...X_n)= f(X_1)f(X_2|X_1)f(X_3|X_2)...f(X_n|X_n-1)</math>


<br>'''Transition Probability:'''<br>
===[[Importance Sampling and Markov Chain Monte Carlo (MCMC)]] - June 4, 2009 ===
The probability of going from one state to another state.
Remark 4:<br />
:<math>p_{ij} = \Pr(X=X_j\mid X= X_i). \,</math>
:<math>\begin{align} I &= \int^\ h(x)f(x)\,dx \\
        &= \int \ h(x)\frac{f(x)}{g(x)}g(x)\,dx \\
      \hat{I}&=\frac{1}{N} \sum_{i=1}^{N} h(x_i)b(x_i)\end{align}</math><br /><br />
where <math>\displaystyle b(x_i) = \frac{f(x_i)}{g(x_i)}</math><br /><br />
:<math>I =\displaystyle \frac{\int\ h(x)f(x)\,dx}{\int f(x) dx}</math>
Apply the idea of importance sampling to both the numerator and denominator:
:: <math>=\displaystyle \frac{\int\ h(x)\frac{f(x)}{g(x)}g(x)\,dx}{\int\frac{f(x)}{g(x)}g(x) dx}</math>
:: <math>= \displaystyle\frac{\sum_{i=1}^{N} h(x_i)b(x_i)}{\sum_{1=1}^{N} b(x_i)}</math>
:: <math>= \displaystyle\sum_{i=1}^{N} h(x_i)b'(x_i)</math> where <math>\displaystyle b'(x_i) = \frac{b(x_i)}{\sum_{i=1}^{N} b(x_i)}</math>
The above results in the following form of Importance Sampling:
::<math> \hat{I} = \displaystyle\sum_{i=1}^{N} b'(x_i)h(x_i) </math> where <math>\displaystyle b'(x_i) = \frac{b(x_i)}{\sum_{i=1}^{N} b(x_i)}</math>
This is very important and useful especially when f is known only up to a proportionality constant. Often, this is the case in the Bayesian approach when f is a posterior density function.
==== Example of Importance Sampling ====
Estimate <math> I = \displaystyle\ Pr (Z>3) </math> when <math>Z \sim~ N(0,1)</math><br /><br />
:<math>\begin{align} Pr(Z>3) &= \int^\infty_3 f(x)\,dx \approx 0.0013 \\
  &= \int^\infty_{-\infty} h(x)f(x)\,dx \end{align}</math><br />
:Where <math>
h(x) = \begin{cases}
0, & \text{if } x < 3 \\
1, & \text{if } x \ge 3
\end{cases}</math>
 
<br>'''Approach I: Monte Carlo'''<br>
:<math>\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nh(x_i)</math> where <math>X \sim~ N(0,1) </math>
The idea here is to sample from normal distribution and to count number of observations that is greater than 3.


<br>'''Transition Matrix:'''<br>
The variability will be high in this case if using Monte Carlo since this is considered a low probability event (a tail event), and different runs may give significantly different values. For example: the first run may give only 3 occurences (i.e if we generate 1000 samples, thus the probability will be .003), the second run may give 5 occurences (probability .005), etc.
For n states, transition matrix P is an <math>N \times N</math> matrix with entries <math>\displaystyle P_{ij}</math> as below:


:<math>P=\left(\begin{matrix}p_{1,1}&p_{1,2}&\dots&p_{1,j}&\dots\\
This example can be illustrated in Matlab using the code below (we will be generating 100 samples in this case):
p_{2,1}&p_{2,2}&\dots&p_{2,j}&\dots\\
\vdots&\vdots&\ddots&\vdots&\ddots\\
p_{i,1}&p_{i,2}&\dots&p_{i,j}&\dots\\
\vdots&\vdots&\ddots&\vdots&\ddots
\end{matrix}\right)</math>


<br>'''Example:'''<br>
format long
A "Random Walk" is an example of a Markov Chain. Let's suppose that the direction of our next step is decided in a probabilistic way. The probability of moving to the right is <math>\displaystyle Pr(heads) = p</math>. And the probability of moving to the left is <math>\displaystyle Pr(tails) = q = 1-p </math>. Once the first or the last state is reached, then we stop. The transition matrix that express this process is shown as below:
for i = 1:100
:<math>P=\left(\begin{matrix}1&0&\dots&0&\dots\\
    a(i) = sum(randn(100,1)>=3)/100;
p&0&q&0&\dots\\
end
0&p&0&q&0\dots\\
meanMC  = mean(a)
\vdots&\vdots&\ddots&\vdots&\ddots\\
varMC  = var(a)
p_{i,1}&p_{i,2}&\dots&p_{i,j}&\dots\\
\vdots&\vdots&\ddots&\vdots&\ddots\\
0&0&\dots&0&1
\end{matrix}\right)</math>


<br /><br /><br />
On running this, we get <math> meanMC = 0.0005 </math> and <math> varMC \approx 1.31313 * 10^{-5} </math>


===<big>'''[[Markov Chain Definitions]]''' - June 9, 2009</big>===
<br>'''Approach II: Importance Sampling'''<br>
Practical application for estimation:
The general concept for the application of this lies within having a set of generated <math>x_i</math> which approach a distribution <math>f(x)</math> so that a variation of importance estimation can be used to estimate an integral in the form<br />
<math> I = \displaystyle\int^\ h(x)f(x)\,dx </math> by <math>\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nh(x_i)</math><br />
All that is required is a Markov chain which eventually converges to <math>f(x)</math>.
<br /><br />
In the previous example, the entries <math>p_{ij}</math> in the transition matrix <math>P</math> represent the probability of reaching state <math>j</math> from state <math>i</math> after one step.  For this reason, the sum over all entries j in a particular column sum to 1, as this itself must be a pmf if a transition from <math>i</math> is to lead to a state still within the state set for <math>X_t</math>.


'''Homogeneous Markov Chain'''<br />
:<math>\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nh(x_i)\frac{f(x_i)}{g(x_i)}</math> where <math>f(x)</math> is standard normal and <math>g(x)</math> needs to be chosen wisely so that it is similar to the target distribution.
The probability matrix <math>P</math> is the same for all indicies <math>n\in T</math>.
<math>\displaystyle Pr(X_n=j|X_{n-1}=i)= Pr(X_1=j|X_0=i)</math>


If we denote the pmf of <math>X_n</math> by a probability vector <math>\frac{}{}\mu_n = [P(X_n=x_1),P(X_n=x_2),..,P(X_n=x_i)]</math> <br />
:Let <math>g(x) \sim~ N(4,1) </math>
where <math>i</math> denotes an ordered index of all possible states of <math>X</math>.<br />
:<math>b(x) = \frac{f(x)}{g(x)} = e^{(8-4x)}</math>
Then we have a definition for the<br />
:<math>\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nb(x_i)h(x_i)</math>
'''marginal probabilty''' <math>\frac{}{}\mu_n(i) = P(X_n=i)</math><br />
where we simplify <math>X_n</math> to represent the ordered index of a state rather than the state itself.
<br /><br />
From this definition it can be shown that,
<math>\frac{}{}\mu_{n-1}P=\mu_0P^n</math>


<big>'''Proof:'''</big>
This example can be illustrated in Matlab using the code below:
for j = 1:100
    N = 100;
    x = randn (N,1) + 4;
      for ii = 1:N
          h = x(ii)>=3;
          b = exp(8-4*x(ii));
          w(ii) = h*b;
      end
  I(j) = sum(w)/N;
end
MEAN = mean(I)
VAR = var(I)


<math>\mu_{n-1}P=[\sum_{\forall i}(\mu_{n-1}(i))P_{i1},\sum_{\forall i}(\mu_{n-1}(i))P_{i2},..,\sum_{\forall i}(\mu_{n-1}(i))P_{ij}]</math>
Running the above code gave us <math> MEAN \approx 0.001353 </math> and <math> VAR \approx 9.666 * 10^{-8} </math> which is very close to 0, and is much less than the variability observed when using Monte Carlo
and since
<blockquote>
<math>\sum_{\forall i}(\mu_{n-1}(i))P_{ij}=\sum_{\forall i}P(X_n=x_i)Pr(X_n=j|X_{n-1}=i)=\sum_{\forall i}P(X_n=x_i)\frac{Pr(X_n=j,X_{n-1}=i)}{P(X_n=x_i)}</math>
<math>=\sum_{\forall i}Pr(X_n=j,X_{n-1}=i)=Pr(X_n=j)=\mu_{n}(j)</math>


</blockquote>
==== Markov Chain Monte Carlo (MCMC) ====
Therefore,<br />
Before we tackle Markov chain Monte Carlo methods, which essentially are a 'class of algorithms for sampling from probability distributions based on constructing a Markov chain' [MCMC, Wikipedia], we will first give a formal definition of Markov Chain.  
<math>\frac{}{}\mu_{n-1}P=[\mu_{n}(1),\mu_{n}(2),...,\mu_{n}(i)]=\mu_{n}</math>


With this, it is possible to define <math>P(n)</math> as an n-step transition matrix where <math>\frac{}{}P_{ij}(n) = Pr(X_n=j|X_0=i)</math><br />
Consider the same integral:
<math> I = \displaystyle\int^\ h(x)f(x)\,dx </math>


'''Theorem''': <math>\frac{}{}\mu_n=\mu_0P^n</math><br />
Idea: If <math>\displaystyle X_1, X_2,...X_N</math> is a Markov Chain with stationary distribution f(x), then under some conditions
'''Proof''':  <math>\frac{}{}\mu_n=\mu_{n-1}P</math> From the previous conclusion<br />
<math>\frac{}{}=\mu_{n-2}PP=...=\mu_0\prod_{i = 1}^nP</math> And since this is a homogeneous Markov chain, <math>P</math> does not depend on <math>i</math> so<br />
<math>\frac{}{}=\mu_0P^n</math>


From this it becomes easy to define the n-step transition matrix as <math>\frac{}{}P(n)=P^n</math>
:<math>\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nh(x_i)\xrightarrow{P}\int^\ h(x)f(x)\,dx = I</math>
<br>'''Stochastic Process:'''<br>
A Stochastic Process is a collection of random variables <math>\displaystyle \{ X_t : t \in T \}</math>
*'''State Space Set:'''<math>\displaystyle X </math>is the set that random variables <math>\displaystyle X_t</math> takes values from.
*'''Indexed Set:'''<math>\displaystyle T </math>is the set that t takes values from, which could be discrete or continuous in general, but we are only interested in discrete case in this course.


====Summary of definitions====
<br>'''Example 1'''<br>
*'''transition matrix''' is an NxN when <math>N=|X|</math> matrix with <math>P_{ij}=Pr(X_1=j|X_0=i)</math> where <math>i,j \in X</math><br />
i.i.d random variables
*'''n-step transition matrix''' also NxN with <math>P_{ij}(n)=Pr(X_n=j|X_0=i)</math><br />
:<math> \{ X_t : t \in T \}, X_t \in X </math>
*'''marginal (probability of X)'''<math>\mu_n(i) = Pr(X_n=i)</math><br />
:<math> X = \{0, 1, 2, 3, 4, 5, 6, 7, 8\} \rightarrow</math>'''State Space'''
*'''Theorem:''' <math>P_n=P^n</math><br />
:<math> T = \{1, 2, 3, 4, 5\} \rightarrow</math>'''Indexed Set'''
*'''Theorem:''' <math>\mu_n=\mu_0P^n</math><br />
---


====Definitions of different types of state sets====
<br>'''Example 2'''<br>
Define <math>i,j \in</math> State Space<br />
:<math>\displaystyle X_t</math>: price of a stock
If <math>P_{ij}(n) > 0</math> for some <math>n</math> , then we say <math>i</math> reaches <math>j</math> denoted by <math>i\longrightarrow j</math> <br />
:<math>\displaystyle t</math>: opening date of the market
This also mean j is accessible by i: <math>j\longleftarrow i</math> <br />
::
If <math>i\longrightarrow j</math> and <math>j\longrightarrow i</math> then we say <math>i</math> and <math>j</math> communicate, denoted by <math>i\longleftrightarrow j</math>
'''Basic Fact:''' In general, if we have random variables <math>\displaystyle X_1,...X_n</math>
<br /><br />
:<math>\displaystyle f(X_1,...X_n)= f(X_1)f(X_2|X_1)f(X_3|X_2,X_1)...f(X_n|X_n-1,...,X_1)</math>
'''Theorems'''<br />
:<math>\displaystyle f(X_1,...X_n)= \prod_{i = 1}^n f(X_i|Past_i)</math> where <math>\displaystyle Past_i = (X_{i-1}, X_{i-2},...,X_1)</math>
1) <math>i\longleftrightarrow i</math><br />
<br>'''Markov Chain:'''<br>
2) <math>i\longleftrightarrow j \Rightarrow j\longleftrightarrow i</math><br />
A Markov Chain is a special form of stochastic process in which <math>\displaystyle X_t</math> depends only on <math> \displaystyle X_{t-1}</math>.
3) If <math>i\longleftrightarrow j,j\longleftrightarrow k\Rightarrow i\longleftrightarrow k</math><br />
4) The set of states of <math>X</math> can be written as a unique disjoint union of subsets (equivalence classes) <math>X=X_1\bigcup X_2\bigcup ...\bigcup X_k,k>0 </math> where two states <math>i</math> and <math>j</math> communicate <math>IFF</math> they belong to the same subset
<br /><br />
'''More Definitions'''<br />
A set is '''Irreducible''' if all states communicate with each other (has only one equivalence class).<br />
A subset of states is '''Closed''' if once you enter it, you can never leave.<br />
A subset of states is '''Open''' if once you leave it, you can never return.<br />
An '''Absorbing Set''' is a closed set with only 1 element (i.e. consists of a single state).<br />


<b>Note</b>
For example,
*We cannot have <math>\displaystyle i\longleftrightarrow j</math> with i recurrent and j transient since <math>\displaystyle i\longleftrightarrow j \Rightarrow j\longleftrightarrow i</math>.
:<math>\displaystyle f(X_1,...X_n)= f(X_1)f(X_2|X_1)f(X_3|X_2)...f(X_n|X_{n-1})</math>
*All states in an open class are transient.
*A Markov Chain with a finite number of states must have at least 1 recurrent state.
*A closed class with an infinite number of states has all transient or all recurrent states.<br />


===[[Again on Markov Chain]] - June 11, 2009===
<br>'''Transition Probability:'''<br>
The probability of going from one state to another state.
:<math>p_{ij} = \Pr(X_{n}=j\mid X_{n-1}= i). \,</math>


<br>'''Transition Matrix:'''<br>
For n states, transition matrix P is an <math>N \times N</math> matrix with entries <math>\displaystyle P_{ij}</math> as below:


====Decomposition of Markov chain====
:<math>P=\left(\begin{matrix}p_{1,1}&p_{1,2}&\dots&p_{1,j}&\dots\\
 
p_{2,1}&p_{2,2}&\dots&p_{2,j}&\dots\\
In the previous lecture it was shown that a Markov Chain can be written as the disjoint union of its classes. This decomposition is always possible and it is reduced to one class only in the case of an irreducible chain.
\vdots&\vdots&\ddots&\vdots&\ddots\\
p_{i,1}&p_{i,2}&\dots&p_{i,j}&\dots\\
\vdots&\vdots&\ddots&\vdots&\ddots
\end{matrix}\right)</math>


<br>'''Example:'''<br>
<br>'''Example:'''<br>
Let <math>\displaystyle X = \{1, 2, 3, 4\}</math> and the transition matrix be:
A "Random Walk" is an example of a Markov Chain. Let's suppose that the direction of our next step is decided in a probabilistic way. The probability of moving to the right is <math>\displaystyle Pr(heads) = p</math>. And the probability of moving to the left is <math>\displaystyle Pr(tails) = q = 1-p </math>. Once the first or the last state is reached, then we stop. The transition matrix that express this process is shown as below:
 
:<math>P=\left(\begin{matrix}1&0&\dots&0&\dots\\
 
p&0&q&0&\dots\\
:<math>P=\left(\begin{matrix}1/3&2/3&0&0\\
0&p&0&q&0\dots\\
2/3&1/3&0&0\\
\vdots&\vdots&\ddots&\vdots&\ddots\\
1/4&1/4&1/4&1/4\\
p_{i,1}&p_{i,2}&\dots&p_{i,j}&\dots\\
0&0&0&1
\vdots&\vdots&\ddots&\vdots&\ddots\\
0&0&\dots&0&1
\end{matrix}\right)</math>
\end{matrix}\right)</math>


<br /><br /><br />


The decomposition in classes is:
===<big>'''[[Markov Chain Definitions]]''' - June 9, 2009</big>===
::::class 1: <math>\displaystyle \{1, 2\} \rightarrow </math> From the matrix we see that the states 1 and 2 have only <math>\displaystyle P_{12}</math> and <math>\displaystyle P_{21}</math> as nonzero transition probability
Practical application for estimation:
::::class 2: <math>\displaystyle \{3\} \rightarrow </math> The state 3 can go to every other state but none of the others can go to it
The general concept for the application of this lies within having a set of generated <math>x_i</math> which approach a distribution <math>f(x)</math> so that a variation of importance estimation can be used to estimate an integral in the form<br />
::::class 3: <math>\displaystyle \{4\} \rightarrow </math> This is an absorbing state since it is a close class and there is only one element
<math> I = \displaystyle\int^\ h(x)f(x)\,dx </math> by <math>\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nh(x_i)</math><br />
::
All that is required is a Markov chain which eventually converges to <math>f(x)</math>.
<br /><br />
In the previous example, the entries <math>p_{ij}</math> in the transition matrix <math>P</math> represent the probability of reaching state <math>j</math> from state <math>i</math> after one step.  For this reason, the sum over all entries in a particular row sum to 1, as this itself must be a pmf if a transition from <math>i</math> is to lead to a state still within the state set for <math>X_t</math>.


====Recurrent and Transient states====
'''Homogeneous Markov Chain'''<br />
The probability matrix <math>P</math> is the same for all indicies <math>n\in T</math>.
<math>\displaystyle Pr(X_n=j|X_{n-1}=i)= Pr(X_1=j|X_0=i)</math>


A state i is called <math>\emph{recurrent}</math> or <math>\emph{persistent}</math> if
If we denote the pmf of <math>X_n</math> by a probability vector <math>\frac{}{}\mu_n = [P(X_n=x_1),P(X_n=x_2),..,P(X_n=x_i)]</math> <br />
:<math>\displaystyle Pr(x_{n}=i</math> for some <math>\displaystyle n\geq 1 | x_{0}=i)=1 </math>
where <math>i</math> denotes an ordered index of all possible states of <math>X</math>.<br />
That means that the probability to come back to the state i, starting from the state i, is 1.
Then we have a definition for the<br />
'''marginal probabilty''' <math>\frac{}{}\mu_n(i) = P(X_n=i)</math><br />
where we simplify <math>X_n</math> to represent the ordered index of a state rather than the state itself.
<br /><br />
From this definition it can be shown that,
<math>\frac{}{}\mu_{n-1}P=\mu_0P^n</math>


If it is not the case (ie. probability less than 1), then state i is <math>\emph{transient} </math>.
<big>'''Proof:'''</big>


It is straight forward to prove that a finite irreducible chain is recurrent.
<math>\mu_{n-1}P=[\sum_{\forall i}(\mu_{n-1}(i))P_{i1},\sum_{\forall i}(\mu_{n-1}(i))P_{i2},..,\sum_{\forall i}(\mu_{n-1}(i))P_{ij}]</math>
::
and since
<br>'''Theorem'''<br>
<blockquote>
Given a Markov chain,  
<math>\sum_{\forall i}(\mu_{n-1}(i))P_{ij}=\sum_{\forall i}P(X_{n-1}=i)Pr(X_n=j|X_{n-1}=i)=\sum_{\forall i}P(X_{n-1}=i)\frac{Pr(X_n=j,X_{n-1}=i)}{P(X_{n-1}=i)}</math>
<br>A state <math>\displaystyle i</math> is <math>\emph{recurrent}</math> if and only if <math>\displaystyle \sum_{\forall n}P_{ii}(n)=\infty</math>
<math>=\sum_{\forall i}Pr(X_n=j,X_{n-1}=i)=Pr(X_n=j)=\mu_{n}(j)</math>  
<br>A state <math>\displaystyle i</math> is <math>\emph{transient}</math> if and only if <math>\displaystyle \sum_{\forall n}P_{ii}(n)< \infty</math>


<br>'''Properties'''<br>
</blockquote>
*If <math>\displaystyle i</math> is <math>\emph{recurrent}</math> and <math>i\longleftrightarrow j</math> then <math>\displaystyle j</math> is <math>\emph{recurrent}</math>
Therefore,<br />
*If <math>\displaystyle i</math> is <math>\emph{transient}</math> and <math>i\longleftrightarrow j</math> then <math>\displaystyle j</math> is <math>\emph{transient}</math>
<math>\frac{}{}\mu_{n-1}P=[\mu_{n}(1),\mu_{n}(2),...,\mu_{n}(i)]=\mu_{n}</math>
*In an equivalence class, either all states are recurrent or all states are transient
*A finite Markov chain should have at least one recurrent state
*The states of a finite, irreducible Markov chain are all recurrent (proved using the previous preposition and the fact that there is only one class in this kind of chain)


In the example above, state one and two are a closed set, so they are both recurrent states. State four is an absorbing state, so it is also recurrent. State three is transient.
With this, it is possible to define <math>P(n)</math> as an n-step transition matrix where <math>\frac{}{}P_{ij}(n) = Pr(X_n=j|X_0=i)</math><br />


<br>'''Example'''<br>
'''Theorem''': <math>\frac{}{}\mu_n=\mu_0P^n</math><br />
Let <math>\displaystyle X=\{\cdots,-2,-1,0,1,2,\cdots\}</math> and suppose that <math>\displaystyle P_{i,i+1}=p </math>, <math>\displaystyle P_{i,i-1}=q=1-p</math> and <math>\displaystyle P_{i,j}=0</math> otherwise.
'''Proof''':  <math>\frac{}{}\mu_n=\mu_{n-1}P</math> From the previous conclusion<br />
This is the Random Walk that we have already seen in a previous lecture, except it extends infinitely in both directions.
<math>\frac{}{}=\mu_{n-2}PP=...=\mu_0\prod_{i = 1}^nP</math> And since this is a homogeneous Markov chain, <math>P</math> does not depend on <math>i</math> so<br />
<math>\frac{}{}=\mu_0P^n</math>
We now see other properties of this particular Markov chain:
*Since all states communicate if one of them is recurrent, then all states will be recurrent. On the other hand, if one of them is transient, then all the other will be transient too.
*Consider now the case in which we are in state <math>\displaystyle 0</math>. If we move of n steps to the right or to the left, the only way to go back to <math>\displaystyle 0</math> is to have n steps on the opposite direction.
<math>\displaystyle Pr(x_{2n}=0/X_{0}=0)=P_{00}(2n)=[ {2n \choose n} ]p^{n}q^{n}</math>
We now want to know if this event is transient or recurrent or, equivalently, whether <math>\displaystyle \sum_{\forall i}P_{ii}(n)\geq\infty</math> or not.


To proceed with the analysis, we use the <math>\emph{Stirling }</math> <math>\displaystyle\emph{formula}</math>:
From this it becomes easy to define the n-step transition matrix as <math>\frac{}{}P(n)=P^n</math>


<math>\displaystyle n!\sim~n^{n}\sqrt(n)e^{-n}\sqrt(2\pi)</math>
====Summary of definitions====
*'''transition matrix''' is an NxN when <math>N=|X|</math> matrix with <math>P_{ij}=Pr(X_1=j|X_0=i)</math> where <math>i,j \in X</math><br />
*'''n-step transition matrix''' also NxN with <math>P_{ij}(n)=Pr(X_n=j|X_0=i)</math><br />
*'''marginal (probability of X)'''<math>\mu_n(i) = Pr(X_n=i)</math><br />
*'''Theorem:''' <math>P_n=P^n</math><br />
*'''Theorem:''' <math>\mu_n=\mu_0P^n</math><br />
---


The probability could therefore be approximated by:
====Definitions of different types of state sets====
Define <math>i,j \in</math> State Space<br />
If <math>P_{ij}(n) > 0</math> for some <math>n</math> , then we say <math>i</math> reaches <math>j</math> denoted by <math>i\longrightarrow j</math> <br />
This also mean j is accessible by i: <math>j\longleftarrow i</math> <br />
If <math>i\longrightarrow j</math> and <math>j\longrightarrow i</math> then we say <math>i</math> and <math>j</math> communicate, denoted by <math>i\longleftrightarrow j</math>
<br /><br />
'''Theorems'''<br />
1) <math>i\longleftrightarrow i</math><br />
2) <math>i\longleftrightarrow j \Rightarrow j\longleftrightarrow i</math><br />
3) If <math>i\longleftrightarrow j,j\longleftrightarrow k\Rightarrow i\longleftrightarrow k</math><br />
4) The set of states of <math>X</math> can be written as a unique disjoint union of subsets (equivalence classes) <math>X=X_1\bigcup X_2\bigcup ...\bigcup X_k,k>0 </math> where two states <math>i</math> and <math>j</math> communicate <math>IFF</math> they belong to the same subset
<br /><br />
'''More Definitions'''<br />
A set is '''Irreducible''' if all states communicate with each other (has only one equivalence class).<br />
A subset of states is '''Closed''' if once you enter it, you can never leave.<br />
A subset of states is '''Open''' if once you leave it, you can never return.<br />
An '''Absorbing Set''' is a closed set with only 1 element (i.e. consists of a single state).<br />


<math>\displaystyle P_{00}(n)=\sim~\frac{(4pq)^{n}}{\sqrt(n\pi)}</math>
<b>Note</b>
*We cannot have <math>\displaystyle i\longleftrightarrow j</math> with i recurrent and j transient since <math>\displaystyle i\longleftrightarrow j \Rightarrow j\longleftrightarrow i</math>.
*All states in an open class are transient.
*A Markov Chain with a finite number of states must have at least 1 recurrent state.
*A closed class with an infinite number of states has all transient or all recurrent states.<br />


And the formula becomes:
===[[Again on Markov Chain]] - June 11, 2009===


<math>\displaystyle \sum_{\forall n}P_{00}(n)=\sum_{\forall n}\frac{(4pq)^{n}}{\sqrt(n\pi)}</math>


We can conclude that if <math>\displaystyle 4pq < 1</math> then the state is transient, otherwise is recurrent.
====Decomposition of Markov chain====


<math>\displaystyle
In the previous lecture it was shown that a Markov Chain can be written as the disjoint union of its classes. This decomposition is always possible and it is reduced to one class only in the case of an irreducible chain.
\sum_{\forall n}P_{00}(n)=\sum_{\forall n}\frac{(4pq)^{n}}{\sqrt(n\pi)} = \begin{cases}
\infty, & \text{if } p = \frac{1}{2} \\
< \infty, & \text{if } p\neq \frac{1}{2}
\end{cases}</math>


An alternative to Stirling's approximation is to use the generalized binomial theorem to get the following formula:
<br>'''Example:'''<br>
Let <math>\displaystyle X = \{1, 2, 3, 4\}</math> and the transition matrix be:


<math>
\frac{1}{\sqrt{1 - 4x}} = \sum_{n=0}^{\infty} \binom{2n}{n} x^n
</math>


Then substitute in <math>x = pq</math>.
:<math>P=\left(\begin{matrix}1/3&2/3&0&0\\
2/3&1/3&0&0\\
1/4&1/4&1/4&1/4\\
0&0&0&1
\end{matrix}\right)</math>


<math>
\frac{1}{\sqrt{1 - 4pq}} = \sum_{n=0}^{\infty} \binom{2n}{n} p^n q^n = \sum_{n=0}^{\infty} P_{00}(2n)
</math>


So we reach the same conclusion: all states are recurrent iff <math>p = q = \frac{1}{2}</math>.
The decomposition in classes is:
::::class 1: <math>\displaystyle \{1, 2\} \rightarrow </math> From the matrix we see that the states 1 and 2 have only <math>\displaystyle P_{12}</math> and <math>\displaystyle P_{21}</math> as nonzero transition probability
::::class 2: <math>\displaystyle \{3\} \rightarrow </math> The state 3 can go to every other state but none of the others can go to it
::::class 3: <math>\displaystyle \{4\} \rightarrow </math> This is an absorbing state since it is a close class and there is only one element
::


====Convergence of Markov chain====
====Recurrent and Transient states====
We define the <math>\displaystyle \emph{Recurrence}</math> <math>\emph{time}</math><math>\displaystyle T_{i,j}</math> as the minimum time to go from the state i to the state j. It is also possible that the state j is not reachable from the state i.


<math>\displaystyle T_{ij}=\begin{cases}
A state i is called <math>\emph{recurrent}</math> or <math>\emph{persistent}</math> if
min\{n: x_{n}=i\}, & \text{if }\exists n \\
:<math>\displaystyle Pr(x_{n}=i</math> for some <math>\displaystyle n\geq 1 | x_{0}=i)=1 </math>
\infty, & \text{otherwise }
That means that the probability to come back to the state i, starting from the state i, is 1.
\end{cases}</math>


The mean of the recurrent time <math>\displaystyle m_{i}</math>is defined as:
If it is not the case (ie. probability less than 1), then state i is <math>\emph{transient} </math>.


<math>m_{i}=\displaystyle E(T_{ij})=\sum nf_{ii} </math>
It is straight forward to prove that a finite irreducible chain is recurrent.
::
<br>'''Theorem'''<br>
Given a Markov chain,
<br>A state <math>\displaystyle i</math> is <math>\emph{recurrent}</math> if and only if <math>\displaystyle \sum_{\forall n}P_{ii}(n)=\infty</math>
<br>A state <math>\displaystyle i</math> is <math>\emph{transient}</math> if and only if <math>\displaystyle \sum_{\forall n}P_{ii}(n)< \infty</math>


where <math>\displaystyle f_{ij}=Pr(x_{1}\neq j,x_{2}\neq j,\cdots,x_{n-1}\neq j,x_{n}=j/x_{0}=i)</math>
<br>'''Properties'''<br>
*If <math>\displaystyle i</math> is <math>\emph{recurrent}</math> and <math>i\longleftrightarrow j</math> then <math>\displaystyle j</math> is <math>\emph{recurrent}</math>
*If <math>\displaystyle i</math> is <math>\emph{transient}</math> and <math>i\longleftrightarrow j</math> then <math>\displaystyle j</math> is <math>\emph{transient}</math>
*In an equivalence class, either all states are recurrent or all states are transient
*A finite Markov chain should have at least one recurrent state
*The states of a finite, irreducible Markov chain are all recurrent (proved using the previous preposition and the fact that there is only one class in this kind of chain)


In the example above, state one and two are a closed set, so they are both recurrent states. State four is an absorbing state, so it is also recurrent. State three is transient.


Using the objects we just introduced, we say that:
<br>'''Example'''<br>
 
Let <math>\displaystyle X=\{\cdots,-2,-1,0,1,2,\cdots\}</math> and suppose that <math>\displaystyle P_{i,i+1}=p </math>, <math>\displaystyle P_{i,i-1}=q=1-p</math> and <math>\displaystyle P_{i,j}=0</math> otherwise.
<math>\displaystyle \text{state } i=\begin{cases}
This is the Random Walk that we have already seen in a previous lecture, except it extends infinitely in both directions.
\text{null}, & \text{if } m_{i}=\infty \\
\text{non-null or positive} , & \text{otherwise }  
We now see other properties of this particular Markov chain:
\end{cases}</math>
*Since all states communicate if one of them is recurrent, then all states will be recurrent. On the other hand, if one of them is transient, then all the other will be transient too.
*Consider now the case in which we are in state <math>\displaystyle 0</math>. If we move of n steps to the right or to the left, the only way to go back to <math>\displaystyle 0</math> is to have n steps on the opposite direction.
<math>\displaystyle Pr(x_{2n}=0|X_{0}=0)=P_{00}(2n)=[ {2n \choose n} ]p^{n}q^{n}</math>
We now want to know if this event is transient or recurrent or, equivalently, whether <math>\displaystyle \sum_{\forall i}P_{ii}(n)\geq\infty</math> or not.


<br>'''Lemma'''<br>
To proceed with the analysis, we use the <math>\emph{Stirling }</math> <math>\displaystyle\emph{formula}</math>:
In a finite state Markov Chain, all the recurrent state are positive
 
====Periodic and aperiodic Markov chain====
A Markov chain is called <math>\emph{periodic}</math> of period <math>\displaystyle n</math> if, starting from a state, we will return to it every <math>\displaystyle n</math> steps with probability <math>\displaystyle 1</math>.


<br>'''Example'''<br>
<math>\displaystyle n!\sim~n^{n}\sqrt{n}e^{-n}\sqrt{2\pi}</math>
Considerate the three-state chain:


The probability could therefore be approximated by:


<math>P=\left(\begin{matrix}
<math>\displaystyle P_{00}(n)=\sim~\frac{(4pq)^{n}}{\sqrt{n\pi}}</math>
0&1&0\\
0&0&1\\
1&0&0
\end{matrix}\right)</math>


It's evident that, starting from the state 1, we will return to it on every <math>3^{rd}</math> step and so it works for the other two states. The chain is therefore periodic with perdiod <math>d=3</math>
And the formula becomes:


<math>\displaystyle \sum_{\forall n}P_{00}(n)=\sum_{\forall n}\frac{(4pq)^{n}}{\sqrt{n\pi}}</math>


An irreducible Markov chain is called <math>\emph{aperiodic}</math> if:
We can conclude that if <math>\displaystyle 4pq < 1</math> then the state is transient, otherwise is recurrent.


<math>\displaystyle Pr(x_{n}=j | x_{0}=i) > 0 \text{ and } Pr(x_{n+1}=j | x_{0}=i) > 0 \text{   for some } n\ge 0 </math>
<math>\displaystyle
\sum_{\forall n}P_{00}(n)=\sum_{\forall n}\frac{(4pq)^{n}}{\sqrt{n\pi}} = \begin{cases}
\infty, & \text{if } p = \frac{1}{2} \\
< \infty, & \text{if } p\neq \frac{1}{2}  
\end{cases}</math>


<br>'''Another Example'''<br>
An alternative to Stirling's approximation is to use the generalized binomial theorem to get the following formula:
Consider the chain
<math>P=\left(\begin{matrix}
0&0.5&0&0.5\\
0.5&0&0.5&0\\
0&0.5&0&0.5\\
0.5&0&0.5&0\\
\end{matrix}\right)</math>


This chain is periodic by definition. You can only get back to state 1 after at least 2 steps <math>\Rightarrow</math> period <math>d=2</math>
<math>
\frac{1}{\sqrt{1 - 4x}} = \sum_{n=0}^{\infty} \binom{2n}{n} x^n
</math>


Then substitute in <math>x = pq</math>.


==Markov Chains and their Stationary Distributions - June 16, 2009==
<math>
====New Definition:Ergodic====
\frac{1}{\sqrt{1 - 4pq}} = \sum_{n=0}^{\infty} \binom{2n}{n} p^n q^n = \sum_{n=0}^{\infty} P_{00}(2n)
A state is '''Ergodic''' if it is non-null, recurrent, and aperiodic.  A Markov Chain is ergodic if all its states are ergodic.
</math>


Define a vector <math>\pi</math> where <math>\pi_i > 0 \forall i</math> and <math>\sum_i \pi_i = 1</math>(ie. <math>\pi</math> is a pmf)
So we reach the same conclusion: all states are recurrent iff <math>p = q = \frac{1}{2}</math>.


<math>\pi</math> is a stationary distribution if <math>\pi=\pi P</math> where P is a transition matrix.
====Convergence of Markov chain====
We define the <math>\displaystyle \emph{Recurrence}</math> <math>\emph{time}</math><math>\displaystyle T_{i,j}</math> as the minimum time to go from the state i to the state j. It is also possible that the state j is not reachable from the state i.


====Limiting Distribution====
<math>\displaystyle T_{ij}=\begin{cases}
If as <math>n \longrightarrow \infty , P^n \longrightarrow \left[ \begin{matrix}
min\{n: x_{n}=i\}, & \text{if }\exists n \\
\pi\\
\infty, & \text{otherwise }
\pi\\
\end{cases}</math>
\vdots\\
\pi\\
\end{matrix}\right]</math>
then <math>\pi</math> is the limiting distribution of the Markov Chain represented by P.<br />
'''Theorem:''' An irreducible, ergodic Markov Chain has a unique stationary distribution <math>\pi</math> and there exists a limiting distribution which is also <math>\pi</math>.


====Detailed Balance====
The mean of the recurrent time <math>\displaystyle m_{i}</math>is defined as:


The condition for detailed balanced is <math>\displaystyle \pi_i p_{ij} = p_{ji} \pi_j </math>
<math>m_{i}=\displaystyle E(T_{ij})=\sum nf_{ii} </math>


=====Theorem=====
where <math>\displaystyle f_{ij}=Pr(x_{1}\neq j,x_{2}\neq j,\cdots,x_{n-1}\neq j,x_{n}=j/x_{0}=i)</math>
If <math>\pi</math> satisfies detailed balance then it is a stationary distribution.


'''Proof.
'''<br>
We need to show <math>\pi = \pi P</math>
<math>\displaystyle [\pi p]_j = \sum_{i} \pi_i p_{ij} = \sum_{i} p_{ji} \pi_j = \pi_j \sum_{i} p_{ji}= \pi_j </math> as required


Warning! A chain that has a stationary distribution does not necessarily converge.
Using the objects we just introduced, we say that:
 
<math>\displaystyle \text{state } i=\begin{cases}
\text{null}, & \text{if } m_{i}=\infty \\
\text{non-null or positive} , & \text{otherwise }
\end{cases}</math>
 
<br>'''Lemma'''<br>
In a finite state Markov Chain, all the recurrent state are positive
 
====Periodic and aperiodic Markov chain====
A Markov chain is called <math>\emph{periodic}</math> of period <math>\displaystyle n</math> if, starting from a state, <s>we will return to it every <math>\displaystyle n</math> steps with probability <math>\displaystyle 1</math></s> we can only return to that state in a multiple of n steps.
 
<br>'''Example'''<br>
Considerate the three-state chain:
 


For example,
<math>P=\left(\begin{matrix}
<math>P=\left(\begin{matrix}
0&1&0\\
0&1&0\\
0&0&1\\
0&0&1\\
1&0&0
1&0&0
\end{matrix}\right)</math> has a stationary distribution <math>\left(\begin{matrix}
\end{matrix}\right)</math>
1/3&1/3&1/3
\end{matrix}\right)</math> but it will not converge.


====Stationary Distribution====
It's evident that, starting from the state 1, we will return to it on every <math>3^{rd}</math> step and so it works for the other two states. The chain is therefore periodic with perdiod <math>d=3</math>
<math>\pi</math> is stationary (or invariant) distribution if <math>\pi</math> = <math>\pi * p</math>
[0.5 0 0.5]
Half of time their chain will spend half time in 1st state and half time in 3rd state.


====Theorem====


An irreducible ergodic Markov Chain has a unique stationary distribution <math>\pi</math>.
An irreducible Markov chain is called <math>\emph{aperiodic}</math> if:
The limiting distribution exists and is equal to <math>\pi</math>.


If g is any bounded function, then with probability 1:
<math>\displaystyle Pr(x_{n}=j | x_{0}=i) > 0 \text{ and } Pr(x_{n+1}=j | x_{0}=i) > 0 \text{   for some } n\ge 0 </math>
<math>lim \frac{1}{N}\displaystyle\sum_{i=1}^Ng(x_n)\longrightarrow E_n(g)=\displaystyle\sum_{j}g(j)\pi_j</math>


 
<br>'''Another Example'''<br>
====Example====
Consider the chain
 
<math>P=\left(\begin{matrix}
Find the limiting distribution of
0&0.5&0&0.5\\
<math>P=\left(\begin{matrix}
0.5&0&0.5&0\\
1/2&1/2&0\\
0&0.5&0&0.5\\
1/2&1/4&1/4\\
0.5&0&0.5&0\\
0&1/3&2/3
\end{matrix}\right)</math>
\end{matrix}\right)</math>


Solve <math>\pi=\pi P</math>
This chain is periodic by definition. You can only get back to state 1 after at least 2 steps <math>\Rightarrow</math> period <math>d=2</math>


<math>\displaystyle \pi_0 = 1/2\pi_0 + 1/2\pi_1</math><br />
==Markov Chains and their Stationary Distributions - June 16, 2009==
<math>\displaystyle \pi_1 = 1/2\pi_0 + 1/4\pi_1 + 1/3\pi_2</math><br />
====New Definition:Ergodic====
<math>\displaystyle \pi_2 = 1/4\pi_1 + 2/3\pi_2</math><br />
A state is '''Ergodic''' if it is non-null, recurrent, and aperiodic.  A Markov Chain is ergodic if all its states are ergodic.


Also <math>\displaystyle \sum_i \pi_i = 1 \longrightarrow \pi_0 + \pi_1 + \pi_2 = 1</math><br />
Define a vector <math>\pi</math> where <math>\pi_i > 0 \forall i</math> and <math>\sum_i \pi_i = 1</math>(ie. <math>\pi</math> is a pmf)


We can solve the above system of equations and obtain <br />
<math>\pi</math> is a stationary distribution if <math>\pi=\pi P</math> where P is a transition matrix.
<math>\displaystyle \pi_2 = 3/4\pi_1</math><br />
<math>\displaystyle \pi_0 = \pi_1</math><br />


Thus, <math>\displaystyle \pi_0 + \pi_1 + 3/4\pi_1 = 1</math>
====Limiting Distribution====
and we get <math>\displaystyle \pi_1 = 4/11</math>
If as <math>n \longrightarrow \infty , P^n \longrightarrow \left[ \begin{matrix}
 
\pi\\
Subbing <math>\displaystyle \pi_1 = 4/11</math> back into the system of equations we obtain <br />
\pi\\
<math>\displaystyle \pi_0 = 4/11</math> and <math>\displaystyle \pi_2 = 3/11</math>
\vdots\\
\pi\\
\end{matrix}\right]</math>
then <math>\pi</math> is the limiting distribution of the Markov Chain represented by P.<br />
'''Theorem:''' An irreducible, ergodic Markov Chain has a unique stationary distribution <math>\pi</math> and there exists a limiting distribution which is also <math>\pi</math>.


Therefore the limiting distribution is <math>\displaystyle \pi = (4/11, 4/11, 3/11)</math>
====Detailed Balance====


==Monte Carlo using Markov Chain - June 18, 2009==
The condition for detailed balanced is <math>\displaystyle \pi_i p_{ij} = p_{ji} \pi_j </math>


Consider the problem of computing <math> I = \displaystyle\int^\ h(x)f(x)\,dx </math>
=====Theorem=====
If <math>\pi</math> satisfies detailed balance then it is a stationary distribution.


<math>\bullet</math> Generate <math>\displaystyle X_1</math>, <math>\displaystyle X_2</math>,... from a Markov Chain with stationary distribution <math>\displaystyle f(x)</math>
'''Proof.
'''<br>
We need to show <math>\pi = \pi P</math>
<math>\displaystyle [\pi p]_j = \sum_{i} \pi_i p_{ij} = \sum_{i} p_{ji} \pi_j = \pi_j \sum_{i} p_{ji}= \pi_j </math> as required


<math>\bullet</math> <math>\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nh(x_i)\longrightarrow E_f(h(x))=\hat{I}</math>
Warning! A chain that has a stationary distribution does not necessarily converge.


====''''' Metropolis Hastings Algorithm''''' ====
For example,
The Metropolis Hastings Algorithm first originated in the physics community in 1953 and was adopted later on by statisticians. It was originally used for the computation of a Boltzmann distribution, which describes the distribution of energy for particles in a system. In 1970, Hastings extended the algorithm to the general procedure described below.
<math>P=\left(\begin{matrix}
 
0&1&0\\
Suppose we wish to sample from the distribution <math>\displaystyle f(x)</math>. Let <math>q(y\mid{x})</math> be a distribution that is easy to sample from, we call it the "Proposal Distribution".
0&0&1\\
 
1&0&0
:'''<math>\emph{Procedure:}</math> '''
\end{matrix}\right)</math> has a stationary distribution <math>\left(\begin{matrix}
1/3&1/3&1/3
\end{matrix}\right)</math> but it will not converge.


:<br\>1. Initialize <math>\displaystyle X_0</math>, this is the starting point of the chain, choose it randomly and set index <math>\displaystyle i=0</math>
====Stationary Distribution====
:<br\>2. <math>Y~ \sim~ q(y\mid{x})</math>
<math>\pi</math> is stationary (or invariant) distribution if <math>\pi</math> = <math>\pi * p</math>
:<br\>3. Compute <math>\displaystyle r(X_i,Y)</math>, where <math>r(x,y)=min{\{\frac{f(y)}{f(x)}*\frac{q(x\mid{y})}{q(y\mid{x})},1}\}</math>
[0.5 0 0.5]
:<br\>4. <math> U~ \sim~ Unif [0,1] </math>
Half of time their chain will spend half time in 1st state and half time in 3rd state.
:<br\>5. If <math>\displaystyle U<r </math>
:::then <math>\displaystyle X_{i+1}=Y </math>
:::else <math>\displaystyle X_{i+1}=X_i </math>
:<br\>6. Update index <math>\displaystyle i=i+1</math>, and go to step 2


====Theorem====


'''''A couple of remarks about the algorithm'''''
An irreducible ergodic Markov Chain has a unique stationary distribution <math>\pi</math>.
The limiting distribution exists and is equal to <math>\pi</math>.


'''Remark 1:''' A good choice for <math>q(y\mid{x})</math> is <math>\displaystyle N(x,b^2)</math> where <math>\displaystyle b>0 </math> is a constant. The starting point of the algorithm <math>X_0=x</math>, i.e. the proposal distibution is a normal centered at the current, randomly chosen, state.
If g is any bounded function, then with probability 1:
<math>lim \frac{1}{N}\displaystyle\sum_{i=1}^Ng(x_n)\longrightarrow E_n(g)=\displaystyle\sum_{j}g(j)\pi_j</math>


'''Remark 2:''' If the proposal distribution is symmetric, <math>q(y\mid{x})=q(x\mid{y})</math>, then <math>r(x,y)=min{\{\frac{f(y)}{f(x)},1}\}</math>. This is called the Metropolis Algorithm, which is a special case of the original algorithm of Metropolis (1953).


'''Remark 3:''' <math>\displaystyle N(x,b^2)</math> is symmetric. Probability of setting mean to x and sampling y is equal to the probability of setting mean to y and samplig x.
====Example====


Find the limiting distribution of
<math>P=\left(\begin{matrix}
1/2&1/2&0\\
1/2&1/4&1/4\\
0&1/3&2/3
\end{matrix}\right)</math>


Solve <math>\pi=\pi P</math>


'''Example:''' The Cauchy distribution has density <math> f(x)=\frac{1}{\pi}*\frac{1}{1+x^2}</math>
<math>\displaystyle \pi_0 = 1/2\pi_0 + 1/2\pi_1</math><br />
 
<math>\displaystyle \pi_1 = 1/2\pi_0 + 1/4\pi_1 + 1/3\pi_2</math><br />
Let the proposal distribution be <math>q(y\mid{x})=N(x,b^2) </math>
<math>\displaystyle \pi_2 = 1/4\pi_1 + 2/3\pi_2</math><br />
 
<math>r(x,y)=min{\{\frac{f(y)}{f(x)}*\frac{q(x\mid{y})}{q(y\mid{x})},1}\}</math>
::<math>=min{\{\frac{f(y)}{f(x)},1}\}</math> since <math>q(y\mid{x})</math> is symmetric <math>\Rightarrow</math> <math>\frac{q(x\mid{y})}{q(y\mid{x})}=1</math>
::<math>=min{\{\frac{  \frac{1}{\pi}\frac{1}{1+y^2} }{  \frac{1}{\pi} \frac{1}{1+x^2} },1}\}</math>
::<math>=min{\{\frac{1+x^2 }{1+y^2},1}\}</math>


Now, having calculated <math>\displaystyle r(x,y)</math>, we complete the problem in Matlab using the following code:
Also <math>\displaystyle \sum_i \pi_i = 1 \longrightarrow \pi_0 + \pi_1 + \pi_2 = 1</math><br />
b=2; % let b=2 for now, we will see what happens when b is smaller or larger
X(1)=randn;
for i=2:10000
    Y=b*randn+X(i-1); % we want to decide whether we accept this Y
    r=min( (1+X(i-1)^2)/(1+Y^2),1);
    u=rand;
    if u<r
        X(i)=Y; % accept Y
    else
        X(i)=X(i-1); % reject Y remaining in the current state
    end;
end;


'''''We need to be careful about choosing b!'''''
We can solve the above system of equations and obtain <br />
<math>\displaystyle \pi_2 = 3/4\pi_1</math><br />
<math>\displaystyle \pi_0 = \pi_1</math><br />


:'''If b is too large'''
Thus, <math>\displaystyle \pi_0 + \pi_1 + 3/4\pi_1 = 1</math>
and we get <math>\displaystyle \pi_1 = 4/11</math>


::Then the fraction <math>\frac{f(y)}{f(x)}</math> would be very small <math>\Rightarrow</math> <math>r=min{\{\frac{f(y)}{f(x)},1}\}</math> is very small aswell.
Subbing <math>\displaystyle \pi_1 = 4/11</math> back into the system of equations we obtain <br />
<math>\displaystyle \pi_0 = 4/11</math> and <math>\displaystyle \pi_2 = 3/11</math>


::It is highly unlikely that <math>\displaystyle u<r</math>, the probability of rejecting <math>\displaystyle Y</math> is high so the chain is likely to get stuck in the same state for a long time <math>\rightarrow</math> chain may not coverge to the right distribution.
Therefore the limiting distribution is <math>\displaystyle \pi = (4/11, 4/11, 3/11)</math>


::It is easy to observe by looking at the histogram of <math>\displaystyle X</math>, the shape will not resemble the shape of the target <math>\displaystyle f(x)</math>
==Monte Carlo using Markov Chain - June 18, 2009==


:: Most likely we reject y and the chain will get stuck.
Consider the problem of computing <math> I = \displaystyle\int^\ h(x)f(x)\,dx </math>


::For the above example, the following output occurs when choosing B too large (B=1000)
<math>\bullet</math> Generate <math>\displaystyle X_1</math>, <math>\displaystyle X_2</math>,... from a Markov Chain with stationary distribution <math>\displaystyle f(x)</math>


[[File:Blarge.jpg]]
<math>\bullet</math> <math>\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nh(x_i)\longrightarrow E_f(h(x))=\hat{I}</math>


:'''If b is too small
====''''' Metropolis Hastings Algorithm''''' ====
The Metropolis Hastings Algorithm first originated in the physics community in 1953 and was adopted later on by statisticians. It was originally used for the computation of a Boltzmann distribution, which describes the distribution of energy for particles in a system. In 1970, Hastings extended the algorithm to the general procedure described below.


::Then we are setting up our proposal distribution <math>q(y\mid{x})</math> to be much narrower then than the target <math>\displaystyle f(x)</math> so the chain will not have a chance to explore the sample state space and visit majority of the states of the target <math>\displaystyle f(x)</math>.
Suppose we wish to sample from <math>f(x)</math>, the 'target' distribution. Choose <math>q(y | x)</math>, the 'proposal' distribution that is easily sampled from.<br /><br />


::For the above example, the following output occurs when choosing B too small (B=0.001)
:'''<math>\emph{Procedure:}</math> '''


[[File:Bsmall.JPG]]
:<br\>1. Initialize <math>\displaystyle X_0</math>, this is the starting point of the chain, choose it randomly and set index <math>\displaystyle i=0</math>
:<br\>2. <math>Y~ \sim~ q(y\mid{x})</math>
:<br\>3. Compute <math>\displaystyle r(X_i,Y)</math>, where <math>r(x,y)=min{\{\frac{f(y)}{f(x)}*\frac{q(x\mid{y})}{q(y\mid{x})},1}\}</math>
:<br\>4. <math> U~ \sim~ Unif [0,1] </math>
:<br\>5. If <math>\displaystyle U<r </math>
:::then <math>\displaystyle X_{i+1}=Y </math>
::else <math>\displaystyle X_{i+1}=X_i </math>
:<br\>6. Update index <math>\displaystyle i=i+1</math>, and go to step 2


:'''If b is just right'''
::Well chosen b will help avoid the issues mentioned above and we can say that the chain is "mixing well".


::For the above example, the following output occurs when choosing a good value for B (B=2)
'''''A couple of remarks about the algorithm'''''


[[File:Bgood.JPG]]
'''Remark 1:''' A good choice for <math>q(y\mid{x})</math> is <math>\displaystyle N(x,b^2)</math> where <math>\displaystyle b>0 </math> is a constant. The starting point of the algorithm <math>X_0=x</math>, i.e. the proposal distibution is a normal centered at the current, randomly chosen, state.


'''Mathematical explanation for why this algorithm works:'''
'''Remark 2:''' If the proposal distribution is symmetric, <math>q(y\mid{x})=q(x\mid{y})</math>, then <math>r(x,y)=min{\{\frac{f(y)}{f(x)},1}\}</math>. This is called the Metropolis Algorithm, which is a special case of the original algorithm of Metropolis (1953).


We talked about <math>\emph{discrete}</math> MC so far.  
'''Remark 3:''' <math>\displaystyle N(x,b^2)</math> is symmetric. Probability of setting mean to x and sampling y is equal to the probability of setting mean to y and samplig x.


<br\> We have seen that: <br\>- <math>\displaystyle \pi</math> satisfies detailed balance if <math>\displaystyle \pi_iP_{ij}=P_{ji}\pi_j</math> and <br\>- if <math>\displaystyle\pi</math> satisfies <math>\emph{detailed}</math> <math>\emph{balance}</math>then it is a stationary  distribution <math>\displaystyle \pi=\pi P</math>




In <math>\emph{continuous}</math>case we write the Detailed Balance as <math>\displaystyle f(x)P(x,y)=P(y,x)f(y)</math> and  say that <br\><math>\displaystyle f(x)</math> is <math>\emph{stationary}</math> <math>\emph{distribution}</math> if <math>f(x)=\int f(y)P(y,x)dy</math>.
'''Example:''' The Cauchy distribution has density <math> f(x)=\frac{1}{\pi}*\frac{1}{1+x^2}</math>


Let the proposal distribution be <math>q(y\mid{x})=N(x,b^2) </math>


We want to show that if Detailed Balance holds (i.e. assume <math>\displaystyle f(x)P(x,y)=P(y,x)f(y)</math>) then <math>\displaystyle f(x)</math> is stationary distribution.
<math>r(x,y)=min{\{\frac{f(y)}{f(x)}*\frac{q(x\mid{y})}{q(y\mid{x})},1}\}</math>
::<math>=min{\{\frac{f(y)}{f(x)},1}\}</math> since <math>q(y\mid{x})</math> is symmetric <math>\Rightarrow</math> <math>\frac{q(x\mid{y})}{q(y\mid{x})}=1</math>
::<math>=min{\{\frac{  \frac{1}{\pi}\frac{1}{1+y^2} }{  \frac{1}{\pi} \frac{1}{1+x^2} },1}\}</math>
::<math>=min{\{\frac{1+x^2 }{1+y^2},1}\}</math>


That is to show: <math>\displaystyle f(x)P(x,y)=P(y,x)f(y)\Rightarrow </math> <math>\displaystyle f(x)</math> is stationary distribution.
Now, having calculated <math>\displaystyle r(x,y)</math>, we complete the problem in Matlab using the following code:
b=2; % let b=2 for now, we will see what happens when b is smaller or larger
X(1)=randn;
for i=2:10000
    Y=b*randn+X(i-1); % we want to decide whether we accept this Y
    r=min( (1+X(i-1)^2)/(1+Y^2),1);
    u=rand;
    if u<r
        X(i)=Y; % accept Y
    else
        X(i)=X(i-1); % reject Y remaining in the current state
    end;
end;


:<math>f(x)=\int f(y)P(y,x)dy</math>
'''''We need to be careful about choosing b!'''''
:::<math>=\int f(x)P(x,y)dy</math>
:::<math>=f(x)\int P(x,y)dy</math> and since <math>\int P(x,y)dy=1</math>
:::<math>=\displaystyle f(x)</math>


:'''If b is too large'''


'''''Now, we need to show that detailed balance holds in the Metropolis-Hastings...'''''
::Then the fraction <math>\frac{f(y)}{f(x)}</math> would be very small <math>\Rightarrow</math> <math>r=min{\{\frac{f(y)}{f(x)},1}\}</math> is very small aswell.  


Consider 2 points <math>\displaystyle x</math> and <math>\displaystyle y</math>:
::It is highly unlikely that <math>\displaystyle u<r</math>, the probability of rejecting <math>\displaystyle Y</math> is high so the chain is likely to get stuck in the same state for a long time <math>\rightarrow</math> chain may not coverge to the right distribution.


:'''Either''' <math>\frac{f(y)}{f(x)}*\frac{q(x\mid{y})}{q(y\mid{x})}>1</math> '''OR''' <math>\frac{f(y)}{f(x)}*\frac{q(x\mid{y})}{q(y\mid{x})}<1</math> (ignoring that it might equal to 1)
::It is easy to observe by looking at the histogram of <math>\displaystyle X</math>, the shape will not resemble the shape of the target <math>\displaystyle f(x)</math>


Without loss of generality. suppose that the product is <math>\displaystyle<1</math>.  
:: Most likely we reject y and the chain will get stuck.


:'''If b is too small


In this case <math>r(x,y)=\frac{f(y)}{f(x)}*\frac{q(x\mid{y})}{q(y\mid{x})}</math> and <math>\displaystyle r(y,x)=1</math>
::Then we are setting up our proposal distribution <math>q(y\mid{x})</math> to be much narrower then than the target <math>\displaystyle f(x)</math> so the chain will not have a chance to explore the sample state space and visit majority of the states of the target <math>\displaystyle f(x)</math>.


:'''If b is just right'''
::Well chosen b will help avoid the issues mentioned above and we can say that the chain is "mixing well".


:''Some intuitive meanings before we continue:''
:<math>\displaystyle P(x,y)</math> is jumping from <math>\displaystyle x</math> to <math>\displaystyle y</math> if proposal distribution generates <math>\displaystyle y</math> '''and''' <math>\displaystyle y</math> is accepted
:<math>\displaystyle P(y,x)</math> is jumping from <math>\displaystyle y</math> to <math>\displaystyle x</math> if proposal distribution generates <math>\displaystyle x</math> '''and''' <math>\displaystyle x</math> is accepted
:<math>q(y\mid{x})</math> is the probability of generating <math>\displaystyle y</math>
:<math>q(x\mid{y})</math> is the probability of generating <math>\displaystyle x</math>
:<math>\displaystyle r(x,y)</math> probability of accepting <math>\displaystyle y</math>
:<math>\displaystyle r(y,x)</math> probability of accepting <math>\displaystyle x</math>.


<gallery widths="250px" heights="250px">
Image:Blarge.jpg|B too large (B=1000)
Image:Bsmall.JPG|B too small (B=0.001
Image:Bgood.JPG|Good choice of B (B=2)
</gallery>


With that in mind we can show that <math>\displaystyle f(x)P(x,y)=P(y,x)f(y)</math> as follows:
'''Mathematical explanation for why this algorithm works:'''


We talked about <math>\emph{discrete}</math> MC so far.


<br\> We have seen that: <br\>- <math>\displaystyle \pi</math> satisfies detailed balance if <math>\displaystyle \pi_iP_{ij}=P_{ji}\pi_j</math> and <br\>- if <math>\displaystyle\pi</math> satisfies <math>\emph{detailed}</math> <math>\emph{balance}</math>then it is a stationary  distribution <math>\displaystyle \pi=\pi P</math>


<math>P(x,y)=q(y\mid{x})*(r(x,y))=q(y\mid{x})\frac{f(y)}{f(x)}\frac{q(x\mid{y})}{q(y\mid{x})}</math> Cancelling out <math>\displaystyle q(y\mid{x})</math> and bringing <math>\displaystyle f(x)</math> to the other side we get
<br\><math>f(x)P(x,y)=f(y)q(y\mid{x})</math> <math>\Leftarrow</math> '''equation''' <math>\clubsuit</math>


In <math>\emph{continuous}</math>case we write the Detailed Balance as <math>\displaystyle f(x)P(x,y)=P(y,x)f(y)</math> and  say that <br\><math>\displaystyle f(x)</math> is <math>\emph{stationary}</math> <math>\emph{distribution}</math> if <math>f(x)=\int f(y)P(y,x)dy</math>.




<math>P(y,x)=q(x\mid{y})*(r(y,x))=q(x\mid{y})*1</math> Multiplying both sides by <math>\displaystyle f(y)</math> we get
We want to show that if Detailed Balance holds (i.e. assume <math>\displaystyle f(x)P(x,y)=P(y,x)f(y)</math>) then <math>\displaystyle f(x)</math> is stationary distribution.
<br\><math>f(y)P(y,x)=f(y)q(x\mid{y})</math> <math>\Leftarrow</math> '''equation''' <math>\clubsuit\clubsuit</math>


That is to show: <math>\displaystyle f(x)P(x,y)=P(y,x)f(y)\Rightarrow </math> <math>\displaystyle f(x)</math> is stationary distribution.


:<math>f(x)=\int f(y)P(y,x)dy</math>
:::<math>=\int f(x)P(x,y)dy</math>
:::<math>=f(x)\int P(x,y)dy</math> and since <math>\int P(x,y)dy=1</math>
:::<math>=\displaystyle f(x)</math>


Noticing that the right hand sides of the '''equation''' <math>\clubsuit</math> and '''equation''' <math>\clubsuit\clubsuit</math> are equal we conclude that:


<br\><math>\displaystyle f(x)P(x,y)=P(y,x)f(y)</math> as desired and thus showing that Metropolis-Hastings satisfies detailed balance.  
'''''Now, we need to show that detailed balance holds in the Metropolis-Hastings...'''''


Consider 2 points <math>\displaystyle x</math> and <math>\displaystyle y</math>:


Next lecture we will see that Metropolis-Hastings is also irreducible and ergodic thus showing that it converges.
:'''Either''' <math>\frac{f(y)}{f(x)}*\frac{q(x\mid{y})}{q(y\mid{x})}>1</math> '''OR''' <math>\frac{f(y)}{f(x)}*\frac{q(x\mid{y})}{q(y\mid{x})}<1</math> (ignoring that it might equal to 1)


==Metropolis Hastings Algorithm Continued - June 25 ==
Without loss of generality. suppose that the product is <math>\displaystyle<1</math>.


Metropolis–Hastings algorithm is a Markov chain Monte Carlo method. It is used to help sample from probability distributions that are difficult to sample from. The algorithm was named after Nicholas Metropolis (1915-1999), also co-author of the Simulated Anealing method (that is introduced in this lecture as well). The Gibbs sampling algorithm, that will be introduced next lecture, is a special case of the Metropolis–Hastings algorithm. This is a more efficient method, although less applicable at times.


In the last class, we showed that Metropolis Hastings satisfied the the detail-balance equations. i.e.
In this case <math>r(x,y)=\frac{f(y)}{f(x)}*\frac{q(x\mid{y})}{q(y\mid{x})}</math> and <math>\displaystyle r(y,x)=1</math>


<br\><math>\displaystyle f(x)P(x,y)=P(y,x)f(y)</math>, which means <math>\displaystyle f(x) </math> is the stationary distribution of the chain.


But this is not enough, we want the chain to converge to the stationary distribution as well.
:''Some intuitive meanings before we continue:''
:<math>\displaystyle P(x,y)</math> is jumping from <math>\displaystyle x</math> to <math>\displaystyle y</math> if proposal distribution generates <math>\displaystyle y</math> '''and''' <math>\displaystyle y</math> is accepted
:<math>\displaystyle P(y,x)</math> is jumping from <math>\displaystyle y</math> to <math>\displaystyle x</math> if proposal distribution generates <math>\displaystyle x</math> '''and''' <math>\displaystyle x</math> is accepted
:<math>q(y\mid{x})</math> is the probability of generating <math>\displaystyle y</math>
:<math>q(x\mid{y})</math> is the probability of generating <math>\displaystyle x</math>
:<math>\displaystyle r(x,y)</math> probability of accepting <math>\displaystyle y</math>
:<math>\displaystyle r(y,x)</math> probability of accepting <math>\displaystyle x</math>.


Thus, we also need it to be:


<b>Irreducible:</b> There is a positive probability to reach any non-empty set of states from any starting point. This is trivial for many choice of <math>\emph{q}</math> including the one that we used in the example in the previous lecture (which was normally distributed)
With that in mind we can show that <math>\displaystyle f(x)P(x,y)=P(y,x)f(y)</math> as follows:


<b>Aperiodic:</b> The chain will not oscillate between different set of states. In the previous example, <math> q(y\mid{x}) </math> is <math> \displaystyle N(x,b^2)</math>, which will clearly not oscillate.


Next we discuss a couple of variations of Metropolis Hastings


====''''' Random Walk Metropolis Hastings''''' ====
<math>P(x,y)=q(y\mid{x})*(r(x,y))=q(y\mid{x})\frac{f(y)}{f(x)}\frac{q(x\mid{y})}{q(y\mid{x})}</math> Cancelling out <math>\displaystyle q(y\mid{x})</math> and bringing <math>\displaystyle f(x)</math> to the other side we get
<br\><math>f(x)P(x,y)=f(y)q(x\mid{y})</math> <math>\Leftarrow</math> '''equation''' <math>\clubsuit</math>


:'''<math>\emph{Procedure:}</math> '''


:<br\>1. Draw <math>\displaystyle Y = X_i + \epsilon</math>, where <math>\displaystyle \epsilon </math> has distribution <math>\displaystyle g </math>; <math>\epsilon = Y-X_i \sim~ g </math>; <math>\displaystyle X_i </math> is current state & <math>\displaystyle Y </math> is going to be close to <math>\displaystyle X_i </math>
:<br\>2. It means <math>q(y\mid{x}) = g(y-x)</math>. (Note that <math>\displaystyle g </math> is a function of distance between the current state and the state the chain is going to travel to, i.e. it's of the form <math>\displaystyle g(|y-x|) </math>. Hence we know in this version that <math>\displaystyle q </math> is symmetric <math>\Rightarrow q(y\mid{x}) = g(|y-x|) = g(|x-y|) = q(x\mid{y})</math>)
:<br\>3. <math>Y=min{\{\frac{f(y)}{f(x)},1}\}</math>


Recall in our previous example we wanted to sample from the Cauchy distribution and our proposal distribution was <math> q(y\mid{x}) </math> <math>\sim~ N(x,b^2) </math>
<math>P(y,x)=q(x\mid{y})*(r(y,x))=q(x\mid{y})*1</math> Multiplying both sides by <math>\displaystyle f(y)</math> we get
<br\><math>f(y)P(y,x)=f(y)q(x\mid{y})</math> <math>\Leftarrow</math> '''equation''' <math>\clubsuit\clubsuit</math>


In matlab, we defined this as


<math>\displaystyle Y = b* randn + x </math> (i.e <math>\displaystyle Y = X_i + randn*b) </math>


In this case, we need <math>\displaystyle \epsilon \sim~ N(0,b^2) </math>
Noticing that the right hand sides of the '''equation''' <math>\clubsuit</math> and '''equation''' <math>\clubsuit\clubsuit</math> are equal we conclude that:


The hard problem is to choose b so that the chain will mix well.
<br\><math>\displaystyle f(x)P(x,y)=P(y,x)f(y)</math> as desired and thus showing that Metropolis-Hastings satisfies detailed balance.  


<b>Rule of thumb: </b> choose b such that the rejection probability is 0.5 (i.e. half the time accept, half the time reject)


<b> Example </b>
Next lecture we will see that Metropolis-Hastings is also irreducible and ergodic thus showing that it converges.


[[File:Figure.JPG]]
==Metropolis Hastings Algorithm Continued - June 25 ==


Metropolis–Hastings algorithm is a Markov chain Monte Carlo method. It is used to help sample from probability distributions that are difficult to sample from. The algorithm was named after Nicholas Metropolis (1915-1999), also co-author of the Simulated Anealing method (that is introduced in this lecture as well). The Gibbs sampling algorithm, that will be introduced next lecture, is a special case of the Metropolis–Hastings algorithm. This is a more efficient method, although less applicable at times.


In the last class, we showed that Metropolis Hastings satisfied the the detail-balance equations. i.e.


If we draw <math>\displaystyle y_1 </math> then <math>{\frac{f(y_1)}{f(x)}} > 1 \Rightarrow min{\{\frac{f(y_1)}{f(x)},1}\} = 1</math>, accept <math>\displaystyle y_1</math> with probability 1
<br\><math>\displaystyle f(x)P(x,y)=P(y,x)f(y)</math>, which means <math>\displaystyle f(x) </math> is the stationary distribution of the chain.


If we draw <math>\displaystyle y_2 </math> then <math>{\frac{f(y_2)}{f(x)}} < 1 \Rightarrow min{\{\frac{f(y_2)}{f(x)},1}\} = \frac{f(y_2)}{f(x)}</math>, accept <math>\displaystyle y_2</math> with probability <math>\frac{f(y_2)}{f(x)}</math>
But this is not enough, we want the chain to converge to the stationary distribution as well.


Hence, each point drawn from the proposal that belongs to a region with higher density will be accepted for sure (with probability 1), and if a point belongs to a region with less density, then the chance that it will be accepted will be less than 1.
Thus, we also need it to be:


====''''' Independence Metropolis Hastings''''' ====
<b>Irreducible:</b> There is a positive probability to reach any non-empty set of states from any starting point. This is trivial for many choice of <math>\emph{q}</math> including the one that we used in the example in the previous lecture (which was normally distributed)


In this case, the proposal distribution is independent of the current state, i.e. <math>\displaystyle q(y\mid{x}) = q(y)</math>
<b>Aperiodic:</b> The chain will not oscillate between different set of states. In the previous example, <math> q(y\mid{x}) </math> is <math> \displaystyle N(x,b^2)</math>, which will clearly not oscillate.


We draw from a fixed distribution
Next we discuss a couple of variations of Metropolis Hastings


And define <math>r = min{\{\frac{f(y)}{f(x)} \cdot \frac{q(x)}{q(y)},1}\}</math>
====''''' Random Walk Metropolis Hastings''''' ====


And, this does not work unless <math>\displaystyle q </math> is very similar to the target distribution <math>\displaystyle f </math> (i.e. usually used when <math>\displaystyle f </math> is known up to a proportionality constant - the form of the distibution is known, but the distribution is not exactly known)
:'''<math>\emph{Procedure:}</math> '''


Now, we pose the question: if <math>\displaystyle q(y\mid{x}) </math> does not depend on <math>\displaystyle X</math>, does it mean that the sequence generated from this chain is really independent?
:<br\>1. Draw <math>\displaystyle Y = X_i + \epsilon</math>, where <math>\displaystyle \epsilon </math> has distribution <math>\displaystyle g </math>; <math>\epsilon = Y-X_i \sim~ g </math>; <math>\displaystyle X_i </math> is current state & <math>\displaystyle Y </math> is going to be close to <math>\displaystyle X_i </math>
:<br\>2. It means <math>q(y\mid{x}) = g(y-x)</math>. (Note that <math>\displaystyle g </math> is a function of distance between the current state and the state the chain is going to travel to, i.e. it's of the form <math>\displaystyle g(|y-x|) </math>. Hence we know in this version that <math>\displaystyle q </math> is symmetric <math>\Rightarrow q(y\mid{x}) = g(|y-x|) = g(|x-y|) = q(x\mid{y})</math>)
:<br\>3. <math>r=min{\{\frac{f(y)}{f(x)},1}\}</math>


Answer: Even though <math> Y \sim~ g(y) </math> does not depend on <math>\displaystyle X </math>, but <math>\displaystyle r </math> depends on <math>\displaystyle X </math>. So it's not really an independent sequence!
Recall in our previous example we wanted to sample from the Cauchy distribution and our proposal distribution was <math> q(y\mid{x}) </math> <math>\sim~ N(x,b^2) </math>


:<math>x_{i+1} = \begin{cases}
In matlab, we defined this as
x_i \\
y \\
\end{cases}</math>


Thus, the sequence is not really independent because acceptance probability <math>\displaystyle r </math> depends on the state <math>\displaystyle X_i </math>
<math>\displaystyle Y = b* randn + x </math> (i.e <math>\displaystyle Y = X_i + randn*b) </math>


====''''' Simulated Annealing ''''' ====
In this case, we need <math>\displaystyle \epsilon \sim~ N(0,b^2) </math>


This is essentially a method for optimization and an application of Metropolis Hastings.
The hard problem is to choose b so that the chain will mix well.


Consider the problem of <math>\displaystyle \min_{x}(h(x)) </math>, i.e. we need to find x that minimizes <math>\displaystyle h(x) </math>. But, this is the same problem as <math>\displaystyle \max(e^{\frac{-h(x)}{T}})</math> for some constant T (since the exponential function is a monotone function)
<b>Rule of thumb: </b> choose b such that the rejection probability is 0.5 (i.e. half the time accept, half the time reject)


We then consider some distribution function <math>\displaystyle f</math> such that <math>\displaystyle f \propto e^{\frac{-h(x)}{T}}</math>, where T is called the temperature, and define the following procedure:
<b> Example </b>


:'''<math>\emph{Procedure:}</math> '''
[[File:Figure.JPG]]


:#Set T to be a large number
:#Start with some random <math>\displaystyle X_0,</math> <math>\displaystyle i = 0</math>
:#<math> Y \sim~ q(y\mid{x}) </math> (note that <math>\displaystyle  q </math> is usually chosen to be symmetric)
:#<math> U \sim~ Unif[0,1] </math>
:#Define <math>r = min{\{\frac{f(y)}{f(x)},1}\}</math> (when <math>\displaystyle  q </math> is symmetric)
:#<math>X_{i+1} = \begin{cases}
Y, & \text{with probability r} \\
X_i & \text{with probability 1-r}\\
\end{cases}</math>
:#Decrease T and go to Step 2


Now, we know that <math>r = min{\{\frac{f(y)}{f(x)},1}\}</math>


Consider <math> \frac{f(y)}{f(x)}
If we draw <math>\displaystyle y_1 </math> then <math>{\frac{f(y_1)}{f(x)}} > 1 \Rightarrow min{\{\frac{f(y_1)}{f(x)},1}\} = 1</math>, accept <math>\displaystyle y_1</math> with probability 1


= \frac{e^{\frac{-h(y)}{T}}}{e^{\frac{-h(x)}{T}}}
If we draw <math>\displaystyle y_2 </math> then <math>{\frac{f(y_2)}{f(x)}} < 1 \Rightarrow min{\{\frac{f(y_2)}{f(x)},1}\} = \frac{f(y_2)}{f(x)}</math>, accept <math>\displaystyle y_2</math> with probability <math>\frac{f(y_2)}{f(x)}</math>
= e^{\frac{h(x)-h(y)}{T}} </math>


<b>Now, suppose T is large,</b>
Hence, each point drawn from the proposal that belongs to a region with higher density will be accepted for sure (with probability 1), and if a point belongs to a region with less density, then the chance that it will be accepted will be less than 1.


<math>\rightarrow </math> if <math> h(y)<h(x) \Rightarrow r = 1 </math> and we therefore accept <math>\displaystyle y </math> with probability 1
====''''' Independence Metropolis Hastings''''' ====


<math>\rightarrow </math> if <math> h(y)>h(x) \Rightarrow r = e^{\frac{h(x)-h(y)}{T}} < 1 </math> and we therefore accept <math>\displaystyle y </math> with probability <math>\displaystyle <1 </math>
In this case, the proposal distribution is independent of the current state, i.e. <math>\displaystyle q(y\mid{x}) = q(y)</math>


<b>On the other hand, suppose T is small (<math> T \rightarrow 0 </math>),</b>
We draw from a fixed distribution


<math>\rightarrow </math> if <math> h(y)<h(x) \Rightarrow r = 1 </math> and we therefore accept <math>\displaystyle y </math> with probability 1
And define <math>r = min{\{\frac{f(y)}{f(x)} \cdot \frac{q(x)}{q(y)},1}\}</math>


<math>\rightarrow </math> if <math> h(y)>h(x) \Rightarrow r = 0 </math> and we therefore reject <math>\displaystyle y </math>
And, this does not work unless <math>\displaystyle q </math> is very similar to the target distribution <math>\displaystyle f </math> (i.e. usually used when <math>\displaystyle f </math> is known up to a proportionality constant - the form of the distibution is known, but the distribution is not exactly known)


Now, we pose the question: if <math>\displaystyle q(y\mid{x}) </math> does not depend on <math>\displaystyle X</math>, does it mean that the sequence generated from this chain is really independent?


Answer: Even though <math> Y \sim~ g(y) </math> does not depend on <math>\displaystyle X </math>, but <math>\displaystyle r </math> depends on <math>\displaystyle X </math>. So it's not really an independent sequence!


<b>Example 1</b>
:<math>x_{i+1} = \begin{cases}
x_i \\
y \\
\end{cases}</math>


Consider the problem of minimizing the function <math>\displaystyle f(x) = -2x^3 - x^2 + 40x + 3 </math>
Thus, the sequence is not really independent because acceptance probability <math>\displaystyle r </math> depends on the state <math>\displaystyle X_i </math>


We can plot this function and observe that it makes a local minimum near <math>\displaystyle x = -3 </math>
====''''' Simulated Annealing ''''' ====


[[File:ezplotf0.jpg]]
Simulated Annealing is a method of optimization and an application of the Metropolis Hastings Algorithm.


We then plot the graphs of <math>\displaystyle \frac{f(x)}{T}</math> for <math>\displaystyle T = 100, 0.1</math> and observe that the distribution expands for a large T, and contracts for T small - i.e. T plays the role of variance - making the distribution expand and contract accordingly.
Consider the problem where we want to find <math>x</math> such that the objective function <math>h(x)</math> is at it's minimum,<br /><br />


[[File:ezplotf1.jpg]]
<math>\ \min_{x}(h(x)) </math><br /><br />


[[File:ezplotf2.jpg]]
Given a constant T and since the exponential function is monotone, this optimization problem is equivalent to,<br /><br />
<math>\ \max_{x}(e^{\frac{-h(x)}{T}})</math> <br /><br />


At the end, we get T to be pretty small, our distribution that we're sampling from becomes sharper, and the points that we sample are close to the local max of the exponential function (which is the mode of the distribution), thereby corresponding to the local min of our original function (as can be seen above).
We consider a distribution function <math>\displaystyle f</math> such that <math>\displaystyle f \propto e^{\frac{-h(x)}{T}}</math>, where T is called the temperature, and define the following procedure:


====''''' Example 2 (from June 30th lecture) ''''' ====
:'''<math>\emph{Procedure:}</math> '''


Suppose we want to minimize the function <math>\displaystyle f(x) = (x - 2)^2 </math>
:#Set T to be a large number
:#Start with some random <math>\displaystyle X_0,</math> <math>\displaystyle i = 0</math>
:#<math> Y \sim~ q(y\mid{x}) </math> (note that <math>\displaystyle  q </math> is usually chosen to be symmetric)
:#<math> U \sim~ Unif[0,1] </math>
:#Define <math>r = min{\{\frac{f(y)}{f(x)},1}\}</math> (when <math>\displaystyle  q </math> is symmetric)
:#<math>X_{i+1} = \begin{cases}
Y, & \text{with probability r} \\
X_i & \text{with probability 1-r}\\
\end{cases}</math><br /><br />IE. <math>X_{i+1} = Y</math> if <math>U<r</math><br /><br />
:#Decrease T and go to Step 2


Now, we know that <math>r = min{\{\frac{f(y)}{f(x)},1}\}</math>


Intuitively, we know that the answer is 2. To apply the Simulated Annealing procedure however, we require a proposal distribution. Suppose we use <math>\displaystyle Y \sim~ N(x, b^2)</math> and we begin with <math>\displaystyle T = 10</math>
Consider <math> \frac{f(y)}{f(x)}


Then the problem may be solved in MATLAB using the following:
= \frac{e^{\frac{-h(y)}{T}}}{e^{\frac{-h(x)}{T}}}
function v = obj(x)
= e^{\frac{h(x)-h(y)}{T}} </math>
v = (x - 2).^2;


T = 10; %this is the initial value of T, which we must gradually decrease
<b>Now, suppose T is large,</b>
b = 2;
X(1) = 0;
for i = 2:100 %as we change T, we will change i (e.g. i=101:200)
    Y = b*randn + X(i-1);
    r = min(1 , exp(-obj(Y)/T)/exp(-obj(X(i-1))/T) );
    U = rand;
    if U < r
        X(i) = Y; %accept Y
    else
        X(i) = X(i-1); %reject Y
    end;
end;


The first run (with <math>\displaystyle T = 10 </math>) gives us <math>\displaystyle X = 1.2792 </math>
<math>\rightarrow </math> if <math> h(y)<h(x) \Rightarrow r = 1 </math> and we therefore accept <math>\displaystyle y </math> with probability 1


Next, if we let <math>\displaystyle T = {9, 5, 2, 0.9, 0.1, 0.01, 0.005, 0.001}</math> in the order displayed, then we get the following graph when we plot X:
<math>\rightarrow </math> if <math> h(y)>h(x) \Rightarrow r = e^{\frac{h(x)-h(y)}{T}} < 1 </math> and we therefore accept <math>\displaystyle y </math> with probability <math>\displaystyle <1 </math>


[[File:SA_Example2.jpg]]
<b>On the other hand, suppose T is arbitrarily small (<math> T \rightarrow 0 </math>),</b>


<math>\rightarrow </math> if <math> h(y)<h(x) \Rightarrow r = 1 </math> and we therefore accept <math>\displaystyle y </math> with probability 1


<math>\rightarrow </math> if <math> h(y)>h(x) \Rightarrow r = 0 </math> and we therefore reject <math>\displaystyle y </math>


i.e. it converges to the minimum of the function


<b>Travelling Salesman Problem</b>


This problem consists of finding the shortest path connecting a group of cities. The salesman must visit each city once and come back to the start in the shortest possible circuit. This problem is essentially one of optimization because the goal is to minimize the salesman's cost function (this function consists of the costs associated with travelling between two cities on a given path).
<b>Example 1</b>


The travelling salesman problem is one of the most intensely investigated problems in computational mathematics and has been researched by many from diverse academic backgrounds including mathematics, CS, chemistry, physics, psychology, etc... Consequently, the travelling salesman problem now has applications in manufacturing, telecommunications, and neuroscience to name a few.<ref>
Consider the problem of minimizing the function <math>\displaystyle f(x) = -2x^3 - x^2 + 40x + 3 </math>
Applegate, D.L., Bixby, R.E., Chvátal, V., Cook, W.J., ''The Travelling Salesman Problem: A Computational Study'' Copyright 2007 Princeton University Press
</ref>


<br />
We can plot this function and observe that it makes a local minimum near <math>\displaystyle x = -3 </math>
For a good introduction to the travelling salesman problem, along with a description of the theory involved in the problem and examples of its application, refer to a paper by Michael Hahsler and Kurt Hornik entitled ''Introduction to TSP - Infrastructure for the Travelling Salesman Problem''. [http://cran.r-project.org/web/packages/TSP/vignettes/TSP.pdf]
The examples are particularly useful because they are implemented using R (a statistical computing software environment).


<br />
[[File:ezplotf0.jpg]]


==Gibbs Sampling - June 30, 2009==
We then plot the graphs of <math>\displaystyle \frac{f(x)}{T}</math> for <math>\displaystyle T = 100, 0.1</math> and observe that the distribution expands for a large T, and contracts for T small - i.e. T plays the role of variance - making the distribution expand and contract accordingly.


This algorithm is a specific form of Metropolis-Hastings and is the most widely used version of the algorithm. It is used to generate a sequence of samples from the joint distribution of multiple random variables. It was first introduced by Geman and Geman (1984) and then further developed by Gelfand and Smith (1990).<ref>
<gallery widths="250px" heights="250px">
Gentle, James E. ''Elements of Computational Statistics'' Copyright 2002 Springer Science +Business Media, LLC
Image:ezplotf1.jpg|T = 100
</ref> In order to use Gibbs Sampling, we must know how to sample from the conditional distributions. The point of Gibbs sampling is that given a multivariate distribution, it is simpler to sample from a conditional distribution than to integrate over a joint distribution. Gibbs Sampling also satisfies detailed balance equation, similar to Metropolis-Hastings
Image:ezplotf2.jpg|T = 0.1
:<math>
</gallery>
\,f(x) p_{xy} = f(y) p_{yx}
</math>


This implies that the chain is irreversible. The procedure of proving this balance equation is similar to what was done with Metropolis-Hasting proof.


<br />
In the end, T is small and the region  we are trying to sample from becomes sharper. The points that we accept are increasingly close to the local max of the exponential function (which is the mode of the distribution), thereby corresponding to the local min of our original function (as can be seen above).
<b>Advantages</b>


*The algorithm has an acceptance rate of 1. Thus it is efficient because we keep all the points we sample.
In practice the algorithm may get 'stuck' in another local minimum nearby for T too small and we don't get the convergence we looking for.
*It is useful for high-dimensional distributions.


<br />
====''''' Example 2 (from June 30th lecture) ''''' ====
<b>Disadvantages</b><ref>
Gentle, James E. ''Elements of Computational Statistics'' Copyright 2002 Springer Science +Business Media, LLC
</ref>


*We rarely know how to sample from the conditional distributions.
Suppose we want to minimize the function <math>\displaystyle f(x) = (x - 2)^2 </math>
*The algorithm can be extremely slow to converge.
*It is often difficult to know when convergence has occurred.
*The method is not practical when there are relatively small correlations between the random variables.


<b> Example: </b> Gibbs sampling is used if we want to sample from a multidimensional distribution - i.e. <math>\displaystyle f(x_1, x_2, ... , x_n) </math>


We can use Gibbs sampling (assuming we know how to draw from the conditional distributions) by drawing
Intuitively, we know that the answer is 2. To apply the Simulated Annealing procedure however, we require a proposal distribution. Suppose we use <math>\displaystyle Y \sim~ N(x, b^2)</math> and we begin with <math>\displaystyle T = 10</math>


<math>\displaystyle
Then the problem may be solved in MATLAB using the following:
function v = obj(x)
v = (x - 2).^2;


x_1 \sim~ f(x_1|x_2, x_3, ... , x_n)</math>
T = 10; %this is the initial value of T, which we must gradually decrease
b = 2;
X(1) = 0;
for i = 2:100 %as we change T, we will change i (e.g. i=101:200)
    Y = b*randn + X(i-1);
    r = min(1 , exp(-obj(Y)/T)/exp(-obj(X(i-1))/T) );
    U = rand;
    if U < r
        X(i) = Y; %accept Y
    else
        X(i) = X(i-1); %reject Y
    end;
end;


<math>x_2 \sim~ f(x_2|x_1, x_3, ... , x_n)</math>
The first run (with <math>\displaystyle T = 10 </math>) gives us <math>\displaystyle X = 1.2792 </math>


<math>
Next, if we let <math>\displaystyle T = {9, 5, 2, 0.9, 0.1, 0.01, 0.005, 0.001}</math> in the order displayed, then we get the following graph when we plot X:
\vdots
</math>


<math>
[[File:SA_Example2.jpg]]
x_n \sim~ f(x_n|x_1, x_2, ... , x_{n-1})
</math>


and the resulting set of points drawn <math>\displaystyle (x_1, x_2, \ldots, x_n) </math> follows the required multivariate distribution.


<br />
Suppose we want to sample from a bivariate distribution <math>\displaystyle f(x,y) </math> with initial point <math>\displaystyle(x_i, y_i) = (0,0) </math>, i = 0 <br />
Furthermore, suppose that we know how to sample from the conditional distributions <math>\displaystyle f_{X|Y}(x|y)</math> and <math>\displaystyle f_{Y|X}(y|x)</math>


<math>\emph{Procedure:}</math>
i.e. it converges to the minimum of the function
 
<b>Travelling Salesman Problem</b>


# <math>\displaystyle Y_{i+1} \sim~ f_{Y_i|X_i}(y|x) </math> (i.e. given the previous point, sample a new point)
This problem consists of finding the shortest path connecting a group of cities. The salesman must visit each city once and come back to the start in the shortest possible circuit. This problem is essentially one of optimization because the goal is to minimize the salesman's cost function (this function consists of the costs associated with travelling between two cities on a given path).
# <math>\displaystyle X_{i+1} \sim~ f_{X_{i}|Y_{i+1}}(x|y)</math> (note: it must be <math>\displaystyle Y_{i+1}</math> not <math>Y_{i}</math>, otherwise detailed balance may not hold)
# Repeat Steps 1 and 2


<b>Note</b> This method have usually a long time before convergence called "burning time". For this reason the distribution will be sampled better using only some of the last <math>\displaystyle X_i </math> rather than all of them.
The travelling salesman problem is one of the most intensely investigated problems in computational mathematics and has been researched by many from diverse academic backgrounds including mathematics, CS, chemistry, physics, psychology, etc... Consequently, the travelling salesman problem now has applications in manufacturing, telecommunications, and neuroscience to name a few.<ref>
Applegate, D.L., Bixby, R.E., Chvátal, V., Cook, W.J., ''The Travelling Salesman Problem: A Computational Study'' Copyright 2007 Princeton University Press
</ref>


<b>Example</b>
<br />
Suppose we want to generate samples from a bivariate normal distribution where <math>\displaystyle \mu = \left[\begin{matrix} 1 \\ 2 \end{matrix}\right]</math> and <math>\sigma = \left[\begin{matrix} 1 & \rho \\ \rho & 1 \end{matrix}\right]</math>
For a good introduction to the travelling salesman problem, along with a description of the theory involved in the problem and examples of its application, refer to a paper by Michael Hahsler and Kurt Hornik entitled ''Introduction to TSP - Infrastructure for the Travelling Salesman Problem''. [http://cran.r-project.org/web/packages/TSP/vignettes/TSP.pdf]
The examples are particularly useful because they are implemented using R (a statistical computing software environment).


<br />
<br />
Note that for a bivariate distribution it may be shown that the conditional distributions are normal. So, <math>\displaystyle f(x_2|x_1) \sim~ N(\mu_2 + \rho(x_1 - \mu_1), 1 - \rho^2)</math> and <math>\displaystyle f(x_1|x_2) \sim~ N(\mu_1 + \rho(x_2 - \mu_2), 1 - \rho^2)</math>


The problem (for a specified value <math>\displaystyle \rho</math>) may be solved in MATLAB using the following:
==Gibbs Sampling - June 30, 2009==
Y = [1 ; 2];
rho = 0.01;
sigma = sqrt(1 - rho^2);
X(1,:) = [0 0];


for i = 2:5000
This algorithm is a specific form of Metropolis-Hastings and is the most widely used version of the algorithm. It is used to generate a sequence of samples from the joint distribution of multiple random variables. It was first introduced by Geman and Geman (1984) and then further developed by Gelfand and Smith (1990).<ref>
    mu = Y(1) + rho*(X(i-1,2) - Y(2));
Gentle, James E. ''Elements of Computational Statistics'' Copyright 2002 Springer Science +Business Media, LLC
    X(i,1) = mu + sigma*randn;
</ref> In order to use Gibbs Sampling, we must know how to sample from the conditional distributions. The point of Gibbs sampling is that given a multivariate distribution, it is simpler to sample from a conditional distribution than to integrate over a joint distribution. Gibbs Sampling also satisfies detailed balance equation, similar to Metropolis-Hastings
    mu = Y(2) + rho*(X(i-1,1) - Y(1));
:<math>
    X(i,2) = mu + sigma*randn;
\,f(x) p_{xy} = f(y) p_{yx}
end;
</math>
%plot(X(:,1),X(:,2),'.') plots all of the points
%plot(X(1000:end,1),X(1000:end,2),'.') plots the last 4000 points ->  
    this demonstrates that convergence occurs after a while
    (this is called the burning time)


The output of plotting all points is:
This implies that the chain is irreversible. The procedure of proving this balance equation is similar to what was done with Metropolis-Hasting proof.


[[File:Gibbs_Sampling.jpg]]
<br />
<b>Advantages</b>


==Metropolis-Hastings within Gibbs Sampling - July 2==
*The algorithm has an acceptance rate of 1. Thus it is efficient because we keep all the points we sample.
*It is useful for high-dimensional distributions.


Thus far when discussing Gibbs Sampling, it has been assumed that we know how to sample from the conditional distributions. Even if this is not known, it is still possible to use Gibbs Sampling by utilizing the Metropolis-Hastings algorithm.
<br />
<b>Disadvantages</b><ref>
Gentle, James E. ''Elements of Computational Statistics'' Copyright 2002 Springer Science +Business Media, LLC
</ref>


*Choose <math>\displaystyle q </math> as a proposal distribution for X (assuming Y fixed).
*We rarely know how to sample from the conditional distributions.
*Choose <math>\displaystyle \tilde{q} </math> as a proposal distribution for Y (assuming X fixed).
*The algorithm can be extremely slow to converge.
*Do a Metropolis-Hastings step for X, treating Y as fixed.
*It is often difficult to know when convergence has occurred.
*Do a Metropolis-Hastings step for Y, treating X as fixed.
*The method is not practical when there are relatively small correlations between the random variables.


:'''<math>\emph{Procedure:}</math> '''
<b> Example: </b> Gibbs sampling is used if we want to sample from a multidimensional distribution - i.e. <math>\displaystyle f(x_1, x_2, ... , x_n) </math>


:# Start with some random variables <math>\displaystyle X_0, Y_0, n = 0</math>
We can use Gibbs sampling (assuming we know how to draw from the conditional distributions) by drawing
:# Draw <math>Z~ \sim~ q(Z\mid{X_n})</math>
:# Set <math>r = \min \{ 1, \frac{f(Z,Y_n)}{f(X_n,Y_n)}  \frac{q(X_n\mid{Z})}{q(Z\mid{X_n})} \} </math>
:# <math>X_{n+1} = \begin{cases}
Z, & \text{with probability r}\\
X_n, & \text{with probability 1-r}\\
\end{cases}</math>
:# Draw <math>Z~ \sim~ \tilde{q}(Z\mid{Y_n})</math>
:# Set <math>r = \min \{ 1, \frac{f(X_{n+1},Z)}{f(X_{n+1},Y_n)}  \frac{\tilde{q}(Y_n\mid{Z})}{\tilde{q}(Z\mid{Y_n})} \}</math>
:# <math>Y_{n+1} = \begin{cases}
Z, & \text{with probability r} \\
Y_n, & \text{with probability 1-r}\\
\end{cases}</math>
:# Set <math>\displaystyle n = n + 1 </math>, return to step 2 and repeat the same procedure


==Page Ranking and Review of Linear Algebra - July 7==
<math>\displaystyle


===Page Ranking===
x_1^* \sim~ f(x_1|x_2, x_3, ... , x_n)</math>
Page Rank is a form of link analysis algorithm, and it is named after Larry Page, who is a computer scientist and is one of the co-founders of Google. As an interesting note, the name "PageRank" is a trademark of Google, and the PageRank process has been patented. However the patent has been assigned to Stanford University instead of Google.


In the real world, the Page Ranking process is used by Internet search engines, namely Google. It assigns a numerical weighting to each web page within the World Wide Web which measures the relative importance of each page. To rank a web page in terms of importance, we look at the number of web pages that link to it. Additionally, we consider the relative importance of the linking web page. 
<math>x_2^* \sim~ f(x_2|x_1^*, x_3, ... , x_n)</math>


We rank pages based on the weighted number of links to that particular page. A web page is important if so many pages point to it.
<math>
\vdots
</math>


====Factors relating to importance of links====
<math>
1) Importance (rank) of linking web page (higher importance is better).
x_n^* \sim~ f(x_n|x_1^*, x_2^*, ... , x_{n-1}^*)
</math>


2) Number of outgoing links from linking web page (lower is better - since the importance of the original page itself may be diminished if it has a large number of outgoing links).
and the resulting set of points drawn <math>\displaystyle (x_1^*, x_2^*, \ldots, x_n^*) </math> follows the required multivariate distribution.


====Definitions====
<br />
<math>L_{i,j} = \begin{cases}
Suppose we want to sample from a bivariate distribution <math>\displaystyle f(x,y) </math> with initial point <math>\displaystyle(x_i, y_i) = (0,0) </math>, i = 0 <br />
1 , & \text{if j links to i}\\
Furthermore, suppose that we know how to sample from the conditional distributions <math>\displaystyle f_{X|Y}(x|y)</math> and <math>\displaystyle f_{Y|X}(y|x)</math>
0 , & \text{else}\\ \end{cases}</math>


<math>\emph{Procedure:}</math>


<math>c_{j}=\sum_{i=1}^N L_{i,j}\text{ = number of outgoing links from website j} </math>
# <math>\displaystyle Y_{i+1} \sim~ f_{Y_i|X_i}(y|x) </math> (i.e. given the previous point, sample a new point)
# <math>\displaystyle X_{i+1} \sim~ f_{X_{i}|Y_{i+1}}(x|y)</math> (note: it must be <math>\displaystyle Y_{i+1}</math> not <math>Y_{i}</math>, otherwise detailed balance may not hold)
# Repeat Steps 1 and 2


<math>P_{i} = (1-d)\times 1+ (d) \times \sum_{j=1}^n \frac{L_{i,j} \times P_j}{c_j} \text{ = rank of i}
<b>Note</b> This method have usually a long time before convergence called "burning time". For this reason the distribution will be sampled better using only some of the last <math>\displaystyle X_i </math> rather than all of them.
\text{          where } 0 \leq d \leq 1 </math>  


Under this formula, <math>\displaystyle P_i</math> is never zero.  We weight the sum and the constant using <math>\displaystyle d </math>(which is just a coefficient between 0 and 1 used to balance the objective function).
<b>Example</b>
Suppose we want to generate samples from a bivariate normal distribution where <math>\displaystyle \mu = \left[\begin{matrix} 1 \\ 2 \end{matrix}\right]</math> and <math>\sigma = \left[\begin{matrix} 1 & \rho \\ \rho & 1 \end{matrix}\right]</math>


<br />
Note that for a bivariate distribution it may be shown that the conditional distributions are normal. So, <math>\displaystyle f(x_2|x_1) \sim~ N(\mu_2 + \rho(x_1 - \mu_1), 1 - \rho^2)</math> and <math>\displaystyle f(x_1|x_2) \sim~ N(\mu_1 + \rho(x_2 - \mu_2), 1 - \rho^2)</math>


'''In Matrix Form'''
The problem (for a specified value <math>\displaystyle \rho</math>) may be solved in MATLAB using the following:
Y = [1 ; 2];
rho = 0.01;
sigma = sqrt(1 - rho^2);
X(1,:) = [0 0];


for i = 2:5000
    mu = Y(1) + rho*(X(i-1,2) - Y(2));
    X(i,1) = mu + sigma*randn;
    mu = Y(2) + rho*(X(i-1,1) - Y(1));
    X(i,2) = mu + sigma*randn;
end;
%plot(X(:,1),X(:,2),'.') plots all of the points
%plot(X(1000:end,1),X(1000:end,2),'.') plots the last 4000 points ->
    this demonstrates that convergence occurs after a while
    (this is called the burning time)


<math>\displaystyle P =  (1-d)\times e + d \times L \times D^{-1} \times P </math>
The output of plotting all points is:
 


where
[[File:Gibbs_Sampling.jpg]]
<math>P=\left(\begin{matrix}P_{1}\\
P_{2}\\ \vdots \\ P_{N} \end{matrix}\right)</math>
<math>e=\left(\begin{matrix} 1\\
1\\ \vdots \\1  \end{matrix}\right)</math>


are both <math>\displaystyle N</math> X <math>\displaystyle 1</math> matrices
==Metropolis-Hastings within Gibbs Sampling - July 2==


<math>L=\left(\begin{matrix}L_{1,1}&L_{1,2}&\dots&L_{1,N}\\
Thus far when discussing Gibbs Sampling, it has been assumed that we know how to sample from the conditional distributions. Even if this is not known, it is still possible to use Gibbs Sampling by utilizing the Metropolis-Hastings algorithm.
L_{2,1}&L_{2,2}&\dots&L_{2,N}\\
\vdots&\vdots&\ddots&\vdots\\
L_{N,1}&L_{N,2}&\dots&L_{N,N}
\end{matrix}\right)</math>


<math>D=\left(\begin{matrix}c_{1}& 0 &\dots& 0 \\
*Choose <math>\displaystyle q </math> as a proposal distribution for X (assuming Y fixed).
0 & c_{2}&\dots&0\\
*Choose <math>\displaystyle \tilde{q} </math> as a proposal distribution for Y (assuming X fixed).
\vdots&\vdots&\ddots&\vdots&\\
*Do a Metropolis-Hastings step for X, treating Y as fixed.
0&0&\dots&c_{N} \end{matrix}\right)</math>
*Do a Metropolis-Hastings step for Y, treating X as fixed.


<math>D^{-1}=\left(\begin{matrix}1/c_{1}& 0 &\dots& 0 \\
:'''<math>\emph{Procedure:}</math> '''
0 & 1/c_{2}&\dots&0\\
\vdots&\vdots&\ddots&\vdots&\\
0&0&\dots&1/c_{N} \end{matrix}\right)</math>


====Solving for P====
:# Start with some random variables <math>\displaystyle X_n, Y_n, n = 0</math>
Since rank is a relative term, if we make an assumption that
:# Draw <math>Z~ \sim~ q(Z\mid{X_n})</math>
:# Set <math>r = \min \{ 1, \frac{f(Z,Y_n)}{f(X_n,Y_n)}  \frac{q(X_n\mid{Z})}{q(Z\mid{X_n})} \} </math>
:# <math>X_{n+1} = \begin{cases}
Z, & \text{with probability r}\\
X_n, & \text{with probability 1-r}\\
\end{cases}</math>
:# Draw <math>Z~ \sim~ \tilde{q}(Z\mid{Y_n})</math>
:# Set <math>r = \min \{ 1, \frac{f(X_{n+1},Z)}{f(X_{n+1},Y_n)}  \frac{\tilde{q}(Y_n\mid{Z})}{\tilde{q}(Z\mid{Y_n})} \}</math>
:# <math>Y_{n+1} = \begin{cases}
Z, & \text{with probability r} \\
Y_n, & \text{with probability 1-r}\\
\end{cases}</math>
:# Set <math>\displaystyle n = n + 1 </math>, return to step 2 and repeat the same procedure


<math>\sum_{i=1}^N P_i = 1</math>
==Page Ranking and Review of Linear Algebra - July 7==


then we can solve for P (in matrix form this is <math>\displaystyle e^T \times P = 1</math>
===Page Ranking===
Page Rank is a form of link analysis algorithm, and it is named after Larry Page, who is a computer scientist and is one of the co-founders of Google. As an interesting note, the name "PageRank" is a trademark of Google, and the PageRank process has been patented. However the patent has been assigned to Stanford University instead of Google.


<math>\displaystyle P = (1-d)\times e \times 1 + d \times L \times D^{-1} \times P </math>
In the real world, the Page Ranking process is used by Internet search engines, namely Google. It assigns a numerical weighting to each web page within the World Wide Web which measures the relative importance of each page. To rank a web page in terms of importance, we look at the number of web pages that link to it.  Additionally, we consider the relative importance of the linking web page.  


<math>\displaystyle P =  (1-d)\times e \times e^T \times P + d \times L \times D^{-1} \times P \text{    by replacing 1 with } e^T \times P </math>
We rank pages based on the weighted number of links to that particular page. A web page is important if so many pages point to it.


<math>\displaystyle P = [(1-d) \times e \times e^T + d \times L \times D^{-1}] \times P  \text{      by factoring out the } P </math>
====Factors relating to importance of links====
1) Importance (rank) of linking web page (higher importance is better).


<math>\displaystyle P =  A \times P  \text{      by defining A (notice that everything in A is known )} </math>
2) Number of outgoing links from linking web page (lower is better - since the importance of the original page itself may be diminished if it has a large number of outgoing links).


====Definitions====
<math>L_{i,j} = \begin{cases}
1 , & \text{if j links to i}\\
0 , & \text{else}\\ \end{cases}</math>


We can solve for P using two different methods. Firstly, we can recognize that P is an eigenvector corresponding to eigenvalue 1, for matrix A.  Secondly, we can recognize that P is the stationary distribution for a transition matrix A.


If we look at this as a Markov Chain, this represents a random walk on the internet.  There is a chance of jumping to an unlinked page (from the constant) and the probability of going to a page increases as the number of links to it increases.
<math>c_{j}=\sum_{i=1}^N L_{i,j}\text{ = number of outgoing links from website j} </math>


<math>P_{i} = (1-d)\times 1+ (d) \times \sum_{j=1}^n \frac{L_{i,j} \times P_j}{c_j} \text{ = rank of i}
\text{          where } 0 \leq d \leq 1 </math>


To solve for P, we start with a random guess <math>\displaystyle P_0</math> and repeatedly apply
Under this formula, <math>\displaystyle P_i</math> is never zero.  We weight the sum and the constant using <math>\displaystyle d </math>(which is just a coefficient between 0 and 1 used to balance the objective function).


<math>\displaystyle P_i <= A \times P_i-1 </math>


Since this is a stationary series, for large n <math>\displaystyle P_n = P</math>.
'''In Matrix Form'''


===Linear Algebra Review===


<math>\displaystyle P =  (1-d)\times e + d \times L \times D^{-1} \times P </math>


<b>Inner Product</b><ref>
Lay, David, ''Linear Algebra and its Applications'', Copyright 2006, Pearson Education Inc., Boston, MA, USA.
</ref>


Note that the inner product is also referred to as the dot product.
where
If <math> \vec{u} = \left[\begin{matrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{matrix}\right] \text{  and    } \vec{v} = \left[\begin{matrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{matrix}\right] </math> then the inner product is :
<math>P=\left(\begin{matrix}P_{1}\\
P_{2}\\ \vdots \\ P_{N} \end{matrix}\right)</math>
<math>e=\left(\begin{matrix} 1\\
1\\ \vdots \\\end{matrix}\right)</math>


<math> \vec{u} \cdot \vec{v} = \vec{u}^T\vec{v} = \left[\begin{matrix} u_1 & u_2 & \dots & u_n \end{matrix}\right] \left[\begin{matrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{matrix}\right] = u_1v_1 + u_2v_2 + u_3v_3 + \dots + u_nv_n</math>
are both <math>\displaystyle N</math> X <math>\displaystyle 1</math> matrices


<math>L=\left(\begin{matrix}L_{1,1}&L_{1,2}&\dots&L_{1,N}\\
L_{2,1}&L_{2,2}&\dots&L_{2,N}\\
\vdots&\vdots&\ddots&\vdots\\
L_{N,1}&L_{N,2}&\dots&L_{N,N}
\end{matrix}\right)</math>


The <b>length (or norm)</b> of <math>\displaystyle \vec{v} </math> is the non-negative scalar <math>\displaystyle||\vec{v}||</math> defined by
<math>D=\left(\begin{matrix}c_{1}& 0 &\dots& 0 \\
<math>\displaystyle ||\vec{v}|| = \sqrt{\vec{v} \cdot \vec{v}} = \sqrt{v_1^2 + v_2^2 + \dots + v_n^2} </math>
0 & c_{2}&\dots&0\\
\vdots&\vdots&\ddots&\vdots&\\
0&0&\dots&c_{N} \end{matrix}\right)</math>


<math>D^{-1}=\left(\begin{matrix}1/c_{1}& 0 &\dots& 0 \\
0 & 1/c_{2}&\dots&0\\
\vdots&\vdots&\ddots&\vdots&\\
0&0&\dots&1/c_{N} \end{matrix}\right)</math>


For <math>\displaystyle \vec{u} </math> and <math>\displaystyle \vec{v} </math> in <math>\mathbf{R}^n</math> , the <b>distance between <math>\displaystyle \vec{u} </math> and <math>\displaystyle \vec{v} </math> </b>written as <math> \displaystyle dist(\vec{u},\vec{v}) </math>, is the length of the vector <math> \vec{u} - \vec{v}</math>. That is,
====Solving for P====
<math> \displaystyle dist(\vec{u},\vec{v}) = ||\vec{u} - \vec{v}||</math>
Since rank is a relative term, if we make an assumption that


<math>\sum_{i=1}^N P_i = 1</math>


If <math> \vec{u} </math> and <math> \vec{v} </math> are non-zero vectors in <math>\mathbf{R}^2</math> or <math>\mathbf{R}^3</math> , then the angle between <math> \vec{u} </math> and <math> \vec{v} </math> is given as <math>\vec{u} \cdot \vec{v} = ||\vec{u}|| \ ||\vec{v}|| \ cos\theta</math>
then we can solve for P (in matrix form this is <math>\displaystyle e^T \times P = 1</math>


<math>\displaystyle P =  (1-d)\times e \times 1 + d \times L \times D^{-1} \times P </math>


<math>\displaystyle P =  (1-d)\times e \times e^T \times P + d \times L \times D^{-1} \times P \text{    by replacing 1 with } e^T \times P </math>


<b>Orthogonal and Orthonormal</b><ref>
<math>\displaystyle P =  [(1-d) \times e \times e^T + d \times L \times D^{-1}] \times P  \text{      by factoring out the } P </math>
Lay, David, ''Linear Algebra and its Applications'', Copyright 2006, Pearson Education Inc., Boston, MA, USA.
</ref>


Two vectors <math> \vec{u} </math> and <math> \vec{v} </math> in <math>\mathbf{R}^n</math> are <b>orthogonal</b> (to each other) if <math>\vec{u} \cdot \vec{v} = 0</math>
<math>\displaystyle P =  A \times P  \text{       by defining A (notice that everything in A is known )} </math>


By the Pythagorean Theorem, it may also be said that two vectors <math> \vec{u} </math> and <math> \vec{v} </math> are orthogonal if and only if <math> ||\vec{u}+\vec{v}||^2 = ||\vec{u}||^2 + ||\vec{v}||^2 </math>


Two vectors <math> \vec{u} </math> and <math> \vec{v} </math> in <math>\mathbf{R}^n</math> are <b>orthonormal</b> if <math>\vec{u} \cdot \vec{v} = 0</math> and <math>||\vec{u}||=||\vec{v}||=0</math>
We can solve for P using two different methods. Firstly, we can recognize that P is an eigenvector corresponding to eigenvalue 1, for matrix A. Secondly, we can recognize that P is the stationary distribution for a transition matrix A.


If we look at this as a Markov Chain, this represents a random walk on the internet.  There is a chance of jumping to an unlinked page (from the constant) and the probability of going to a page increases as the number of links to it increases.


An <b>orthonormal matrix <math>\displaystyle U</math></b> is a ''square invertible'' matrix, such that <math>\displaystyle U^{-1} = U^T</math> or alternatively <math>\displaystyle U^T \ U = U \ U^T = I</math>


Note that an orthogonal matrix is an orthonormal matrix.
To solve for P, we start with a random guess <math>\displaystyle P_0</math> and repeatedly apply


<math>\displaystyle P_i <= A \times P_i-1 </math>


Since this is a stationary series, for large n <math>\displaystyle P_n = P</math>.


<b>Dependence and Independence</b><ref>
Lay, David, ''Linear Algebra and its Applications'', Copyright 2006, Pearson Education Inc., Boston, MA, USA.
</ref>


The set of vectors <math> \{ \vec{v_1}, \dots , \vec{v_p} \} </math> in <math>\mathbf{R}^n</math> is said to be <b>linearly independent</b> if the vector equation <math>\displaystyle a_1\vec{v_1} + a_2\vec{v_2} + \dots + a_p\vec{v_p} = \vec{0}</math> has only the trivial solution (i.e. <math>\displaystyle a_k = 0 \ \forall k </math> ).


==== Exemple of utilisation of Page Ranking : ====


The set of vectors <math> \{ \vec{v_1}, \dots , \vec{v_p} \} </math> in <math>\mathbf{R}^n</math> is said to be <b>linearly dependent</b> if there exists a set of coefficients  <math> \{ a_1, \dots , a_p \} </math> (not all zero), such that <math>\displaystyle a_1\vec{v_1} + a_2\vec{v_2} + \dots + a_p\vec{v_p} = \vec{0}</math>.


Suppose that the links between four webpages are:


If a set contains more vectors than there are entries in each vector, then the set is linearly dependent.
'''1  →  2'''


That is, any vector set <math> \{ \vec{v_1}, \dots , \vec{v_p} \} </math> in <math>\mathbf{R}^n</math> is linearly dependent if p > n.
'''↕  ↙  '''
                                                                                                                                                                           
'''3  ←  4'''   




If a vector set <math> \{ \vec{v_1}, \dots , \vec{v_p} \} </math> in <math>\mathbf{R}^n</math> contains the zero vector, then the set is linearly dependent.
According to the factors relating to importance of links, we can consider two possible rankings :




<math>\displaystyle  3 > 2 > 1 > 4 </math> 
                                                 
or
                                                     
<math>\displaystyle 3>1>2>4 </math> 
if we consider that the high importance of the link from 3 to 1 is more influent than the fact that there are two outgoing links from page 1 and only one from page 2.


<b>Trace and Rank</b><ref>
Lay, David, ''Linear Algebra and its Applications'', Copyright 2006, Pearson Education Inc., Boston, MA, USA.
</ref>


The <b>trace</b> of a ''square matrix'' <math>\displaystyle A_{nxn} </math>, denoted by <math>\displaystyle tr(A)</math>, is the sum of the diagonal entries in <math>\displaystyle A </math>. That is, <math>\displaystyle tr(A) = \sum_{i = 1}^n a_{ii}</math>
To rank our pages , we use the formula :


Note that an alternate definition for the trace is:


<math>\displaystyle tr(A) = \sum_{i = 1}^n \lambda_{ii}</math>
<math>\displaystyle P =  [(1-d) \times\frac{ e \times e^T }{n}+ d \times L \times D^{-1}] \times P  </math>


i.e. it is the sum of all the eigenvalues of the matrix


The <b>rank</b> of a matrix <math>\displaystyle A </math>, denoted by <math>\displaystyle rank(A) </math>, is the dimension of the column space of A. That is, the rank of a matrix is number of linearly independent rows (or columns) of A.
Where


<math>L_{i,j} = \begin{cases}
1 , & \text{if j links to i}\\
0 , & \text{else}\\ \end{cases}</math>


A ''square matrix'' is <b>non-singular</b> if and only if its <b>rank</b> equals the number of rows (or columns). Alternatively, a matrix is non-singular if it is invertible (i.e. its determinant is NOT zero).
A matrix that is not invertible is sometimes called a <b>singular matrix</b>.


A matrix is said to be ''non-singular'' if and only if its rank equals the number of rows or columns. A non-singular matrix has a non-zero determinant.
In this example,


A square matrix is said to be ''orthogonal'' if <math> AA^T=A^TA=I</math>.


For a square matrix A,
1)      <math>L=\left(\begin{matrix}0&0&1&0\\
*if <math> x^TAx > 0 for all x \neq 0</math>,then A is said to be ''positive-definite''.
1&0&0&0\\
*if <math> x^TAx \geq 0</math> for all <math>x \neq 0</math>,then A is said to be ''positive-semidefinite''.
1&1&0&1\\
0&0&0&0
\end{matrix}\right)</math>


The ''inverse'' of a square matrix A is denoted by <math>A^{-1}</math> and is such that <math>AA^{-1}=A^{-1}A=I</math>. The inverse of a matrix A exists if and only if A is non-singular.


The ''pseudo-inverse'' matrix <math>A^{\dagger}</math> is typically used whenever <math>A^{-1}</math> does not exist because A is either not square or singular: <math>A^{\dagger} = (A^TA)^{-1}A^T</math> with <math>A^{\dagger}A = I</math>.


2)      <math>c_{j}=\sum_{i=1}^N L_{i,j} </math>,


<b>Vector Spaces</b>


The n-dimensional space in which all the n-dimensional vectors reside is called a vector space.
So that here,


A set of vectors <math>\{u_1, u_2, u_3, ... u_n\}</math> is said to form a ''basis'' for a vector space if any arbitrary vector x can be represented by a linear combination of the <math>\{u_i\}</math>:
<math>x = a_1u_1 + a_2u_2 + ... + a_nu_n</math>
*The coefficients <math>\{a_1, a_2, ... a_n\}</math> are called the ''components'' of vector x with the basis <math>\{u_i\}</math>.
*In order to form a basis, it is necessary and sufficient that the <math>\{u_i\}</math> vectors be linearly independent.


A basis <math>\{u_i\}</math> is said to be ''orthogonal'' if
<math> c=\left[\begin{matrix} 2&1&1&1 \end{matrix}\right] </math>
<math>u^T_i u_j\begin{cases}
\neq 0, & \text{ if }i=j\\
= 0, & \text{ if } i\neq j\\
\end{cases}</math>


A basis <math>\{u_i\}</math> is said to be ''orthonormal'' if
<math>u^T_i u_j = \begin{cases}
1, & \text{ if }i=j\\
0, & \text{ if } i\neq j\\
\end{cases}</math>




<b>Eigenvectors and Eigenvalues</b>
3)          We have D=diag(C), so:


Given matrix <math>A_{NxN}</math>, we say that v is an '''eigenvector'' if there exists a scalar <math>\lambda</math> (the eigenvalue) such that <math>Av = \lambda v</math> where <math>\lambda</math> is the corresponding eigenvalue.


Computation of eigenvalues
<math> D=\left(\begin{matrix}2&0&0&0\\
<math>Av = \lambda v \Rightarrow Av - \lambda v = 0 \Rightarrow (A-\lambda I)v = 0 \Rightarrow \begin{cases}
0&1&0&0\\
v = 0, & \text{trivial solution}\\
0&0&1&0\\
(A-\lambda v) = 0, & \text{non-trivial solution}\\
0&0&0&1
\end{cases}</math>
\end{matrix}\right)</math>
<math>(A-\lambda v) = 0 \Rightarrow |A-\lambda v| = 0 \Rightarrow \lambda^N + a_1\lambda^{N-1} + a_2\lambda^{N-2} + ... + a_{N-1}\lambda + a_0 = 0 \leftarrow</math> Characteristic Equation


Properties
*If A is non-singular all eigenvalues are non-zero.
*If A is real and symmetric, all eigenvalues are real and the associated eigenvectors are orthogonal.
*If A is positive-definite all eigenvalues are positive




<b>Linear Transformations</b>
====Algorithm on matlab:====


A ''linear transformation'' is a mapping from a vector space <math>X^N</math> onto a vector space <math>Y^M</math>, and it is represented by a matrix
*Given vector <math>x \in X^N</math>, the corresponding vector y on <math>Y^M</math> is computed as <math> y = Ax</math>.
*The dimensionality of the two spaces does not have to be the same (M and N do not have to be equal).


A linear transformation represented by a square matrix A is said to be ''orthonormal'' when <math>AA^T=A^TA=I</math>
[[First we define our matrix]] :
*implies that <math>A^T=A^{-1}</math>
*An orthonormal transformation has the property of preserving the magnitude of the vectors:
<math>|y| = \sqrt{y^Ty}  = \sqrt{(Ax)^T Ax} = \sqrt{x^Tx} = |x|</math>
*An orthonormal matrix can be thought of as a rotation of the reference frame
*The ''row vectors'' of an orthonormal transformation form a set of orthonormal basis vectors.


L=[0 0 1 0; 1 0 0 0; 1 1 0 1; 0 0 0 0]


<b>Interpretation of Eigenvalues and Eigenvectors</b>
C=[2 1 1 1]


If we view matrix A as a linear transformation, an eigenvector represents an invariant direction in the vector space.
D=diag(C)
*When transformed by A, any point lying on the direction defined by v will remain on that direction and its magnitude will be multiplied by the corresponding eigenvalue.


Given the covariance matrix <math>\sum</math> of a Gaussian distribution
d=0.85
*The eigenvectors of <math>\sum</math> are the principal directions of the distribution
*The eigenvalues are the variances of the corresponding principal directions


The linear transformation defined by the eigenvectors of <math>\sum</math> leads to vectors that are uncorrelated regardless of the form of the distribution (This is used in [http://en.wikipedia.org/wiki/Principal_component_analysis Principal Component Analysis]).
A=(1-d)*ones(4)/4+d*L*inv(D)
*If the distribution is Gaussian, then the transformed vectors will be statistically independent.


==Principal Component Analysis - July 9==
[vec val]=eig(A)
[http://en.wikipedia.org/wiki/Principal_component_analysis Principal Component Analysis] (PCA) is a powerful technique for reducing the dimensionality of a data set. It has applications in data visualization, data mining, classification, etc. It is mostly used for data analysis and for making predictive models.


===Rough definition===
Given a high-dimensional sample of vectors, applying PCA produces an orthogonal set of vectors (called principal components) such that the first principal component is the direction of greatest variance in the sample. The second principal component is the direction of second greatest variance (orthogonal to the first component), etc.


If we ignore the last few principal components (directions with the smallest variance) then we can approximate the data by a lower-dimensional subspace, which is easier to analyze and plot.
[[We are now going to choose a random vector p]]:


===Principal Components of handwritten digits===
p=rand(1,4)
Suppose that we have a set of 103 images (28 by 23 pixels) of handwritten threes, similar to the assignment dataset.


[[File:threes_dataset.png]]
p=p/sum(p)


We can represent each image as a vector of length 644 (<math>644 = 23 \times 28</math>) just like in assignment 5. Then we can represent the entire data set as a 644 by 103 matrix, shown below. Each column represents one image (644 rows = 644 pixels).
p=p'


[[File:matrix_decomp_PCA.png]]


Using PCA, we can approximate the data as the product of two smaller matrices, which I will call <math>V \in M_{644,2}</math> and <math>W \in M_{2,103}</math>. If we expand the matrix product then each image is approximated by a linear combination of the columns of V: <math> \hat{f}(\lambda) = \bar{x} + \lambda_1 v_1 + \lambda_2 v_2 </math>, where <math>\lambda = [\lambda_1, \lambda_2]^T</math> is a column of W.
[[We now look at the probability of going on each page if we repeat many times a selection of a webpage]]


[[File:linear_comb_PCA.png]]
p=A*p


Don't worry about the constant term for now. The point is that we can represent an image using just 2 coefficients instead of 644. Also notice that the coefficients correspond to features of the handwritten digits. The picture below shows the first two principal components for the set of handwritten threes.
p=A*p


[[File:PCA_plot.png]]
etc...


The first coefficient represents the width of the entire digit, and the second coefficient represents the thickness of the stroke.
p=A*p


===More examples===
The slides cover several examples. Some of them use PCA, others use similar, more sophisticated techniques outside the scope of this course (see [http://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction Nonlinear dimensionality reduction], PCA is linear).
*Handwritten digits.
*Recognition of hand orientation. (Isomap??)
*Recognition of facial expressions. (LLE - Locally Linear Embedding?)
*Arranging words based on semantic meaning.
*Detecting beards and glasses on faces. (MDS - Multidimensional scaling?)


===Derivation of PCA===
[[We finally get]]:
We want to find the direction of maximum variation. So take a direction <math>w = [w_1, \ldots, w_D]^T</math> and a data point <math>x = [x_1, \ldots, x_D]^T </math> then compute the length of the projection of the point in direction.


<math>
p =
u = \frac{\textbf{w}^T \textbf{x}}{\sqrt{\textbf{w}^T\textbf{w}}}
</math>


Of course, the direction <math>\textbf{w}</math> is the same as <math>2\textbf{w}</math> or in general <math>c\textbf{w}</math>, and it doesn't matter which one we use. So without loss of generality, let the length of <math>\textbf{w}</math> be 1. Therefore <math>\textbf{w}^T \textbf{w} = 1</math> so the equation simplifies to just
    0.3725
    0.1958
    0.3942
    0.0375


<math>
So we conclude that  <math> \displaystyle  3 > 1 > 2 > 4 </math>  
u = \textbf{w}^T \textbf{x}.
as the probability of being on page three > Pr (page 1) > Pr (page 2) > Pr (page 4)
</math>


Let <math>x_1, \ldots, x_D</math> be a random variables, then our goal is to maximize the variance of <math>u</math>, which is
===Linear Algebra Review===


<math>
\textrm{var}(u) = \textrm{var}(\textbf{w}^T \textbf{x}) = \textbf{w}^T \Sigma \textbf{w},
</math>


where <math>\Sigma</math> is the covariance matrix. For a finite data set we can replace <math>\Sigma</math> by <math>s</math>, the sample covariance matrix.
<b>Inner Product</b><ref>
Lay, David, ''Linear Algebra and its Applications'', Copyright 2006, Pearson Education Inc., Boston, MA, USA.
</ref>


So, <math>\displaystyle w^T sw </math> is the variance of any vector <math>\displaystyle u </math>, formed by the weight vector <math>\displaystyle w </math>
Note that the inner product is also referred to as the dot product.
If <math> \vec{u} = \left[\begin{matrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{matrix}\right] \text{  and    } \vec{v} = \left[\begin{matrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{matrix}\right] </math> then the inner product is :


The first principal component is the vector that maximizes the variance
<math> \vec{u} \cdot \vec{v} = \vec{u}^T\vec{v} = \left[\begin{matrix} u_1 & u_2 & \dots & u_n \end{matrix}\right] \left[\begin{matrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{matrix}\right] = u_1v_1 + u_2v_2 + u_3v_3 + \dots + u_nv_n</math>


<math>
\textrm{PC} = \underset{\textbf{w}}{\operatorname{arg\,max}} \, \left( \operatorname{var}(u) \right) = \underset{\textbf{w}}{\operatorname{arg\,max}} \, \left( \textbf{w}^T s \textbf{w} \right) 
</math>


where [http://en.wikipedia.org/wiki/Arg_max arg max] denotes the value of <math>w</math> that maximizes the function.
The <b>length (or norm)</b> of <math>\displaystyle \vec{v} </math> is the non-negative scalar <math>\displaystyle||\vec{v}||</math> defined by
<math>\displaystyle ||\vec{v}|| = \sqrt{\vec{v} \cdot \vec{v}} = \sqrt{v_1^2 + v_2^2 + \dots + v_n^2} </math>


<math>
\textbf{w}^T s \textbf{w} \leq \| \textbf{w}^T s \textbf{w} \| \leq \| s \| \| \textbf{w} \| =  \| s \|
</math>


Therefore the variance is bounded, so the maximum exists. Our goal is to find the weights <math>\displaystyle w </math> that maximize this variability, but subject to a constraint. The problem then becomes:
For <math>\displaystyle \vec{u} </math> and <math>\displaystyle \vec{v} </math> in <math>\mathbf{R}^n</math> , the <b>distance between <math>\displaystyle \vec{u} </math> and <math>\displaystyle \vec{v} </math> </b>written as <math> \displaystyle dist(\vec{u},\vec{v}) </math>, is the length of the vector <math> \vec{u} - \vec{v}</math>. That is,
<math> \displaystyle dist(\vec{u},\vec{v}) = ||\vec{u} - \vec{v}||</math>


<math>
\underset{\textbf{w}}{\operatorname{max}} \, \left( \textbf{w}^T s \textbf{w} \right) 
</math>
such that
<math>\textbf{w}^T \textbf{w} = 1</math>


If <math> \vec{u} </math> and <math> \vec{v} </math> are non-zero vectors in <math>\mathbf{R}^2</math> or <math>\mathbf{R}^3</math> , then the angle between <math> \vec{u} </math> and <math> \vec{v} </math> is given as <math>\vec{u} \cdot \vec{v} = ||\vec{u}|| \ ||\vec{v}|| \ cos\theta</math>


Next lecture we will actually find the maximum.


===Principal Component Analysis Continued - July 14===
From the previous lecture, we have seen that to take the direction of maximum variance, the problem becomes: <math>
\underset{\textbf{w}}{\operatorname{max}} \, \left( \textbf{w}^T s \textbf{w} \right) 
</math>
with constraint
<math>\textbf{w}^T \textbf{w} = 1</math>.


Before we can proceed, we must review Lagrange Multipliers.
<b>Orthogonal and Orthonormal</b><ref>
Lay, David, ''Linear Algebra and its Applications'', Copyright 2006, Pearson Education Inc., Boston, MA, USA.
</ref>


====Lagrange Multiplier====
Two vectors <math> \vec{u} </math> and <math> \vec{v} </math> in <math>\mathbf{R}^n</math> are <b>orthogonal</b> (to each other) if <math>\vec{u} \cdot \vec{v} = 0</math>
To find the maximum (or minimum) of a function <math>\displaystyle f(x,y)</math> subject to constraints <math>\displaystyle g(x,y) = 0 </math>, we define a new variable <math>\displaystyle \lambda</math> called a Lagrange Multiplier and we form the Lagrangian L:
<math>\displaystyle L(x,y,\lambda) = f(x,y) - \lambda g(x,y)</math>


If <math>\displaystyle (x^*,y^*)</math> is the max of <math>\displaystyle f(x,y)</math>, there exists <math>\displaystyle \lambda^*</math> such that <math>\displaystyle (x^*,y^*,\lambda^*) </math> is a stationary point of L (partial derivatives are 0).
By the Pythagorean Theorem, it may also be said that two vectors <math> \vec{u} </math> and <math> \vec{v} </math> are orthogonal if and only if <math> ||\vec{u}+\vec{v}||^2 = ||\vec{u}||^2 + ||\vec{v}||^2 </math>
<br>In addition <math>\displaystyle (x^*,y^*)</math> is a point in which functions <math>\displaystyle f</math> and <math>\displaystyle g</math> touches but do not cross. At this point, the tangent of <math>\displaystyle f</math> and <math>\displaystyle g</math> are parallel (or the gradient of <math>\displaystyle f</math> and <math>\displaystyle g</math> are parallel).


<math>\displaystyle \nabla_{x,y } f = \lambda \nabla_{x,y } g</math>
Two vectors <math> \vec{u} </math> and <math> \vec{v} </math> in <math>\mathbf{R}^n</math> are <b>orthonormal</b> if <math>\vec{u} \cdot \vec{v} = 0</math> and <math>||\vec{u}||=||\vec{v}||=1</math>
<br>
<br>
where <math>\displaystyle \nabla_{x,y} f = (\frac{\delta f}{dx},\frac{\delta f}{dy})</math>
<br>
and <math>\displaystyle \nabla_{x,y} g = (\frac{\delta g}{dx},\frac{\delta g}{dy})</math>
<br>
To incorporate these into one equation, we define L as <math>\displaystyle L(x,y,\lambda) = f(x,y) - \lambda g(x,y)</math>.


<br>
Back to the original problem, from the Lagrangian we obtain <math>\displaystyle L(\textbf{w},\lambda) = \textbf{w}^T s \textbf{w} - \lambda (\textbf{w}^T \textbf{w} - 1)</math>


(Note that to take the derivative with respect to '''w''' below, <math> \textbf{w}^T s \textbf{w} </math> can be thought of as a quadratic function in '''w''', hence the '''2sw''' below)
An <b>orthonormal matrix <math>\displaystyle U</math></b> is a ''square invertible'' matrix, such that <math>\displaystyle U^{-1} = U^T</math> or alternatively <math>\displaystyle U^T \ U = U \ U^T = I</math>
 
Note that an orthogonal matrix is an orthonormal matrix.


Taking the derivative with respect to '''w''', we get:
<br>
<math>\displaystyle \frac{\delta L}{\delta \textbf{w}} = 2s\textbf{w} - 2\lambda\textbf{w} </math>
<br>
Set <math> \displaystyle \frac{\delta L}{\delta \textbf{w}} = 0 </math>, we get
<br>
<math>\displaystyle s\textbf{w} = \lambda\textbf{w} </math>
<br>
This equation means that <math>\textbf{w}</math> is an eigenvector of s and <math>\lambda</math> is an eigenvalue of s.
<br>
If we substitute <math>\displaystyle\textbf{w}</math> in <math>\displaystyle \textbf{w}^T s\textbf{w}</math> we obtain <math>\displaystyle\textbf{w}^T s\textbf{w} = \textbf{w}^T \lambda \textbf{w} = \lambda w^T w = \lambda </math>
<br>
In order to maximize the objective function we need to choose the eigenvector with the largest eigenvalue.


We choose the first PC, '''u1''' to have the maximum variance (i.e. capturing as much variability in in <math>\displaystyle x_1, x_2,...,x_D </math> as possible


Subsequent principal components will take up successively smaller parts of the total variability
<b>Dependence and Independence</b><ref>
Lay, David, ''Linear Algebra and its Applications'', Copyright 2006, Pearson Education Inc., Boston, MA, USA.
</ref>


Note that the Principal Components decompose the total variance in the data:
The set of vectors <math> \{ \vec{v_1}, \dots , \vec{v_p} \} </math> in <math>\mathbf{R}^n</math> is said to be <b>linearly independent</b> if the vector equation <math>\displaystyle a_1\vec{v_1} + a_2\vec{v_2} + \dots + a_p\vec{v_p} = \vec{0}</math> has only the trivial solution (i.e. <math>\displaystyle a_k = 0 \ \forall k </math> ).


<math>\displaystyle \sum_{i = 1}^D Var(u_i) = \sum_{i = 1}^D \lambda_i = Tr(S) = \sum_{i = 1}^D Var(x_i)</math>


i.e. the sum of variations in all directions is the variation in the whole data
The set of vectors <math> \{ \vec{v_1}, \dots , \vec{v_p} \} </math> in <math>\mathbf{R}^n</math> is said to be <b>linearly dependent</b> if there exists a set of coefficients  <math> \{ a_1, \dots , a_p \} </math> (not all zero), such that <math>\displaystyle a_1\vec{v_1} + a_2\vec{v_2} + \dots + a_p\vec{v_p} = \vec{0}</math>.


<b> Example from class </b>


We apply PCA to the noise data, making the assumption that the intrinsic dimensionality of the data is 10. We now try to compute the reconstructed images using the top 10 eigenvectors and plot the proginal and reconstructed images
If a set contains more vectors than there are entries in each vector, then the set is linearly dependent.  


The Matlab code is as follows:
That is, any vector set <math> \{ \vec{v_1}, \dots , \vec{v_p} \} </math> in <math>\mathbf{R}^n</math> is linearly dependent if p > n.
 
 
  load('C:\Documents and Settings\r2malik\Desktop\STAT 341\noisy.mat')
 
  imagesc(reshape(X(:,1),20,28)')
If a vector set <math> \{ \vec{v_1}, \dots , \vec{v_p} \} </math> in <math>\mathbf{R}^n</math> contains the zero vector, then the set is linearly dependent.
  colormap gray
 
  imagesc(reshape(X(:,1),20,28)')
 
  [u s v] = svd(X);
 
  xHat = u(:,1:10)*s(1:10,1:10)*v(:,1:10)';
<b>Trace and Rank</b><ref>
  figure
Lay, David, ''Linear Algebra and its Applications'', Copyright 2006, Pearson Education Inc., Boston, MA, USA.
  imagesc(reshape(xHat(:,1000),20,28)')
</ref>
  colormap gray
 
The <b>trace</b> of a ''square matrix'' <math>\displaystyle A_{nxn} </math>, denoted by <math>\displaystyle tr(A)</math>, is the sum of the diagonal entries in <math>\displaystyle A </math>. That is, <math>\displaystyle tr(A) = \sum_{i = 1}^n a_{ii}</math>
 
Note that an alternate definition for the trace is:
 
<math>\displaystyle tr(A) = \sum_{i = 1}^n \lambda_{ii}</math>


Running the above code gives us 2 images - the first one represents the noisy data - we can barely make out the face
i.e. it is the sum of all the eigenvalues of the matrix


The second one is the denoised image
The <b>rank</b> of a matrix <math>\displaystyle A </math>, denoted by <math>\displaystyle rank(A) </math>, is the dimension of the column space of A. That is, the rank of a matrix is number of linearly independent rows (or columns) of A.


The noisy face:


[[File:face1.jpg]]
A ''square matrix'' is <b>non-singular</b> if and only if its <b>rank</b> equals the number of rows (or columns). Alternatively, a matrix is non-singular if it is invertible (i.e. its determinant is NOT zero).
A matrix that is not invertible is sometimes called a <b>singular matrix</b>.


The de-noised face:
A matrix is said to be ''non-singular'' if and only if its rank equals the number of rows or columns. A non-singular matrix has a non-zero determinant.


[[File:face2.jpg]]
A square matrix is said to be ''orthogonal'' if <math> AA^T=A^TA=I</math>.
 
For a square matrix A,
*if <math> x^TAx > 0,\ \forall x \neq 0</math>,then A is said to be ''positive-definite''.
*if <math> x^TAx \geq 0,\ \forall x \neq 0</math>,then A is said to be ''positive-semidefinite''.
 
The ''inverse'' of a square matrix A is denoted by <math>A^{-1}</math> and is such that <math>AA^{-1}=A^{-1}A=I</math>. The inverse of a matrix A exists if and only if A is non-singular.
 
The ''pseudo-inverse'' matrix <math>A^{\dagger}</math> is typically used whenever <math>A^{-1}</math> does not exist because A is either not square or singular: <math>A^{\dagger} = (A^TA)^{-1}A^T</math> with <math>A^{\dagger}A = I</math>.
 
 
<b>Vector Spaces</b>
 
The n-dimensional space in which all the n-dimensional vectors reside is called a vector space.
 
A set of vectors <math>\{u_1, u_2, u_3, ... u_n\}</math> is said to form a ''basis'' for a vector space if any arbitrary vector x can be represented by a linear combination of the <math>\{u_i\}</math>:
<math>x = a_1u_1 + a_2u_2 + ... + a_nu_n</math>
*The coefficients <math>\{a_1, a_2, ... a_n\}</math> are called the ''components'' of vector x with the basis <math>\{u_i\}</math>.
*In order to form a basis, it is necessary and sufficient that the <math>\{u_i\}</math> vectors be linearly independent.
 
A basis <math>\{u_i\}</math> is said to be ''orthogonal'' if
<math>u^T_i u_j\begin{cases}
\neq 0, & \text{ if }i=j\\
= 0, & \text{ if } i\neq j\\
\end{cases}</math>
 
A basis <math>\{u_i\}</math> is said to be ''orthonormal'' if
<math>u^T_i u_j = \begin{cases}
1, & \text{ if }i=j\\
0, & \text{ if } i\neq j\\
\end{cases}</math>
 
 
<b>Eigenvectors and Eigenvalues</b>
 
Given matrix <math>A_{NxN}</math>, we say that v is an '''eigenvector'' if there exists a scalar <math>\lambda</math> (the eigenvalue) such that <math>Av = \lambda v</math> where <math>\lambda</math> is the corresponding eigenvalue.
 
Computation of eigenvalues
<math>Av = \lambda v \Rightarrow Av - \lambda v = 0 \Rightarrow (A-\lambda I)v = 0 \Rightarrow \begin{cases}
v = 0, & \text{trivial solution}\\
(A-\lambda v) = 0, & \text{non-trivial solution}\\
\end{cases}</math>
<math>(A-\lambda v) = 0 \Rightarrow |A-\lambda v| = 0 \Rightarrow \lambda^N + a_1\lambda^{N-1} + a_2\lambda^{N-2} + ... + a_{N-1}\lambda + a_0 = 0 \leftarrow</math> Characteristic Equation
 
Properties
*If A is non-singular all eigenvalues are non-zero.
*If A is real and symmetric, all eigenvalues are real and the associated eigenvectors are orthogonal.
*If A is positive-definite all eigenvalues are positive
 
 
<b>Linear Transformations</b>
 
A ''linear transformation'' is a mapping from a vector space <math>X^N</math> onto a vector space <math>Y^M</math>, and it is represented by a matrix
*Given vector <math>x \in X^N</math>, the corresponding vector y on <math>Y^M</math> is computed as <math> y = Ax</math>.
*The dimensionality of the two spaces does not have to be the same (M and N do not have to be equal).
 
A linear transformation represented by a square matrix A is said to be ''orthonormal'' when <math>AA^T=A^TA=I</math>
*implies that <math>A^T=A^{-1}</math>
*An orthonormal transformation has the property of preserving the magnitude of the vectors:
<math>|y| = \sqrt{y^Ty}  = \sqrt{(Ax)^T Ax} = \sqrt{x^Tx} = |x|</math>
*An orthonormal matrix can be thought of as a rotation of the reference frame
*The ''row vectors'' of an orthonormal transformation form a set of orthonormal basis vectors.
 
 
<b>Interpretation of Eigenvalues and Eigenvectors</b>
 
If we view matrix A as a linear transformation, an eigenvector represents an invariant direction in the vector space.
*When transformed by A, any point lying on the direction defined by v will remain on that direction and its magnitude will be multiplied by the corresponding eigenvalue.
 
Given the covariance matrix <math>\sum</math> of a Gaussian distribution
*The eigenvectors of <math>\sum</math> are the principal directions of the distribution
*The eigenvalues are the variances of the corresponding principal directions
 
The linear transformation defined by the eigenvectors of <math>\sum</math> leads to vectors that are uncorrelated regardless of the form of the distribution (This is used in Principal Component Analysis).
*If the distribution is Gaussian, then the transformed vectors will be statistically independent.
 
==Principal Component Analysis - July 9==
[http://en.wikipedia.org/wiki/Principal_component_analysis Principal Component Analysis] (PCA) is a powerful technique for reducing the dimensionality of a data set. It has applications in data visualization, data mining, classification, etc. It is mostly used for data analysis and for making predictive models.
 
===Rough definition===
 
Keepings two important aspects of data analysis in mind:
* Reducing covariance in data
* Preserving information stored in data(Variance is a source of information)
<br />
PCA takes a sample of ''d'' - dimensional vectors and produces an orthogonal(zero covariance) set of ''d'' 'Principal Components'. The first Principal Component is the direction of greatest variance in the sample. The second principal component is the direction of second greatest variance (orthogonal to the first component), etc.
 
Then we can preserve most of the variance in the sample in a lower dimension by choosing the first ''k'' Principle Components and approximating the data in ''k'' - dimensional space, which is easier to analyze and plot.
 
===Principal Components of handwritten digits===
Suppose that we have a set of 103 images (28 by 23 pixels) of handwritten threes, similar to the assignment dataset.
 
[[File:threes_dataset.png]]
 
We can represent each image as a vector of length 644 (<math>644 = 23 \times 28</math>) just like in assignment 5. Then we can represent the entire data set as a 644 by 103 matrix, shown below. Each column represents one image (644 rows = 644 pixels).
 
[[File:matrix_decomp_PCA.png]]
 
Using PCA, we can approximate the data as the product of two smaller matrices, which I will call <math>V \in M_{644,2}</math> and <math>W \in M_{2,103}</math>. If we expand the matrix product then each image is approximated by a linear combination of the columns of V: <math> \hat{f}(\lambda) = \bar{x} + \lambda_1 v_1 + \lambda_2 v_2 </math>, where <math>\lambda = [\lambda_1, \lambda_2]^T</math> is a column of W.
 
[[File:linear_comb_PCA.png]]
 
Don't worry about the constant term for now. The point is that we can represent an image using just 2 coefficients instead of 644. Also notice that the coefficients correspond to features of the handwritten digits. The picture below shows the first two principal components for the set of handwritten threes.
 
[[File:PCA_plot.png]]
 
The first coefficient represents the width of the entire digit, and the second coefficient represents the thickness of the stroke.
 
===More examples===
The slides cover several examples. Some of them use PCA, others use similar, more sophisticated techniques outside the scope of this course (see [http://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction Nonlinear dimensionality reduction]).
*Handwritten digits.
*Recognition of hand orientation. (Isomap??)
*Recognition of facial expressions. (LLE - Locally Linear Embedding?)
*Arranging words based on semantic meaning.
*Detecting beards and glasses on faces. (MDS - Multidimensional scaling?)
 
===Derivation of the first Principle Component===
We want to find the direction of maximum variation. Let <math>\boldsymbol{w}</math>  be an arbitrary direction, <math>\boldsymbol{x}</math> a data point and <math>\displaystyle u</math> the length of the projection of <math>\boldsymbol{x}</math> in direction <math>\boldsymbol{w}</math>.
<br /><br />
<math>\begin{align}
\textbf{w} &= [w_1, \ldots, w_D]^T \\
\textbf{x} &= [x_1, \ldots, x_D]^T \\
u &= \frac{\textbf{w}^T \textbf{x}}{\sqrt{\textbf{w}^T\textbf{w}}}
\end{align}
</math>
<br /><br />
The direction <math>\textbf{w}</math> is the same as <math>c\textbf{w}</math> so without loss of generality,<br>
<br />
<math>
\begin{align}
|\textbf{w}| &= \sqrt{\textbf{w}^T\textbf{w}} = 1 \\
u &= \textbf{w}^T \textbf{x}
\end{align}
</math>
<br /><br />
Let <math>x_1, \ldots, x_D</math> be random variables, then our goal is to maximize the variance of <math>u</math>,
<br /><br />
<math>
\textrm{var}(u) = \textrm{var}(\textbf{w}^T \textbf{x}) = \textbf{w}^T \Sigma \textbf{w},
</math>
<br /><br />
For a finite data set we replace the covariance matrix <math>\Sigma</math> by <math>s</math>, the sample covariance matrix,
<br /><br />
<math>\textrm{var}(u) = \textbf{w}^T s\textbf{w} </math>
<br /><br />
is the variance of any vector <math>\displaystyle u </math>, formed by the weight vector <math>\displaystyle w </math>. The first principal component is the vector that maximizes the variance,
<br /><br />
<math>
\textrm{PC} = \underset{\textbf{w}}{\operatorname{arg\,max}} \, \left( \operatorname{var}(u) \right) = \underset{\textbf{w}}{\operatorname{arg\,max}} \, \left( \textbf{w}^T s \textbf{w} \right) 
</math>
<br /><br />
where [http://en.wikipedia.org/wiki/Arg_max arg max] denotes the value of <math>w</math> that maximizes the function. Our goal is to find the weights <math>\displaystyle w </math> that maximize this variability, subject to a constraint.<br /> The problem then becomes,
<br /><br />
<math>
\underset{\textbf{w}}{\operatorname{max}} \, \left( \textbf{w}^T s \textbf{w} \right) 
</math>
such that
<math>\textbf{w}^T \textbf{w} = 1</math>
<br /><br />
Notice,<br />
<br />
<math>
\textbf{w}^T s \textbf{w} \leq \| \textbf{w}^T s \textbf{w} \| \leq \| s \| \| \textbf{w} \| =  \| s \|
</math>
<br /><br />
Therefore the variance is bounded, so the maximum exists. We find the this maximum using the method of Lagrange multipliers.
 
===Principal Component Analysis Continued - July 14===
From the previous lecture, we have seen that to take the direction of maximum variance, the problem becomes: <math>
\underset{\textbf{w}}{\operatorname{max}} \, \left( \textbf{w}^T s \textbf{w} \right) 
</math>
with constraint
<math>\textbf{w}^T \textbf{w} = 1</math>.
 
Before we can proceed, we must review Lagrange Multipliers.
 
====Lagrange Multiplier====
[[Image:LagrangeMultipliers2D.svg.png|right|thumb|200px|"The red line shows the constraint g(x,y) = c. The blue lines are contours of f(x,y). The point where the red line tangentially touches a blue contour is our solution." [Lagrange Multipliers, Wikipedia]]]
 
To find the maximum (or minimum) of a function <math>\displaystyle f(x,y)</math> subject to constraints <math>\displaystyle g(x,y) = 0 </math>, we define a new variable <math>\displaystyle \lambda</math> called a [http://en.wikipedia.org/wiki/Lagrange_multipliers Lagrange Multiplier] and we form the Lagrangian,<br /><br />
<math>\displaystyle L(x,y,\lambda) = f(x,y) - \lambda g(x,y)</math>
<br /><br />
If <math>\displaystyle (x^*,y^*)</math> is the max of <math>\displaystyle f(x,y)</math>, there exists <math>\displaystyle \lambda^*</math> such that <math>\displaystyle (x^*,y^*,\lambda^*) </math> is a stationary point of <math>\displaystyle L</math> (partial derivatives are 0).
<br>In addition <math>\displaystyle (x^*,y^*)</math> is a point in which functions <math>\displaystyle f</math> and <math>\displaystyle g</math> touch but do not cross. At this point, the tangents of <math>\displaystyle f</math> and <math>\displaystyle g</math> are parallel or gradients of <math>\displaystyle f</math> and <math>\displaystyle g</math> are parallel, such that:
<br /><br />
<math>\displaystyle \nabla_{x,y } f = \lambda \nabla_{x,y } g</math>
<br>
<br>
where,<br /> <math>\displaystyle \nabla_{x,y} f = (\frac{\partial f}{\partial x},\frac{\partial f}{\partial{y}}) \leftarrow</math>  the gradient of <math>\, f</math>
<br>
<math>\displaystyle \nabla_{x,y} g = (\frac{\partial g}{\partial{x}},\frac{\partial{g}}{\partial{y}}) \leftarrow</math> the gradient of <math>\, g </math>
<br><br />
 
====Example====
Suppose we wish to maximise the function <math>\displaystyle f(x,y)=x-y</math> subject to the constraint <math>\displaystyle x^{2}+y^{2}=1</math>. We can apply the Lagrange multiplier method on this example; the lagrangian is:
 
<math>\displaystyle L(x,y,\lambda) = x-y - \lambda (x^{2}+y^{2}-1)</math>
 
We want the partial derivatives equal to zero:
 
<br />
<math>\displaystyle \frac{\partial L}{\partial x}=1+2 \lambda x=0  </math> <br />
<br /> <math>\displaystyle \frac{\partial L}{\partial y}=-1+2\lambda y=0</math>
<br> <br />
<math>\displaystyle \frac{\partial L}{\partial \lambda}=x^2+y^2-1</math>
<br><br />
 
Solving the system we obtain 2 stationary points: <math>\displaystyle (\sqrt{2}/2,-\sqrt{2}/2)</math> and <math>\displaystyle (-\sqrt{2}/2,\sqrt{2}/2)</math>. In order to understand which one is the maximum, we just need to substitute it in <math>\displaystyle f(x,y)</math> and see which one as the biggest value. In this case the maximum is <math>\displaystyle (\sqrt{2}/2,-\sqrt{2}/2)</math>.
 
====Determining '''W''' ====
Back to the original problem, from the Lagrangian we obtain,
<br /><br />
<math>\displaystyle L(\textbf{w},\lambda) = \textbf{w}^T S \textbf{w} - \lambda (\textbf{w}^T \textbf{w} - 1)</math>
<br /><br />
 
If <math> \textbf{w}^T \textbf{w} </math> is a unit vector then the second part of the equation is 0.
 
If <math> \textbf{w}^T  \textbf{w} </math> is not a unit vector the the second part of the equation increases. Thus decreasing overall <math>\displaystyle L(\textbf{w},\lambda)</math>. Maximization happens when <math> \textbf{w}^T \textbf{w} =1 </math>
 
 
(Note that to take the derivative with respect to '''w''' below, <math> \textbf{w}^T S \textbf{w} </math> can be thought of as a quadratic function of '''w''', hence the '''2sw''' below. For more matrix derivatives, see section 2 of the [http://www2.imm.dtu.dk/pubdb/views/edoc_download.php/3274/pdf/imm3274.pdf Matrix Cookbook])
 
Taking the derivative with respect to '''w''', we get:
<br><br />
<math>\displaystyle \frac{\partial L}{\partial \textbf{w}} = 2S\textbf{w} - 2\lambda\textbf{w} </math>
<br><br />
Set <math> \displaystyle \frac{\partial L}{\partial \textbf{w}} = 0 </math>, we get
<br><br />
<math>\displaystyle S\textbf{w}^* = \lambda^*\textbf{w}^* </math>
<br><br />
From the eigenvalue equation <math>\, \textbf{w}^*</math> is an eigenvector of '''S''' and <math>\, \lambda^*</math> is the corresponding eigenvalue of '''S'''. If we substitute <math>\displaystyle\textbf{w}^*</math> in <math>\displaystyle \textbf{w}^T S\textbf{w}</math> we obtain, <br /><br />
<math>\displaystyle\textbf{w}^{*T} S\textbf{w}^* = \textbf{w}^{*T} \lambda^* \textbf{w}^* = \lambda^* \textbf{w}^{*T} \textbf{w}^* = \lambda^* </math>
<br><br />
In order to maximize the objective function we choose the eigenvector corresponding to the largest eigenvalue. We choose the first PC, '''u<sub>1</sub>''' to have the maximum variance<br /> (i.e. capturing as much variability in in <math>\displaystyle x_1, x_2,...,x_D </math> as possible.) Subsequent principal components will take up successively smaller parts of the total variability.
 
 
D dimensional data will have D eigenvectors
 
<math>\lambda_1 \geq \lambda_2 \geq ... \geq \lambda_D </math> where each <math>\, \lambda_i</math> represents the amount of variation in direction <math>\, i </math>
 
so that
 
<math>Var(u_1) \geq Var(u_2) \geq ... \geq Var(u_D)</math>
 
 
Note that the Principal Components decompose the total variance in the data:
<br /><br />
<math>\displaystyle \sum_{i = 1}^D Var(u_i) = \sum_{i = 1}^D \lambda_i = Tr(S) = \sum_{i = 1}^D Var(x_i)</math>
<br /><br />
i.e. the sum of variations in all directions is the variation in the whole data
<br /><br />
<b> Example from class </b>
 
We apply PCA to the noise data, making the assumption that the intrinsic dimensionality of the data is 10. We now try to compute the reconstructed images using the top 10 eigenvectors and plot the original and reconstructed images
 
The Matlab code is as follows:
 
  load('C:\Documents and Settings\r2malik\Desktop\STAT 341\noisy.mat')
  who
  size(X)
  imagesc(reshape(X(:,1),20,28)')
  colormap gray
  imagesc(reshape(X(:,1),20,28)')
  [u s v] = svd(X);
  xHat = u(:,1:10)*s(1:10,1:10)*v(:,1:10)'; % use ten principal components
  figure
  imagesc(reshape(xHat(:,1000),20,28)') % here '1000' can be changed to different values, e.g. 105, 500, etc.
  colormap gray
 
Running the above code gives us 2 images - the first one represents the noisy data - we can barely make out the face
 
The second one is the denoised image
 
 
<gallery>
Image:face1.jpg|"Noisy Face"
Image:face2.jpg|"De-noised Face"
</gallery>
 
 
 
As you can clearly see, more features can be distinguished from the picture of the de-noised face compared to the picture of the noisy face. If we took more principal components, at first the image would improve since the intrinsic dimensionality is probably more than 10. But if you include all the components you get the noisy image, so not all of the principal components improve the image. In general, it is difficult to choose the optimal number of components.


===Principle Component Analysis (continued) - July 16 ===
===Principle Component Analysis (continued) - July 16 ===
Main Contribution not complete
====Application of PCA - Feature Abstraction ====
One of the applications of PCA is to group similar data (images etc).  There are generally two methods to do this.  We can classify the data (i.e. give each data a label and compare different types of data) or cluster (i.e. do not label the data and compare output for classes).
Generally speaking, we can do this with the entire data set (if we have an 8X8 picture, we can use all 64 pixels).  However, this is hard, and it is easier to use the reduced data and features of the data. 
=====Example: Comparing Images of 2s and 3s=====
To demonstrate this process, we can compare the images of 2s and 3s - from the same data set we have been using throughout the course.  We will apply PCA to the data, and compare the images of the labeled data.  This is an example in classifying.
MATLAB CODE
====General PCA Algorithm====
The PCA Algorithm is summarized in the following slide (taken from the Lecture Slides).
[[File:PCAalgorithm.JPG]]
Other Notes:
    1. The mean of the data(X) must be 0.  This means we may have to preprocess the data by subtracting off the mean.
    2. Encoding the data means that we are projecting the data onto a lower dimensional subspace by taking the
        inner product.  <math>U^T *X </math> is a (d x n) matrix.
    3. When we reconstruct the training set, we are only using the top d dimensions.  This will eliminate the
          dimensions that have lower variance (e.g. noise)
    4. We can compare the reconstructed test sample to the reconstructed training sample to classify the new data.
==Fisher's Linear Discriminant Analysis (FDA) - July 16(cont) ==
Main Contribution Note complete
Similar to PCA, the goal of FDA is to project the data in a lower dimension.  The difference is that we we are not interested in maximizing variances.  Rather our goal is to find a direction that is useful for classifying the data (i.e. in this case, we are looking for direction representative of a particular characteristic e.g. glasses vs. no-glasses). 


The number of dimensions that we want to reduce the data to depends on the number of classes:
<br>
For a 2 class problem, we want to reduce the data to one dimension (a line), <math>\displaystyle Z \in \mathbb{R}^{1}</math> 
<br>
Generally, for a k class problem, we want k-1 dimensions, <math>\displaystyle Z \in \mathbb{R}^{k-1}</math>


As we will see from our objective function, we want to maximize the separation of the classes.  That is, our ideal situation is that the individual classes are as far away from each other as possible, but the data within each class is close together (i.e. collapse to a single point).
====Application of PCA - Feature Abstraction ====
One of the applications of PCA is to group similar data (e.g. images).  There are generally two methods to do this.  We can classify the data (i.e. give each data a label and compare different types of data) or cluster (i.e. do not label the data and compare output for classes).
 
Generally speaking, we can do this with the entire data set (if we have an 8X8 picture, we can use all 64 pixels).  However, this is hard, and it is easier to use the reduced data and features of the data. 
 
=====Example: Comparing Images of 2s and 3s=====
To demonstrate this process, we can compare the images of 2s and 3s - from the same data set we have been using throughout the course.  We will apply PCA to the data, and compare the images of the labeled data.  This is an example in classifying.
 
[[Image:23plotPCA.jpg‎]]<br /><br /><br /><br />
 
The matlab code is as follows.
      load 2_3  %the size of this file is 64 X 400
      [coefs , scores ] = princomp (X') 
              % performs principal components analysis on the data matrix X
              % returns the principal component coefficients and scores
              % scores is the low dimensional representatioation of the data X
      plot(scores(:,1),scores(:,2))   
              % plots the first most variant dimension on the x-axis
              % and the second highest on the y-axis 
      [[File:Plot1.jpg]]
      plot(scores(1:200,1),scores(1:200,2))
              % same graph as above, only with the 2s (not 3s)
      [[File:Plot2.jpg]]
      hold on  % this command allows us to add to the current plot
      plot (scores(201:400,1),scores(201:400,2),'ro')
              % this addes the data for the 3s
              % the 'ro' command makes them red Os on the plot
      [[File:Plot3.jpg]]
      % If We classify based on the position in this plot (feature),
      % its easier than looking at each of the 64 data pieces
      gname() % displays a figure window and
              % waits for you to press a mouse button or a keyboard key
      figure
      subplot(1,2,1)
      imagesc(reshape(X(:,45),8,8)')
      subplot(1,2,2)
      imagesc(reshape(X(:,93),8,8)')
      [[File:Plot4.jpg]]
 
====General PCA Algorithm====
 
The PCA Algorithm is summarized in the following slide (taken from the Lecture Slides).
 
[[Image:PCAalgorithm.JPG|600px]]
 
 
 
Other Notes:
::#The mean of the data(X) must be 0.  This means we may have to preprocess the data by subtracting off the mean(see details[http://en.wikipedia.org/wiki/Principle_component_analysis PCA in Wikipedia].)
::#Encoding the data means that we are projecting the data onto a lower dimensional subspace by taking the inner product. Encoding: <math>X_{D\times n} \longrightarrow Y_{d\times n}</math> using mapping <math>\, U^T X_{d \times n} </math>.
::#When we reconstruct the training set, we are only using the top d dimensions.This will eliminate the dimensions that have lower variance (e.g. noise). Reconstructing: <math> \hat{X}_{D\times n}\longleftarrow Y_{d \times n}</math> using mapping <math>\, UY_{D \times n} </math>.
::#We can compare the reconstructed test sample to the reconstructed training sample to classify the new data.
 
==Fisher's Linear Discriminant Analysis (FDA) - July 16(cont) ==
 
 
Similar to PCA, the goal of FDA is to project the data in a lower dimension.  The difference is that we we are not interested in maximizing variances.  Rather our goal is to find a direction that is useful for classifying the data (i.e. in this case, we are looking for direction representative of a particular characteristic e.g. glasses vs. no-glasses). 
 
The number of dimensions that we want to reduce the data to, depends on the number of classes:
<br>
For a 2 class problem, we want to reduce the data to one dimension (a line), <math>\displaystyle Z \in \mathbb{R}^{1}</math> 
<br>
Generally, for a k class problem, we want k-1 dimensions, <math>\displaystyle Z \in \mathbb{R}^{k-1}</math>
 
As we will see from our objective function, we want to maximize the separation of the classes and to minimize the within variance of each class.  That is, our ideal situation is that the individual classes are as far away from each other as possible, but the data within each class is close together (i.e. collapse to a single point).
 
The following diagram summarizes this goal.
 
[[File:FDA.JPG]]
 
In fact, the two examples above may represent the same data projected on two different lines.
 
[[File:FDAtwo.PNG]]
 
=== Goal: Maximum Separation ===
 
====1. Minimize the within class variance====
 
<math>\displaystyle \min (w^T\sum_1w) </math>
 
<math>\displaystyle \min (w^T\sum_2w) </math>
 
and this problem reduces to <math>\displaystyle \min (w^T(\sum_1 + \sum_2)w)</math>
<br> (where <math>\displaystyle \sum_1 and \sum_2 </math> are the covariance matrices for the 1st and 2nd class of data respectively)
 
Let <math>\displaystyle \ s_w=\sum_1 + \sum_2</math> be the within classes covariance.
Then, this problem can be rewritten as <math>\displaystyle \min (w^Ts_ww)</math>
 
====2. Maximize the distance between the means of the projected data====
<br />
The optimization problem we want to solve is,
<br /><br />
<math>\displaystyle \max (w^T \mu_1 - w^T \mu_2)^2, </math>
<br /><br />
<math>\begin{align} (w^T \mu_1 - w^T \mu_2)^2 &= (w^T \mu_1 - w^T \mu_2)^T(w^T \mu_1 - w^T \mu_2)\\
&= (\mu_1^Tw - \mu_2^Tw^T)(w^T \mu_1 - w^T \mu_2)\\
&= (\mu_1^T - \mu_2^T)ww^T(\mu_1 - \mu_2) \end{align}</math>
<br /><br />
which is a scalar. Therefore,
<br /><br />
<math>\displaystyle = tr[(\mu_1^T - \mu_2^T)ww^T(\mu_1 - \mu_2)] </math>
 
<math>\displaystyle = tr[w^T(\mu_1 - \mu_2)(\mu_1 - \mu_2)^Tw] </math>
<br /><br />
(using the property of <math>\displaystyle tr[ABC] = tr[CAB] = tr[BCA] </math>
<br /><br />
<math>\displaystyle = w^T(\mu_1 - \mu_2)(\mu_1 - \mu_2)^Tw </math>
<br /><br />
Thus our original problem equivalent can be written as,
<br /><br />
<math>\displaystyle \max (w^T \mu_1 - w^T \mu_2)^2 = \displaystyle \max (w^T(\mu_1 - \mu_2)(\mu_1 - \mu_2)^Tw) </math>
<br /><br />
For a two class problem the between class variance is,
<br /><br />
<math>\displaystyle \ s_B=(\mu_1 - \mu_2)(\mu_1 - \mu_2)^T</math>
<br /><br />
Then this problem can be rewritten as,
<br /><br />
<math>\displaystyle \max (w^Ts_Bw)</math>
 
===Objective Function===
We want an objective function which satisifies both of the goals outlined above (at the same time).<br /><br />
# <math>\displaystyle \min (w^T(\sum_1 + \sum_2)w)</math> or <math>\displaystyle \min (w^Ts_ww)</math>
# <math>\displaystyle \max (w^T(\mu_1 - \mu_2)(\mu_1 - \mu_2)^Tw) </math> or <math>\displaystyle \max (w^Ts_Bw)</math>
<br /><br />
We take the ratio of the two -- we wish to maximize<br />
<br />
<math>\displaystyle \frac{(w^T(\mu_1 - \mu_2)(\mu_1 - \mu_2)^Tw)} {(w^T(\sum_1 + \sum_2)w)} </math>
 
or equivalently,<br /><br />
 
<math>\displaystyle \max \frac{(w^Ts_Bw)}{(w^Ts_ww)}</math>
 
 
 
This is a very famous problem which is called "the generalized eigenvector problem".  We can solve this using Lagrange Multipliers.  Since W is a directional vector, we do not care about the size of W.  Therefore we solve a problem similar to that in PCA,
<br /><br />
<math>\displaystyle \max (w^Ts_Bw)</math> <br />
subject to <math>\displaystyle (w^Ts_Ww=1)</math>
<br /><br />
We solve the following Lagrange Multiplier problem,
<br /><br />
<math>\displaystyle L(w,\lambda) = w^Ts_Bw - \lambda (w^Ts_Ww -1) </math><br /><br />
 
== Continuation of Fisher's Linear Discriminant Analysis (FDA) - July 21 ==
 
As discussed in the previous lecture, our Optimization problem for FDA is:
 
<math>
\max_{w} \frac{w^T s_B w}{w^T s_w w}
</math>
 
which we turned into
 
<math>\displaystyle \max (w^Ts_Bw)</math>
<br />Subject to:
<math>\displaystyle (w^Ts_ww=1)</math>
 
where <math>s_B</math> is the covariance matrix between classes and <math>s_w</math> is the covariance matrix within classes.
 
Using Lagrange multipliers, we have a Partial solution to:
<math>\displaystyle (w^Ts_Bw) - \lambda \cdot [(w^Ts_ww)-1] </math>
 
- The optimal solution for w is the eigenvector of
<math>\displaystyle s_w^{-1}s_B </math>
corresponding to the largest eigenvalue;
 
- For a k class problem, we will take the eigenvectors corresponding to the (k-1) highest eigenvalues. 
 
- In the case of two-class problem, the optimal solution for w can be simplfied, such that:
<math>\displaystyle w \propto s_w^{-1}(\mu_2 - \mu_1) </math>
 
=====Example:=====
 
We see now an application of the theory that we just introduced. Using Matlab, we find the principal components and the projection by Fisher Discriminant Analysis of two Bivariate normal distributions to show the difference between the two methods.
      %First of all, we generate the two data set:
      X1 = mvnrnd([1,1],[1 1.5; 1.5 3], 300)
      %In this case: mu_1=[1;1]; Sigma_1=[1 1.5; 1.5 3], where mu and sigma are the mean and covariance matrix.
      X2 = mvnrnd([5,3],[1 1.5; 1.5 3], 300)
      %Here mu_2=[5;3] and Sigma_2=[1 1.5; 1.5 3]
      %The plot of the two distributions is:
     
[[File:Mvrnd.jpg]]
 
      %We compute the principal components:
      X=[X1,X2]
      X=X'
      [coefs, scores]=princomp(X');
      coefs(:,1)    %first principal component
      coefs(:,1)
   
      ans =
      0.76355476446932
      0.64574307712603
 
      plot([0 coefs(1,1)], [0 coefs(2,1)],'b')
      plot([0 coefs(1,1)]*10, [0 coefs(2,1)]*10,'r')
      sw=2*[1 1.5;1.5 3]  % sw=Sigma1+Sigma2=2*Sigma1
      w=sw\[4; 2]      % calculate s_w^{-1}(mu2 - mu1)
      plot ([0 w(1)], [0 w(2)],'g')
 
[[File:Pca_full_1.jpg]]
     
      %We now make the projection:
      Xf=w'*X
      figure
      plot(Xf(1:300),1,'ob') %In this case, since it's a one dimension data, the plot is "Data Vs Indexes"
      hold on
      plot(Xf(301:600),1,'or')
     
 
[[File:Fisher_no_overlap.jpg]]
 
      %We see that in the above picture that there is no overlapping
      Xp=coefs(:,1)'*X
      figure
      plot(Xp(1:300),1,'b')
      hold on
      plot(Xp(301:600),2,'or')
 
 
[[File:Pca_overlap.jpg]]
 
      %In this case there is an overlapping since we project the first principal component on [Xp=coefs(:,1)'*X]
 
== Classification - July 21 (cont) ==
 
The process of classification involves predicting a discrete random variable from another (not necessarily discrete) random variable.  For instance we could be wishing to classify an image as a chair, a desk, or a person.  The discrete random variable, <math>Y</math>, is drawn from the set 'chair,' 'desk,' and 'person,' while the random variable <math>X</math> is the image. (In the case in which Y is a continuous variable, classification is an application of regression.)
 
Consider independent and identically distributed data points <math> \displaystyle (x_1,y_1),...,(x_N,y_N) </math> where <math> \displaystyle x_i \in X \subset \mathbb{R}^d </math> and <math> y_i \in Y</math> and Y is a finite set of discrete values.  The set of data points <math> \displaystyle \{(x_1,y_1),...,(x_N,y_N)\} </math> is called the ''training set.''
 
Then <math>\displaystyle h(x)</math> is a classifier where, given a new data point <math> \displaystyle x</math>, <math> \displaystyle h(x)</math> predicts <math> \displaystyle y</math>.  The function <math> \displaystyle h</math> is found using the training set. i.e. the set ''trains'' <math> \displaystyle h</math> to map <math> \displaystyle X</math> to <math> \displaystyle Y</math>:
 
<math>\, y = h(x)</math>
 
<math>\, h: X \to Y</math>
 
 
To continue with the example before, given a training set of images displaying desks, chairs, and people, the function should be able to read a new image never before seen and predict what the image displays out of the three above options, within a margin of error.
 
 
'''Error Rate'''
 
<math>\, \hat E(h) =Pr(h(x)\neq y) </math>
 
Given test points, how can we find the emperical error rate?
 
We simply count the number of points that have been misclassified and divide by the total number of points.
 
<math>\, \hat E(h) = \frac{1}{N} \sum_{i=1}^N I (Y_i \neq h(x_i)) </math>
 
Error rate is also called misclassification rate, and 1 minus the error rate is sometimes called the classification rate.
 
=== Bayes Classification Rule ===
 
Considering the case of a two-class problem where <math> \mathcal{Y} = \{0,1\} </math>
 
<math>\, r(x)= Pr(Y=1 \mid X=x)= \frac {Pr(X=x \mid Y=1)P(Y=1)} {Pr(X=x)} </math>
 
Where the denominator can be written as <math>\, Pr(X=x) = Pr(X=x \mid Y=1)P(Y=1)+Pr(X=x \mid Y=0)P(Y=0) </math>
 
So our classifier function
<math>h(x) = \begin{cases}
1 & r(x) \geq \frac{1}{2}  \\
0 & o/w\\
\end{cases}</math>
 
This function is considered to be the best classifier in terms of error rate. A problem is that we do not know the joint and marginal probability distributions to calculate <math>\, r(x)</math>. A real-life interpretation of the marginal<math>\, Pr(Y=1 \mid X=x)</math> may even deal with patterns and meaning, which provides an extra challenge in finding a mathematical interpretation.
[[Image:Set Differentiation.jpg|right]]
In the image, which set would it be more appropriate for the question mark to belong to?
 
 
This function is viewed as a theoretical bound - the best that can be achieved by various classification techniques. One such technique is finding the Decision Boundary.
 
=== Decision Boundary ===
 
[[Image:Decision_boundary_Joanna.jpg|thumb|right|250px]]
 
The Decision boundary is given by:
 
<math>\, Pr(Y=1 \mid X=x)= Pr(Y=0 \mid X=x) </math>
 
Suggesting those points where the probabilities of being in both classes are identical. Thus,
 
 
<math>\, D:  \{ x \mid Pr(Y=1 \mid X=x)= Pr(Y=0 \mid X=x)\} </math>
<br /><br />
Linear discriminant analysis has a decision boundary represented by a linear function, while quadratic discriminant analysis has a decision boundary represented by a quadratic function.
<br /><br /><br />
 
=== Linear Discriminant Analysis(LDA) - July 23===
==== Motivation ====
[[Image:LDAmulti.jpg|thumb|right|250px|"LDA decision boundary for 2 classes of multivariate normal data"]]
 
We would like to apply Bayes Classifiaction rule by approximating the class conditional density
<math>\, f_k(x) </math> and the prior <math>\, \pi_k </math>
<math>P(Y=k|X=x) = \frac{f_k(x)\pi_k}{\sum_{\forall{k}}f_k(x_k)\pi_k}</math>
<br /><br />
 
By making the following assumptions we can find a linear approximation to the boundary given by Bayes rule,
#The class conditional density is multivariate gaussian
#The classes have a common covariance matrix
 
==== Derivation ====
 
:'''Note on Quadratic Form'''<br /><br />
: <math>\, (x + a)^TA(x+b) = x^TAx + a^TAb + x^TAb + a^TAx </math>
 
 
 
By assumption (1),<br /><br />
 
<math>f_k(x) = \frac{1}{(2\pi)^{d/2}|\Sigma_k|^{1/2}}e^{-\frac{1}{2}(x-\mu_k)^T\Sigma_k^{-1}(x-\mu_k)}</math><br /><br />
 
where <math>\, \Sigma_k </math> is the class covariance matrix and <math>\, \mu_k </math> is the class mean. By definition of the decision boundary (decision boundary between class <math>\ k </math> and class <math>\ l </math>),<br /><br />
<math>\begin{align}&P(Y=k|X=x) = P(Y=l|X=x)\\ \\ &\frac{1}{(2\pi)^{d/2}|\Sigma_k|^{1/2}}e^{-\frac{1}{2}(x-\mu_k)^T\Sigma_k^{-1}(x-\mu_k)}\pi_k = \frac{1}{(2\pi)^{d/2}|\Sigma_l|^{1/2}}e^{-\frac{1}{2}(x-\mu_l)^T\Sigma_l^{-1}(x-\mu_l)}\pi_l\end{align} </math>
<br /><br />
By assumption (2),<br /><br />
 
<math>\begin{align}& \Sigma_k = \Sigma_l = \Sigma\\ \\ & e^{-\frac{1}{2}(x-\mu_k)^T\Sigma^{-1}(x-\mu_k)}\pi_k = e^{-\frac{1}{2}(x-\mu_l)^T\Sigma^{-1}(x-\mu_l)} \pi_l \\ & -\frac{1}{2}(x-\mu_k)^T\Sigma^{-1}(x-\mu_k) + \log(\pi_k) = -\frac{1}{2}(x-\mu_l)^T\Sigma^{-1}(x-\mu_l) + \log(\pi_l) \\ & -\frac{1}{2}(x^T\Sigma^{-1}x + \mu_k^T\Sigma^{-1}\mu_k - 2x^T\Sigma^{-1}\mu_k) + \frac{1}{2}(x^T\Sigma^{-1}x + \mu_l^T\Sigma^{-1}\mu_l - 2x^T\Sigma^{-1}\mu_l) + \log{\frac{\pi_k}{\pi_l}} = 0\ \\ \\ & x^T(\mu_k - \mu_l) + \frac{1}{2}(\mu_l - \mu_k)^T\Sigma^{-1}(\mu_l + \mu_k) + \log{\frac{\pi_k}{\pi_l}} = 0 \end{align}</math>
<br /><br />
The result is a linear function of <math>\ x </math> of the form <math>\, x^Ta + b = 0 </math>.
 
=====Example:=====
 
      %Load data set
      load 2_3;
      [coefs, scores] = princomp(X');
      size(X)
        % ans = 64 400          % 64 principal components
      size(coefs)
        % ans = 64 64
      size(scores)
        % ans = 400 64
      Y=scores(:, 1:2);
        % just use two of the 64 principal components
      plot(Y(1:200, 1),Y(1:200, 2), 'b.')
      hold on
      plot(Y(201:400, 1),Y(201:400, 2), 'r.')
 
[[File:Pca_2.jpg]]
 
      ll=[zeros(200,1),ones(200,1)];
      [C,err,P,logp] = classify(Y, Y, ll', 'linear');
      %The output of the function classify are:
      %  C, that is a vector whose values define the groups that are classified from the rows of the matrix
      %  err, that is an estimate of the misclassification error
      %  P, that gives the probabilities that the observation n belongs to the ith class
      %  logp, that gives the logaritm of the probability density of the nth observation. Where the latter is the sum of
      %        the conditional probability of the nth observation to belong to the class ith, times the probability of the
      %        class ith, taken over all classes.
 
==== Computational Method ====
 
We can implement this computationally by the following:
 
Define two variables, <math>\, \delta_k </math> and <math>\, \delta_l </math>
 
<math> \,\delta_k  = log(f_k(x)\pi_k) = log (\pi_k) - \frac{1}{2}(x-\mu_k)^T\Sigma_k^{-1}(x-\mu_k) - \frac{d}{2}log(2\pi) - \frac{1}{2}log(|\Sigma_k|) </math>
 
<math> \,\delta_l  = log(f_l(x)\pi_l) = log (\pi_l) - \frac{1}{2}(x-\mu_l)^T\Sigma_l^{-1}(x-\mu_l) - \frac{d}{2}log(2\pi) - \frac{1}{2}log(|\Sigma_l|) </math>
 
 
To classify a point, <math>\ x </math>, first compute <math>\, \delta_k </math> and <math>\, \delta_l </math>. <br /><br />
Classify it to class <math>\ k </math> if <math>\, \delta_k > \delta_l </math> and vise versa. <br /><br />
 
:<math>
h(x) = \begin{cases}
k, & \text{if } \delta_k > \delta_l \\
l, &  otherwise\\
\end{cases}</math>
 
(note: since <math> - \frac{d}{2}log(2\pi) </math> is a constant term, we can simply ignore it in the actual computation since it will cancel out when we do the comparison of the deltas.) <br /><br />
 
'''Special Case: <math>\, \Sigma_k = I </math>''', the identity matrix. Then,


The following diagram summarizes this goal.
<math> \,\delta_k  = log (\pi_k) - \frac{1}{2}(x-\mu_k)^T(x-\mu_k) - \frac{1}{2}log(|I|) = log (\pi_k) - \frac{1}{2}(x-\mu_k)^T(x-\mu_k) </math>


[[File:FDA.JPG]]
We see that in the case <math>\, \Sigma = I </math>, we can simply classify a point, <math>\ x </math>, to a class based on the distances between <math>\ x </math> and the mean of the different classes (adjusted with the log of the prior). <br /><br />


<b> Goal </b>
'''General Case: <math>\, \Sigma_k \ne I </math>''' <br /><br />


<b>1. Minimize the within class variance</b>
<math> \, \Sigma_k = USV^T = USU^T </math> (since <math>\ \Sigma </math> is symmetric)<br /><br />
<math> \, \Sigma_k^{-1} = (USU^T)^{-1} = (U^T)^{-1}S^{-1}U^{-1} = US^{-1}U^T </math><br /><br />


<math>\displaystyle \min (w^T\sum_1w) </math>
So, <math> (x-\mu_k)^T\Sigma_k^{-1}(x-\mu_k) </math>
:<math> \, = (x-\mu_k)^T US^{-1}U^T(x-\mu_k) </math>
:<math> \, = (U^Tx-U^T\mu_k)^T S^{-1}(U^Tx-U^T\mu_k) </math>
:<math> \, = (U^Tx-U^T\mu_k)^T S^{-\frac{1}{2}}S^{-\frac{1}{2}}(U^Tx-U^T\mu_k) </math>
:<math> \, = (S^{-\frac{1}{2}}U^Tx-S^{-\frac{1}{2}}U^T\mu_k)^T I(S^{-\frac{1}{2}}U^Tx-S^{-\frac{1}{2}}U^T\mu_k) </math>
:<math> \, = (x^* - \mu_k^*)^T I(x^* - \mu_k^*) </math> <br /><br />


<math>\displaystyle \min (w^T\sum_2w) </math>
where <math> \, x^* = S^{-\frac{1}{2}}U^Tx </math> and <math> \mu_k^* = S^{-\frac{1}{2}}U^T\mu_k </math>


and this problem reduces to <math>\displaystyle \min (w^T(\sum_1 + \sum_2)w)</math>
Hence the approach taken should be to transpose point <math>x</math> from the beginning,


i.e. Let <math> \, x^* = S^{-\frac{1}{2}}U^Tx </math> <br /><br />


<b>2. Maximize the distance between the means of the projected data after projection</b>
Then compute <math> \, \delta_k </math> and <math> \, \delta_l </math> with <math>x^*</math>, similar to the special case above. <br /><br />


<math>\displaystyle \max (w^T \mu_1 - w^T \mu_2)^2 </math>
If the prior distributions of the 2 classes are the same, then this method only requires us to find the distances from the point x to the mean of the 2 classes.  We would classify x based on the shortest distance to the mean.


<math>\displaystyle = (w^T \mu_1 - w^T \mu_2)^T(w^T \mu_1 - w^T \mu_2) </math>
In the <math>\delta</math> function calculations above, <math>\pi_k = P(X = k)</math> and <math>\pi_l = P(X = l)</math>, and can be approximated using the proportions of <math>k</math> and <math>l</math> elements in the training set.


<math>\displaystyle = (\mu_1^Tw - \mu_2^Tw^T)(w^T \mu_1 - w^T \mu_2) </math>
== More on Quadratic Discriminant Analysis - July 28 ==
The ''curse of dimensionality'' is a problem in many fields, including computer science and statistics. Loosely stated, it says that many problems in computer science and statistics become impractical as the number of dimensions increases.


<math>\displaystyle = (\mu_1^T - \mu_2^T)ww^T(\mu_1 - \mu_2) </math>
For example, the number of points needed to accurately estimate parameters increases with the number of parameters. For linear discriminant analysis we have to estimate <math>\ d(d+1)/2</math> elements of the covariance matrix and <math>\ 2d</math> elements for the mean, for a total of <math>\ (d^2 + 5d)/2</math>. For quadratic discriminant analysis, we have to estimate a second covariance matrix, so we estimate <math>\ d^2 + 3d</math> parameters in total. Clearly we have to estimate more parameters for QDA than LDA, so we need more points to get an accurate estimate for the discriminant.


which is a scalar. Therefore,
Estimation:


<math>\displaystyle = tr[(\mu_1^T - \mu_2^T)ww^T(\mu_1 - \mu_2)] </math>
:<math>\ Y_{N\times 1} = X_{N\times d}^{-1}W_{d\times 1}</math>


<math>\displaystyle = tr[w^T(\mu_1 - \mu_2)(\mu_1 - \mu_2)^Tw] </math>
:<math>\left[\begin{matrix}y_1\\
y_2\end{matrix}\right]=\left[\begin{matrix}x_{11}&x_{12}\\
x_{21}&x_{22}\end{matrix}\right]\left[\begin{matrix}w_1\\
w_2\end{matrix}\right]</math>


(using the property of <math>\displaystyle tr[ABC] = tr[CAB] = tr[BCA] </math>
:<math>\left[\begin{matrix}y_1\\
y_2\end{matrix}\right]=\left[\begin{matrix}x_{11}&x_{12}&x_{11}^2&x_{12}^2\\
x_{21}&x_{22}&x_{21}^2&x_{22}^2\end{matrix}\right]\left[\begin{matrix}w_1\\
w_2\\
w_3\\
w_4\end{matrix}\right]</math>


<math>\displaystyle = w^T(\mu_1 - \mu_2)(\mu_1 - \mu_2)^Tw </math>
We can find very flexible boundaries with LDA by expanding the input domain. For example, suppose <math>\ X</math> has two features and can be represented as <math>\ X=[x, y]</math> then,<br />


Thus, we get our original problem equivalent to
We can expand <math>\ X</math> to <math>\ [x, y, x^2, y^2, xy]</math> and get a quadratic boundary.<br />
We can expand <math>\ X</math> to <math>\ [x, y, xy, x^2, y^2, x^3, y^3, x^2y, xy^2]</math> and get a cubic boundary.<br />
ect..


<math>\displaystyle \max (w^T(\mu_1 - \mu_2)(\mu_1 - \mu_2)^Tw) </math>
In Matlab:
      %To double the input:
      load 2_3;
      [U, Y] = princomp(X');
      ll=zeros(N,1);
      Y1=[Y Y.^2];
      [C,err,p,logp]=classify(Y1,Y1,ll,"linear")
      %Now it classify using not only y but also y^2.

Latest revision as of 08:45, 30 August 2017

Computational Statistics and Data Analysis is a course offered at the University of Waterloo
Spring 2009
Instructor: Ali Ghodsi


Sampling (Generating random numbers)

Generating Random Numbers - May 12, 2009

Generating random numbers in a computational setting presents challenges. A good way to generate random numbers in computational statistics involves analyzing various distributions using computational methods. As a result, the probability distribution of each possible number appears to be uniform (pseudo-random). Outside a computational setting, presenting a uniform distribution is fairly easy (for example, rolling a fair die repetitively to produce a series of random numbers from 1 to 6).

We begin by considering the simplest case: the uniform distribution.

Multiplicative Congruential Method

One way to generate pseudo random numbers from the uniform distribution is using the Multiplicative Congruential Method. This involves three integer parameters a, b, and m, and a seed variable x0. This method deterministically generates a sequence of numbers (based on the seed) with a seemingly random distribution (with some caveats). It proceeds as follows:

[math]\displaystyle{ x_{i+1} = (ax_{i} + b) \mod{m} }[/math]

For example, with a = 13, b = 0, m = 31, x0 = 1, we have:

[math]\displaystyle{ x_{i+1} = 13x_{i} \mod{31} }[/math]

So,

[math]\displaystyle{ \begin{align} x_{0} &{}= 1 \end{align} }[/math]
[math]\displaystyle{ \begin{align} x_{1} &{}= 13 \times 1 + 0 \mod{31} = 13 \\ \end{align} }[/math]
[math]\displaystyle{ \begin{align} x_{2} &{}= 13 \times 13 + 0 \mod{31} = 14 \\ \end{align} }[/math]
[math]\displaystyle{ \begin{align} x_{3} &{}= 13 \times 14 + 0 \mod{31} =27 \\ \end{align} }[/math]

etc.

The above generator of pseudorandom numbers is called a Mixed Congruential Generator or Linear Congruential Generator, as they involve both an additive and a muliplicative term. For correctly chosen values of a, b, and m, this method will generate a sequence of integers including all integers between 0 and m - 1. Scaling the output by dividing the terms of the resulting sequence by m - 1, we create a sequence of numbers between 0 and 1, which is similar to sampling from a uniform distribution.

Of course, not all values of a, b, and m will behave in this way, and will not be suitable for use in generating pseudo random numbers.

For example, with a = 3, b = 2, m = 4, x0 = 1, we have:

[math]\displaystyle{ x_{i+1} = (3x_{i} + 2) \mod{4} }[/math]

So,

[math]\displaystyle{ \begin{align} x_{0} &{}= 1 \end{align} }[/math]
[math]\displaystyle{ \begin{align} x_{1} &{}= 3 \times 1 + 2 \mod{4} = 1 \\ \end{align} }[/math]
[math]\displaystyle{ \begin{align} x_{2} &{}= 3 \times 1 + 2 \mod{4} = 1 \\ \end{align} }[/math]

etc.

For an ideal situation, we want m to be a very large prime number, [math]\displaystyle{ x_{n}\not= 0 }[/math] for any n, and the period is equal to m-1. In practice, it has been found by a paper published in 1988 by Park and Miller, that a = 75, b = 0, and m = 231 - 1 = 2147483647 (the maximum size of a signed integer in a 32-bit system) are good values for the Multiplicative Congruential Method.

Java's Random class is based on a generator with a = 25214903917, b = 11, and m = 248<ref>http://java.sun.com/javase/6/docs/api/java/util/Random.html#next(int)</ref>. The class returns at most 32 leading bits from each xi, so it is possible to get the same value twice in a row, (when x0 = 18698324575379, for instance) without repeating it forever.

General Methods

Since the Multiplicative Congruential Method can only be used for the uniform distribution, other methods must be developed in order to generate pseudo random numbers from other distributions.

Inverse Transform Method

This method uses the fact that when a random sample from the uniform distribution is applied to the inverse of a cumulative density function (cdf) of some distribution, the result is a random sample of that distribution. This is shown by this theorem:

Theorem:

If [math]\displaystyle{ U \sim~ \mathrm{Unif}[0, 1] }[/math] is a random variable and [math]\displaystyle{ X = F^{-1}(U) }[/math], where F is continuous, monotonic, and is the cumulative density function (cdf) for some distribution, then the distribution of the random variable X is given by F(X).

Proof:

Recall that, if f is the pdf corresponding to F where f is defined as 0 outside of its domain, then,

[math]\displaystyle{ F(x) = P(X \leq x) = \int_{-\infty}^x f(x) }[/math]
[math]\displaystyle{ \int_1^{\infty} \frac{x^k}{x^2} dx }[/math]

So F is monotonically increasing, since the probability that X is less than a greater number must be greater than the probability that X is less than a lesser number.

Note also that in the uniform distribution on [0, 1], we have for all a within [0, 1], [math]\displaystyle{ P(U \leq a) = a }[/math].

So,

[math]\displaystyle{ \begin{align} P(F^{-1}(U) \leq x) &{}= P(F(F^{-1}(U)) \leq F(x)) \\ &{}= P(U \leq F(x)) \\ &{}= F(x) \end{align} }[/math]

Completing the proof.

Procedure (Continuous Case)

This method then gives us the following procedure for finding pseudo random numbers from a continuous distribution:

  • Step 1: Draw [math]\displaystyle{ U \sim~ Unif [0, 1] }[/math].
  • Step 2: Compute [math]\displaystyle{ X = F^{-1}(U) }[/math].

Example:

Suppose we want to draw a sample from [math]\displaystyle{ f(x) = \lambda e^{-\lambda x} }[/math] where [math]\displaystyle{ x \gt 0 }[/math] (the exponential distribution).

We need to first find [math]\displaystyle{ F(x) }[/math] and then its inverse, [math]\displaystyle{ F^{-1} }[/math].

[math]\displaystyle{ F(x) = \int^x_0 \theta e^{-\theta u} du = 1 - e^{-\theta x} }[/math]
[math]\displaystyle{ F^{-1}(x) = \frac{-\log(1-y)}{\theta} = \frac{-\log(u)}{\theta} }[/math]

Now we can generate our random sample [math]\displaystyle{ u_1\dots u_n }[/math] from [math]\displaystyle{ F(x) }[/math] by:

[math]\displaystyle{ 1)\ u_i \sim Unif[0, 1] }[/math]
[math]\displaystyle{ 2)\ x_i = \frac{-\log(u_i)}{\theta} }[/math]

The [math]\displaystyle{ x_i }[/math] are now a random sample from [math]\displaystyle{ f(x) }[/math].


This example can be illustrated in Matlab using the code below. Generate [math]\displaystyle{ u_i }[/math], calculate [math]\displaystyle{ x_i }[/math] using the above formula and letting [math]\displaystyle{ \theta=1 }[/math], plot the histogram of [math]\displaystyle{ x_i }[/math]'s for [math]\displaystyle{ i=1,...,100,000 }[/math].

u=rand(1,100000);
x=-log(1-u)/1;
hist(x)
"Histogram showing the expected exponentional distribution"
"Histogram showing the expected exponentional distribution"


The major problem with this approach is that we have to find [math]\displaystyle{ F^{-1} }[/math] and for many distributions it is too difficult (or impossible) to find the inverse of [math]\displaystyle{ F(x) }[/math]. Further, for some distributions it is not even possible to find [math]\displaystyle{ F(x) }[/math] (i.e. a closed form expression for the distribution function, or otherwise; even if the closed form expression exists, it's usually difficult to find [math]\displaystyle{ F^{-1} }[/math]).

Procedure (Discrete Case)

The above method can be easily adapted to work on discrete distributions as well.

In general in the discrete case, we have [math]\displaystyle{ x_0, \dots , x_n }[/math] where:

[math]\displaystyle{ \begin{align}P(X = x_i) &{}= p_i \end{align} }[/math]
[math]\displaystyle{ x_0 \leq x_1 \leq x_2 \dots \leq x_n }[/math]
[math]\displaystyle{ \sum p_i = 1 }[/math]

Thus we can define the following method to find pseudo random numbers in the discrete case (note that the less-than signs from class have been changed to less-than-or-equal-to signs by me, since otherwise the case of [math]\displaystyle{ U = 1 }[/math] is missed):

  • Step 1: Draw [math]\displaystyle{ U~ \sim~ Unif [0,1] }[/math].
  • Step 2:
    • If [math]\displaystyle{ U \lt p_0 }[/math], return [math]\displaystyle{ X = x_0 }[/math]
    • If [math]\displaystyle{ p_0 \leq U \lt p_0 + p_1 }[/math], return [math]\displaystyle{ X = x_1 }[/math]
    • ...
    • In general, if [math]\displaystyle{ p_0+ p_1 + \dots + p_{k-1} \leq U \lt p_0 + \dots + p_k }[/math], return [math]\displaystyle{ X = x_k }[/math]

Example (from class):

Suppose we have the following discrete distribution:

[math]\displaystyle{ \begin{align} P(X = 0) &{}= 0.3 \\ P(X = 1) &{}= 0.2 \\ P(X = 2) &{}= 0.5 \end{align} }[/math]

The cumulative density function (cdf) for this distribution is then:

[math]\displaystyle{ F(x) = \begin{cases} 0, & \text{if } x \lt 0 \\ 0.3, & \text{if } 0 \leq x \lt 1 \\ 0.5, & \text{if } 1 \leq x \lt 2 \\ 1, & \text{if } 2 \leq x \end{cases} }[/math]

Then we can generate numbers from this distribution like this, given [math]\displaystyle{ u_0, \dots, u_n }[/math] from [math]\displaystyle{ U \sim~ Unif[0, 1] }[/math]:

[math]\displaystyle{ x_i = \begin{cases} 0, & \text{if } u_i \leq 0.3 \\ 1, & \text{if } 0.3 \lt u_i \leq 0.5 \\ 2, & \text{if } 0.5 \lt u_i \leq 1 \end{cases} }[/math]

This example can be illustrated in Matlab using the code below:

p=[0.3,0.2,0.5];
for i=1:1000;
  u=rand;
  if u <= p(1)
    x(i)=0;
  elseif u < sum(p(1,2))
    x(i)=1;
  else
    x(i)=2;
  end
end

Acceptance-Rejection Sampling - May 14, 2009

Today, we continue the discussion on sampling (generating random numbers) from general distributions with the Acceptance/Rejection Method.

Acceptance/Rejection Method

File:edit.JPG
Ali: Some statements are incorrect, inaccurate or misleading. Acceptance-Rejection Method needs to be motivated in more details.








Suppose we wish to sample from a target distribution [math]\displaystyle{ f(x) }[/math] that is difficult to sample from directly.

Let [math]\displaystyle{ g(x) }[/math] be a distribution that is easy to sample from and satisfies the condition:

[math]\displaystyle{ \forall x: f(x) \leq c \cdot g(x)\ }[/math], where [math]\displaystyle{ c \in \Re^+ }[/math]

File:fxcgx.JPG
"Graph of the pdf of [math]\displaystyle{ f(x) }[/math] (target distribution) and [math]\displaystyle{ c g(x) }[/math] (proposal distribution)"

Since c*g(x) > f(x) for all x, it is possible to obtain samples that follows f(x) by rejecting a proportion of samples drawn from c*g(x).

This proportion depends on how different f(x) and g(x) are and may vary at different values of x.

That is, if [math]\displaystyle{ f(x) \approx g(x) \text { at } x = x_1 \text { and } f(x) \ll g(x) \text { at } x = x_2 }[/math], we will need to reject more samples drawn at [math]\displaystyle{ \,x_2 }[/math] than at [math]\displaystyle{ \,x_1 }[/math].

Overall, it can shown that by accepting samples drawn from g(x) with probability [math]\displaystyle{ \frac {f(x)}{c \cdot g(x)} }[/math], we can obtain samples that follows f(x)

Consider the example in the graph,
Sampling y = 7 from [math]\displaystyle{ cg(x) }[/math] will yield a sample that follows the target distribution [math]\displaystyle{ f(x) }[/math] and will y be accepted w/p 1.

Sampling y = 9 from [math]\displaystyle{ cg(x) }[/math] will yield a point that is distant from [math]\displaystyle{ f(x) }[/math] and will be accepted with a low probability.

Proof

File:edit.JPG
Ali: Proof of what? .







Show that if points are sampled according to the Acceptance/Rejection method then they follow the target distribution.

[math]\displaystyle{ P(X=x|accept) = \frac{P(accept|X=x)P(X=x)}{P(accept)} }[/math]
by Bayes' theorem

[math]\displaystyle{ \begin{align} &P(accept|X=x) = \frac{f(x)}{c \cdot g(x)}\\ &Pr(X=x) = g(x)\frac{}{} \end{align} }[/math]

by hypothesis.

Then,
[math]\displaystyle{ \begin{align} P(accept) &= \int^{}_x P(accept|X=x)P(X=x) dx \\ &= \int^{}_x \frac{f(x)}{c \cdot g(x)} g(x) dx\\ &= \frac{1}{c} \int^{}_x f(x) dx\\ &= \frac{1}{c} \end{align} }[/math]

Therefore,
[math]\displaystyle{ P(X=x|accept) = \frac{\frac{f(x)}{c\ \cdot g(x)}g(x)}{\frac{1}{c}} = f(x) }[/math] as required.

Procedure (Continuous Case)

  • Choose [math]\displaystyle{ g(x) }[/math] (a density function that is simple to sample from)
  • Find a constant c such that :[math]\displaystyle{ c \cdot g(x) \geq f(x) }[/math]
  1. Let [math]\displaystyle{ Y \sim~ g(y) }[/math]
  2. Let [math]\displaystyle{ U \sim~ Unif [0,1] }[/math]
  3. If [math]\displaystyle{ U \leq \frac{f(x)}{c \cdot g(x)} }[/math] then X=Y; else reject and go to step 1

Example:

Suppose we want to sample from Beta(2,1), for [math]\displaystyle{ 0 \leq x \leq 1 }[/math]. Recall:

[math]\displaystyle{ Beta(2,1) = \frac{\Gamma (2+1)}{\Gamma (2) \Gamma(1)} \times x^1(1-x)^0 = \frac{2!}{1!0!} \times x = 2x }[/math]
  • Choose [math]\displaystyle{ g(x) \sim~ Unif [0,1] }[/math]
  • Find c: c = 2 (see notes below)
  1. Let [math]\displaystyle{ Y \sim~ Unif [0,1] }[/math]
  2. Let [math]\displaystyle{ U \sim~ Unif [0,1] }[/math]
  3. If [math]\displaystyle{ U \leq \frac{2Y}{2} = Y }[/math], then X=Y; else go to step 1

[math]\displaystyle{ c }[/math] was chosen to be 2 in this example since [math]\displaystyle{ \max \left(\frac{f(x)}{g(x)}\right) }[/math] in this example is 2. This condition is important since it helps us in finding a suitable [math]\displaystyle{ c }[/math] to apply the Acceptance/Rejection Method.


In MATLAB, the code that demonstrates the result of this example is:

   j = 1;
       while i < 1000
           y = rand;
           u = rand;
           if u <= y
               x(j) = y;
               j = j + 1;
               i = i + 1;
           end
       end
       hist(x);
       

The histogram produced here should follow the target distribution, [math]\displaystyle{ f(x) = 2x }[/math], for the interval [math]\displaystyle{ 0 \leq x \leq 1 }[/math].

The histogram shows the PDF of a Beta(2,1) distribution as expected.


The Discrete Case

The Acceptance/Rejection Method can be extended for discrete target distributions. The difference compared to the continuous case is that the proposal distribution [math]\displaystyle{ g(x) }[/math] must also be discrete distribution. For our purposes, we can consider g(x) to be a discrete uniform distribution on the set of values that X may take on in the target distribution.

Example

Suppose we want to sample from a distribution with the following probability mass function (pmf):

P(X=1) = 0.15
P(X=2) = 0.55
P(X=3) = 0.20
P(X=4) = 0.10 
  • Choose [math]\displaystyle{ g(x) }[/math] to be the discrete uniform distribution on the set [math]\displaystyle{ \{1,2,3,4\} }[/math]
  • Find c: [math]\displaystyle{ c = \max \left(\frac{f(x)}{g(x)} \right)= 0.55/0.25 = 2.2 }[/math]
  1. Generate [math]\displaystyle{ Y \sim~ Unif \{1,2,3,4\} }[/math]
  2. Let [math]\displaystyle{ U \sim~ Unif [0,1] }[/math]
  3. If [math]\displaystyle{ U \leq \frac{f(x)}{2.2 \times 0.25} }[/math], then X=Y; else go to step 1

Limitations

If the proposed distribution is very different from the target distribution, we may have to reject a large number of points before a good sample size of the target distribution can be established. It may also be difficult to find such [math]\displaystyle{ g(x) }[/math] that satisfies all the conditions of the procedure.

We will now begin to discuss sampling from specific distributions.

Special Technique for sampling from Gamma Distribution

Recall that the cdf of the Gamma distribution is:

[math]\displaystyle{ F(x) = \int_0^{\lambda x} \frac{e^{-y}y^{t-1}}{(t-1)!} dy }[/math]

If we wish to sample from this distribution, it will be difficult for both the Inverse Method (difficulty in computing the inverse function) and the Acceptance/Rejection Method.


Additive Property of Gamma Distribution

Recall that if [math]\displaystyle{ X_1, \dots, X_t }[/math] are independent exponential distributions with mean [math]\displaystyle{ \lambda }[/math] (in other words, [math]\displaystyle{ X_i\sim~ Exp (\lambda) }[/math]), then [math]\displaystyle{ \Sigma_{i=1}^t X_i \sim~ Gamma (t, \lambda) }[/math]

It appears that if we want to sample from the Gamma distribution, we can consider sampling from t independent exponential distributions with mean [math]\displaystyle{ \lambda }[/math] (using the Inverse Method) and add them up. Details will be discussed in the next lecture.


Techniques for Normal and Gamma Sampling - May 19, 2009

We have examined two general techniques for sampling from distributions. However, for certain distributions more practical methods exist. We will now look at two cases,
Gamma distributions and Normal distributions, where such practical methods exist.

Gamma Distribution

Given the additive property of the gamma distribution,


If [math]\displaystyle{ X_1, \dots, X_t }[/math] are independent random variables with [math]\displaystyle{ X_i\sim~ Exp (\lambda) }[/math] then,

[math]\displaystyle{ \Sigma_{i=1}^t X_i \sim~ Gamma (t, \lambda) }[/math]

Using this property, we can use the Inverse Transform Method to generate samples from an exponential distribution with appropriate variables, and use these to generate a sample following a Gamma distribution.


Procedure
  1. Sample independently from a uniform distribution [math]\displaystyle{ t }[/math] times, giving [math]\displaystyle{ u_1,\dots,u_t }[/math]
  2. Sample independently from an exponential distribution [math]\displaystyle{ t }[/math] times, giving [math]\displaystyle{ x_1,\dots,x_t }[/math] such that,
    [math]\displaystyle{ \begin{align} x_1 \sim~ Exp(\lambda)\\ \vdots \\ x_t \sim~ Exp(\lambda) \end{align} }[/math]

    Using the Inverse Transform Method,
    [math]\displaystyle{ \begin{align} x_i = -\frac {1}{\lambda}\log(u_i) \end{align} }[/math]
  3. Using the additive property,
    [math]\displaystyle{ \begin{align} X &{}= x_1 + x_2 + \dots + x_t \\ X &{}= -\frac {1}{\lambda}\log(u_1) - \frac {1}{\lambda}\log(u_2) \dots - \frac {1}{\lambda}\log(u_t) \\ X &{}= -\frac {1}{\lambda}\log(\prod_{i=1}^{t}u_i) \sim~ Gamma (t, \lambda) \end{align} }[/math]


This procedure can be illustrated in Matlab using the code below with [math]\displaystyle{ t = 5, \lambda = 1 }[/math] :

U = rand(10000,5);
X = sum( -log(U), 2);
hist(X)

Normal Distribution

"Diagram of the Box Muller transform, which transforms uniformly distributed value pairs to normally distributed value pairs." [Box-Muller Transform, Wikipedia]

The cdf for the Standard Normal distribution is:

[math]\displaystyle{ F(Z) = \int_{-\infty}^{Z}\frac{1}{\sqrt{2\pi}}e^{-x^2/2}dx }[/math]

We can see that the normal distribution is difficult to sample from using the general methods seen so far, eg. the inverse is not easy to obtain from F(Z); we may be able to use the Acceptance-Rejection method, but there are still better ways to sample from a Standard Normal Distribution.

Box-Muller Method

[Add a picture WikiSysop 19:25, 1 June 2009 (UTC)]


The Box-Muller or Polar method uses an approach where we have one space that is easy to sample in, and another with the desired distribution under a transformation. If we know such a transformation for the Standard Normal, then all we have to do is transform our easy sample and obtain a sample from the Standard Normal distribution.


Properties of Polar and Cartesian Coordinates
If x and y are points on the Cartesian plane, r is the length of the radius from a point in the polar plane to the pole, and θ is the angle formed with the polar axis then,
  • [math]\displaystyle{ \begin{align} r^2 = x^2 + y^2 \end{align} }[/math]
  • [math]\displaystyle{ \tan(\theta) = \frac{y}{x} }[/math]


  • [math]\displaystyle{ \begin{align} x = r \cos(\theta) \end{align} }[/math]
  • [math]\displaystyle{ \begin{align} y = r \sin(\theta) \end{align} }[/math]


Let X and Y be independent random variables with a standard normal distribution,

[math]\displaystyle{ X \sim~ N(0,1) }[/math]
[math]\displaystyle{ Y \sim~ N(0,1) }[/math]

also, let [math]\displaystyle{ r }[/math] and [math]\displaystyle{ \theta }[/math] be the polar coordinates of (x,y). Then the joint distribution of independent x and y is given by,

[math]\displaystyle{ \begin{align} f(x,y) = f(x)f(y) &{}= \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}} \\ &{}=\frac{1}{2\pi}e^{-\frac{x^2+y^2}{2}} \end{align} }[/math]

It can also be shown that the joint distribution of r and θ is given by,

[math]\displaystyle{ \begin{matrix} f(r,\theta) = \frac{1}{2}e^{-\frac{d}{2}}\times\frac{1}{2\pi},\quad d = r^2 \end{matrix} }[/math]

Note that [math]\displaystyle{ \begin{matrix}f(r,\theta)\end{matrix} }[/math] consists of two density functions, Exponential and Uniform, so assuming that r and [math]\displaystyle{ \theta }[/math] are independent [math]\displaystyle{ \begin{matrix} \Rightarrow d \sim~ Exp(1/2), \theta \sim~ Unif[0,2\pi] \end{matrix} }[/math]


Procedure for using Box-Muller Method
  1. Sample independently from a uniform distribution twice, giving [math]\displaystyle{ \begin{align} u_1,u_2 \sim~ \mathrm{Unif}(0, 1) \end{align} }[/math]
  2. Generate polar coordinates using the exponential distribution of d and uniform distribution of θ,
    [math]\displaystyle{ \begin{align} d = -2\log(u_1),& \quad r = \sqrt{d} \\ & \quad \theta = 2\pi u_2 \end{align} }[/math]
  3. Transform r and θ back to x and y,
    [math]\displaystyle{ \begin{align} x = r\cos(\theta) \\ y = r\sin(\theta) \end{align} }[/math]


Notice that the Box-Muller Method generates a pair of independent Standard Normal distributions, x and y.

This procedure can be illustrated in Matlab using the code below:

u1 = rand(5000,1);
u2 = rand(5000,1);

d = -2*log(u1);
theta = 2*pi*u2;

x = d.^(1/2).*cos(theta);
y = d.^(1/2).*sin(theta);

figure(1);

subplot(2,1,1);
hist(x);
title('X');
subplot(2,1,2);
hist(y);
title('Y');

Also, we can confirm that d and theta are indeed exponential and uniform random variables, respectively, in Matlab by:

subplot(2,1,1);
hist(d);
title('d follows an exponential distribution');
subplot(2,1,2);
hist(theta);
title('theta follows a uniform distribution over [0, 2*pi]');

Useful Properties (Single and Multivariate Normal)

The Box-Muller method as described above samples only from the standard normal distribution. However, both singlevariate and multivariate normal distributions have properties that allow us to use samples generated by the Box-Muller method to sample any normal distribution in general.


Properties of Normal distributions
  • [math]\displaystyle{ \begin{align} \text{If } & X = \mu + \sigma Z, & Z \sim~ N(0,1) \\ &\text{then } X \sim~ N(\mu,\sigma ^2) \end{align} }[/math]
  • [math]\displaystyle{ \begin{align} \text{If } & \vec{Z} = (Z_1,\dots,Z_d)^T, & Z_1,\dots,Z_d \sim~ N(0,1) \\ &\text{then } \vec{Z} \sim~ N(\vec{0},I) \end{align} }[/math]
  • [math]\displaystyle{ \begin{align} \text{If } & \vec{X} = \vec{\mu} + \Sigma^{1/2} \vec{Z}, & \vec{Z} \sim~ N(\vec{0},I) \\ &\text{then } \vec{X} \sim~ N(\vec{\mu},\Sigma) \end{align} }[/math]


These properties can be illustrated through the following example in Matlab using the code below:

Example: For a Multivariate Normal distribution [math]\displaystyle{ u=\begin{bmatrix} -2 \\ 3 \end{bmatrix} }[/math] and [math]\displaystyle{ \Sigma=\begin{bmatrix} 1&0.5\\ 0.5&1\end{bmatrix} }[/math]


u = [-2; 3];
sigma = [ 1 1/2; 1/2 1];

r = randn(15000,2);
ss = chol(sigma);

X = ones(15000,1)*u' + r*ss;
plot(X(:,1),X(:,2), '.');

Note: In the example above, we generated the square root of [math]\displaystyle{ \Sigma }[/math] using the Cholesky decomposition,

ss = chol(sigma);

which gives [math]\displaystyle{ ss=\begin{bmatrix} 1&0.5\\ 0&0.8660\end{bmatrix} }[/math]. Matlab also has the sqrtm function:

ss = sqrtm(sigma);

which gives a different matrix, [math]\displaystyle{ ss=\begin{bmatrix} 0.9659&0.2588\\ 0.2588&0.9659\end{bmatrix} }[/math], but the plot looks about the same (X has the same distribution).

Bayesian and Frequentist Schools of Thought - May 21, 2009

In this lecture we will continue to discuss sampling from specific distributions , introduce Monte Carlo Integration, and also talk about the differences between the Bayesian and Frequentist views on probability, along with references to Bayesian Inference.

Binomial Distribution

A Binomial distribution [math]\displaystyle{ X \sim~ Bin(n,p) }[/math] is the sum of [math]\displaystyle{ n }[/math] independent Bernoulli trials, each with probability of success [math]\displaystyle{ p }[/math] [math]\displaystyle{ (0 \leq p \leq 1) }[/math]. For each trial we generate an independent uniform random variable: [math]\displaystyle{ U_1, \ldots, U_n \sim~ Unif(0,1) }[/math]. Then X is the number of times that [math]\displaystyle{ U_i \leq p }[/math]. In this case if n is large enough, by the central limit theorem, the Normal distribution can be used to approximate a Binomial distribution.

Sampling from Binomial distribution in Matlab is done using the following code:

n=3;
p=0.5;
trials=1000;
X=sum((rand(trials,n))'<=p);
hist(X)

Where the histogram is a Binomial distribution, and for higher [math]\displaystyle{ n }[/math], it would resemble a Normal distribution.

Monte Carlo Integration

Monte Carlo Integration is a numerical method of approximating the evaluation of integrals by simulation. In this course we will mainly look at three methods for approximating integrals:

  1. Basic Monte Carlo Integration
  2. Importance Sampling
  3. Markov Chain Monte Carlo (MCMC)

Bayesian VS Frequentists

During the history of statistics, two major schools of thought emerged along the way and have been locked in an on-going struggle in trying to determine which one has the correct view on probability. These two schools are known as the Bayesian and Frequentist schools of thought. Both the Bayesians and the Frequentists holds a different philosophical view on what defines probability. Below are some fundamental differences between the Bayesian and Frequentist schools of thought:

Frequentist

  • Probability is objective and refers to the limit of an event's relative frequency in a large number of trials. For example, a coin with a 50% probability of heads will turn up heads 50% of the time.
  • Parameters are all fixed and unknown constants.
  • Any statistical process only has interpretations based on limited frequencies. For example, a 95% C.I. of a given parameter will contain the true value of the parameter 95% of the time.

Bayesian

  • Probability is subjective and can be applied to single events based on degree of confidence or beliefs. For example, Bayesian can refer to tomorrow's weather as having 50% of rain, whereas this would not make sense to a Frequentist because tomorrow is just one unique event, and cannot be referred to as a relative frequency in a large number of trials.
  • Parameters are random variables that has a given distribution, and other probability statements can be made about them.
  • Probability has a distribution over the parameters, and point estimates are usually done by either taking the mode or the mean of the distribution.

Bayesian Inference

Example: If we have a screen that only displays single digits from 0 to 9, and this screen is split into a 4x5 matrix of pixels, then all together the 20 pixels that make up the screen can be referred to as [math]\displaystyle{ \vec{X} }[/math], which is our data, and the parameter of the data for this case, which we will refer to as [math]\displaystyle{ \theta }[/math], would be a discrete random variable that can take on the values of 0 to 9. In this example, a Bayesian would be interested in finding [math]\displaystyle{ Pr(\theta=a|\vec{X}=\vec{x}) }[/math], whereas a Frequentist would be more interested in finding [math]\displaystyle{ Pr(\vec{X}=\vec{x}|\theta=a) }[/math]

Bayes' Rule
[math]\displaystyle{ f(\theta|X) = \frac{f(X | \theta)\, f(\theta)}{f(X)}. }[/math]

Note: In this case [math]\displaystyle{ f (\theta|X) }[/math] is referred to as posterior, [math]\displaystyle{ f (X | \theta) }[/math] as likelihood, [math]\displaystyle{ f (\theta) }[/math] as prior, and [math]\displaystyle{ f (X) }[/math] as the marginal, where [math]\displaystyle{ \theta }[/math] is the parameter and [math]\displaystyle{ X }[/math] is the observed variable.

Procedure in Bayesian Inference

  • First choose a probability distribution as the prior, which represents our beliefs about the parameters.
  • Then choose a probability distribution for the likelihood, which represents our beliefs about the data.
  • Lastly compute the posterior, which represents an update of our beliefs about the parameters after having observed the data.

As mentioned before, for a Bayesian, finding point estimates usually involves finding the mode or the mean of the parameter's distribution.

Methods

  • Mode: [math]\displaystyle{ \theta = \arg\max_{\theta} f(\theta|X) \gets }[/math] value of [math]\displaystyle{ \theta }[/math] that maximizes [math]\displaystyle{ f(\theta|X) }[/math]
  • Mean: [math]\displaystyle{ \bar\theta = \int^{}_\theta \theta \cdot f(\theta|X)d\theta }[/math]

If it is the case that [math]\displaystyle{ \theta }[/math] is high-dimensional, and we are only interested in one of the components of [math]\displaystyle{ \theta }[/math], for example, we want [math]\displaystyle{ \theta_1 }[/math] from [math]\displaystyle{ \vec{\theta}=(\theta_1,\dots,\theta_n) }[/math], then we would have to calculate the integral: [math]\displaystyle{ \int^{} \int^{} \dots \int^{}f(\theta|X)d\theta_2d\theta_3 \dots d\theta_n }[/math]

This sort of calculation is usually very difficult or not feasible to compute, and thus we would need to do it by simulation.

Note:

  1. [math]\displaystyle{ f(x)=\int^{}_\theta f(X | \theta)f(\theta) d\theta }[/math] is not a function of [math]\displaystyle{ \theta }[/math], and is called the Normalization Factor
  2. Therefore, since f(x) is like a constant, the posterior is proportional to the likelihood times the prior: [math]\displaystyle{ f(\theta|X)\propto f(X | \theta)f(\theta) }[/math]

Monte Carlo Integration - May 26, 2009

Today's lecture completes the discussion on the Frequentists and Bayesian schools of thought and introduces Basic Monte Carlo Integration.

Frequentist vs Bayesian Example - Estimating Parameters

Estimating parameters of a univariate Gaussian:

Frequentist: [math]\displaystyle{ f(x|\theta)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{1}{2}*(\frac{x-\mu}{\sigma})^2} }[/math]
Bayesian: [math]\displaystyle{ f(\theta|x)=\frac{f(x|\theta)f(\theta)}{f(x)} }[/math]

Frequentist Approach

Let [math]\displaystyle{ X^N }[/math] denote [math]\displaystyle{ (x_1, x_2, ..., x_n) }[/math]. Using the Maximum Likelihood Estimation approach for estimating parameters we get:

[math]\displaystyle{ L(X^N; \theta) = \prod_{i=1}^N \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{1}{2}(\frac{x_i- \mu} {\sigma})^2} }[/math]
[math]\displaystyle{ l(X^N; \theta) = \sum_{i=1}^N -\frac{1}{2}log (2\pi) - log(\sigma) - \frac{1}{2} \left(\frac{x_i- \mu}{\sigma}\right)^2 }[/math]
[math]\displaystyle{ \frac{dl}{d\mu} = \displaystyle\sum_{i=1}^N(x_i-\mu) }[/math]

Setting [math]\displaystyle{ \frac{dl}{d\mu} = 0 }[/math] we get

[math]\displaystyle{ \displaystyle\sum_{i=1}^Nx_i = \displaystyle\sum_{i=1}^N\mu }[/math]
[math]\displaystyle{ \displaystyle\sum_{i=1}^Nx_i = N\mu \rightarrow \mu = \frac{1}{N}\displaystyle\sum_{i=1}^Nx_i }[/math]
Bayesian Approach

Assuming the prior is Gaussian:

[math]\displaystyle{ P(\theta) = \frac{1}{\sqrt{2\pi}\tau}e^{-\frac{1}{2}(\frac{x-\mu_0}{\tau})^2} }[/math]
[math]\displaystyle{ f(\theta|x) \propto \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{1}{2}(\frac{x_i-\mu}{\sigma})^2} * \frac{1}{\sqrt{2\pi}\tau}e^{-\frac{1}{2}(\frac{x-\mu_0}{\tau})^2} }[/math]

By completing the square we conclude that the posterior is Gaussian as well:

[math]\displaystyle{ f(\theta|x)=\frac{1}{\sqrt{2\pi}\tilde{\sigma}}e^{-\frac{1}{2}(\frac{x-\tilde{\mu}}{\tilde{\sigma}})^2} }[/math]

Where

[math]\displaystyle{ \tilde{\mu} = \frac{\frac{N}{\sigma^2}}{{\frac{N}{\sigma^2}}+\frac{1}{\tau^2}}\bar{x} + \frac{\frac{1}{\tau^2}}{{\frac{N}{\sigma^2}}+\frac{1}{\tau^2}}\mu_0 }[/math]

The expectation from the posterior is different from the MLE method. Note that [math]\displaystyle{ \displaystyle\lim_{N\to\infty}\tilde{\mu} = \bar{x} }[/math]. Also note that when [math]\displaystyle{ N = 0 }[/math] we get [math]\displaystyle{ \tilde{\mu} = \mu_0 }[/math].

Basic Monte Carlo Integration

Although it is almost impossible to find a precise definition of "Monte Carlo Method", the method is widely used and has numerous descriptions in articles and monographs. As an interesting fact, the term Monte Carlo is claimed to have been first used by Ulam and von Neumann as a Los Alamos code word for the stochastic simulations they applied to building better atomic bombs. Stochastic simulation refers to a random process in which its future evolution is described by probability distributions (counterpart to a deterministic process), and these simulation methods are known as Monte Carlo methods. [Stochastic process, Wikipedia]. The following example (external link) illustrates a Monte Carlo Calculation of Pi: [1]


We start with a simple example:

[math]\displaystyle{ \begin{align}I &= \displaystyle\int_a^b h(x)\,dx &= \displaystyle\int_a^b w(x)f(x)\,dx \end{align} }[/math]

where

[math]\displaystyle{ \displaystyle w(x) = h(x)(b-a) }[/math]
[math]\displaystyle{ f(x) = \frac{1}{b-a} \rightarrow }[/math] the p.d.f. is Unif[math]\displaystyle{ (a,b) }[/math]
[math]\displaystyle{ \hat{I} = E_f[w(x)] = \frac{1}{N}\displaystyle\sum_{i=1}^Nw(x_i) }[/math]

If [math]\displaystyle{ x_i \sim~ Unif(a,b) }[/math] then by the Law of Large Numbers [math]\displaystyle{ \frac{1}{N}\displaystyle\sum_{i=1}^Nw(x_i) \rightarrow \displaystyle\int w(x)f(x)\,dx = E_f[w(x)] }[/math]

Process

Given [math]\displaystyle{ I = \displaystyle\int^b_ah(x)\,dx }[/math]

  1. [math]\displaystyle{ \begin{matrix} w(x) = h(x)(b-a)\end{matrix} }[/math]
  2. [math]\displaystyle{ \begin{matrix} x_1, x_2, ..., x_n \sim UNIF(a,b)\end{matrix} }[/math]
  3. [math]\displaystyle{ \hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nw(x_i) }[/math]

From this we can compute other statistics, such as

  1. [math]\displaystyle{ SE=\frac{s}{\sqrt{N}} }[/math] where [math]\displaystyle{ s^2=\frac{\sum_{i=1}^{N}(Y_i-\hat{I})^2 }{N-1} }[/math] with [math]\displaystyle{ \begin{matrix}Y_i=w(x_i)\end{matrix} }[/math]
  2. [math]\displaystyle{ \begin{matrix} 1-\alpha \end{matrix} }[/math] CI's can be estimated as [math]\displaystyle{ \hat{I}\pm Z_\frac{\alpha}{2}\times SE }[/math]

Example 1

Find [math]\displaystyle{ E[\sqrt{x}] }[/math] for [math]\displaystyle{ \begin{matrix} f(x) = e^{-x}\end{matrix} }[/math]

  1. We need to draw from [math]\displaystyle{ \begin{matrix} f(x) = e^{-x}\end{matrix} }[/math]
  2. [math]\displaystyle{ \hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nw(x_i) }[/math]

This example can be illustrated in Matlab using the code below:

u=rand(100,1)
x=-log(u)
h= x.^ .5
mean(h)
%The value obtained using the Monte Carlo method
F = @ (x) sqrt (x). * exp(-x)
quad(F,0,50)
%The value of the real function using Matlab

Example 2 Find [math]\displaystyle{ I = \displaystyle\int^1_0h(x)\,dx, h(x) = x^3 }[/math]

  1. [math]\displaystyle{ \displaystyle I = x^4/4 = 1/4 }[/math]
  2. [math]\displaystyle{ \displaystyle W(x) = x^3*(1-0) }[/math]
  3. [math]\displaystyle{ Xi \sim~Unif(0,1) }[/math]
  4. [math]\displaystyle{ \hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^N(x_i^3) }[/math]

This example can be illustrated in Matlab using the code below:

x = rand (1000)
mean(x^3)

Example 3 To estimate an infinite integral such as [math]\displaystyle{ I = \displaystyle\int^\infty_5 h(x)\,dx, h(x) = 3e^{-x} }[/math]

  1. Substitute in [math]\displaystyle{ y=\frac{1}{x-5+1} =\gt dy=-\frac{1}{(x-4)^2}dx =\gt dy=-y^2dx }[/math]
  2. [math]\displaystyle{ I = \displaystyle\int^1_0 \frac{3e^{-(\frac{1}{y}+4)}}{-y^2}\,dy }[/math]
  3. [math]\displaystyle{ w(y) = \frac{3e^{-(\frac{1}{y}+4)}}{-y^2}(1-0) }[/math]
  4. [math]\displaystyle{ Y_i \sim~Unif(0,1) }[/math]
  5. [math]\displaystyle{ \hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^N(\frac{3e^{-(\frac{1}{y_i}+4)}}{-y_i^2}) }[/math]

Importance Sampling and Monte Carlo Simulation - May 28, 2009

During this lecture we covered two more examples of Monte Carlo simulation, finishing that topic, and begun talking about Importance Sampling.

Binomial Probability Monte Carlo Simulations

Example 1:

You are given two independent Binomial distributions with probabilities [math]\displaystyle{ \displaystyle p_1\text{, }p_2 }[/math]. Using a Monte Carlo simulation, approximate the value of [math]\displaystyle{ \displaystyle \delta }[/math], where [math]\displaystyle{ \displaystyle \delta = p_1 - p_2 }[/math].

[math]\displaystyle{ \displaystyle X \sim BIN(n, p_1) }[/math]; [math]\displaystyle{ \displaystyle Y \sim BIN(m, p_2) }[/math]; [math]\displaystyle{ \displaystyle \delta = p_1 - p_2 }[/math]

So [math]\displaystyle{ \displaystyle f(p_1, p_2 | x,y) = \frac{f(x, y|p_1, p_2)f(p_1,p_2)}{f(x,y)} }[/math] where [math]\displaystyle{ \displaystyle f(x,y) }[/math] is a flat distribution and the expected value of [math]\displaystyle{ \displaystyle \delta }[/math] is as follows:

[math]\displaystyle{ \displaystyle E(\delta) = \int\int\delta f(p_1,p_2|X,Y)\,dp_1dp_2 }[/math]

Since X, Y are independent, we can split the conditional probability distribution:

[math]\displaystyle{ \displaystyle f(p_1,p_2|X,Y) \propto f(p_1|X)f(p_2|Y) }[/math]

We need to find conditional distribution functions for [math]\displaystyle{ \displaystyle p_1, p_2 }[/math] to draw samples from. In order to get a distribution for the probability 'p' of a Binomial, we have to divide the Binomial distribution by n. This new distribution has the same shape as the original, but is scaled. A Beta distribution is a suitable approximation. Let

[math]\displaystyle{ \displaystyle f(p_1 | X) \sim \text{Beta}(x+1, n-x+1) }[/math] and [math]\displaystyle{ \displaystyle f(p_2 | Y) \sim \text{Beta}(y+1, m-y+1) }[/math], where
[math]\displaystyle{ \displaystyle \text{Beta}(\alpha,\beta) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}p^{\alpha-1}(1-p)^{\beta-1} }[/math]

Process:

  1. Draw samples for [math]\displaystyle{ \displaystyle p_1 }[/math] and [math]\displaystyle{ \displaystyle p_2 }[/math]: [math]\displaystyle{ \displaystyle (p_1,p_2)^{(1)} }[/math], [math]\displaystyle{ \displaystyle (p_1,p_2)^{(2)} }[/math], ..., [math]\displaystyle{ \displaystyle (p_1,p_2)^{(n)} }[/math];
  2. Compute [math]\displaystyle{ \displaystyle \delta = p_1 - p_2 }[/math] in order to get n values for [math]\displaystyle{ \displaystyle \delta }[/math];
  3. [math]\displaystyle{ \displaystyle \hat{\delta}=\frac{1}{N}\sum_{\forall i}\delta^{(i)} }[/math].

Matlab Code:

The Matlab code for recreating the above example is as follows:
n=100; %number of trials for X
m=100; %number of trials for Y
x=80; %number of successes for X trials
y=60; %number of successes for y trials
p1=betarnd(x+1, n-x+1, 1, 1000);
p2=betarnd(y+1, m-y+1, 1, 1000);
delta=p1-p2;
mean(delta);

The mean in this example is given by 0.1938.

A 95% confidence interval for [math]\displaystyle{ \delta }[/math] is represented by the interval between the 2.5% and 97.5% quantiles which covers 95% of the probability distribution. In Matlab, this can be calculated as follows:

q1=quantile(delta,0.025);
q2=quantile(delta,0.975);

The interval is approximately [math]\displaystyle{ 95% CI \approx (0.06606, 0.32204) }[/math]

The histogram of delta is:

Note: In this case, we can also find [math]\displaystyle{ E(\delta) }[/math] analytically since [math]\displaystyle{ E(\delta) = E(p_1 - p_2) = E(p_1) - E(p_2) = \frac{x+1}{n+2} - \frac{y+1}{m+2} \approx 0.1961 }[/math]. Compare this with the maximum likelihood estimate for [math]\displaystyle{ \delta }[/math]: [math]\displaystyle{ \frac{x}{n} - \frac{y}{m} = 0.2 }[/math].

Example 2:

Bayesian Inference for Dose Response

We conduct an experiment by giving rats one of ten possible doses of a drug, where each subsequent dose is more lethal than the previous one:

[math]\displaystyle{ \displaystyle x_1\lt x_2\lt ...\lt x_{10} }[/math]

For each dose [math]\displaystyle{ \displaystyle x_i }[/math] we test n rats and observe [math]\displaystyle{ \displaystyle Y_i }[/math], the number of rats that die. Therefore,

[math]\displaystyle{ \displaystyle Y_i \sim~ BIN(n, p_i) }[/math]
.

We can assume that the probability of death grows with the concentration of drug given, i.e. [math]\displaystyle{ \displaystyle p_1\lt p_2\lt ...\lt p_{10} }[/math]. Estimate the dose at which the animals have at least 50% chance of dying.

Let [math]\displaystyle{ \displaystyle \delta=x_j }[/math] where [math]\displaystyle{ \displaystyle j=min\{i|p_i\geq0.5\} }[/math]
We are interested in [math]\displaystyle{ \displaystyle \delta }[/math] since any higher concentrations are known to have a higher death rate.

Solving this analytically is difficult:

[math]\displaystyle{ \displaystyle \delta = g(p_1, p_2, ..., p_{10}) }[/math] where g is an unknown function
[math]\displaystyle{ \displaystyle \hat{\delta} = \int \int..\int_A \delta f(p_1,p_2,...,p_{10}|Y_1,Y_2,...,Y_{10})\,dp_1dp_2...dp_{10} }[/math]
where [math]\displaystyle{ \displaystyle A=\{(p_1,p_2,...,p_{10})|p_1\leq p_2\leq ...\leq p_{10} \} }[/math]

Process: Monte Carlo
We assume that

  1. Draw [math]\displaystyle{ \displaystyle p_i \sim~ BETA(y_i+1, n-y_i+1) }[/math]
  2. Keep sample only if it satisfies [math]\displaystyle{ \displaystyle p_1\leq p_2\leq ...\leq p_{10} }[/math], otherwise discard and try again.
  3. Compute [math]\displaystyle{ \displaystyle \delta }[/math] by finding the first [math]\displaystyle{ \displaystyle p_i }[/math] sample with over 50% deaths.
  4. Repeat process n times to get n estimates for [math]\displaystyle{ \displaystyle \delta_1, \delta_2, ..., \delta_N }[/math].
  5. [math]\displaystyle{ \displaystyle \bar{\delta} = \frac{\displaystyle\sum_{\forall i} \delta_i}{N} }[/math].

For instance, for each dose level [math]\displaystyle{ X_i }[/math], for [math]\displaystyle{ 1\lt =i\lt =10 }[/math], 10 rats are used and it is observed that the numbers that are dying is [math]\displaystyle{ Y_i }[/math], where [math]\displaystyle{ Y_1 = 4, Y_2 = 3, }[/math]etc.

Importance Sampling

'A diagram of a Monte Carlo Approximation of g(x)[Importance Sampling, Wikipedia]

In statistics, Importance Sampling helps estimating the properties of a particular distribution. As in the case with the Acceptance/Rejection method, we choose a good distribution from which to simulate the given random variables. The main difference in importance sampling however, is that certain values of the input random variables in a simulation have more impact on the parameter being estimated than others. As shown in the figure, the uniform distribution [math]\displaystyle{ U\sim~Unif[0,1] }[/math] is a proposal distribution to sample from and g(x) is the target distribution.


Here we cast the integral [math]\displaystyle{ \int_{0}^1 g(x)dx }[/math], as the expectation of [math]\displaystyle{ g(x) }[/math] with respect to U such that,

[math]\displaystyle{ I = \int_{0}^1 g(x)= E_U(g(x)) }[/math].

Hence we can approximate this integral by,

[math]\displaystyle{ \hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^{N} g(u_i) }[/math].

[Source: Monte Carlo Methods and Importance Sampling, Eric C. Anderson (1999). Retrieved June 9th from URL: http://ib.berkeley.edu/labs/slatkin/eriq/classes/guest_lect/mc_lecture_notes.pdf]

In [math]\displaystyle{ I = \displaystyle\int h(x)f(x)\,dx }[/math], Monte Carlo simulation can be used only if it easy to sample from f(x). Otherwise, another method must be applied. If sampling from f(x) is difficult but there exists a probability distribution function g(x) which is easy to sample from, then [math]\displaystyle{ \ I }[/math] can be written as

[math]\displaystyle{ \begin{align}I &= \int h(x)f(x)\,dx = \int \frac{h(x)f(x)}{g(x)}g(x)\,dx \\ &= E_g(w(x)) \end{align} }[/math]

the expectation of [math]\displaystyle{ \displaystyle w(x) = \frac{h(x)f(x)}{g(x)} }[/math] with respect to g(x) and therefore [math]\displaystyle{ \displaystyle x_1,x_2,...,x_N \sim~ g }[/math]
[math]\displaystyle{ \hat{I}= \frac{1}{N}\sum_{i=1}^{N} w(x_i) }[/math]

Process

  1. Choose [math]\displaystyle{ \displaystyle g(x) }[/math] such that it's easy to sample from and draw [math]\displaystyle{ x_i \sim~ g(x) }[/math]
  2. Compute [math]\displaystyle{ w(x_i)=\frac{h(x_i)f(x_i)}{g(x_i)} }[/math]
  3. [math]\displaystyle{ \hat{I} = \frac{1}{N}\sum_{i=1}^{N} w(x_i) }[/math]


Note: By the law of large numbers, we can say that [math]\displaystyle{ \hat{I} }[/math] converges in probability to [math]\displaystyle{ I }[/math].

"Weighted" average

The term "importance sampling" is used to describe this method because a higher 'importance' or 'weighting' is given to the values sampled from [math]\displaystyle{ \displaystyle g(x) }[/math] that are closer to the original distribution [math]\displaystyle{ \displaystyle f(x) }[/math], which we would ideally like to sample from (but cannot because it is too difficult).
[math]\displaystyle{ \displaystyle I = \int\frac{h(x)f(x)}{g(x)}g(x)\,dx }[/math]
[math]\displaystyle{ =\displaystyle \int \frac{f(x)}{g(x)}h(x)g(x)\,dx }[/math]

[math]\displaystyle{ \displaystyle \int \frac{f(x)}{g(x)}E_g(h(x))\,dx }[/math] which is like saying that we are applying a regular Monte Carlo Simulation method to [math]\displaystyle{ \displaystyle\int h(x)g(x)\,dx }[/math], and taking each result from this process and weighting the more accurate ones (i.e. the ones for which [math]\displaystyle{ \displaystyle \frac{f(x)}{g(x)} }[/math] is high) higher.

One can view [math]\displaystyle{ \frac{f(x)}{g(x)}\ = B(x) }[/math] as a weight.

Then [math]\displaystyle{ \displaystyle \hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^{N} w(x_i) = \frac{1}{N}\displaystyle\sum_{i=1}^{N} B(x_i)h(x_i) }[/math]

i.e. we are computing a weighted sum of [math]\displaystyle{ h(x_i) }[/math] instead of a sum

A Deeper Look into Importance Sampling - June 2, 2009

From last class, we have determined that an integral can be written in the form [math]\displaystyle{ I = \displaystyle\int h(x)f(x)\,dx }[/math] [math]\displaystyle{ = \displaystyle\int \frac{h(x)f(x)}{g(x)}g(x)\,dx }[/math] We continue our discussion of Importance Sampling here.

Importance Sampling

We can see that the integral [math]\displaystyle{ \displaystyle\int \frac{h(x)f(x)}{g(x)}g(x)\,dx = \int \frac{f(x)}{g(x)}h(x) g(x)\,dx }[/math] is just [math]\displaystyle{ \displaystyle E_g(h(x)) \rightarrow }[/math]the expectation of h(x) with respect to g(x), where [math]\displaystyle{ \displaystyle \frac{f(x)}{g(x)} }[/math] is a weight [math]\displaystyle{ \displaystyle\beta(x) }[/math]. In the case where [math]\displaystyle{ \displaystyle f \gt g }[/math], a greater weight for [math]\displaystyle{ \displaystyle\beta(x) }[/math] will be assigned. Thus, the points with more weight are deemed more important, hence "importance sampling". This can be seen as a variance reduction technique.

Problem

The method of Importance Sampling is simple but can lead to some problems. The [math]\displaystyle{ \displaystyle \hat I }[/math] estimated by Importance Sampling could have infinite standard error.

Given [math]\displaystyle{ \displaystyle I= \int w(x) g(x) dx }[/math] [math]\displaystyle{ = \displaystyle E_g(w(x)) }[/math] [math]\displaystyle{ = \displaystyle \frac{1}{N}\sum_{i=1}^{N} w(x_i) }[/math] where [math]\displaystyle{ \displaystyle w(x)=\frac{f(x)h(x)}{g(x)} }[/math].

Obtaining the second moment,

[math]\displaystyle{ \displaystyle E[(w(x))^2] }[/math]
[math]\displaystyle{ \displaystyle = \int (\frac{h(x)f(x)}{g(x)})^2 g(x) dx }[/math]
[math]\displaystyle{ \displaystyle = \int \frac{h^2(x) f^2(x)}{g^2(x)} g(x) dx }[/math]
[math]\displaystyle{ \displaystyle = \int \frac{h^2(x)f^2(x)}{g(x)} dx }[/math]

We can see that if [math]\displaystyle{ \displaystyle g(x) \rightarrow 0 }[/math], then [math]\displaystyle{ \displaystyle E[(w(x))^2] \rightarrow \infty }[/math]. This occurs if [math]\displaystyle{ \displaystyle g }[/math] has a thinner tail than [math]\displaystyle{ \displaystyle f }[/math] then [math]\displaystyle{ \frac{h^2(x)f^2(x)}{g(x)} }[/math] could be infinitely large. The general idea here is that [math]\displaystyle{ \frac{f(x)}{g(x)} }[/math] should not be large.

Remark 1

It is evident that [math]\displaystyle{ \displaystyle g(x) }[/math] should be chosen such that it has a thicker tail than [math]\displaystyle{ \displaystyle f(x) }[/math]. If [math]\displaystyle{ \displaystyle f }[/math] is large over set [math]\displaystyle{ \displaystyle A }[/math] but [math]\displaystyle{ \displaystyle g }[/math] is small, then [math]\displaystyle{ \displaystyle \frac{f}{g} }[/math] would be large and it would result in a large variance.

Remark 2

It is useful if we can choose [math]\displaystyle{ \displaystyle g }[/math] to be similar to [math]\displaystyle{ \displaystyle f }[/math] in terms of shape. Ideally, the optimal [math]\displaystyle{ \displaystyle g }[/math] should be similar to [math]\displaystyle{ \displaystyle \left| h(x) \right|f(x) }[/math], and have a thicker tail. It's important to take the absolute value of [math]\displaystyle{ \displaystyle h(x) }[/math], since a variance can't be negative. Analytically, we can show that the best [math]\displaystyle{ \displaystyle g }[/math] is the one that would result in a variance that is minimized.

Remark 3

Choose [math]\displaystyle{ \displaystyle g }[/math] such that it is similar to [math]\displaystyle{ \displaystyle \left| h(x) \right| f(x) }[/math] in terms of shape. That is, we want [math]\displaystyle{ \displaystyle g \propto \displaystyle \left| h(x) \right| f(x) }[/math]


Theorem (Minimum Variance Choice of [math]\displaystyle{ \displaystyle g }[/math])

The choice of [math]\displaystyle{ \displaystyle g }[/math] that minimizes variance of [math]\displaystyle{ \hat I }[/math] is [math]\displaystyle{ \displaystyle g^*(x)=\frac{\left| h(x) \right| f(x)}{\int \left| h(s) \right| f(s) ds} }[/math].

Proof:

We know that [math]\displaystyle{ \displaystyle w(x)=\frac{f(x)h(x)}{g(x)} }[/math]

The variance of [math]\displaystyle{ \displaystyle w(x) }[/math] is

[math]\displaystyle{ \displaystyle Var[w(x)] }[/math]
[math]\displaystyle{ \displaystyle = E[(w(x)^2)] - [E[w(x)]]^2 }[/math]
[math]\displaystyle{ \displaystyle = \int \left(\frac{f(x)h(x)}{g(x)} \right)^2 g(x) dx - \left[\int \frac{f(x)h(x)}{g(x)}g(x)dx \right]^2 }[/math]
[math]\displaystyle{ \displaystyle = \int \left(\frac{f(x)h(x)}{g(x)} \right)^2 g(x) dx - \left[\int f(x)h(x) \right]^2 }[/math]

As we can see, the second term does not depend on [math]\displaystyle{ \displaystyle g(x) }[/math]. Therefore to minimize [math]\displaystyle{ \displaystyle Var[w(x)] }[/math] we only need to minimize the first term. In doing so we will use Jensen's Inequality.


[math]\displaystyle{ \displaystyle ======Aside: Jensen's Inequality====== }[/math]

If [math]\displaystyle{ \displaystyle g }[/math] is a convex function ( twice differentiable and [math]\displaystyle{ \displaystyle g''(x) \geq 0 }[/math] ) then [math]\displaystyle{ \displaystyle g(\alpha x_1 + (1-\alpha)x_2) \leq \alpha g(x_1) + (1-\alpha) g(x_2) }[/math]
Essentially the definition of convexity implies that the line segment between two points on a curve lies above the curve, which can then be generalized to higher dimensions:

[math]\displaystyle{ \displaystyle g(\alpha_1 x_1 + \alpha_2 x_2 + ... + \alpha_n x_n) \leq \alpha_1 g(x_1) + \alpha_2 g(x_2) + ... + \alpha_n g(x_n) }[/math] where [math]\displaystyle{ \displaystyle \alpha_1 + \alpha_2 + ... + \alpha_n = 1 }[/math]
=======================================================
Proof (cont)

Using Jensen's Inequality,

[math]\displaystyle{ \displaystyle g(E[x]) \leq E[g(x)] }[/math] as [math]\displaystyle{ \displaystyle g(E[x]) = g(p_1 x_1 + ... p_n x_n) \leq p_1 g(x_1) + ... + p_n g(x_n) = E[g(x)] }[/math]

Therefore

[math]\displaystyle{ \displaystyle E[(w(x))^2] \geq (E[\left| w(x) \right|])^2 }[/math]
[math]\displaystyle{ \displaystyle E[(w(x))^2] \geq \left(\int \left| \frac{f(x)h(x)}{g(x)} \right| g(x) dx \right)^2 }[/math]

and

[math]\displaystyle{ \begin{align} \displaystyle \left(\int \left| \frac{f(x)h(x)}{g(x)} \right| g(x) dx \right)^2 &= \left(\int \frac{f(x)\left| h(x) \right|}{g(x)} g(x) dx \right)^2 \\ &= \left(\int \left| h(x) \right| f(x) dx \right)^2 \end{align} }[/math]

since [math]\displaystyle{ \displaystyle f }[/math] and [math]\displaystyle{ \displaystyle g }[/math] are density functions, [math]\displaystyle{ \displaystyle f, g }[/math] are non negative.

Thus, this is a lower bound on [math]\displaystyle{ \displaystyle E[(w(x))^2] }[/math]. If we replace [math]\displaystyle{ \displaystyle g^*(x) }[/math] into [math]\displaystyle{ \displaystyle E[g^*(x)] }[/math], we can see that the result is as we require. Details omitted.

However, this is mostly of theoritical interest. In practice, it is impossible or very difficult to compute [math]\displaystyle{ \displaystyle g^* }[/math].

Note: Jensen's inequality is actually unnecessary here. We just use it to get [math]\displaystyle{ E[(w(x))^2] \geq (E[|w(x)|])^2 }[/math], which could be derived using variance properties: [math]\displaystyle{ 0 \leq Var[|w(x)|] = E[|w(x)|^2] - (E[|w(x)|])^2 = E[(w(x))^2] - (E[|w(x)|])^2 }[/math].

Importance Sampling and Markov Chain Monte Carlo (MCMC) - June 4, 2009

Remark 4:

[math]\displaystyle{ \begin{align} I &= \int^\ h(x)f(x)\,dx \\ &= \int \ h(x)\frac{f(x)}{g(x)}g(x)\,dx \\ \hat{I}&=\frac{1}{N} \sum_{i=1}^{N} h(x_i)b(x_i)\end{align} }[/math]

where [math]\displaystyle{ \displaystyle b(x_i) = \frac{f(x_i)}{g(x_i)} }[/math]

[math]\displaystyle{ I =\displaystyle \frac{\int\ h(x)f(x)\,dx}{\int f(x) dx} }[/math]

Apply the idea of importance sampling to both the numerator and denominator:

[math]\displaystyle{ =\displaystyle \frac{\int\ h(x)\frac{f(x)}{g(x)}g(x)\,dx}{\int\frac{f(x)}{g(x)}g(x) dx} }[/math]
[math]\displaystyle{ = \displaystyle\frac{\sum_{i=1}^{N} h(x_i)b(x_i)}{\sum_{1=1}^{N} b(x_i)} }[/math]
[math]\displaystyle{ = \displaystyle\sum_{i=1}^{N} h(x_i)b'(x_i) }[/math] where [math]\displaystyle{ \displaystyle b'(x_i) = \frac{b(x_i)}{\sum_{i=1}^{N} b(x_i)} }[/math]

The above results in the following form of Importance Sampling:

[math]\displaystyle{ \hat{I} = \displaystyle\sum_{i=1}^{N} b'(x_i)h(x_i) }[/math] where [math]\displaystyle{ \displaystyle b'(x_i) = \frac{b(x_i)}{\sum_{i=1}^{N} b(x_i)} }[/math]

This is very important and useful especially when f is known only up to a proportionality constant. Often, this is the case in the Bayesian approach when f is a posterior density function.

Example of Importance Sampling

Estimate [math]\displaystyle{ I = \displaystyle\ Pr (Z\gt 3) }[/math] when [math]\displaystyle{ Z \sim~ N(0,1) }[/math]

[math]\displaystyle{ \begin{align} Pr(Z\gt 3) &= \int^\infty_3 f(x)\,dx \approx 0.0013 \\ &= \int^\infty_{-\infty} h(x)f(x)\,dx \end{align} }[/math]
Where [math]\displaystyle{ h(x) = \begin{cases} 0, & \text{if } x \lt 3 \\ 1, & \text{if } x \ge 3 \end{cases} }[/math]


Approach I: Monte Carlo

[math]\displaystyle{ \hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nh(x_i) }[/math] where [math]\displaystyle{ X \sim~ N(0,1) }[/math]

The idea here is to sample from normal distribution and to count number of observations that is greater than 3.

The variability will be high in this case if using Monte Carlo since this is considered a low probability event (a tail event), and different runs may give significantly different values. For example: the first run may give only 3 occurences (i.e if we generate 1000 samples, thus the probability will be .003), the second run may give 5 occurences (probability .005), etc.

This example can be illustrated in Matlab using the code below (we will be generating 100 samples in this case):

format long
for i = 1:100
   a(i) = sum(randn(100,1)>=3)/100;
end
meanMC  = mean(a)
varMC   = var(a)

On running this, we get [math]\displaystyle{ meanMC = 0.0005 }[/math] and [math]\displaystyle{ varMC \approx 1.31313 * 10^{-5} }[/math]


Approach II: Importance Sampling

[math]\displaystyle{ \hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nh(x_i)\frac{f(x_i)}{g(x_i)} }[/math] where [math]\displaystyle{ f(x) }[/math] is standard normal and [math]\displaystyle{ g(x) }[/math] needs to be chosen wisely so that it is similar to the target distribution.
Let [math]\displaystyle{ g(x) \sim~ N(4,1) }[/math]
[math]\displaystyle{ b(x) = \frac{f(x)}{g(x)} = e^{(8-4x)} }[/math]
[math]\displaystyle{ \hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nb(x_i)h(x_i) }[/math]

This example can be illustrated in Matlab using the code below:

for j = 1:100
   N = 100;
   x = randn (N,1) + 4;
     for ii = 1:N
         h = x(ii)>=3;
         b = exp(8-4*x(ii));
         w(ii) = h*b;
     end
  I(j) = sum(w)/N;
end
MEAN = mean(I)
VAR = var(I)

Running the above code gave us [math]\displaystyle{ MEAN \approx 0.001353 }[/math] and [math]\displaystyle{ VAR \approx 9.666 * 10^{-8} }[/math] which is very close to 0, and is much less than the variability observed when using Monte Carlo

Markov Chain Monte Carlo (MCMC)

Before we tackle Markov chain Monte Carlo methods, which essentially are a 'class of algorithms for sampling from probability distributions based on constructing a Markov chain' [MCMC, Wikipedia], we will first give a formal definition of Markov Chain.

Consider the same integral: [math]\displaystyle{ I = \displaystyle\int^\ h(x)f(x)\,dx }[/math]

Idea: If [math]\displaystyle{ \displaystyle X_1, X_2,...X_N }[/math] is a Markov Chain with stationary distribution f(x), then under some conditions

[math]\displaystyle{ \hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nh(x_i)\xrightarrow{P}\int^\ h(x)f(x)\,dx = I }[/math]


Stochastic Process:
A Stochastic Process is a collection of random variables [math]\displaystyle{ \displaystyle \{ X_t : t \in T \} }[/math]

  • State Space Set:[math]\displaystyle{ \displaystyle X }[/math]is the set that random variables [math]\displaystyle{ \displaystyle X_t }[/math] takes values from.
  • Indexed Set:[math]\displaystyle{ \displaystyle T }[/math]is the set that t takes values from, which could be discrete or continuous in general, but we are only interested in discrete case in this course.


Example 1
i.i.d random variables

[math]\displaystyle{ \{ X_t : t \in T \}, X_t \in X }[/math]
[math]\displaystyle{ X = \{0, 1, 2, 3, 4, 5, 6, 7, 8\} \rightarrow }[/math]State Space
[math]\displaystyle{ T = \{1, 2, 3, 4, 5\} \rightarrow }[/math]Indexed Set


Example 2

[math]\displaystyle{ \displaystyle X_t }[/math]: price of a stock
[math]\displaystyle{ \displaystyle t }[/math]: opening date of the market

Basic Fact: In general, if we have random variables [math]\displaystyle{ \displaystyle X_1,...X_n }[/math]

[math]\displaystyle{ \displaystyle f(X_1,...X_n)= f(X_1)f(X_2|X_1)f(X_3|X_2,X_1)...f(X_n|X_n-1,...,X_1) }[/math]
[math]\displaystyle{ \displaystyle f(X_1,...X_n)= \prod_{i = 1}^n f(X_i|Past_i) }[/math] where [math]\displaystyle{ \displaystyle Past_i = (X_{i-1}, X_{i-2},...,X_1) }[/math]


Markov Chain:
A Markov Chain is a special form of stochastic process in which [math]\displaystyle{ \displaystyle X_t }[/math] depends only on [math]\displaystyle{ \displaystyle X_{t-1} }[/math].

For example,

[math]\displaystyle{ \displaystyle f(X_1,...X_n)= f(X_1)f(X_2|X_1)f(X_3|X_2)...f(X_n|X_{n-1}) }[/math]


Transition Probability:
The probability of going from one state to another state.

[math]\displaystyle{ p_{ij} = \Pr(X_{n}=j\mid X_{n-1}= i). \, }[/math]


Transition Matrix:
For n states, transition matrix P is an [math]\displaystyle{ N \times N }[/math] matrix with entries [math]\displaystyle{ \displaystyle P_{ij} }[/math] as below:

[math]\displaystyle{ P=\left(\begin{matrix}p_{1,1}&p_{1,2}&\dots&p_{1,j}&\dots\\ p_{2,1}&p_{2,2}&\dots&p_{2,j}&\dots\\ \vdots&\vdots&\ddots&\vdots&\ddots\\ p_{i,1}&p_{i,2}&\dots&p_{i,j}&\dots\\ \vdots&\vdots&\ddots&\vdots&\ddots \end{matrix}\right) }[/math]


Example:
A "Random Walk" is an example of a Markov Chain. Let's suppose that the direction of our next step is decided in a probabilistic way. The probability of moving to the right is [math]\displaystyle{ \displaystyle Pr(heads) = p }[/math]. And the probability of moving to the left is [math]\displaystyle{ \displaystyle Pr(tails) = q = 1-p }[/math]. Once the first or the last state is reached, then we stop. The transition matrix that express this process is shown as below:

[math]\displaystyle{ P=\left(\begin{matrix}1&0&\dots&0&\dots\\ p&0&q&0&\dots\\ 0&p&0&q&0\dots\\ \vdots&\vdots&\ddots&\vdots&\ddots\\ p_{i,1}&p_{i,2}&\dots&p_{i,j}&\dots\\ \vdots&\vdots&\ddots&\vdots&\ddots\\ 0&0&\dots&0&1 \end{matrix}\right) }[/math]




Markov Chain Definitions - June 9, 2009

Practical application for estimation: The general concept for the application of this lies within having a set of generated [math]\displaystyle{ x_i }[/math] which approach a distribution [math]\displaystyle{ f(x) }[/math] so that a variation of importance estimation can be used to estimate an integral in the form
[math]\displaystyle{ I = \displaystyle\int^\ h(x)f(x)\,dx }[/math] by [math]\displaystyle{ \hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nh(x_i) }[/math]
All that is required is a Markov chain which eventually converges to [math]\displaystyle{ f(x) }[/math].

In the previous example, the entries [math]\displaystyle{ p_{ij} }[/math] in the transition matrix [math]\displaystyle{ P }[/math] represent the probability of reaching state [math]\displaystyle{ j }[/math] from state [math]\displaystyle{ i }[/math] after one step. For this reason, the sum over all entries in a particular row sum to 1, as this itself must be a pmf if a transition from [math]\displaystyle{ i }[/math] is to lead to a state still within the state set for [math]\displaystyle{ X_t }[/math].

Homogeneous Markov Chain
The probability matrix [math]\displaystyle{ P }[/math] is the same for all indicies [math]\displaystyle{ n\in T }[/math]. [math]\displaystyle{ \displaystyle Pr(X_n=j|X_{n-1}=i)= Pr(X_1=j|X_0=i) }[/math]

If we denote the pmf of [math]\displaystyle{ X_n }[/math] by a probability vector [math]\displaystyle{ \frac{}{}\mu_n = [P(X_n=x_1),P(X_n=x_2),..,P(X_n=x_i)] }[/math]
where [math]\displaystyle{ i }[/math] denotes an ordered index of all possible states of [math]\displaystyle{ X }[/math].
Then we have a definition for the
marginal probabilty [math]\displaystyle{ \frac{}{}\mu_n(i) = P(X_n=i) }[/math]
where we simplify [math]\displaystyle{ X_n }[/math] to represent the ordered index of a state rather than the state itself.

From this definition it can be shown that, [math]\displaystyle{ \frac{}{}\mu_{n-1}P=\mu_0P^n }[/math]

Proof:

[math]\displaystyle{ \mu_{n-1}P=[\sum_{\forall i}(\mu_{n-1}(i))P_{i1},\sum_{\forall i}(\mu_{n-1}(i))P_{i2},..,\sum_{\forall i}(\mu_{n-1}(i))P_{ij}] }[/math] and since

[math]\displaystyle{ \sum_{\forall i}(\mu_{n-1}(i))P_{ij}=\sum_{\forall i}P(X_{n-1}=i)Pr(X_n=j|X_{n-1}=i)=\sum_{\forall i}P(X_{n-1}=i)\frac{Pr(X_n=j,X_{n-1}=i)}{P(X_{n-1}=i)} }[/math] [math]\displaystyle{ =\sum_{\forall i}Pr(X_n=j,X_{n-1}=i)=Pr(X_n=j)=\mu_{n}(j) }[/math]

Therefore,
[math]\displaystyle{ \frac{}{}\mu_{n-1}P=[\mu_{n}(1),\mu_{n}(2),...,\mu_{n}(i)]=\mu_{n} }[/math]

With this, it is possible to define [math]\displaystyle{ P(n) }[/math] as an n-step transition matrix where [math]\displaystyle{ \frac{}{}P_{ij}(n) = Pr(X_n=j|X_0=i) }[/math]

Theorem: [math]\displaystyle{ \frac{}{}\mu_n=\mu_0P^n }[/math]
Proof: [math]\displaystyle{ \frac{}{}\mu_n=\mu_{n-1}P }[/math] From the previous conclusion
[math]\displaystyle{ \frac{}{}=\mu_{n-2}PP=...=\mu_0\prod_{i = 1}^nP }[/math] And since this is a homogeneous Markov chain, [math]\displaystyle{ P }[/math] does not depend on [math]\displaystyle{ i }[/math] so
[math]\displaystyle{ \frac{}{}=\mu_0P^n }[/math]

From this it becomes easy to define the n-step transition matrix as [math]\displaystyle{ \frac{}{}P(n)=P^n }[/math]

Summary of definitions

  • transition matrix is an NxN when [math]\displaystyle{ N=|X| }[/math] matrix with [math]\displaystyle{ P_{ij}=Pr(X_1=j|X_0=i) }[/math] where [math]\displaystyle{ i,j \in X }[/math]
  • n-step transition matrix also NxN with [math]\displaystyle{ P_{ij}(n)=Pr(X_n=j|X_0=i) }[/math]
  • marginal (probability of X)[math]\displaystyle{ \mu_n(i) = Pr(X_n=i) }[/math]
  • Theorem: [math]\displaystyle{ P_n=P^n }[/math]
  • Theorem: [math]\displaystyle{ \mu_n=\mu_0P^n }[/math]

---

Definitions of different types of state sets

Define [math]\displaystyle{ i,j \in }[/math] State Space
If [math]\displaystyle{ P_{ij}(n) \gt 0 }[/math] for some [math]\displaystyle{ n }[/math] , then we say [math]\displaystyle{ i }[/math] reaches [math]\displaystyle{ j }[/math] denoted by [math]\displaystyle{ i\longrightarrow j }[/math]
This also mean j is accessible by i: [math]\displaystyle{ j\longleftarrow i }[/math]
If [math]\displaystyle{ i\longrightarrow j }[/math] and [math]\displaystyle{ j\longrightarrow i }[/math] then we say [math]\displaystyle{ i }[/math] and [math]\displaystyle{ j }[/math] communicate, denoted by [math]\displaystyle{ i\longleftrightarrow j }[/math]

Theorems
1) [math]\displaystyle{ i\longleftrightarrow i }[/math]
2) [math]\displaystyle{ i\longleftrightarrow j \Rightarrow j\longleftrightarrow i }[/math]
3) If [math]\displaystyle{ i\longleftrightarrow j,j\longleftrightarrow k\Rightarrow i\longleftrightarrow k }[/math]
4) The set of states of [math]\displaystyle{ X }[/math] can be written as a unique disjoint union of subsets (equivalence classes) [math]\displaystyle{ X=X_1\bigcup X_2\bigcup ...\bigcup X_k,k\gt 0 }[/math] where two states [math]\displaystyle{ i }[/math] and [math]\displaystyle{ j }[/math] communicate [math]\displaystyle{ IFF }[/math] they belong to the same subset

More Definitions
A set is Irreducible if all states communicate with each other (has only one equivalence class).
A subset of states is Closed if once you enter it, you can never leave.
A subset of states is Open if once you leave it, you can never return.
An Absorbing Set is a closed set with only 1 element (i.e. consists of a single state).

Note

  • We cannot have [math]\displaystyle{ \displaystyle i\longleftrightarrow j }[/math] with i recurrent and j transient since [math]\displaystyle{ \displaystyle i\longleftrightarrow j \Rightarrow j\longleftrightarrow i }[/math].
  • All states in an open class are transient.
  • A Markov Chain with a finite number of states must have at least 1 recurrent state.
  • A closed class with an infinite number of states has all transient or all recurrent states.

Again on Markov Chain - June 11, 2009

Decomposition of Markov chain

In the previous lecture it was shown that a Markov Chain can be written as the disjoint union of its classes. This decomposition is always possible and it is reduced to one class only in the case of an irreducible chain.


Example:
Let [math]\displaystyle{ \displaystyle X = \{1, 2, 3, 4\} }[/math] and the transition matrix be:


[math]\displaystyle{ P=\left(\begin{matrix}1/3&2/3&0&0\\ 2/3&1/3&0&0\\ 1/4&1/4&1/4&1/4\\ 0&0&0&1 \end{matrix}\right) }[/math]


The decomposition in classes is:

class 1: [math]\displaystyle{ \displaystyle \{1, 2\} \rightarrow }[/math] From the matrix we see that the states 1 and 2 have only [math]\displaystyle{ \displaystyle P_{12} }[/math] and [math]\displaystyle{ \displaystyle P_{21} }[/math] as nonzero transition probability
class 2: [math]\displaystyle{ \displaystyle \{3\} \rightarrow }[/math] The state 3 can go to every other state but none of the others can go to it
class 3: [math]\displaystyle{ \displaystyle \{4\} \rightarrow }[/math] This is an absorbing state since it is a close class and there is only one element

Recurrent and Transient states

A state i is called [math]\displaystyle{ \emph{recurrent} }[/math] or [math]\displaystyle{ \emph{persistent} }[/math] if

[math]\displaystyle{ \displaystyle Pr(x_{n}=i }[/math] for some [math]\displaystyle{ \displaystyle n\geq 1 | x_{0}=i)=1 }[/math]

That means that the probability to come back to the state i, starting from the state i, is 1.

If it is not the case (ie. probability less than 1), then state i is [math]\displaystyle{ \emph{transient} }[/math].

It is straight forward to prove that a finite irreducible chain is recurrent.


Theorem
Given a Markov chain,
A state [math]\displaystyle{ \displaystyle i }[/math] is [math]\displaystyle{ \emph{recurrent} }[/math] if and only if [math]\displaystyle{ \displaystyle \sum_{\forall n}P_{ii}(n)=\infty }[/math]
A state [math]\displaystyle{ \displaystyle i }[/math] is [math]\displaystyle{ \emph{transient} }[/math] if and only if [math]\displaystyle{ \displaystyle \sum_{\forall n}P_{ii}(n)\lt \infty }[/math]


Properties

  • If [math]\displaystyle{ \displaystyle i }[/math] is [math]\displaystyle{ \emph{recurrent} }[/math] and [math]\displaystyle{ i\longleftrightarrow j }[/math] then [math]\displaystyle{ \displaystyle j }[/math] is [math]\displaystyle{ \emph{recurrent} }[/math]
  • If [math]\displaystyle{ \displaystyle i }[/math] is [math]\displaystyle{ \emph{transient} }[/math] and [math]\displaystyle{ i\longleftrightarrow j }[/math] then [math]\displaystyle{ \displaystyle j }[/math] is [math]\displaystyle{ \emph{transient} }[/math]
  • In an equivalence class, either all states are recurrent or all states are transient
  • A finite Markov chain should have at least one recurrent state
  • The states of a finite, irreducible Markov chain are all recurrent (proved using the previous preposition and the fact that there is only one class in this kind of chain)

In the example above, state one and two are a closed set, so they are both recurrent states. State four is an absorbing state, so it is also recurrent. State three is transient.


Example
Let [math]\displaystyle{ \displaystyle X=\{\cdots,-2,-1,0,1,2,\cdots\} }[/math] and suppose that [math]\displaystyle{ \displaystyle P_{i,i+1}=p }[/math], [math]\displaystyle{ \displaystyle P_{i,i-1}=q=1-p }[/math] and [math]\displaystyle{ \displaystyle P_{i,j}=0 }[/math] otherwise. This is the Random Walk that we have already seen in a previous lecture, except it extends infinitely in both directions.

We now see other properties of this particular Markov chain:

  • Since all states communicate if one of them is recurrent, then all states will be recurrent. On the other hand, if one of them is transient, then all the other will be transient too.
  • Consider now the case in which we are in state [math]\displaystyle{ \displaystyle 0 }[/math]. If we move of n steps to the right or to the left, the only way to go back to [math]\displaystyle{ \displaystyle 0 }[/math] is to have n steps on the opposite direction.

[math]\displaystyle{ \displaystyle Pr(x_{2n}=0|X_{0}=0)=P_{00}(2n)=[ {2n \choose n} ]p^{n}q^{n} }[/math] We now want to know if this event is transient or recurrent or, equivalently, whether [math]\displaystyle{ \displaystyle \sum_{\forall i}P_{ii}(n)\geq\infty }[/math] or not.

To proceed with the analysis, we use the [math]\displaystyle{ \emph{Stirling } }[/math] [math]\displaystyle{ \displaystyle\emph{formula} }[/math]:

[math]\displaystyle{ \displaystyle n!\sim~n^{n}\sqrt{n}e^{-n}\sqrt{2\pi} }[/math]

The probability could therefore be approximated by:

[math]\displaystyle{ \displaystyle P_{00}(n)=\sim~\frac{(4pq)^{n}}{\sqrt{n\pi}} }[/math]

And the formula becomes:

[math]\displaystyle{ \displaystyle \sum_{\forall n}P_{00}(n)=\sum_{\forall n}\frac{(4pq)^{n}}{\sqrt{n\pi}} }[/math]

We can conclude that if [math]\displaystyle{ \displaystyle 4pq \lt 1 }[/math] then the state is transient, otherwise is recurrent.

[math]\displaystyle{ \displaystyle \sum_{\forall n}P_{00}(n)=\sum_{\forall n}\frac{(4pq)^{n}}{\sqrt{n\pi}} = \begin{cases} \infty, & \text{if } p = \frac{1}{2} \\ \lt \infty, & \text{if } p\neq \frac{1}{2} \end{cases} }[/math]

An alternative to Stirling's approximation is to use the generalized binomial theorem to get the following formula:

[math]\displaystyle{ \frac{1}{\sqrt{1 - 4x}} = \sum_{n=0}^{\infty} \binom{2n}{n} x^n }[/math]

Then substitute in [math]\displaystyle{ x = pq }[/math].

[math]\displaystyle{ \frac{1}{\sqrt{1 - 4pq}} = \sum_{n=0}^{\infty} \binom{2n}{n} p^n q^n = \sum_{n=0}^{\infty} P_{00}(2n) }[/math]

So we reach the same conclusion: all states are recurrent iff [math]\displaystyle{ p = q = \frac{1}{2} }[/math].

Convergence of Markov chain

We define the [math]\displaystyle{ \displaystyle \emph{Recurrence} }[/math] [math]\displaystyle{ \emph{time} }[/math][math]\displaystyle{ \displaystyle T_{i,j} }[/math] as the minimum time to go from the state i to the state j. It is also possible that the state j is not reachable from the state i.

[math]\displaystyle{ \displaystyle T_{ij}=\begin{cases} min\{n: x_{n}=i\}, & \text{if }\exists n \\ \infty, & \text{otherwise } \end{cases} }[/math]

The mean of the recurrent time [math]\displaystyle{ \displaystyle m_{i} }[/math]is defined as:

[math]\displaystyle{ m_{i}=\displaystyle E(T_{ij})=\sum nf_{ii} }[/math]

where [math]\displaystyle{ \displaystyle f_{ij}=Pr(x_{1}\neq j,x_{2}\neq j,\cdots,x_{n-1}\neq j,x_{n}=j/x_{0}=i) }[/math]


Using the objects we just introduced, we say that:

[math]\displaystyle{ \displaystyle \text{state } i=\begin{cases} \text{null}, & \text{if } m_{i}=\infty \\ \text{non-null or positive} , & \text{otherwise } \end{cases} }[/math]


Lemma
In a finite state Markov Chain, all the recurrent state are positive

Periodic and aperiodic Markov chain

A Markov chain is called [math]\displaystyle{ \emph{periodic} }[/math] of period [math]\displaystyle{ \displaystyle n }[/math] if, starting from a state, we will return to it every [math]\displaystyle{ \displaystyle n }[/math] steps with probability [math]\displaystyle{ \displaystyle 1 }[/math] we can only return to that state in a multiple of n steps.


Example
Considerate the three-state chain:


[math]\displaystyle{ P=\left(\begin{matrix} 0&1&0\\ 0&0&1\\ 1&0&0 \end{matrix}\right) }[/math]

It's evident that, starting from the state 1, we will return to it on every [math]\displaystyle{ 3^{rd} }[/math] step and so it works for the other two states. The chain is therefore periodic with perdiod [math]\displaystyle{ d=3 }[/math]


An irreducible Markov chain is called [math]\displaystyle{ \emph{aperiodic} }[/math] if:

[math]\displaystyle{ \displaystyle Pr(x_{n}=j | x_{0}=i) \gt 0 \text{ and } Pr(x_{n+1}=j | x_{0}=i) \gt 0 \text{ for some } n\ge 0 }[/math]


Another Example
Consider the chain [math]\displaystyle{ P=\left(\begin{matrix} 0&0.5&0&0.5\\ 0.5&0&0.5&0\\ 0&0.5&0&0.5\\ 0.5&0&0.5&0\\ \end{matrix}\right) }[/math]

This chain is periodic by definition. You can only get back to state 1 after at least 2 steps [math]\displaystyle{ \Rightarrow }[/math] period [math]\displaystyle{ d=2 }[/math]

Markov Chains and their Stationary Distributions - June 16, 2009

New Definition:Ergodic

A state is Ergodic if it is non-null, recurrent, and aperiodic. A Markov Chain is ergodic if all its states are ergodic.

Define a vector [math]\displaystyle{ \pi }[/math] where [math]\displaystyle{ \pi_i \gt 0 \forall i }[/math] and [math]\displaystyle{ \sum_i \pi_i = 1 }[/math](ie. [math]\displaystyle{ \pi }[/math] is a pmf)

[math]\displaystyle{ \pi }[/math] is a stationary distribution if [math]\displaystyle{ \pi=\pi P }[/math] where P is a transition matrix.

Limiting Distribution

If as [math]\displaystyle{ n \longrightarrow \infty , P^n \longrightarrow \left[ \begin{matrix} \pi\\ \pi\\ \vdots\\ \pi\\ \end{matrix}\right] }[/math] then [math]\displaystyle{ \pi }[/math] is the limiting distribution of the Markov Chain represented by P.
Theorem: An irreducible, ergodic Markov Chain has a unique stationary distribution [math]\displaystyle{ \pi }[/math] and there exists a limiting distribution which is also [math]\displaystyle{ \pi }[/math].

Detailed Balance

The condition for detailed balanced is [math]\displaystyle{ \displaystyle \pi_i p_{ij} = p_{ji} \pi_j }[/math]

Theorem

If [math]\displaystyle{ \pi }[/math] satisfies detailed balance then it is a stationary distribution.

Proof.
We need to show [math]\displaystyle{ \pi = \pi P }[/math] [math]\displaystyle{ \displaystyle [\pi p]_j = \sum_{i} \pi_i p_{ij} = \sum_{i} p_{ji} \pi_j = \pi_j \sum_{i} p_{ji}= \pi_j }[/math] as required

Warning! A chain that has a stationary distribution does not necessarily converge.

For example, [math]\displaystyle{ P=\left(\begin{matrix} 0&1&0\\ 0&0&1\\ 1&0&0 \end{matrix}\right) }[/math] has a stationary distribution [math]\displaystyle{ \left(\begin{matrix} 1/3&1/3&1/3 \end{matrix}\right) }[/math] but it will not converge.

Stationary Distribution

[math]\displaystyle{ \pi }[/math] is stationary (or invariant) distribution if [math]\displaystyle{ \pi }[/math] = [math]\displaystyle{ \pi * p }[/math] [0.5 0 0.5] Half of time their chain will spend half time in 1st state and half time in 3rd state.

Theorem

An irreducible ergodic Markov Chain has a unique stationary distribution [math]\displaystyle{ \pi }[/math]. The limiting distribution exists and is equal to [math]\displaystyle{ \pi }[/math].

If g is any bounded function, then with probability 1: [math]\displaystyle{ lim \frac{1}{N}\displaystyle\sum_{i=1}^Ng(x_n)\longrightarrow E_n(g)=\displaystyle\sum_{j}g(j)\pi_j }[/math]


Example

Find the limiting distribution of [math]\displaystyle{ P=\left(\begin{matrix} 1/2&1/2&0\\ 1/2&1/4&1/4\\ 0&1/3&2/3 \end{matrix}\right) }[/math]

Solve [math]\displaystyle{ \pi=\pi P }[/math]

[math]\displaystyle{ \displaystyle \pi_0 = 1/2\pi_0 + 1/2\pi_1 }[/math]
[math]\displaystyle{ \displaystyle \pi_1 = 1/2\pi_0 + 1/4\pi_1 + 1/3\pi_2 }[/math]
[math]\displaystyle{ \displaystyle \pi_2 = 1/4\pi_1 + 2/3\pi_2 }[/math]

Also [math]\displaystyle{ \displaystyle \sum_i \pi_i = 1 \longrightarrow \pi_0 + \pi_1 + \pi_2 = 1 }[/math]

We can solve the above system of equations and obtain
[math]\displaystyle{ \displaystyle \pi_2 = 3/4\pi_1 }[/math]
[math]\displaystyle{ \displaystyle \pi_0 = \pi_1 }[/math]

Thus, [math]\displaystyle{ \displaystyle \pi_0 + \pi_1 + 3/4\pi_1 = 1 }[/math] and we get [math]\displaystyle{ \displaystyle \pi_1 = 4/11 }[/math]

Subbing [math]\displaystyle{ \displaystyle \pi_1 = 4/11 }[/math] back into the system of equations we obtain
[math]\displaystyle{ \displaystyle \pi_0 = 4/11 }[/math] and [math]\displaystyle{ \displaystyle \pi_2 = 3/11 }[/math]

Therefore the limiting distribution is [math]\displaystyle{ \displaystyle \pi = (4/11, 4/11, 3/11) }[/math]

Monte Carlo using Markov Chain - June 18, 2009

Consider the problem of computing [math]\displaystyle{ I = \displaystyle\int^\ h(x)f(x)\,dx }[/math]

[math]\displaystyle{ \bullet }[/math] Generate [math]\displaystyle{ \displaystyle X_1 }[/math], [math]\displaystyle{ \displaystyle X_2 }[/math],... from a Markov Chain with stationary distribution [math]\displaystyle{ \displaystyle f(x) }[/math]

[math]\displaystyle{ \bullet }[/math] [math]\displaystyle{ \hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^Nh(x_i)\longrightarrow E_f(h(x))=\hat{I} }[/math]

Metropolis Hastings Algorithm

The Metropolis Hastings Algorithm first originated in the physics community in 1953 and was adopted later on by statisticians. It was originally used for the computation of a Boltzmann distribution, which describes the distribution of energy for particles in a system. In 1970, Hastings extended the algorithm to the general procedure described below.

Suppose we wish to sample from [math]\displaystyle{ f(x) }[/math], the 'target' distribution. Choose [math]\displaystyle{ q(y | x) }[/math], the 'proposal' distribution that is easily sampled from.

[math]\displaystyle{ \emph{Procedure:} }[/math]
<br\>1. Initialize [math]\displaystyle{ \displaystyle X_0 }[/math], this is the starting point of the chain, choose it randomly and set index [math]\displaystyle{ \displaystyle i=0 }[/math]
<br\>2. [math]\displaystyle{ Y~ \sim~ q(y\mid{x}) }[/math]
<br\>3. Compute [math]\displaystyle{ \displaystyle r(X_i,Y) }[/math], where [math]\displaystyle{ r(x,y)=min{\{\frac{f(y)}{f(x)}*\frac{q(x\mid{y})}{q(y\mid{x})},1}\} }[/math]
<br\>4. [math]\displaystyle{ U~ \sim~ Unif [0,1] }[/math]
<br\>5. If [math]\displaystyle{ \displaystyle U\lt r }[/math]
then [math]\displaystyle{ \displaystyle X_{i+1}=Y }[/math]
else [math]\displaystyle{ \displaystyle X_{i+1}=X_i }[/math]
<br\>6. Update index [math]\displaystyle{ \displaystyle i=i+1 }[/math], and go to step 2


A couple of remarks about the algorithm

Remark 1: A good choice for [math]\displaystyle{ q(y\mid{x}) }[/math] is [math]\displaystyle{ \displaystyle N(x,b^2) }[/math] where [math]\displaystyle{ \displaystyle b\gt 0 }[/math] is a constant. The starting point of the algorithm [math]\displaystyle{ X_0=x }[/math], i.e. the proposal distibution is a normal centered at the current, randomly chosen, state.

Remark 2: If the proposal distribution is symmetric, [math]\displaystyle{ q(y\mid{x})=q(x\mid{y}) }[/math], then [math]\displaystyle{ r(x,y)=min{\{\frac{f(y)}{f(x)},1}\} }[/math]. This is called the Metropolis Algorithm, which is a special case of the original algorithm of Metropolis (1953).

Remark 3: [math]\displaystyle{ \displaystyle N(x,b^2) }[/math] is symmetric. Probability of setting mean to x and sampling y is equal to the probability of setting mean to y and samplig x.


Example: The Cauchy distribution has density [math]\displaystyle{ f(x)=\frac{1}{\pi}*\frac{1}{1+x^2} }[/math]

Let the proposal distribution be [math]\displaystyle{ q(y\mid{x})=N(x,b^2) }[/math]

[math]\displaystyle{ r(x,y)=min{\{\frac{f(y)}{f(x)}*\frac{q(x\mid{y})}{q(y\mid{x})},1}\} }[/math]

[math]\displaystyle{ =min{\{\frac{f(y)}{f(x)},1}\} }[/math] since [math]\displaystyle{ q(y\mid{x}) }[/math] is symmetric [math]\displaystyle{ \Rightarrow }[/math] [math]\displaystyle{ \frac{q(x\mid{y})}{q(y\mid{x})}=1 }[/math]
[math]\displaystyle{ =min{\{\frac{ \frac{1}{\pi}\frac{1}{1+y^2} }{ \frac{1}{\pi} \frac{1}{1+x^2} },1}\} }[/math]
[math]\displaystyle{ =min{\{\frac{1+x^2 }{1+y^2},1}\} }[/math]

Now, having calculated [math]\displaystyle{ \displaystyle r(x,y) }[/math], we complete the problem in Matlab using the following code:

b=2; % let b=2 for now, we will see what happens when b is smaller or larger
X(1)=randn;
for i=2:10000
   Y=b*randn+X(i-1); % we want to decide whether we accept this Y
   r=min( (1+X(i-1)^2)/(1+Y^2),1); 
   u=rand;
   if u<r
       X(i)=Y; % accept Y
   else
       X(i)=X(i-1); % reject Y remaining in the current state
   end;
end;

We need to be careful about choosing b!

If b is too large
Then the fraction [math]\displaystyle{ \frac{f(y)}{f(x)} }[/math] would be very small [math]\displaystyle{ \Rightarrow }[/math] [math]\displaystyle{ r=min{\{\frac{f(y)}{f(x)},1}\} }[/math] is very small aswell.
It is highly unlikely that [math]\displaystyle{ \displaystyle u\lt r }[/math], the probability of rejecting [math]\displaystyle{ \displaystyle Y }[/math] is high so the chain is likely to get stuck in the same state for a long time [math]\displaystyle{ \rightarrow }[/math] chain may not coverge to the right distribution.
It is easy to observe by looking at the histogram of [math]\displaystyle{ \displaystyle X }[/math], the shape will not resemble the shape of the target [math]\displaystyle{ \displaystyle f(x) }[/math]
Most likely we reject y and the chain will get stuck.
If b is too small
Then we are setting up our proposal distribution [math]\displaystyle{ q(y\mid{x}) }[/math] to be much narrower then than the target [math]\displaystyle{ \displaystyle f(x) }[/math] so the chain will not have a chance to explore the sample state space and visit majority of the states of the target [math]\displaystyle{ \displaystyle f(x) }[/math].
If b is just right
Well chosen b will help avoid the issues mentioned above and we can say that the chain is "mixing well".


Mathematical explanation for why this algorithm works:

We talked about [math]\displaystyle{ \emph{discrete} }[/math] MC so far.

<br\> We have seen that: <br\>- [math]\displaystyle{ \displaystyle \pi }[/math] satisfies detailed balance if [math]\displaystyle{ \displaystyle \pi_iP_{ij}=P_{ji}\pi_j }[/math] and <br\>- if [math]\displaystyle{ \displaystyle\pi }[/math] satisfies [math]\displaystyle{ \emph{detailed} }[/math] [math]\displaystyle{ \emph{balance} }[/math]then it is a stationary distribution [math]\displaystyle{ \displaystyle \pi=\pi P }[/math]


In [math]\displaystyle{ \emph{continuous} }[/math]case we write the Detailed Balance as [math]\displaystyle{ \displaystyle f(x)P(x,y)=P(y,x)f(y) }[/math] and say that <br\>[math]\displaystyle{ \displaystyle f(x) }[/math] is [math]\displaystyle{ \emph{stationary} }[/math] [math]\displaystyle{ \emph{distribution} }[/math] if [math]\displaystyle{ f(x)=\int f(y)P(y,x)dy }[/math].


We want to show that if Detailed Balance holds (i.e. assume [math]\displaystyle{ \displaystyle f(x)P(x,y)=P(y,x)f(y) }[/math]) then [math]\displaystyle{ \displaystyle f(x) }[/math] is stationary distribution.

That is to show: [math]\displaystyle{ \displaystyle f(x)P(x,y)=P(y,x)f(y)\Rightarrow }[/math] [math]\displaystyle{ \displaystyle f(x) }[/math] is stationary distribution.

[math]\displaystyle{ f(x)=\int f(y)P(y,x)dy }[/math]
[math]\displaystyle{ =\int f(x)P(x,y)dy }[/math]
[math]\displaystyle{ =f(x)\int P(x,y)dy }[/math] and since [math]\displaystyle{ \int P(x,y)dy=1 }[/math]
[math]\displaystyle{ =\displaystyle f(x) }[/math]


Now, we need to show that detailed balance holds in the Metropolis-Hastings...

Consider 2 points [math]\displaystyle{ \displaystyle x }[/math] and [math]\displaystyle{ \displaystyle y }[/math]:

Either [math]\displaystyle{ \frac{f(y)}{f(x)}*\frac{q(x\mid{y})}{q(y\mid{x})}\gt 1 }[/math] OR [math]\displaystyle{ \frac{f(y)}{f(x)}*\frac{q(x\mid{y})}{q(y\mid{x})}\lt 1 }[/math] (ignoring that it might equal to 1)

Without loss of generality. suppose that the product is [math]\displaystyle{ \displaystyle\lt 1 }[/math].


In this case [math]\displaystyle{ r(x,y)=\frac{f(y)}{f(x)}*\frac{q(x\mid{y})}{q(y\mid{x})} }[/math] and [math]\displaystyle{ \displaystyle r(y,x)=1 }[/math]


Some intuitive meanings before we continue:
[math]\displaystyle{ \displaystyle P(x,y) }[/math] is jumping from [math]\displaystyle{ \displaystyle x }[/math] to [math]\displaystyle{ \displaystyle y }[/math] if proposal distribution generates [math]\displaystyle{ \displaystyle y }[/math] and [math]\displaystyle{ \displaystyle y }[/math] is accepted
[math]\displaystyle{ \displaystyle P(y,x) }[/math] is jumping from [math]\displaystyle{ \displaystyle y }[/math] to [math]\displaystyle{ \displaystyle x }[/math] if proposal distribution generates [math]\displaystyle{ \displaystyle x }[/math] and [math]\displaystyle{ \displaystyle x }[/math] is accepted
[math]\displaystyle{ q(y\mid{x}) }[/math] is the probability of generating [math]\displaystyle{ \displaystyle y }[/math]
[math]\displaystyle{ q(x\mid{y}) }[/math] is the probability of generating [math]\displaystyle{ \displaystyle x }[/math]
[math]\displaystyle{ \displaystyle r(x,y) }[/math] probability of accepting [math]\displaystyle{ \displaystyle y }[/math]
[math]\displaystyle{ \displaystyle r(y,x) }[/math] probability of accepting [math]\displaystyle{ \displaystyle x }[/math].


With that in mind we can show that [math]\displaystyle{ \displaystyle f(x)P(x,y)=P(y,x)f(y) }[/math] as follows:


[math]\displaystyle{ P(x,y)=q(y\mid{x})*(r(x,y))=q(y\mid{x})\frac{f(y)}{f(x)}\frac{q(x\mid{y})}{q(y\mid{x})} }[/math] Cancelling out [math]\displaystyle{ \displaystyle q(y\mid{x}) }[/math] and bringing [math]\displaystyle{ \displaystyle f(x) }[/math] to the other side we get <br\>[math]\displaystyle{ f(x)P(x,y)=f(y)q(x\mid{y}) }[/math] [math]\displaystyle{ \Leftarrow }[/math] equation [math]\displaystyle{ \clubsuit }[/math]


[math]\displaystyle{ P(y,x)=q(x\mid{y})*(r(y,x))=q(x\mid{y})*1 }[/math] Multiplying both sides by [math]\displaystyle{ \displaystyle f(y) }[/math] we get <br\>[math]\displaystyle{ f(y)P(y,x)=f(y)q(x\mid{y}) }[/math] [math]\displaystyle{ \Leftarrow }[/math] equation [math]\displaystyle{ \clubsuit\clubsuit }[/math]


Noticing that the right hand sides of the equation [math]\displaystyle{ \clubsuit }[/math] and equation [math]\displaystyle{ \clubsuit\clubsuit }[/math] are equal we conclude that:

<br\>[math]\displaystyle{ \displaystyle f(x)P(x,y)=P(y,x)f(y) }[/math] as desired and thus showing that Metropolis-Hastings satisfies detailed balance.


Next lecture we will see that Metropolis-Hastings is also irreducible and ergodic thus showing that it converges.

Metropolis Hastings Algorithm Continued - June 25

Metropolis–Hastings algorithm is a Markov chain Monte Carlo method. It is used to help sample from probability distributions that are difficult to sample from. The algorithm was named after Nicholas Metropolis (1915-1999), also co-author of the Simulated Anealing method (that is introduced in this lecture as well). The Gibbs sampling algorithm, that will be introduced next lecture, is a special case of the Metropolis–Hastings algorithm. This is a more efficient method, although less applicable at times.

In the last class, we showed that Metropolis Hastings satisfied the the detail-balance equations. i.e.

<br\>[math]\displaystyle{ \displaystyle f(x)P(x,y)=P(y,x)f(y) }[/math], which means [math]\displaystyle{ \displaystyle f(x) }[/math] is the stationary distribution of the chain.

But this is not enough, we want the chain to converge to the stationary distribution as well.

Thus, we also need it to be:

Irreducible: There is a positive probability to reach any non-empty set of states from any starting point. This is trivial for many choice of [math]\displaystyle{ \emph{q} }[/math] including the one that we used in the example in the previous lecture (which was normally distributed)

Aperiodic: The chain will not oscillate between different set of states. In the previous example, [math]\displaystyle{ q(y\mid{x}) }[/math] is [math]\displaystyle{ \displaystyle N(x,b^2) }[/math], which will clearly not oscillate.

Next we discuss a couple of variations of Metropolis Hastings

Random Walk Metropolis Hastings

[math]\displaystyle{ \emph{Procedure:} }[/math]
<br\>1. Draw [math]\displaystyle{ \displaystyle Y = X_i + \epsilon }[/math], where [math]\displaystyle{ \displaystyle \epsilon }[/math] has distribution [math]\displaystyle{ \displaystyle g }[/math]; [math]\displaystyle{ \epsilon = Y-X_i \sim~ g }[/math]; [math]\displaystyle{ \displaystyle X_i }[/math] is current state & [math]\displaystyle{ \displaystyle Y }[/math] is going to be close to [math]\displaystyle{ \displaystyle X_i }[/math]
<br\>2. It means [math]\displaystyle{ q(y\mid{x}) = g(y-x) }[/math]. (Note that [math]\displaystyle{ \displaystyle g }[/math] is a function of distance between the current state and the state the chain is going to travel to, i.e. it's of the form [math]\displaystyle{ \displaystyle g(|y-x|) }[/math]. Hence we know in this version that [math]\displaystyle{ \displaystyle q }[/math] is symmetric [math]\displaystyle{ \Rightarrow q(y\mid{x}) = g(|y-x|) = g(|x-y|) = q(x\mid{y}) }[/math])
<br\>3. [math]\displaystyle{ r=min{\{\frac{f(y)}{f(x)},1}\} }[/math]

Recall in our previous example we wanted to sample from the Cauchy distribution and our proposal distribution was [math]\displaystyle{ q(y\mid{x}) }[/math] [math]\displaystyle{ \sim~ N(x,b^2) }[/math]

In matlab, we defined this as

[math]\displaystyle{ \displaystyle Y = b* randn + x }[/math] (i.e [math]\displaystyle{ \displaystyle Y = X_i + randn*b) }[/math]

In this case, we need [math]\displaystyle{ \displaystyle \epsilon \sim~ N(0,b^2) }[/math]

The hard problem is to choose b so that the chain will mix well.

Rule of thumb: choose b such that the rejection probability is 0.5 (i.e. half the time accept, half the time reject)

Example


If we draw [math]\displaystyle{ \displaystyle y_1 }[/math] then [math]\displaystyle{ {\frac{f(y_1)}{f(x)}} \gt 1 \Rightarrow min{\{\frac{f(y_1)}{f(x)},1}\} = 1 }[/math], accept [math]\displaystyle{ \displaystyle y_1 }[/math] with probability 1

If we draw [math]\displaystyle{ \displaystyle y_2 }[/math] then [math]\displaystyle{ {\frac{f(y_2)}{f(x)}} \lt 1 \Rightarrow min{\{\frac{f(y_2)}{f(x)},1}\} = \frac{f(y_2)}{f(x)} }[/math], accept [math]\displaystyle{ \displaystyle y_2 }[/math] with probability [math]\displaystyle{ \frac{f(y_2)}{f(x)} }[/math]

Hence, each point drawn from the proposal that belongs to a region with higher density will be accepted for sure (with probability 1), and if a point belongs to a region with less density, then the chance that it will be accepted will be less than 1.

Independence Metropolis Hastings

In this case, the proposal distribution is independent of the current state, i.e. [math]\displaystyle{ \displaystyle q(y\mid{x}) = q(y) }[/math]

We draw from a fixed distribution

And define [math]\displaystyle{ r = min{\{\frac{f(y)}{f(x)} \cdot \frac{q(x)}{q(y)},1}\} }[/math]

And, this does not work unless [math]\displaystyle{ \displaystyle q }[/math] is very similar to the target distribution [math]\displaystyle{ \displaystyle f }[/math] (i.e. usually used when [math]\displaystyle{ \displaystyle f }[/math] is known up to a proportionality constant - the form of the distibution is known, but the distribution is not exactly known)

Now, we pose the question: if [math]\displaystyle{ \displaystyle q(y\mid{x}) }[/math] does not depend on [math]\displaystyle{ \displaystyle X }[/math], does it mean that the sequence generated from this chain is really independent?

Answer: Even though [math]\displaystyle{ Y \sim~ g(y) }[/math] does not depend on [math]\displaystyle{ \displaystyle X }[/math], but [math]\displaystyle{ \displaystyle r }[/math] depends on [math]\displaystyle{ \displaystyle X }[/math]. So it's not really an independent sequence!

[math]\displaystyle{ x_{i+1} = \begin{cases} x_i \\ y \\ \end{cases} }[/math]

Thus, the sequence is not really independent because acceptance probability [math]\displaystyle{ \displaystyle r }[/math] depends on the state [math]\displaystyle{ \displaystyle X_i }[/math]

Simulated Annealing

Simulated Annealing is a method of optimization and an application of the Metropolis Hastings Algorithm.

Consider the problem where we want to find [math]\displaystyle{ x }[/math] such that the objective function [math]\displaystyle{ h(x) }[/math] is at it's minimum,

[math]\displaystyle{ \ \min_{x}(h(x)) }[/math]

Given a constant T and since the exponential function is monotone, this optimization problem is equivalent to,

[math]\displaystyle{ \ \max_{x}(e^{\frac{-h(x)}{T}}) }[/math]

We consider a distribution function [math]\displaystyle{ \displaystyle f }[/math] such that [math]\displaystyle{ \displaystyle f \propto e^{\frac{-h(x)}{T}} }[/math], where T is called the temperature, and define the following procedure:

[math]\displaystyle{ \emph{Procedure:} }[/math]
  1. Set T to be a large number
  2. Start with some random [math]\displaystyle{ \displaystyle X_0, }[/math] [math]\displaystyle{ \displaystyle i = 0 }[/math]
  3. [math]\displaystyle{ Y \sim~ q(y\mid{x}) }[/math] (note that [math]\displaystyle{ \displaystyle q }[/math] is usually chosen to be symmetric)
  4. [math]\displaystyle{ U \sim~ Unif[0,1] }[/math]
  5. Define [math]\displaystyle{ r = min{\{\frac{f(y)}{f(x)},1}\} }[/math] (when [math]\displaystyle{ \displaystyle q }[/math] is symmetric)
  6. [math]\displaystyle{ X_{i+1} = \begin{cases} Y, & \text{with probability r} \\ X_i & \text{with probability 1-r}\\ \end{cases} }[/math]

    IE. [math]\displaystyle{ X_{i+1} = Y }[/math] if [math]\displaystyle{ U\lt r }[/math]

  7. Decrease T and go to Step 2

Now, we know that [math]\displaystyle{ r = min{\{\frac{f(y)}{f(x)},1}\} }[/math]

Consider [math]\displaystyle{ \frac{f(y)}{f(x)} = \frac{e^{\frac{-h(y)}{T}}}{e^{\frac{-h(x)}{T}}} = e^{\frac{h(x)-h(y)}{T}} }[/math]

Now, suppose T is large,

[math]\displaystyle{ \rightarrow }[/math] if [math]\displaystyle{ h(y)\lt h(x) \Rightarrow r = 1 }[/math] and we therefore accept [math]\displaystyle{ \displaystyle y }[/math] with probability 1

[math]\displaystyle{ \rightarrow }[/math] if [math]\displaystyle{ h(y)\gt h(x) \Rightarrow r = e^{\frac{h(x)-h(y)}{T}} \lt 1 }[/math] and we therefore accept [math]\displaystyle{ \displaystyle y }[/math] with probability [math]\displaystyle{ \displaystyle \lt 1 }[/math]

On the other hand, suppose T is arbitrarily small ([math]\displaystyle{ T \rightarrow 0 }[/math]),

[math]\displaystyle{ \rightarrow }[/math] if [math]\displaystyle{ h(y)\lt h(x) \Rightarrow r = 1 }[/math] and we therefore accept [math]\displaystyle{ \displaystyle y }[/math] with probability 1

[math]\displaystyle{ \rightarrow }[/math] if [math]\displaystyle{ h(y)\gt h(x) \Rightarrow r = 0 }[/math] and we therefore reject [math]\displaystyle{ \displaystyle y }[/math]


Example 1

Consider the problem of minimizing the function [math]\displaystyle{ \displaystyle f(x) = -2x^3 - x^2 + 40x + 3 }[/math]

We can plot this function and observe that it makes a local minimum near [math]\displaystyle{ \displaystyle x = -3 }[/math]

File:ezplotf0.jpg

We then plot the graphs of [math]\displaystyle{ \displaystyle \frac{f(x)}{T} }[/math] for [math]\displaystyle{ \displaystyle T = 100, 0.1 }[/math] and observe that the distribution expands for a large T, and contracts for T small - i.e. T plays the role of variance - making the distribution expand and contract accordingly.


In the end, T is small and the region we are trying to sample from becomes sharper. The points that we accept are increasingly close to the local max of the exponential function (which is the mode of the distribution), thereby corresponding to the local min of our original function (as can be seen above).

In practice the algorithm may get 'stuck' in another local minimum nearby for T too small and we don't get the convergence we looking for.

Example 2 (from June 30th lecture)

Suppose we want to minimize the function [math]\displaystyle{ \displaystyle f(x) = (x - 2)^2 }[/math]


Intuitively, we know that the answer is 2. To apply the Simulated Annealing procedure however, we require a proposal distribution. Suppose we use [math]\displaystyle{ \displaystyle Y \sim~ N(x, b^2) }[/math] and we begin with [math]\displaystyle{ \displaystyle T = 10 }[/math]

Then the problem may be solved in MATLAB using the following:

function v = obj(x)
v = (x - 2).^2;
T = 10; %this is the initial value of T, which we must gradually decrease
b = 2;
X(1) = 0;
for i = 2:100 %as we change T, we will change i (e.g. i=101:200)
   Y = b*randn + X(i-1); 
   r = min(1 , exp(-obj(Y)/T)/exp(-obj(X(i-1))/T) );
   U = rand;
   if U < r
       X(i) = Y; %accept Y
   else
       X(i) = X(i-1); %reject Y
   end;
end;

The first run (with [math]\displaystyle{ \displaystyle T = 10 }[/math]) gives us [math]\displaystyle{ \displaystyle X = 1.2792 }[/math]

Next, if we let [math]\displaystyle{ \displaystyle T = {9, 5, 2, 0.9, 0.1, 0.01, 0.005, 0.001} }[/math] in the order displayed, then we get the following graph when we plot X:


i.e. it converges to the minimum of the function

Travelling Salesman Problem

This problem consists of finding the shortest path connecting a group of cities. The salesman must visit each city once and come back to the start in the shortest possible circuit. This problem is essentially one of optimization because the goal is to minimize the salesman's cost function (this function consists of the costs associated with travelling between two cities on a given path).

The travelling salesman problem is one of the most intensely investigated problems in computational mathematics and has been researched by many from diverse academic backgrounds including mathematics, CS, chemistry, physics, psychology, etc... Consequently, the travelling salesman problem now has applications in manufacturing, telecommunications, and neuroscience to name a few.<ref> Applegate, D.L., Bixby, R.E., Chvátal, V., Cook, W.J., The Travelling Salesman Problem: A Computational Study Copyright 2007 Princeton University Press </ref>


For a good introduction to the travelling salesman problem, along with a description of the theory involved in the problem and examples of its application, refer to a paper by Michael Hahsler and Kurt Hornik entitled Introduction to TSP - Infrastructure for the Travelling Salesman Problem. [2] The examples are particularly useful because they are implemented using R (a statistical computing software environment).


Gibbs Sampling - June 30, 2009

This algorithm is a specific form of Metropolis-Hastings and is the most widely used version of the algorithm. It is used to generate a sequence of samples from the joint distribution of multiple random variables. It was first introduced by Geman and Geman (1984) and then further developed by Gelfand and Smith (1990).<ref> Gentle, James E. Elements of Computational Statistics Copyright 2002 Springer Science +Business Media, LLC </ref> In order to use Gibbs Sampling, we must know how to sample from the conditional distributions. The point of Gibbs sampling is that given a multivariate distribution, it is simpler to sample from a conditional distribution than to integrate over a joint distribution. Gibbs Sampling also satisfies detailed balance equation, similar to Metropolis-Hastings

[math]\displaystyle{ \,f(x) p_{xy} = f(y) p_{yx} }[/math]

This implies that the chain is irreversible. The procedure of proving this balance equation is similar to what was done with Metropolis-Hasting proof.


Advantages

  • The algorithm has an acceptance rate of 1. Thus it is efficient because we keep all the points we sample.
  • It is useful for high-dimensional distributions.


Disadvantages<ref> Gentle, James E. Elements of Computational Statistics Copyright 2002 Springer Science +Business Media, LLC </ref>

  • We rarely know how to sample from the conditional distributions.
  • The algorithm can be extremely slow to converge.
  • It is often difficult to know when convergence has occurred.
  • The method is not practical when there are relatively small correlations between the random variables.

Example: Gibbs sampling is used if we want to sample from a multidimensional distribution - i.e. [math]\displaystyle{ \displaystyle f(x_1, x_2, ... , x_n) }[/math]

We can use Gibbs sampling (assuming we know how to draw from the conditional distributions) by drawing

[math]\displaystyle{ \displaystyle x_1^* \sim~ f(x_1|x_2, x_3, ... , x_n) }[/math]

[math]\displaystyle{ x_2^* \sim~ f(x_2|x_1^*, x_3, ... , x_n) }[/math]

[math]\displaystyle{ \vdots }[/math]

[math]\displaystyle{ x_n^* \sim~ f(x_n|x_1^*, x_2^*, ... , x_{n-1}^*) }[/math]

and the resulting set of points drawn [math]\displaystyle{ \displaystyle (x_1^*, x_2^*, \ldots, x_n^*) }[/math] follows the required multivariate distribution.


Suppose we want to sample from a bivariate distribution [math]\displaystyle{ \displaystyle f(x,y) }[/math] with initial point [math]\displaystyle{ \displaystyle(x_i, y_i) = (0,0) }[/math], i = 0
Furthermore, suppose that we know how to sample from the conditional distributions [math]\displaystyle{ \displaystyle f_{X|Y}(x|y) }[/math] and [math]\displaystyle{ \displaystyle f_{Y|X}(y|x) }[/math]

[math]\displaystyle{ \emph{Procedure:} }[/math]

  1. [math]\displaystyle{ \displaystyle Y_{i+1} \sim~ f_{Y_i|X_i}(y|x) }[/math] (i.e. given the previous point, sample a new point)
  2. [math]\displaystyle{ \displaystyle X_{i+1} \sim~ f_{X_{i}|Y_{i+1}}(x|y) }[/math] (note: it must be [math]\displaystyle{ \displaystyle Y_{i+1} }[/math] not [math]\displaystyle{ Y_{i} }[/math], otherwise detailed balance may not hold)
  3. Repeat Steps 1 and 2

Note This method have usually a long time before convergence called "burning time". For this reason the distribution will be sampled better using only some of the last [math]\displaystyle{ \displaystyle X_i }[/math] rather than all of them.

Example Suppose we want to generate samples from a bivariate normal distribution where [math]\displaystyle{ \displaystyle \mu = \left[\begin{matrix} 1 \\ 2 \end{matrix}\right] }[/math] and [math]\displaystyle{ \sigma = \left[\begin{matrix} 1 & \rho \\ \rho & 1 \end{matrix}\right] }[/math]


Note that for a bivariate distribution it may be shown that the conditional distributions are normal. So, [math]\displaystyle{ \displaystyle f(x_2|x_1) \sim~ N(\mu_2 + \rho(x_1 - \mu_1), 1 - \rho^2) }[/math] and [math]\displaystyle{ \displaystyle f(x_1|x_2) \sim~ N(\mu_1 + \rho(x_2 - \mu_2), 1 - \rho^2) }[/math]

The problem (for a specified value [math]\displaystyle{ \displaystyle \rho }[/math]) may be solved in MATLAB using the following:

Y = [1 ; 2];
rho = 0.01;
sigma = sqrt(1 - rho^2);
X(1,:) = [0 0];
for i = 2:5000
   mu = Y(1) + rho*(X(i-1,2) - Y(2));
   X(i,1) = mu + sigma*randn;
   mu = Y(2) + rho*(X(i-1,1) - Y(1));
   X(i,2) = mu + sigma*randn;
end;
%plot(X(:,1),X(:,2),'.') plots all of the points
%plot(X(1000:end,1),X(1000:end,2),'.') plots the last 4000 points -> 
    this demonstrates that convergence occurs after a while 
    (this is called the burning time)

The output of plotting all points is:

Metropolis-Hastings within Gibbs Sampling - July 2

Thus far when discussing Gibbs Sampling, it has been assumed that we know how to sample from the conditional distributions. Even if this is not known, it is still possible to use Gibbs Sampling by utilizing the Metropolis-Hastings algorithm.

  • Choose [math]\displaystyle{ \displaystyle q }[/math] as a proposal distribution for X (assuming Y fixed).
  • Choose [math]\displaystyle{ \displaystyle \tilde{q} }[/math] as a proposal distribution for Y (assuming X fixed).
  • Do a Metropolis-Hastings step for X, treating Y as fixed.
  • Do a Metropolis-Hastings step for Y, treating X as fixed.
[math]\displaystyle{ \emph{Procedure:} }[/math]
  1. Start with some random variables [math]\displaystyle{ \displaystyle X_n, Y_n, n = 0 }[/math]
  2. Draw [math]\displaystyle{ Z~ \sim~ q(Z\mid{X_n}) }[/math]
  3. Set [math]\displaystyle{ r = \min \{ 1, \frac{f(Z,Y_n)}{f(X_n,Y_n)} \frac{q(X_n\mid{Z})}{q(Z\mid{X_n})} \} }[/math]
  4. [math]\displaystyle{ X_{n+1} = \begin{cases} Z, & \text{with probability r}\\ X_n, & \text{with probability 1-r}\\ \end{cases} }[/math]
  5. Draw [math]\displaystyle{ Z~ \sim~ \tilde{q}(Z\mid{Y_n}) }[/math]
  6. Set [math]\displaystyle{ r = \min \{ 1, \frac{f(X_{n+1},Z)}{f(X_{n+1},Y_n)} \frac{\tilde{q}(Y_n\mid{Z})}{\tilde{q}(Z\mid{Y_n})} \} }[/math]
  7. [math]\displaystyle{ Y_{n+1} = \begin{cases} Z, & \text{with probability r} \\ Y_n, & \text{with probability 1-r}\\ \end{cases} }[/math]
  8. Set [math]\displaystyle{ \displaystyle n = n + 1 }[/math], return to step 2 and repeat the same procedure

Page Ranking and Review of Linear Algebra - July 7

Page Ranking

Page Rank is a form of link analysis algorithm, and it is named after Larry Page, who is a computer scientist and is one of the co-founders of Google. As an interesting note, the name "PageRank" is a trademark of Google, and the PageRank process has been patented. However the patent has been assigned to Stanford University instead of Google.

In the real world, the Page Ranking process is used by Internet search engines, namely Google. It assigns a numerical weighting to each web page within the World Wide Web which measures the relative importance of each page. To rank a web page in terms of importance, we look at the number of web pages that link to it. Additionally, we consider the relative importance of the linking web page.

We rank pages based on the weighted number of links to that particular page. A web page is important if so many pages point to it.

Factors relating to importance of links

1) Importance (rank) of linking web page (higher importance is better).

2) Number of outgoing links from linking web page (lower is better - since the importance of the original page itself may be diminished if it has a large number of outgoing links).

Definitions

[math]\displaystyle{ L_{i,j} = \begin{cases} 1 , & \text{if j links to i}\\ 0 , & \text{else}\\ \end{cases} }[/math]


[math]\displaystyle{ c_{j}=\sum_{i=1}^N L_{i,j}\text{ = number of outgoing links from website j} }[/math]

[math]\displaystyle{ P_{i} = (1-d)\times 1+ (d) \times \sum_{j=1}^n \frac{L_{i,j} \times P_j}{c_j} \text{ = rank of i} \text{ where } 0 \leq d \leq 1 }[/math]

Under this formula, [math]\displaystyle{ \displaystyle P_i }[/math] is never zero. We weight the sum and the constant using [math]\displaystyle{ \displaystyle d }[/math](which is just a coefficient between 0 and 1 used to balance the objective function).


In Matrix Form


[math]\displaystyle{ \displaystyle P = (1-d)\times e + d \times L \times D^{-1} \times P }[/math]


where [math]\displaystyle{ P=\left(\begin{matrix}P_{1}\\ P_{2}\\ \vdots \\ P_{N} \end{matrix}\right) }[/math] [math]\displaystyle{ e=\left(\begin{matrix} 1\\ 1\\ \vdots \\1 \end{matrix}\right) }[/math]

are both [math]\displaystyle{ \displaystyle N }[/math] X [math]\displaystyle{ \displaystyle 1 }[/math] matrices

[math]\displaystyle{ L=\left(\begin{matrix}L_{1,1}&L_{1,2}&\dots&L_{1,N}\\ L_{2,1}&L_{2,2}&\dots&L_{2,N}\\ \vdots&\vdots&\ddots&\vdots\\ L_{N,1}&L_{N,2}&\dots&L_{N,N} \end{matrix}\right) }[/math]

[math]\displaystyle{ D=\left(\begin{matrix}c_{1}& 0 &\dots& 0 \\ 0 & c_{2}&\dots&0\\ \vdots&\vdots&\ddots&\vdots&\\ 0&0&\dots&c_{N} \end{matrix}\right) }[/math]

[math]\displaystyle{ D^{-1}=\left(\begin{matrix}1/c_{1}& 0 &\dots& 0 \\ 0 & 1/c_{2}&\dots&0\\ \vdots&\vdots&\ddots&\vdots&\\ 0&0&\dots&1/c_{N} \end{matrix}\right) }[/math]

Solving for P

Since rank is a relative term, if we make an assumption that

[math]\displaystyle{ \sum_{i=1}^N P_i = 1 }[/math]

then we can solve for P (in matrix form this is [math]\displaystyle{ \displaystyle e^T \times P = 1 }[/math]

[math]\displaystyle{ \displaystyle P = (1-d)\times e \times 1 + d \times L \times D^{-1} \times P }[/math]

[math]\displaystyle{ \displaystyle P = (1-d)\times e \times e^T \times P + d \times L \times D^{-1} \times P \text{ by replacing 1 with } e^T \times P }[/math]

[math]\displaystyle{ \displaystyle P = [(1-d) \times e \times e^T + d \times L \times D^{-1}] \times P \text{ by factoring out the } P }[/math]

[math]\displaystyle{ \displaystyle P = A \times P \text{ by defining A (notice that everything in A is known )} }[/math]


We can solve for P using two different methods. Firstly, we can recognize that P is an eigenvector corresponding to eigenvalue 1, for matrix A. Secondly, we can recognize that P is the stationary distribution for a transition matrix A.

If we look at this as a Markov Chain, this represents a random walk on the internet. There is a chance of jumping to an unlinked page (from the constant) and the probability of going to a page increases as the number of links to it increases.


To solve for P, we start with a random guess [math]\displaystyle{ \displaystyle P_0 }[/math] and repeatedly apply

[math]\displaystyle{ \displaystyle P_i \lt = A \times P_i-1 }[/math]

Since this is a stationary series, for large n [math]\displaystyle{ \displaystyle P_n = P }[/math].


Exemple of utilisation of Page Ranking :

Suppose that the links between four webpages are:

1 → 2

↕ ↙

3 ← 4


According to the factors relating to importance of links, we can consider two possible rankings :


[math]\displaystyle{ \displaystyle 3 \gt 2 \gt 1 \gt 4 }[/math]

or

[math]\displaystyle{ \displaystyle 3\gt 1\gt 2\gt 4 }[/math] if we consider that the high importance of the link from 3 to 1 is more influent than the fact that there are two outgoing links from page 1 and only one from page 2.


To rank our pages , we use the formula :


[math]\displaystyle{ \displaystyle P = [(1-d) \times\frac{ e \times e^T }{n}+ d \times L \times D^{-1}] \times P }[/math]


Where

[math]\displaystyle{ L_{i,j} = \begin{cases} 1 , & \text{if j links to i}\\ 0 , & \text{else}\\ \end{cases} }[/math]


In this example,


1) [math]\displaystyle{ L=\left(\begin{matrix}0&0&1&0\\ 1&0&0&0\\ 1&1&0&1\\ 0&0&0&0 \end{matrix}\right) }[/math]


2) [math]\displaystyle{ c_{j}=\sum_{i=1}^N L_{i,j} }[/math],


So that here,


[math]\displaystyle{ c=\left[\begin{matrix} 2&1&1&1 \end{matrix}\right] }[/math]


3) We have D=diag(C), so:


[math]\displaystyle{ D=\left(\begin{matrix}2&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{matrix}\right) }[/math]


Algorithm on matlab:

First we define our matrix :

L=[0 0 1 0; 1 0 0 0; 1 1 0 1; 0 0 0 0]

C=[2 1 1 1]

D=diag(C)

d=0.85

A=(1-d)*ones(4)/4+d*L*inv(D)

[vec val]=eig(A)


We are now going to choose a random vector p:

p=rand(1,4)

p=p/sum(p)

p=p'


We now look at the probability of going on each page if we repeat many times a selection of a webpage

p=A*p

p=A*p

etc...

p=A*p


We finally get:

p =

   0.3725
   0.1958
   0.3942
   0.0375

So we conclude that [math]\displaystyle{ \displaystyle 3 \gt 1 \gt 2 \gt 4 }[/math] as the probability of being on page three > Pr (page 1) > Pr (page 2) > Pr (page 4)

Linear Algebra Review

Inner Product<ref> Lay, David, Linear Algebra and its Applications, Copyright 2006, Pearson Education Inc., Boston, MA, USA. </ref>

Note that the inner product is also referred to as the dot product. If [math]\displaystyle{ \vec{u} = \left[\begin{matrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{matrix}\right] \text{ and } \vec{v} = \left[\begin{matrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{matrix}\right] }[/math] then the inner product is :

[math]\displaystyle{ \vec{u} \cdot \vec{v} = \vec{u}^T\vec{v} = \left[\begin{matrix} u_1 & u_2 & \dots & u_n \end{matrix}\right] \left[\begin{matrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{matrix}\right] = u_1v_1 + u_2v_2 + u_3v_3 + \dots + u_nv_n }[/math]


The length (or norm) of [math]\displaystyle{ \displaystyle \vec{v} }[/math] is the non-negative scalar [math]\displaystyle{ \displaystyle||\vec{v}|| }[/math] defined by [math]\displaystyle{ \displaystyle ||\vec{v}|| = \sqrt{\vec{v} \cdot \vec{v}} = \sqrt{v_1^2 + v_2^2 + \dots + v_n^2} }[/math]


For [math]\displaystyle{ \displaystyle \vec{u} }[/math] and [math]\displaystyle{ \displaystyle \vec{v} }[/math] in [math]\displaystyle{ \mathbf{R}^n }[/math] , the distance between [math]\displaystyle{ \displaystyle \vec{u} }[/math] and [math]\displaystyle{ \displaystyle \vec{v} }[/math] written as [math]\displaystyle{ \displaystyle dist(\vec{u},\vec{v}) }[/math], is the length of the vector [math]\displaystyle{ \vec{u} - \vec{v} }[/math]. That is, [math]\displaystyle{ \displaystyle dist(\vec{u},\vec{v}) = ||\vec{u} - \vec{v}|| }[/math]


If [math]\displaystyle{ \vec{u} }[/math] and [math]\displaystyle{ \vec{v} }[/math] are non-zero vectors in [math]\displaystyle{ \mathbf{R}^2 }[/math] or [math]\displaystyle{ \mathbf{R}^3 }[/math] , then the angle between [math]\displaystyle{ \vec{u} }[/math] and [math]\displaystyle{ \vec{v} }[/math] is given as [math]\displaystyle{ \vec{u} \cdot \vec{v} = ||\vec{u}|| \ ||\vec{v}|| \ cos\theta }[/math]


Orthogonal and Orthonormal<ref> Lay, David, Linear Algebra and its Applications, Copyright 2006, Pearson Education Inc., Boston, MA, USA. </ref>

Two vectors [math]\displaystyle{ \vec{u} }[/math] and [math]\displaystyle{ \vec{v} }[/math] in [math]\displaystyle{ \mathbf{R}^n }[/math] are orthogonal (to each other) if [math]\displaystyle{ \vec{u} \cdot \vec{v} = 0 }[/math]

By the Pythagorean Theorem, it may also be said that two vectors [math]\displaystyle{ \vec{u} }[/math] and [math]\displaystyle{ \vec{v} }[/math] are orthogonal if and only if [math]\displaystyle{ ||\vec{u}+\vec{v}||^2 = ||\vec{u}||^2 + ||\vec{v}||^2 }[/math]

Two vectors [math]\displaystyle{ \vec{u} }[/math] and [math]\displaystyle{ \vec{v} }[/math] in [math]\displaystyle{ \mathbf{R}^n }[/math] are orthonormal if [math]\displaystyle{ \vec{u} \cdot \vec{v} = 0 }[/math] and [math]\displaystyle{ ||\vec{u}||=||\vec{v}||=1 }[/math]


An orthonormal matrix [math]\displaystyle{ \displaystyle U }[/math] is a square invertible matrix, such that [math]\displaystyle{ \displaystyle U^{-1} = U^T }[/math] or alternatively [math]\displaystyle{ \displaystyle U^T \ U = U \ U^T = I }[/math]

Note that an orthogonal matrix is an orthonormal matrix.


Dependence and Independence<ref> Lay, David, Linear Algebra and its Applications, Copyright 2006, Pearson Education Inc., Boston, MA, USA. </ref>

The set of vectors [math]\displaystyle{ \{ \vec{v_1}, \dots , \vec{v_p} \} }[/math] in [math]\displaystyle{ \mathbf{R}^n }[/math] is said to be linearly independent if the vector equation [math]\displaystyle{ \displaystyle a_1\vec{v_1} + a_2\vec{v_2} + \dots + a_p\vec{v_p} = \vec{0} }[/math] has only the trivial solution (i.e. [math]\displaystyle{ \displaystyle a_k = 0 \ \forall k }[/math] ).


The set of vectors [math]\displaystyle{ \{ \vec{v_1}, \dots , \vec{v_p} \} }[/math] in [math]\displaystyle{ \mathbf{R}^n }[/math] is said to be linearly dependent if there exists a set of coefficients [math]\displaystyle{ \{ a_1, \dots , a_p \} }[/math] (not all zero), such that [math]\displaystyle{ \displaystyle a_1\vec{v_1} + a_2\vec{v_2} + \dots + a_p\vec{v_p} = \vec{0} }[/math].


If a set contains more vectors than there are entries in each vector, then the set is linearly dependent.

That is, any vector set [math]\displaystyle{ \{ \vec{v_1}, \dots , \vec{v_p} \} }[/math] in [math]\displaystyle{ \mathbf{R}^n }[/math] is linearly dependent if p > n.


If a vector set [math]\displaystyle{ \{ \vec{v_1}, \dots , \vec{v_p} \} }[/math] in [math]\displaystyle{ \mathbf{R}^n }[/math] contains the zero vector, then the set is linearly dependent.


Trace and Rank<ref> Lay, David, Linear Algebra and its Applications, Copyright 2006, Pearson Education Inc., Boston, MA, USA. </ref>

The trace of a square matrix [math]\displaystyle{ \displaystyle A_{nxn} }[/math], denoted by [math]\displaystyle{ \displaystyle tr(A) }[/math], is the sum of the diagonal entries in [math]\displaystyle{ \displaystyle A }[/math]. That is, [math]\displaystyle{ \displaystyle tr(A) = \sum_{i = 1}^n a_{ii} }[/math]

Note that an alternate definition for the trace is:

[math]\displaystyle{ \displaystyle tr(A) = \sum_{i = 1}^n \lambda_{ii} }[/math]

i.e. it is the sum of all the eigenvalues of the matrix

The rank of a matrix [math]\displaystyle{ \displaystyle A }[/math], denoted by [math]\displaystyle{ \displaystyle rank(A) }[/math], is the dimension of the column space of A. That is, the rank of a matrix is number of linearly independent rows (or columns) of A.


A square matrix is non-singular if and only if its rank equals the number of rows (or columns). Alternatively, a matrix is non-singular if it is invertible (i.e. its determinant is NOT zero). A matrix that is not invertible is sometimes called a singular matrix.

A matrix is said to be non-singular if and only if its rank equals the number of rows or columns. A non-singular matrix has a non-zero determinant.

A square matrix is said to be orthogonal if [math]\displaystyle{ AA^T=A^TA=I }[/math].

For a square matrix A,

  • if [math]\displaystyle{ x^TAx \gt 0,\ \forall x \neq 0 }[/math],then A is said to be positive-definite.
  • if [math]\displaystyle{ x^TAx \geq 0,\ \forall x \neq 0 }[/math],then A is said to be positive-semidefinite.

The inverse of a square matrix A is denoted by [math]\displaystyle{ A^{-1} }[/math] and is such that [math]\displaystyle{ AA^{-1}=A^{-1}A=I }[/math]. The inverse of a matrix A exists if and only if A is non-singular.

The pseudo-inverse matrix [math]\displaystyle{ A^{\dagger} }[/math] is typically used whenever [math]\displaystyle{ A^{-1} }[/math] does not exist because A is either not square or singular: [math]\displaystyle{ A^{\dagger} = (A^TA)^{-1}A^T }[/math] with [math]\displaystyle{ A^{\dagger}A = I }[/math].


Vector Spaces

The n-dimensional space in which all the n-dimensional vectors reside is called a vector space.

A set of vectors [math]\displaystyle{ \{u_1, u_2, u_3, ... u_n\} }[/math] is said to form a basis for a vector space if any arbitrary vector x can be represented by a linear combination of the [math]\displaystyle{ \{u_i\} }[/math]: [math]\displaystyle{ x = a_1u_1 + a_2u_2 + ... + a_nu_n }[/math]

  • The coefficients [math]\displaystyle{ \{a_1, a_2, ... a_n\} }[/math] are called the components of vector x with the basis [math]\displaystyle{ \{u_i\} }[/math].
  • In order to form a basis, it is necessary and sufficient that the [math]\displaystyle{ \{u_i\} }[/math] vectors be linearly independent.

A basis [math]\displaystyle{ \{u_i\} }[/math] is said to be orthogonal if [math]\displaystyle{ u^T_i u_j\begin{cases} \neq 0, & \text{ if }i=j\\ = 0, & \text{ if } i\neq j\\ \end{cases} }[/math]

A basis [math]\displaystyle{ \{u_i\} }[/math] is said to be orthonormal if [math]\displaystyle{ u^T_i u_j = \begin{cases} 1, & \text{ if }i=j\\ 0, & \text{ if } i\neq j\\ \end{cases} }[/math]


Eigenvectors and Eigenvalues

Given matrix [math]\displaystyle{ A_{NxN} }[/math], we say that v is an 'eigenvector if there exists a scalar [math]\displaystyle{ \lambda }[/math] (the eigenvalue) such that [math]\displaystyle{ Av = \lambda v }[/math] where [math]\displaystyle{ \lambda }[/math] is the corresponding eigenvalue.

Computation of eigenvalues [math]\displaystyle{ Av = \lambda v \Rightarrow Av - \lambda v = 0 \Rightarrow (A-\lambda I)v = 0 \Rightarrow \begin{cases} v = 0, & \text{trivial solution}\\ (A-\lambda v) = 0, & \text{non-trivial solution}\\ \end{cases} }[/math] [math]\displaystyle{ (A-\lambda v) = 0 \Rightarrow |A-\lambda v| = 0 \Rightarrow \lambda^N + a_1\lambda^{N-1} + a_2\lambda^{N-2} + ... + a_{N-1}\lambda + a_0 = 0 \leftarrow }[/math] Characteristic Equation

Properties

  • If A is non-singular all eigenvalues are non-zero.
  • If A is real and symmetric, all eigenvalues are real and the associated eigenvectors are orthogonal.
  • If A is positive-definite all eigenvalues are positive


Linear Transformations

A linear transformation is a mapping from a vector space [math]\displaystyle{ X^N }[/math] onto a vector space [math]\displaystyle{ Y^M }[/math], and it is represented by a matrix

  • Given vector [math]\displaystyle{ x \in X^N }[/math], the corresponding vector y on [math]\displaystyle{ Y^M }[/math] is computed as [math]\displaystyle{ y = Ax }[/math].
  • The dimensionality of the two spaces does not have to be the same (M and N do not have to be equal).

A linear transformation represented by a square matrix A is said to be orthonormal when [math]\displaystyle{ AA^T=A^TA=I }[/math]

  • implies that [math]\displaystyle{ A^T=A^{-1} }[/math]
  • An orthonormal transformation has the property of preserving the magnitude of the vectors:

[math]\displaystyle{ |y| = \sqrt{y^Ty} = \sqrt{(Ax)^T Ax} = \sqrt{x^Tx} = |x| }[/math]

  • An orthonormal matrix can be thought of as a rotation of the reference frame
  • The row vectors of an orthonormal transformation form a set of orthonormal basis vectors.


Interpretation of Eigenvalues and Eigenvectors

If we view matrix A as a linear transformation, an eigenvector represents an invariant direction in the vector space.

  • When transformed by A, any point lying on the direction defined by v will remain on that direction and its magnitude will be multiplied by the corresponding eigenvalue.

Given the covariance matrix [math]\displaystyle{ \sum }[/math] of a Gaussian distribution

  • The eigenvectors of [math]\displaystyle{ \sum }[/math] are the principal directions of the distribution
  • The eigenvalues are the variances of the corresponding principal directions

The linear transformation defined by the eigenvectors of [math]\displaystyle{ \sum }[/math] leads to vectors that are uncorrelated regardless of the form of the distribution (This is used in Principal Component Analysis).

  • If the distribution is Gaussian, then the transformed vectors will be statistically independent.

Principal Component Analysis - July 9

Principal Component Analysis (PCA) is a powerful technique for reducing the dimensionality of a data set. It has applications in data visualization, data mining, classification, etc. It is mostly used for data analysis and for making predictive models.

Rough definition

Keepings two important aspects of data analysis in mind:

  • Reducing covariance in data
  • Preserving information stored in data(Variance is a source of information)


PCA takes a sample of d - dimensional vectors and produces an orthogonal(zero covariance) set of d 'Principal Components'. The first Principal Component is the direction of greatest variance in the sample. The second principal component is the direction of second greatest variance (orthogonal to the first component), etc.

Then we can preserve most of the variance in the sample in a lower dimension by choosing the first k Principle Components and approximating the data in k - dimensional space, which is easier to analyze and plot.

Principal Components of handwritten digits

Suppose that we have a set of 103 images (28 by 23 pixels) of handwritten threes, similar to the assignment dataset.

File:threes dataset.png

We can represent each image as a vector of length 644 ([math]\displaystyle{ 644 = 23 \times 28 }[/math]) just like in assignment 5. Then we can represent the entire data set as a 644 by 103 matrix, shown below. Each column represents one image (644 rows = 644 pixels).

File:matrix decomp PCA.png

Using PCA, we can approximate the data as the product of two smaller matrices, which I will call [math]\displaystyle{ V \in M_{644,2} }[/math] and [math]\displaystyle{ W \in M_{2,103} }[/math]. If we expand the matrix product then each image is approximated by a linear combination of the columns of V: [math]\displaystyle{ \hat{f}(\lambda) = \bar{x} + \lambda_1 v_1 + \lambda_2 v_2 }[/math], where [math]\displaystyle{ \lambda = [\lambda_1, \lambda_2]^T }[/math] is a column of W.

File:linear comb PCA.png

Don't worry about the constant term for now. The point is that we can represent an image using just 2 coefficients instead of 644. Also notice that the coefficients correspond to features of the handwritten digits. The picture below shows the first two principal components for the set of handwritten threes.

The first coefficient represents the width of the entire digit, and the second coefficient represents the thickness of the stroke.

More examples

The slides cover several examples. Some of them use PCA, others use similar, more sophisticated techniques outside the scope of this course (see Nonlinear dimensionality reduction).

  • Handwritten digits.
  • Recognition of hand orientation. (Isomap??)
  • Recognition of facial expressions. (LLE - Locally Linear Embedding?)
  • Arranging words based on semantic meaning.
  • Detecting beards and glasses on faces. (MDS - Multidimensional scaling?)

Derivation of the first Principle Component

We want to find the direction of maximum variation. Let [math]\displaystyle{ \boldsymbol{w} }[/math] be an arbitrary direction, [math]\displaystyle{ \boldsymbol{x} }[/math] a data point and [math]\displaystyle{ \displaystyle u }[/math] the length of the projection of [math]\displaystyle{ \boldsymbol{x} }[/math] in direction [math]\displaystyle{ \boldsymbol{w} }[/math].

[math]\displaystyle{ \begin{align} \textbf{w} &= [w_1, \ldots, w_D]^T \\ \textbf{x} &= [x_1, \ldots, x_D]^T \\ u &= \frac{\textbf{w}^T \textbf{x}}{\sqrt{\textbf{w}^T\textbf{w}}} \end{align} }[/math]

The direction [math]\displaystyle{ \textbf{w} }[/math] is the same as [math]\displaystyle{ c\textbf{w} }[/math] so without loss of generality,

[math]\displaystyle{ \begin{align} |\textbf{w}| &= \sqrt{\textbf{w}^T\textbf{w}} = 1 \\ u &= \textbf{w}^T \textbf{x} \end{align} }[/math]

Let [math]\displaystyle{ x_1, \ldots, x_D }[/math] be random variables, then our goal is to maximize the variance of [math]\displaystyle{ u }[/math],

[math]\displaystyle{ \textrm{var}(u) = \textrm{var}(\textbf{w}^T \textbf{x}) = \textbf{w}^T \Sigma \textbf{w}, }[/math]

For a finite data set we replace the covariance matrix [math]\displaystyle{ \Sigma }[/math] by [math]\displaystyle{ s }[/math], the sample covariance matrix,

[math]\displaystyle{ \textrm{var}(u) = \textbf{w}^T s\textbf{w} }[/math]

is the variance of any vector [math]\displaystyle{ \displaystyle u }[/math], formed by the weight vector [math]\displaystyle{ \displaystyle w }[/math]. The first principal component is the vector that maximizes the variance,

[math]\displaystyle{ \textrm{PC} = \underset{\textbf{w}}{\operatorname{arg\,max}} \, \left( \operatorname{var}(u) \right) = \underset{\textbf{w}}{\operatorname{arg\,max}} \, \left( \textbf{w}^T s \textbf{w} \right) }[/math]

where arg max denotes the value of [math]\displaystyle{ w }[/math] that maximizes the function. Our goal is to find the weights [math]\displaystyle{ \displaystyle w }[/math] that maximize this variability, subject to a constraint.
The problem then becomes,

[math]\displaystyle{ \underset{\textbf{w}}{\operatorname{max}} \, \left( \textbf{w}^T s \textbf{w} \right) }[/math] such that [math]\displaystyle{ \textbf{w}^T \textbf{w} = 1 }[/math]

Notice,

[math]\displaystyle{ \textbf{w}^T s \textbf{w} \leq \| \textbf{w}^T s \textbf{w} \| \leq \| s \| \| \textbf{w} \| = \| s \| }[/math]

Therefore the variance is bounded, so the maximum exists. We find the this maximum using the method of Lagrange multipliers.

Principal Component Analysis Continued - July 14

From the previous lecture, we have seen that to take the direction of maximum variance, the problem becomes: [math]\displaystyle{ \underset{\textbf{w}}{\operatorname{max}} \, \left( \textbf{w}^T s \textbf{w} \right) }[/math] with constraint [math]\displaystyle{ \textbf{w}^T \textbf{w} = 1 }[/math].

Before we can proceed, we must review Lagrange Multipliers.

Lagrange Multiplier

"The red line shows the constraint g(x,y) = c. The blue lines are contours of f(x,y). The point where the red line tangentially touches a blue contour is our solution." [Lagrange Multipliers, Wikipedia]

To find the maximum (or minimum) of a function [math]\displaystyle{ \displaystyle f(x,y) }[/math] subject to constraints [math]\displaystyle{ \displaystyle g(x,y) = 0 }[/math], we define a new variable [math]\displaystyle{ \displaystyle \lambda }[/math] called a Lagrange Multiplier and we form the Lagrangian,

[math]\displaystyle{ \displaystyle L(x,y,\lambda) = f(x,y) - \lambda g(x,y) }[/math]

If [math]\displaystyle{ \displaystyle (x^*,y^*) }[/math] is the max of [math]\displaystyle{ \displaystyle f(x,y) }[/math], there exists [math]\displaystyle{ \displaystyle \lambda^* }[/math] such that [math]\displaystyle{ \displaystyle (x^*,y^*,\lambda^*) }[/math] is a stationary point of [math]\displaystyle{ \displaystyle L }[/math] (partial derivatives are 0).
In addition [math]\displaystyle{ \displaystyle (x^*,y^*) }[/math] is a point in which functions [math]\displaystyle{ \displaystyle f }[/math] and [math]\displaystyle{ \displaystyle g }[/math] touch but do not cross. At this point, the tangents of [math]\displaystyle{ \displaystyle f }[/math] and [math]\displaystyle{ \displaystyle g }[/math] are parallel or gradients of [math]\displaystyle{ \displaystyle f }[/math] and [math]\displaystyle{ \displaystyle g }[/math] are parallel, such that:

[math]\displaystyle{ \displaystyle \nabla_{x,y } f = \lambda \nabla_{x,y } g }[/math]

where,
[math]\displaystyle{ \displaystyle \nabla_{x,y} f = (\frac{\partial f}{\partial x},\frac{\partial f}{\partial{y}}) \leftarrow }[/math] the gradient of [math]\displaystyle{ \, f }[/math]
[math]\displaystyle{ \displaystyle \nabla_{x,y} g = (\frac{\partial g}{\partial{x}},\frac{\partial{g}}{\partial{y}}) \leftarrow }[/math] the gradient of [math]\displaystyle{ \, g }[/math]

Example

Suppose we wish to maximise the function [math]\displaystyle{ \displaystyle f(x,y)=x-y }[/math] subject to the constraint [math]\displaystyle{ \displaystyle x^{2}+y^{2}=1 }[/math]. We can apply the Lagrange multiplier method on this example; the lagrangian is:

[math]\displaystyle{ \displaystyle L(x,y,\lambda) = x-y - \lambda (x^{2}+y^{2}-1) }[/math]

We want the partial derivatives equal to zero:


[math]\displaystyle{ \displaystyle \frac{\partial L}{\partial x}=1+2 \lambda x=0 }[/math]

[math]\displaystyle{ \displaystyle \frac{\partial L}{\partial y}=-1+2\lambda y=0 }[/math]

[math]\displaystyle{ \displaystyle \frac{\partial L}{\partial \lambda}=x^2+y^2-1 }[/math]

Solving the system we obtain 2 stationary points: [math]\displaystyle{ \displaystyle (\sqrt{2}/2,-\sqrt{2}/2) }[/math] and [math]\displaystyle{ \displaystyle (-\sqrt{2}/2,\sqrt{2}/2) }[/math]. In order to understand which one is the maximum, we just need to substitute it in [math]\displaystyle{ \displaystyle f(x,y) }[/math] and see which one as the biggest value. In this case the maximum is [math]\displaystyle{ \displaystyle (\sqrt{2}/2,-\sqrt{2}/2) }[/math].

Determining W

Back to the original problem, from the Lagrangian we obtain,

[math]\displaystyle{ \displaystyle L(\textbf{w},\lambda) = \textbf{w}^T S \textbf{w} - \lambda (\textbf{w}^T \textbf{w} - 1) }[/math]

If [math]\displaystyle{ \textbf{w}^T \textbf{w} }[/math] is a unit vector then the second part of the equation is 0.

If [math]\displaystyle{ \textbf{w}^T \textbf{w} }[/math] is not a unit vector the the second part of the equation increases. Thus decreasing overall [math]\displaystyle{ \displaystyle L(\textbf{w},\lambda) }[/math]. Maximization happens when [math]\displaystyle{ \textbf{w}^T \textbf{w} =1 }[/math]


(Note that to take the derivative with respect to w below, [math]\displaystyle{ \textbf{w}^T S \textbf{w} }[/math] can be thought of as a quadratic function of w, hence the 2sw below. For more matrix derivatives, see section 2 of the Matrix Cookbook)

Taking the derivative with respect to w, we get:

[math]\displaystyle{ \displaystyle \frac{\partial L}{\partial \textbf{w}} = 2S\textbf{w} - 2\lambda\textbf{w} }[/math]

Set [math]\displaystyle{ \displaystyle \frac{\partial L}{\partial \textbf{w}} = 0 }[/math], we get

[math]\displaystyle{ \displaystyle S\textbf{w}^* = \lambda^*\textbf{w}^* }[/math]

From the eigenvalue equation [math]\displaystyle{ \, \textbf{w}^* }[/math] is an eigenvector of S and [math]\displaystyle{ \, \lambda^* }[/math] is the corresponding eigenvalue of S. If we substitute [math]\displaystyle{ \displaystyle\textbf{w}^* }[/math] in [math]\displaystyle{ \displaystyle \textbf{w}^T S\textbf{w} }[/math] we obtain,

[math]\displaystyle{ \displaystyle\textbf{w}^{*T} S\textbf{w}^* = \textbf{w}^{*T} \lambda^* \textbf{w}^* = \lambda^* \textbf{w}^{*T} \textbf{w}^* = \lambda^* }[/math]

In order to maximize the objective function we choose the eigenvector corresponding to the largest eigenvalue. We choose the first PC, u1 to have the maximum variance
(i.e. capturing as much variability in in [math]\displaystyle{ \displaystyle x_1, x_2,...,x_D }[/math] as possible.) Subsequent principal components will take up successively smaller parts of the total variability.


D dimensional data will have D eigenvectors

[math]\displaystyle{ \lambda_1 \geq \lambda_2 \geq ... \geq \lambda_D }[/math] where each [math]\displaystyle{ \, \lambda_i }[/math] represents the amount of variation in direction [math]\displaystyle{ \, i }[/math]

so that

[math]\displaystyle{ Var(u_1) \geq Var(u_2) \geq ... \geq Var(u_D) }[/math]


Note that the Principal Components decompose the total variance in the data:

[math]\displaystyle{ \displaystyle \sum_{i = 1}^D Var(u_i) = \sum_{i = 1}^D \lambda_i = Tr(S) = \sum_{i = 1}^D Var(x_i) }[/math]

i.e. the sum of variations in all directions is the variation in the whole data

Example from class

We apply PCA to the noise data, making the assumption that the intrinsic dimensionality of the data is 10. We now try to compute the reconstructed images using the top 10 eigenvectors and plot the original and reconstructed images

The Matlab code is as follows:

 load('C:\Documents and Settings\r2malik\Desktop\STAT 341\noisy.mat')
 who
 size(X)
 imagesc(reshape(X(:,1),20,28)')
 colormap gray
 imagesc(reshape(X(:,1),20,28)')
 [u s v] = svd(X);
 xHat = u(:,1:10)*s(1:10,1:10)*v(:,1:10)'; % use ten principal components
 figure
 imagesc(reshape(xHat(:,1000),20,28)') % here '1000' can be changed to different values, e.g. 105, 500, etc.
 colormap gray

Running the above code gives us 2 images - the first one represents the noisy data - we can barely make out the face

The second one is the denoised image



As you can clearly see, more features can be distinguished from the picture of the de-noised face compared to the picture of the noisy face. If we took more principal components, at first the image would improve since the intrinsic dimensionality is probably more than 10. But if you include all the components you get the noisy image, so not all of the principal components improve the image. In general, it is difficult to choose the optimal number of components.

Principle Component Analysis (continued) - July 16

Application of PCA - Feature Abstraction

One of the applications of PCA is to group similar data (e.g. images). There are generally two methods to do this. We can classify the data (i.e. give each data a label and compare different types of data) or cluster (i.e. do not label the data and compare output for classes).

Generally speaking, we can do this with the entire data set (if we have an 8X8 picture, we can use all 64 pixels). However, this is hard, and it is easier to use the reduced data and features of the data.

Example: Comparing Images of 2s and 3s

To demonstrate this process, we can compare the images of 2s and 3s - from the same data set we have been using throughout the course. We will apply PCA to the data, and compare the images of the labeled data. This is an example in classifying.





The matlab code is as follows.

      load 2_3   %the size of this file is 64 X 400
      [coefs , scores ] = princomp (X')  
             % performs principal components analysis on the data matrix X
             % returns the principal component coefficients and scores
             % scores is the low dimensional representatioation of the data X
      plot(scores(:,1),scores(:,2))    
             % plots the first most variant dimension on the x-axis 
             % and the second highest on the y-axis  
      
      plot(scores(1:200,1),scores(1:200,2))
             % same graph as above, only with the 2s (not 3s)
      
      hold on   % this command allows us to add to the current plot
      plot (scores(201:400,1),scores(201:400,2),'ro')
             % this addes the data for the 3s
             % the 'ro' command makes them red Os on the plot
      
      % If We classify based on the position in this plot (feature), 
      % its easier than looking at each of the 64 data pieces
      gname() % displays a figure window and 
              % waits for you to press a mouse button or a keyboard key
      figure
      subplot(1,2,1)
      imagesc(reshape(X(:,45),8,8)')
      subplot(1,2,2)
      imagesc(reshape(X(:,93),8,8)')
      

General PCA Algorithm

The PCA Algorithm is summarized in the following slide (taken from the Lecture Slides).


Other Notes:

  1. The mean of the data(X) must be 0. This means we may have to preprocess the data by subtracting off the mean(see detailsPCA in Wikipedia.)
  2. Encoding the data means that we are projecting the data onto a lower dimensional subspace by taking the inner product. Encoding: [math]\displaystyle{ X_{D\times n} \longrightarrow Y_{d\times n} }[/math] using mapping [math]\displaystyle{ \, U^T X_{d \times n} }[/math].
  3. When we reconstruct the training set, we are only using the top d dimensions.This will eliminate the dimensions that have lower variance (e.g. noise). Reconstructing: [math]\displaystyle{ \hat{X}_{D\times n}\longleftarrow Y_{d \times n} }[/math] using mapping [math]\displaystyle{ \, UY_{D \times n} }[/math].
  4. We can compare the reconstructed test sample to the reconstructed training sample to classify the new data.

Fisher's Linear Discriminant Analysis (FDA) - July 16(cont)

Similar to PCA, the goal of FDA is to project the data in a lower dimension. The difference is that we we are not interested in maximizing variances. Rather our goal is to find a direction that is useful for classifying the data (i.e. in this case, we are looking for direction representative of a particular characteristic e.g. glasses vs. no-glasses).

The number of dimensions that we want to reduce the data to, depends on the number of classes:
For a 2 class problem, we want to reduce the data to one dimension (a line), [math]\displaystyle{ \displaystyle Z \in \mathbb{R}^{1} }[/math]
Generally, for a k class problem, we want k-1 dimensions, [math]\displaystyle{ \displaystyle Z \in \mathbb{R}^{k-1} }[/math]

As we will see from our objective function, we want to maximize the separation of the classes and to minimize the within variance of each class. That is, our ideal situation is that the individual classes are as far away from each other as possible, but the data within each class is close together (i.e. collapse to a single point).

The following diagram summarizes this goal.

In fact, the two examples above may represent the same data projected on two different lines.

Goal: Maximum Separation

1. Minimize the within class variance

[math]\displaystyle{ \displaystyle \min (w^T\sum_1w) }[/math]

[math]\displaystyle{ \displaystyle \min (w^T\sum_2w) }[/math]

and this problem reduces to [math]\displaystyle{ \displaystyle \min (w^T(\sum_1 + \sum_2)w) }[/math]
(where [math]\displaystyle{ \displaystyle \sum_1 and \sum_2 }[/math] are the covariance matrices for the 1st and 2nd class of data respectively)

Let [math]\displaystyle{ \displaystyle \ s_w=\sum_1 + \sum_2 }[/math] be the within classes covariance. Then, this problem can be rewritten as [math]\displaystyle{ \displaystyle \min (w^Ts_ww) }[/math]

2. Maximize the distance between the means of the projected data


The optimization problem we want to solve is,

[math]\displaystyle{ \displaystyle \max (w^T \mu_1 - w^T \mu_2)^2, }[/math]

[math]\displaystyle{ \begin{align} (w^T \mu_1 - w^T \mu_2)^2 &= (w^T \mu_1 - w^T \mu_2)^T(w^T \mu_1 - w^T \mu_2)\\ &= (\mu_1^Tw - \mu_2^Tw^T)(w^T \mu_1 - w^T \mu_2)\\ &= (\mu_1^T - \mu_2^T)ww^T(\mu_1 - \mu_2) \end{align} }[/math]

which is a scalar. Therefore,

[math]\displaystyle{ \displaystyle = tr[(\mu_1^T - \mu_2^T)ww^T(\mu_1 - \mu_2)] }[/math]

[math]\displaystyle{ \displaystyle = tr[w^T(\mu_1 - \mu_2)(\mu_1 - \mu_2)^Tw] }[/math]

(using the property of [math]\displaystyle{ \displaystyle tr[ABC] = tr[CAB] = tr[BCA] }[/math]

[math]\displaystyle{ \displaystyle = w^T(\mu_1 - \mu_2)(\mu_1 - \mu_2)^Tw }[/math]

Thus our original problem equivalent can be written as,

[math]\displaystyle{ \displaystyle \max (w^T \mu_1 - w^T \mu_2)^2 = \displaystyle \max (w^T(\mu_1 - \mu_2)(\mu_1 - \mu_2)^Tw) }[/math]

For a two class problem the between class variance is,

[math]\displaystyle{ \displaystyle \ s_B=(\mu_1 - \mu_2)(\mu_1 - \mu_2)^T }[/math]

Then this problem can be rewritten as,

[math]\displaystyle{ \displaystyle \max (w^Ts_Bw) }[/math]

Objective Function

We want an objective function which satisifies both of the goals outlined above (at the same time).

  1. [math]\displaystyle{ \displaystyle \min (w^T(\sum_1 + \sum_2)w) }[/math] or [math]\displaystyle{ \displaystyle \min (w^Ts_ww) }[/math]
  2. [math]\displaystyle{ \displaystyle \max (w^T(\mu_1 - \mu_2)(\mu_1 - \mu_2)^Tw) }[/math] or [math]\displaystyle{ \displaystyle \max (w^Ts_Bw) }[/math]



We take the ratio of the two -- we wish to maximize

[math]\displaystyle{ \displaystyle \frac{(w^T(\mu_1 - \mu_2)(\mu_1 - \mu_2)^Tw)} {(w^T(\sum_1 + \sum_2)w)} }[/math]

or equivalently,

[math]\displaystyle{ \displaystyle \max \frac{(w^Ts_Bw)}{(w^Ts_ww)} }[/math]


This is a very famous problem which is called "the generalized eigenvector problem". We can solve this using Lagrange Multipliers. Since W is a directional vector, we do not care about the size of W. Therefore we solve a problem similar to that in PCA,

[math]\displaystyle{ \displaystyle \max (w^Ts_Bw) }[/math]
subject to [math]\displaystyle{ \displaystyle (w^Ts_Ww=1) }[/math]

We solve the following Lagrange Multiplier problem,

[math]\displaystyle{ \displaystyle L(w,\lambda) = w^Ts_Bw - \lambda (w^Ts_Ww -1) }[/math]

Continuation of Fisher's Linear Discriminant Analysis (FDA) - July 21

As discussed in the previous lecture, our Optimization problem for FDA is:

[math]\displaystyle{ \max_{w} \frac{w^T s_B w}{w^T s_w w} }[/math]

which we turned into

[math]\displaystyle{ \displaystyle \max (w^Ts_Bw) }[/math]
Subject to: [math]\displaystyle{ \displaystyle (w^Ts_ww=1) }[/math]

where [math]\displaystyle{ s_B }[/math] is the covariance matrix between classes and [math]\displaystyle{ s_w }[/math] is the covariance matrix within classes.

Using Lagrange multipliers, we have a Partial solution to: [math]\displaystyle{ \displaystyle (w^Ts_Bw) - \lambda \cdot [(w^Ts_ww)-1] }[/math]

- The optimal solution for w is the eigenvector of [math]\displaystyle{ \displaystyle s_w^{-1}s_B }[/math] corresponding to the largest eigenvalue;

- For a k class problem, we will take the eigenvectors corresponding to the (k-1) highest eigenvalues.

- In the case of two-class problem, the optimal solution for w can be simplfied, such that: [math]\displaystyle{ \displaystyle w \propto s_w^{-1}(\mu_2 - \mu_1) }[/math]

Example:

We see now an application of the theory that we just introduced. Using Matlab, we find the principal components and the projection by Fisher Discriminant Analysis of two Bivariate normal distributions to show the difference between the two methods.

      %First of all, we generate the two data set:
      X1 = mvnrnd([1,1],[1 1.5; 1.5 3], 300) 
      %In this case: mu_1=[1;1]; Sigma_1=[1 1.5; 1.5 3], where mu and sigma are the mean and covariance matrix.
      X2 = mvnrnd([5,3],[1 1.5; 1.5 3], 300) 
      %Here mu_2=[5;3] and Sigma_2=[1 1.5; 1.5 3]
      %The plot of the two distributions is:
      

      %We compute the principal components:
      X=[X1,X2]
      X=X'
      [coefs, scores]=princomp(X');
      coefs(:,1)    %first principal component
      coefs(:,1)
   
      ans =
      0.76355476446932
      0.64574307712603
  
      plot([0 coefs(1,1)], [0 coefs(2,1)],'b')
      plot([0 coefs(1,1)]*10, [0 coefs(2,1)]*10,'r')
      sw=2*[1 1.5;1.5 3]   % sw=Sigma1+Sigma2=2*Sigma1
      w=sw\[4; 2]       % calculate s_w^{-1}(mu2 - mu1)
      plot ([0 w(1)], [0 w(2)],'g')

      %We now make the projection:
      Xf=w'*X
      figure
      plot(Xf(1:300),1,'ob') %In this case, since it's a one dimension data, the plot is "Data Vs Indexes"
      hold on
      plot(Xf(301:600),1,'or')
      

      %We see that in the above picture that there is no overlapping
      Xp=coefs(:,1)'*X
      figure
      plot(Xp(1:300),1,'b')
      hold on
      plot(Xp(301:600),2,'or') 
  
  

      %In this case there is an overlapping since we project the first principal component on [Xp=coefs(:,1)'*X]

Classification - July 21 (cont)

The process of classification involves predicting a discrete random variable from another (not necessarily discrete) random variable. For instance we could be wishing to classify an image as a chair, a desk, or a person. The discrete random variable, [math]\displaystyle{ Y }[/math], is drawn from the set 'chair,' 'desk,' and 'person,' while the random variable [math]\displaystyle{ X }[/math] is the image. (In the case in which Y is a continuous variable, classification is an application of regression.)

Consider independent and identically distributed data points [math]\displaystyle{ \displaystyle (x_1,y_1),...,(x_N,y_N) }[/math] where [math]\displaystyle{ \displaystyle x_i \in X \subset \mathbb{R}^d }[/math] and [math]\displaystyle{ y_i \in Y }[/math] and Y is a finite set of discrete values. The set of data points [math]\displaystyle{ \displaystyle \{(x_1,y_1),...,(x_N,y_N)\} }[/math] is called the training set.

Then [math]\displaystyle{ \displaystyle h(x) }[/math] is a classifier where, given a new data point [math]\displaystyle{ \displaystyle x }[/math], [math]\displaystyle{ \displaystyle h(x) }[/math] predicts [math]\displaystyle{ \displaystyle y }[/math]. The function [math]\displaystyle{ \displaystyle h }[/math] is found using the training set. i.e. the set trains [math]\displaystyle{ \displaystyle h }[/math] to map [math]\displaystyle{ \displaystyle X }[/math] to [math]\displaystyle{ \displaystyle Y }[/math]:

[math]\displaystyle{ \, y = h(x) }[/math]

[math]\displaystyle{ \, h: X \to Y }[/math]


To continue with the example before, given a training set of images displaying desks, chairs, and people, the function should be able to read a new image never before seen and predict what the image displays out of the three above options, within a margin of error.


Error Rate

[math]\displaystyle{ \, \hat E(h) =Pr(h(x)\neq y) }[/math]

Given test points, how can we find the emperical error rate?

We simply count the number of points that have been misclassified and divide by the total number of points.

[math]\displaystyle{ \, \hat E(h) = \frac{1}{N} \sum_{i=1}^N I (Y_i \neq h(x_i)) }[/math]

Error rate is also called misclassification rate, and 1 minus the error rate is sometimes called the classification rate.

Bayes Classification Rule

Considering the case of a two-class problem where [math]\displaystyle{ \mathcal{Y} = \{0,1\} }[/math]

[math]\displaystyle{ \, r(x)= Pr(Y=1 \mid X=x)= \frac {Pr(X=x \mid Y=1)P(Y=1)} {Pr(X=x)} }[/math]

Where the denominator can be written as [math]\displaystyle{ \, Pr(X=x) = Pr(X=x \mid Y=1)P(Y=1)+Pr(X=x \mid Y=0)P(Y=0) }[/math]

So our classifier function [math]\displaystyle{ h(x) = \begin{cases} 1 & r(x) \geq \frac{1}{2} \\ 0 & o/w\\ \end{cases} }[/math]

This function is considered to be the best classifier in terms of error rate. A problem is that we do not know the joint and marginal probability distributions to calculate [math]\displaystyle{ \, r(x) }[/math]. A real-life interpretation of the marginal[math]\displaystyle{ \, Pr(Y=1 \mid X=x) }[/math] may even deal with patterns and meaning, which provides an extra challenge in finding a mathematical interpretation.

In the image, which set would it be more appropriate for the question mark to belong to?


This function is viewed as a theoretical bound - the best that can be achieved by various classification techniques. One such technique is finding the Decision Boundary.

Decision Boundary

The Decision boundary is given by:

[math]\displaystyle{ \, Pr(Y=1 \mid X=x)= Pr(Y=0 \mid X=x) }[/math]

Suggesting those points where the probabilities of being in both classes are identical. Thus,


[math]\displaystyle{ \, D: \{ x \mid Pr(Y=1 \mid X=x)= Pr(Y=0 \mid X=x)\} }[/math]

Linear discriminant analysis has a decision boundary represented by a linear function, while quadratic discriminant analysis has a decision boundary represented by a quadratic function.


Linear Discriminant Analysis(LDA) - July 23

Motivation

"LDA decision boundary for 2 classes of multivariate normal data"

We would like to apply Bayes Classifiaction rule by approximating the class conditional density [math]\displaystyle{ \, f_k(x) }[/math] and the prior [math]\displaystyle{ \, \pi_k }[/math] [math]\displaystyle{ P(Y=k|X=x) = \frac{f_k(x)\pi_k}{\sum_{\forall{k}}f_k(x_k)\pi_k} }[/math]

By making the following assumptions we can find a linear approximation to the boundary given by Bayes rule,

  1. The class conditional density is multivariate gaussian
  2. The classes have a common covariance matrix

Derivation

Note on Quadratic Form

[math]\displaystyle{ \, (x + a)^TA(x+b) = x^TAx + a^TAb + x^TAb + a^TAx }[/math]


By assumption (1),

[math]\displaystyle{ f_k(x) = \frac{1}{(2\pi)^{d/2}|\Sigma_k|^{1/2}}e^{-\frac{1}{2}(x-\mu_k)^T\Sigma_k^{-1}(x-\mu_k)} }[/math]

where [math]\displaystyle{ \, \Sigma_k }[/math] is the class covariance matrix and [math]\displaystyle{ \, \mu_k }[/math] is the class mean. By definition of the decision boundary (decision boundary between class [math]\displaystyle{ \ k }[/math] and class [math]\displaystyle{ \ l }[/math]),

[math]\displaystyle{ \begin{align}&P(Y=k|X=x) = P(Y=l|X=x)\\ \\ &\frac{1}{(2\pi)^{d/2}|\Sigma_k|^{1/2}}e^{-\frac{1}{2}(x-\mu_k)^T\Sigma_k^{-1}(x-\mu_k)}\pi_k = \frac{1}{(2\pi)^{d/2}|\Sigma_l|^{1/2}}e^{-\frac{1}{2}(x-\mu_l)^T\Sigma_l^{-1}(x-\mu_l)}\pi_l\end{align} }[/math]

By assumption (2),

[math]\displaystyle{ \begin{align}& \Sigma_k = \Sigma_l = \Sigma\\ \\ & e^{-\frac{1}{2}(x-\mu_k)^T\Sigma^{-1}(x-\mu_k)}\pi_k = e^{-\frac{1}{2}(x-\mu_l)^T\Sigma^{-1}(x-\mu_l)} \pi_l \\ & -\frac{1}{2}(x-\mu_k)^T\Sigma^{-1}(x-\mu_k) + \log(\pi_k) = -\frac{1}{2}(x-\mu_l)^T\Sigma^{-1}(x-\mu_l) + \log(\pi_l) \\ & -\frac{1}{2}(x^T\Sigma^{-1}x + \mu_k^T\Sigma^{-1}\mu_k - 2x^T\Sigma^{-1}\mu_k) + \frac{1}{2}(x^T\Sigma^{-1}x + \mu_l^T\Sigma^{-1}\mu_l - 2x^T\Sigma^{-1}\mu_l) + \log{\frac{\pi_k}{\pi_l}} = 0\ \\ \\ & x^T(\mu_k - \mu_l) + \frac{1}{2}(\mu_l - \mu_k)^T\Sigma^{-1}(\mu_l + \mu_k) + \log{\frac{\pi_k}{\pi_l}} = 0 \end{align} }[/math]

The result is a linear function of [math]\displaystyle{ \ x }[/math] of the form [math]\displaystyle{ \, x^Ta + b = 0 }[/math].

Example:
      %Load data set
      load 2_3;
      [coefs, scores] = princomp(X');
      size(X)
        % ans = 64 400          % 64 principal components
      size(coefs)
        % ans = 64 64 
      size(scores)
        % ans = 400 64
      Y=scores(:, 1:2);
        % just use two of the 64 principal components
      plot(Y(1:200, 1),Y(1:200, 2), 'b.')
      hold on
      plot(Y(201:400, 1),Y(201:400, 2), 'r.')

      ll=[zeros(200,1),ones(200,1)];
      [C,err,P,logp] = classify(Y, Y, ll', 'linear');
      %The output of the function classify are:
      %  C, that is a vector whose values define the groups that are classified from the rows of the matrix
      %  err, that is an estimate of the misclassification error
      %  P, that gives the probabilities that the observation n belongs to the ith class 
      %  logp, that gives the logaritm of the probability density of the nth observation. Where the latter is the sum of 
      %        the conditional probability of the nth observation to belong to the class ith, times the probability of the 
      %        class ith, taken over all classes.

Computational Method

We can implement this computationally by the following:

Define two variables, [math]\displaystyle{ \, \delta_k }[/math] and [math]\displaystyle{ \, \delta_l }[/math]

[math]\displaystyle{ \,\delta_k = log(f_k(x)\pi_k) = log (\pi_k) - \frac{1}{2}(x-\mu_k)^T\Sigma_k^{-1}(x-\mu_k) - \frac{d}{2}log(2\pi) - \frac{1}{2}log(|\Sigma_k|) }[/math]

[math]\displaystyle{ \,\delta_l = log(f_l(x)\pi_l) = log (\pi_l) - \frac{1}{2}(x-\mu_l)^T\Sigma_l^{-1}(x-\mu_l) - \frac{d}{2}log(2\pi) - \frac{1}{2}log(|\Sigma_l|) }[/math]


To classify a point, [math]\displaystyle{ \ x }[/math], first compute [math]\displaystyle{ \, \delta_k }[/math] and [math]\displaystyle{ \, \delta_l }[/math].

Classify it to class [math]\displaystyle{ \ k }[/math] if [math]\displaystyle{ \, \delta_k \gt \delta_l }[/math] and vise versa.

[math]\displaystyle{ h(x) = \begin{cases} k, & \text{if } \delta_k \gt \delta_l \\ l, & otherwise\\ \end{cases} }[/math]

(note: since [math]\displaystyle{ - \frac{d}{2}log(2\pi) }[/math] is a constant term, we can simply ignore it in the actual computation since it will cancel out when we do the comparison of the deltas.)

Special Case: [math]\displaystyle{ \, \Sigma_k = I }[/math], the identity matrix. Then,

[math]\displaystyle{ \,\delta_k = log (\pi_k) - \frac{1}{2}(x-\mu_k)^T(x-\mu_k) - \frac{1}{2}log(|I|) = log (\pi_k) - \frac{1}{2}(x-\mu_k)^T(x-\mu_k) }[/math]

We see that in the case [math]\displaystyle{ \, \Sigma = I }[/math], we can simply classify a point, [math]\displaystyle{ \ x }[/math], to a class based on the distances between [math]\displaystyle{ \ x }[/math] and the mean of the different classes (adjusted with the log of the prior).

General Case: [math]\displaystyle{ \, \Sigma_k \ne I }[/math]

[math]\displaystyle{ \, \Sigma_k = USV^T = USU^T }[/math] (since [math]\displaystyle{ \ \Sigma }[/math] is symmetric)

[math]\displaystyle{ \, \Sigma_k^{-1} = (USU^T)^{-1} = (U^T)^{-1}S^{-1}U^{-1} = US^{-1}U^T }[/math]

So, [math]\displaystyle{ (x-\mu_k)^T\Sigma_k^{-1}(x-\mu_k) }[/math]

[math]\displaystyle{ \, = (x-\mu_k)^T US^{-1}U^T(x-\mu_k) }[/math]
[math]\displaystyle{ \, = (U^Tx-U^T\mu_k)^T S^{-1}(U^Tx-U^T\mu_k) }[/math]
[math]\displaystyle{ \, = (U^Tx-U^T\mu_k)^T S^{-\frac{1}{2}}S^{-\frac{1}{2}}(U^Tx-U^T\mu_k) }[/math]
[math]\displaystyle{ \, = (S^{-\frac{1}{2}}U^Tx-S^{-\frac{1}{2}}U^T\mu_k)^T I(S^{-\frac{1}{2}}U^Tx-S^{-\frac{1}{2}}U^T\mu_k) }[/math]
[math]\displaystyle{ \, = (x^* - \mu_k^*)^T I(x^* - \mu_k^*) }[/math]

where [math]\displaystyle{ \, x^* = S^{-\frac{1}{2}}U^Tx }[/math] and [math]\displaystyle{ \mu_k^* = S^{-\frac{1}{2}}U^T\mu_k }[/math]

Hence the approach taken should be to transpose point [math]\displaystyle{ x }[/math] from the beginning,

i.e. Let [math]\displaystyle{ \, x^* = S^{-\frac{1}{2}}U^Tx }[/math]

Then compute [math]\displaystyle{ \, \delta_k }[/math] and [math]\displaystyle{ \, \delta_l }[/math] with [math]\displaystyle{ x^* }[/math], similar to the special case above.

If the prior distributions of the 2 classes are the same, then this method only requires us to find the distances from the point x to the mean of the 2 classes. We would classify x based on the shortest distance to the mean.

In the [math]\displaystyle{ \delta }[/math] function calculations above, [math]\displaystyle{ \pi_k = P(X = k) }[/math] and [math]\displaystyle{ \pi_l = P(X = l) }[/math], and can be approximated using the proportions of [math]\displaystyle{ k }[/math] and [math]\displaystyle{ l }[/math] elements in the training set.

More on Quadratic Discriminant Analysis - July 28

The curse of dimensionality is a problem in many fields, including computer science and statistics. Loosely stated, it says that many problems in computer science and statistics become impractical as the number of dimensions increases.

For example, the number of points needed to accurately estimate parameters increases with the number of parameters. For linear discriminant analysis we have to estimate [math]\displaystyle{ \ d(d+1)/2 }[/math] elements of the covariance matrix and [math]\displaystyle{ \ 2d }[/math] elements for the mean, for a total of [math]\displaystyle{ \ (d^2 + 5d)/2 }[/math]. For quadratic discriminant analysis, we have to estimate a second covariance matrix, so we estimate [math]\displaystyle{ \ d^2 + 3d }[/math] parameters in total. Clearly we have to estimate more parameters for QDA than LDA, so we need more points to get an accurate estimate for the discriminant.

Estimation:

[math]\displaystyle{ \ Y_{N\times 1} = X_{N\times d}^{-1}W_{d\times 1} }[/math]
[math]\displaystyle{ \left[\begin{matrix}y_1\\ y_2\end{matrix}\right]=\left[\begin{matrix}x_{11}&x_{12}\\ x_{21}&x_{22}\end{matrix}\right]\left[\begin{matrix}w_1\\ w_2\end{matrix}\right] }[/math]
[math]\displaystyle{ \left[\begin{matrix}y_1\\ y_2\end{matrix}\right]=\left[\begin{matrix}x_{11}&x_{12}&x_{11}^2&x_{12}^2\\ x_{21}&x_{22}&x_{21}^2&x_{22}^2\end{matrix}\right]\left[\begin{matrix}w_1\\ w_2\\ w_3\\ w_4\end{matrix}\right] }[/math]

We can find very flexible boundaries with LDA by expanding the input domain. For example, suppose [math]\displaystyle{ \ X }[/math] has two features and can be represented as [math]\displaystyle{ \ X=[x, y] }[/math] then,

We can expand [math]\displaystyle{ \ X }[/math] to [math]\displaystyle{ \ [x, y, x^2, y^2, xy] }[/math] and get a quadratic boundary.
We can expand [math]\displaystyle{ \ X }[/math] to [math]\displaystyle{ \ [x, y, xy, x^2, y^2, x^3, y^3, x^2y, xy^2] }[/math] and get a cubic boundary.
ect..

In Matlab:

      %To double the input:
      load 2_3;
      [U, Y] = princomp(X');
      ll=zeros(N,1);
      Y1=[Y Y.^2];
      [C,err,p,logp]=classify(Y1,Y1,ll,"linear")
      %Now it classify using not only y but also y^2.