stat340s13
Revision as of 21:40, 3 July 2013
Contents
 1 Introduction, Class 1  Tuesday, May 7
 2 Class 2  Thursday, May 9
 3 Class 3  Tuesday, May 14
 4 Summary of Inverse Transform Method
 5 Class 4  Thursday, May 16
 6 Class 5  Tuesday, May 21
 7 Acceptance-Rejection Method
 8 Class 6  Thursday, May 23
 9 Class 7  Tuesday, May 28
 10 Class 8  Thursday, May 30, 2013
 11 Class 9  Tuesday, June 4, 2013
 12 Class 10  Thursday June 6th 2013
 13 Class 11  Tuesday, June 11, 2013
 13.1 Poisson Process
 13.2 Generating a Homogeneous Poisson Process
 13.3 Markov chain
 13.4 Examples of Transition Matrix
 13.5 Midterm Review
 13.6 Multiplicative Congruential Algorithm
 13.7 Inverse Transformation Method
 13.8 Acceptance-Rejection Method
 13.9 Multivariate
 13.10 Vector A/R Method
 13.11 Common distribution
 13.12 Exponential
 13.13 Normal
 13.14 Gamma
 13.15 Bernoulli
 13.16 Binomial
 13.17 Beta Distribution
 13.18 Geometric
 13.19 Poisson
 14 Class 13  Tuesday June 18th 2013
 15 Class 14  Thursday June 20th 2013
 16 Class 15  Tuesday June 25th 2013
 17 Class 16  Thursday June 27th 2013
 18 Class 17  Tuesday July 2nd 2013
Introduction, Class 1  Tuesday, May 7
Course Instructor: Ali Ghodsi
Lecture:
001: T/Th 8:30-9:50am MC1085
002: T/Th 1:00-2:20pm DC1351
Tutorial:
2:30-3:20pm Mon M3 1006
Office Hours:
Friday at 10am, M3 4208
Midterm
Monday June 17, 2013 from 2:30pm-3:20pm
Final
Saturday August 10, 2013 from 7:30pm-10:00pm
TA(s):
TA  Day  Time  Location 

Lu Cheng  Monday  3:30-5:30 pm  M3 3108, space 2 
Han ShengSun  Tuesday  4:00-6:00 pm  M3 3108, space 2 
Yizhou Fang  Wednesday  1:00-3:00 pm  M3 3108, space 1 
Huan Cheng  Thursday  3:00-5:00 pm  M3 3111, space 1 
Wu Lin  Friday  11:00-1:00 pm  M3 3108, space 1 
Four Fundamental Problems
1. Classification: Given an input object X, we have a function which takes this input X and identifies which class Y it belongs to (discrete case).
i.e. given a value of x, we can predict y.
(For example, if you have 40 images of oranges and 60 images of apples (represented by x), you can estimate a function that takes an image and states which type of fruit it is; note that Y is discrete in this case.)
2. Regression: Same as classification but in the continuous case, i.e. y is not discrete (e.g. predicting stock prices).
(A simple exercise might be investigating the hypothesis that higher levels of education cause higher levels of income.)
3. Clustering: Use common features of objects in the same class or group to form clusters (in this case, x is given, y is unknown).
4. Dimensionality Reduction (aka Feature extraction, Manifold learning): Used when we have a variable in high dimension space and we want to reduce the dimension
Applications
Most useful when structure of the task is not well understood but can be characterized by a dataset with strong statistical regularity
Examples:
 Computer Vision, Computer Graphics, Finance (fraud detection), Machine Learning
 Search and recommendation (eg. Google, Amazon)
 Automatic speech recognition, speaker verification
 Text parsing
 Face identification
 Tracking objects in video
 Financial prediction(e.g. credit cards)
 Fraud detection
 Medical diagnosis
Course Information
Prerequisite: (One of CS 116, 126/124, 134, 136, 138, 145, SYDE 221/322) and (STAT 230 with a grade of at least 60% or STAT 240) and (STAT 231 or 241)
Antirequisite: CM 361/STAT 341, CS 437, 457
General Information
 No required textbook
 Recommended: "Simulation" by Sheldon M. Ross
 Computing parts of the course will be done in Matlab, but prior knowledge of Matlab is not essential (will have a tutorial on it)
 First midterm will be held on Monday, June 17 from 2:30 to 3:30
 Announcements and assignments will be posted on Learn.
 Other course material on: http://wikicoursenote.com/wiki/
 Log on to both Learn and wikicoursenote frequently.
 Email all questions and concerns to UWStat340@gmail.com. Do not use your personal email address! Do not email instructor or TAs about the class directly to their personal accounts!
Wikicourse note (10% of final mark):
When applying for an account on the wikicourse note, please use your Quest account as your login name and your uwaterloo email as the registered email. This is important, as the Quest ID will be used to identify the students who make contributions.
Example:
User: questid
Email: questid@uwaterloo.ca
After the student has made the account request, wait several hours before logging into the account using the password stated in the email. During the first login, students will be asked to create a new password for their account.
As a technical/editorial contributor: Make contributions within 1 week and do not copy the notes on the blackboard.
All contributions are now considered general contributions; you must contribute to 50% of lectures for full marks.
 A general contribution can be correctional (fixing mistakes) or technical (expanding content, adding examples, etc.) but at least half of your contributions should be technical for full marks.
Do not submit copyrighted work without permission; cite original sources. Each time you make a contribution, check mark the table. Marks are calculated on an honour system, although there will be random verifications. If you are caught claiming to contribute but have not, you will not be credited.
Wikicoursenote contribution form : https://docs.google.com/forms/d/1Sgq0uDztDvtcS5JoBMtWziwH96DrBz2JiURvHPNdxs/viewform
 you can submit your contributions multiple times.
 you will be able to edit the response right after submitting
 send email to make changes to an old response : uwstat340@gmail.com
Tentative Topics
 Random variable and stochastic process generation
 Discrete-Event Systems
 Variance reduction
 Markov Chain Monte Carlo
Tentative Marking Scheme
Item  Value 

Assignments (~6)  30% 
WikiCourseNote  10% 
Midterm  20% 
Final  40% 
The final exam is going to be closed book and only nonprogrammable calculators are allowed.
A passing mark must be achieved in the final to pass the course
Class 2  Thursday, May 9
Generating Random Numbers
Introduction
Simulation is the imitation of a process or system over time. Computational power has introduced the possibility of using simulation study to analyze models used to describe a situation.
In order to perform a simulation study, we must first:
1. Use a computer to generate (pseudo) random numbers.
2. Use these numbers to generate values of random variable from distributions.
3. Using the concept of discrete events, we show how the random variables can be used to generate the behavior of a stochastic model over time. (Note: A stochastic model is the opposite of deterministic model, where there are several directions the process can evolve to)
4. After continually generating the behavior of the system, we can obtain estimators and other quantities of interest.
The building block of a simulation study is the ability to generate a random number. This random number is a value from a random variable distributed uniformly on (0,1). There are many different methods of generating a random number:
Physical Method: Roulette wheel, lottery balls, dice rolling, card shuffling etc.
Numerically/Arithmetically: Use of a computer to successively generate pseudorandom numbers. The sequence of numbers can appear to be random; however, they are deterministically calculated with an equation, which is what makes them pseudorandom.
(Source: Ross, Sheldon M. Simulation. San Diego: Academic, 1997. Print.)
In general, a deterministic model produces specific results given certain inputs by the model user, contrasting with a stochastic model which encapsulates randomness and probabilistic events.
A computer cannot generate truly random numbers because computers can only run algorithms, which are deterministic in nature. They can, however, generate Pseudo Random Numbers
Pseudo random numbers are numbers that seem random but are actually deterministic. Although pseudo random numbers are deterministic, the sequence of values has the appearance of independent uniform random variables. Being deterministic, pseudo random numbers are valuable and beneficial because they are easy to generate and manipulate, and results are reproducible.
If the generator is run many times from the same seed, it produces the same sequence of values, which makes the trials look deterministic; within a single run, however, the values appear random. This is the sense in which the numbers are pseudo random.
Mod
Let [math]n \in \N[/math] and [math]m \in \N^+[/math], then by Division Algorithm,
[math]\exists q, \, r \in \N \;\text{with}\; 0\leq r \lt m, \; \text{s.t.}\; n = mq+r[/math],
where [math]q[/math] is called the quotient and [math]r[/math] the remainder. Hence we can define a binary function
[math]\mod : \N \times \N^+ \rightarrow \N [/math] given by [math]r:=n \mod m[/math] which returns the remainder after division by m.
Generally, mod means taking the remainder after division by m.
We say that n is congruent to r mod m if n = mq + r, where q is an integer.
In particular, if y = ax + b with [math]0 \leq b \lt a[/math], then [math]b:=y \mod a[/math].
For example:
30 = 4 * 7 + 2
2 = 30 mod 7
25 = 8 * 3 + 1
1 = 25 mod 3
Note: [math]\mod[/math] here is different from the modulo congruence relation in [math]\Z_m[/math], which is an equivalence relation instead of a function.
The modulo operation is useful for determining whether one integer divided by another produces a nonzero remainder. In the identity n = mq + r, all of m, r, q, and n are integers, and r satisfies 0 ≤ r < m.
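The identities above can be checked in a few lines of code. Here is a quick sketch in Python (the course itself uses MATLAB; Python is used here only for illustration) using the built-in divmod and % operators:

```python
# Division Algorithm: for n >= 0 and m > 0, n = m*q + r with 0 <= r < m.
n, m = 30, 7
q, r = divmod(n, m)        # quotient and remainder in one call
print(q, r)                # q = 4, r = 2, since 30 = 7*4 + 2

# The % operator returns just the remainder: n mod m.
print(30 % 7)              # 2
print(25 % 3)              # 1, since 25 = 3*8 + 1
```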
Mixed Congruential Algorithm
We define the Linear Congruential Method to be [math]x_{k+1}=(ax_k + b) \mod m[/math], where [math]x_k, a, b, m \in \N, \;\text{with}\; a, m \neq 0[/math]. Given a seed (i.e. an initial value [math]x_0 \in \N[/math]), we can obtain values for [math]x_1, \, x_2, \, \cdots, x_n[/math] inductively. The Multiplicative Congruential Method, invented by Berkeley professor D. H. Lehmer, refers to the special case where [math]b=0[/math], and the Mixed Congruential Method is the case where [math]b \neq 0[/math].
An interesting fact about the Linear Congruential Method is that it is one of the oldest and best-known pseudo random number generator algorithms. It is very fast and requires minimal memory to retain state. However, this method should not be used for applications that require high-quality randomness, such as Monte Carlo simulation and cryptographic applications. (Monte Carlo simulation estimates quantities by repeated random sampling over many possible outcomes; a generator of this quality is not precise enough for it.)
First consider the following algorithm
[math]x_{k+1}=x_{k} \mod m[/math]
Example
[math]\text{Let }x_{0}=10,\,m=3[/math]
 [math]\begin{align} x_{1} &{}= 10 &{}\mod{3} = 1 \\ x_{2} &{}= 1 &{}\mod{3} = 1 \\ x_{3} &{}= 1 &{}\mod{3} =1 \\ \end{align}[/math]
[math]\ldots[/math]
Excluding [math]x_{0}[/math], this example generates a series of ones. In general, excluding [math]x_{0}[/math], the algorithm above will always generate a series of the same number less than [math]m[/math]. Hence, it has a period of 1. The period can be described as the length of a sequence before it repeats. We want a large period with a sequence that is random looking. We can modify this algorithm to form the Multiplicative Congruential Algorithm.
[math]x_{k+1}=(a \cdot x_{k} + b) \mod m [/math] (a little tip: [math](a \cdot b)\mod c = \big((a\mod c)\cdot(b\mod c)\big)\mod c[/math])
Example
[math]\text{Let }a=2,\, b=1, \, m=3, \, x_{0} = 10[/math]
[math]\begin{align}
\text{Step 1: } 0&{}=(2\cdot 10 + 1) &{}\mod 3 \\
\text{Step 2: } 1&{}=(2\cdot 0 + 1) &{}\mod 3 \\
\text{Step 3: } 0&{}=(2\cdot 1 + 1) &{}\mod 3 \\
\end{align}[/math]
[math]\ldots[/math]
This example generates a sequence with a repeating cycle of two integers.
(If we choose the numbers properly, we get a sequence of "random" numbers. How do we find the values of [math]a,b,[/math] and [math]m[/math]? At the very least [math]m[/math] should be a very large, preferably prime number. The larger [math]m[/math] is, the higher the possibility of getting a long sequence of "random"-looking numbers. In Matlab, the command rand() generates random numbers which are uniformly distributed on the interval (0,1).) Matlab uses [math]a=7^5, b=0, m=2^{31}-1[/math], the values recommended in a 1988 paper, "Random Number Generators: Good Ones Are Hard To Find" by Stephen K. Park and Keith W. Miller (the important part is that [math]m[/math] should be large and prime).
Note: [math]\frac {x_{n+1}}{m-1}[/math] is an approximation to the value of a U(0,1) random variable.
MatLab Instruction for Multiplicative Congruential Algorithm:
Before you start, you need to clear all existing defined variables and operations:
>>clear all
>>close all
>>a=17
>>b=3
>>m=31
>>x=5
>>mod(a*x+b,m)
ans=26
>>x=mod(a*x+b,m)
(Note:
1. Keep repeating this command over and over again and you will get random numbers – this is how the command rand works in a computer.
2. There is a function in MATLAB called RAND to generate a random number between 0 and 1.
For example, in MATLAB, we can use rand(1,1000) to generate 1000 numbers between 0 and 1. This is essentially a vector with 1 row and 1000 columns, with each entry a random number between 0 and 1.
3. If we would like to generate 1000 or more numbers, we could use a for loop
(Note on MATLAB commands:
1. clear all: clears all variables.
2. close all: closes all figures.
3. who: displays all defined variables.
4. clc: clears screen.
5. ; : prevents the results from printing.
>>a=13
>>b=0
>>m=31
>>x(1)=1
>>for ii=2:1000
    x(ii)=mod(a*x(ii-1)+b,m);
  end
>>size(x)
ans=1 1000
>>hist(x)
(Note: The semicolon after x(ii)=mod(a*x(ii-1)+b,m) ensures that Matlab will not print the entire vector x at every iteration. It will instead calculate it internally and you will be able to work with it. Adding the semicolon to the end of this line reduces the run time significantly.)
This algorithm involves three integer parameters [math]a, b,[/math] and [math]m[/math] and an initial value, [math]x_0[/math] called the seed. A sequence of numbers is defined by [math]x_{k+1} = ax_k+ b \mod m[/math].
Note: For some bad [math]a[/math] and [math]b[/math], the histogram may not look uniformly distributed.
Note: In MATLAB, hist(x) will generate a graph representing the distribution. Use this function after you run the code to check the real sample distribution.
Example: [math]a=13, b=0, m=31[/math]
The first 30 numbers in the sequence are a permutation of the integers from 1 to 30, and then the sequence repeats itself, so it is important to choose [math]m[/math] large to decrease the probability of the sequence repeating too early. Values are between [math]0[/math] and [math]m-1[/math]. If the values are normalized by dividing by [math]m-1[/math], then the results are approximately numbers uniformly distributed in the interval [0,1]. There is only a finite number of values (30 possible values in this case). In MATLAB, you can use the function "hist(x)" to see if the sample looks uniformly distributed. We saw that the values between 0 and 30 had roughly the same frequency in the histogram, so we can conclude that they are approximately uniformly distributed.
If [math]x_0=1[/math], then
 [math]x_{k+1} = 13x_{k}\mod{31}[/math]
So,
 [math]\begin{align} x_{0} &{}= 1 \\ x_{1} &{}= 13 \times 1 + 0 &{}\mod{31} = 13 \\ x_{2} &{}= 13 \times 13 + 0 &{}\mod{31} = 14 \\ x_{3} &{}= 13 \times 14 + 0 &{}\mod{31} =27 \\ \end{align}[/math]
etc.
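The sequence above is easy to reproduce in code. The following sketch (in Python for illustration; the course uses MATLAB) generates the sequence with a=13, b=0, m=31 and confirms that, excluding the seed, the first 30 values are a permutation of 1 to 30:

```python
def lcg(a, b, m, seed, n):
    """Generate n values of x_{k+1} = (a*x_k + b) mod m."""
    x, out = seed, []
    for _ in range(n):
        x = (a * x + b) % m
        out.append(x)
    return out

seq = lcg(a=13, b=0, m=31, seed=1, n=30)
print(seq[:3])                             # [13, 14, 27], matching the hand calculation
print(sorted(seq) == list(range(1, 31)))   # True: a permutation of 1..30
```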
For example, with [math]a = 3, b = 2, m = 4, x_0 = 1[/math], we have:
 [math]x_{k+1} = (3x_{k} + 2)\mod{4}[/math]
So,
 [math]\begin{align}
x_{0} &{}= 1 \\
x_{1} &{}= 3 \times 1 + 2 \mod{4} = 1 \\
x_{2} &{}= 3 \times 1 + 2 \mod{4} = 1 \\
\end{align}[/math]
etc.
FAQ:
1.Why is it 1 to 30 instead of 0 to 30 in the example above?
[math]b = 0[/math] so in order to have [math]x_k[/math] equal to 0, [math]x_{k-1}[/math] must be 0 (since [math]a=13[/math] is relatively prime to 31). However, the seed is 1. Hence, we will never observe 0 in the sequence.
Alternatively, {0} and {1,2,...,30} are two orbits of the left multiplication by 13 in the group [math]\Z_{31}[/math].
2.Will the number 31 ever appear?Is there a probability that a number never appears?
The number 31 will never appear. When you perform the operation [math]\mod m[/math], the largest possible answer that you could receive is [math]m-1[/math]. Whether or not a particular number in the range from 0 to [math]m-1[/math] appears in the above algorithm will be dependent on the values chosen for [math]a, b[/math] and [math]m[/math].
Examples:[From Textbook]
If [math]x_0=3[/math] and [math]x_n=(5x_{n-1}+7)\mod 200[/math], find [math]x_1,\cdots,x_{10}[/math].
Solution:
[math]\begin{align}
x_1 &{}= (5 \times 3+7) &{}\mod{200} &{}= 22 \\
x_2 &{}= 117 &{}\mod{200} &{}= 117 \\
x_3 &{}= 592 &{}\mod{200} &{}= 192 \\
x_4 &{}= 2967 &{}\mod{200} &{}= 167 \\
x_5 &{}= 14842 &{}\mod{200} &{}= 42 \\
x_6 &{}= 74217 &{}\mod{200} &{}= 17 \\
x_7 &{}= 371092 &{}\mod{200} &{}= 92 \\
x_8 &{}= 1855467 &{}\mod{200} &{}= 67 \\
x_9 &{}= 9277342 &{}\mod{200} &{}= 142 \\
x_{10} &{}= 46386717 &{}\mod{200} &{}= 117 \\
\end{align}[/math]
Comments:
Typically, it is good to choose [math]m[/math] such that [math]m[/math] is large, and [math]m[/math] is prime. Careful selection of parameters '[math]a[/math]' and '[math]b[/math]' also helps generate relatively "random" output values, where it is harder to identify patterns. For example, when we used a composite (non prime) number such as 40 for [math]m[/math], our results were not satisfactory in producing an output resembling a uniform distribution.
The computed values are between 0 and [math]m-1[/math]. If the values are normalized by dividing by [math]m-1[/math], their result is numbers uniformly distributed on the interval [math]\left[0,1\right][/math] (similar to computing from a uniform distribution).
From the example shown above, if we want to create a large group of random numbers, it is better to have large, prime [math]m[/math] so that the generated random values will not repeat after several iterations. Note: the period for this example is 8: from '[math]x_2[/math]' to '[math]x_9[/math]'.
There has been research on how to choose parameters that give uniform-looking sequences. Many programs give you the option to choose the seed; sometimes the seed is chosen by the CPU (e.g. from the system clock).
Theorem (extra knowledge)
Let c be a nonzero constant. Then for any seed x0, an LCG has the largest possible period (the maximal period) if and only if:
(i) m and c are coprime;
(ii) (a-1) is divisible by every prime factor of m;
(iii) if m is divisible by 4, then (a-1) is also divisible by 4.
We want our LCG to have a large cycle. We call a cycle with m elements the maximal period. We can make it bigger by making m large and prime. Recall: every integer can be broken into a product of prime factors. Definition (coprime): two numbers X and Y are coprime if they do not share any prime factors.
Example:
Xn=(15Xn-1 + 4) mod 7
(i) m=7, c=4 -> coprime;
(ii) a-1=14, which is divisible by 7, the only prime factor of m;
(iii) does not apply, since m=7 is not divisible by 4.
(The extra knowledge stops here)
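The three conditions can be verified mechanically. A small Python sketch (the helper names are mine, not from the notes) checks them for the example X_n = (15 X_{n-1} + 4) mod 7 and confirms the generator really attains the full period m = 7:

```python
from math import gcd

def prime_factors(n):
    """Return the set of prime factors of n."""
    f, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            f.add(p)
            n //= p
        p += 1
    if n > 1:
        f.add(n)
    return f

def max_period_conditions(a, c, m):
    """Check the three conditions of the theorem above."""
    cond1 = gcd(m, c) == 1                                   # (i) m, c coprime
    cond2 = all((a - 1) % p == 0 for p in prime_factors(m))  # (ii)
    cond3 = (m % 4 != 0) or ((a - 1) % 4 == 0)               # (iii)
    return cond1 and cond2 and cond3

def period(a, c, m, seed=0):
    """Length of the cycle of x -> (a*x + c) mod m starting from seed."""
    x, seen = seed, {}
    for i in range(2 * m):
        if x in seen:
            return i - seen[x]
        seen[x] = i
        x = (a * x + c) % m

print(max_period_conditions(15, 4, 7))  # True
print(period(15, 4, 7))                 # 7, the maximal period
```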
In this part, I learned how to use code to work out the relationship between two integers under division and their remainder, and saw that when we generate a long sequence of values (for example 1:1000), the histogram of the output looks like a uniform distribution.
Summary of Multiplicative Congruential Algorithm
Problem: generate Pseudo Random Numbers.
Plan:
 find integers: a, b, m (large prime), x_{0} (the seed).
 [math]x_{k+1}=(ax_{k}+b) \mod m[/math]
Matlab Instruction:
>>clear all
>>close all
>>a=17
>>b=3
>>m=31
>>x=5
>>mod(a*x+b,m)
ans=26
>>x=mod(a*x+b,m)
Inverse Transform Method
Now that we know how to generate random numbers, we can use these values to sample from distributions such as the exponential. However, to use this method easily, the probability distribution concerned must have a cumulative distribution function (cdf) [math]F[/math] with a tractable (that is, easily found) inverse [math]F^{-1}[/math].
Theorem:
To generate the value of a random variable X with cdf F, we first generate a random number U, uniformly distributed over (0,1).
Let [math]F:\R \rightarrow \left[0,1\right][/math] be a cdf. If [math]U \sim U\left[0,1\right][/math], then the random variable given by [math]X:=F^{-1}\left(U\right)[/math]
follows the distribution function [math]F\left(\cdot\right)[/math],
where [math]F^{-1}\left(u\right):=\inf F^{-1}\big(\left[u,+\infty\right)\big) = \inf\{x\in\R \mid F\left(x\right) \geq u\}[/math] is the generalized inverse.
Note: [math]F[/math] need not be invertible everywhere on the real line, but if it is, then the generalized inverse is the same as the inverse in the usual case. We only need it to be invertible on the range of F(x), [0,1].
Proof of the theorem:
The generalized inverse satisfies the following:
[math]\begin{align}
\forall u \in \left[0,1\right], \, x \in \R, \\
&{} F^{-1}\left(u\right) \leq x &{} \\
\Rightarrow &{} F\Big(F^{-1}\left(u\right)\Big) \leq F\left(x\right) &&{} F \text{ is nondecreasing} \\
\Rightarrow &{} F\Big(\inf \{y \in \R \mid F(y)\geq u \}\Big) \leq F\left(x\right) &&{} \text{by definition of } F^{-1} \\
\Rightarrow &{} \inf \{F(y) \in [0,1] \mid F(y)\geq u \} \leq F\left(x\right) &&{} F \text{ is right continuous and nondecreasing} \\
\Rightarrow &{} u \leq F\left(x\right) &&{} \text{by definition of } \inf \\
\Rightarrow &{} x \in \{y \in \R \mid F(y) \geq u\} &&{} \\
\Rightarrow &{} x \geq \inf \{y \in \R \mid F(y)\geq u \} &&{} \text{by definition of } \inf \\
\Rightarrow &{} x \geq F^{-1}(u) &&{} \text{by definition of } F^{-1} \\
\end{align}[/math]
That is, [math]F^{-1}\left(u\right) \leq x \Leftrightarrow u \leq F\left(x\right)[/math]
Finally, [math]P(X \leq x) = P(F^{1}(U) \leq x) = P(U \leq F(x)) = F(x)[/math], since [math]U[/math] is uniform on the unit interval.
This completes the proof.
Therefore, in order to generate a random variable X ~ F, we can generate U according to U(0,1) and then make the transformation [math]x=F^{-1}(u)[/math].
Note that we can apply [math]F[/math] to both sides of the inequality in the proof only because the cdf of X is monotonic. A monotonic function is one that is either nondecreasing for all x or nonincreasing for all x. This holds for all CDFs, since they are nondecreasing by definition.
In short, what the theorem tells us is that we can use a random number [math]U \sim U(0,1)[/math] to randomly sample a point on the CDF of X, then apply the inverse of the CDF to map that probability back to the domain, which gives us the random variable X.
Example 1  Exponential: [math] f(x) = \lambda e^{-\lambda x}, \; x \geq 0[/math]
Calculate the CDF:
[math] F(x)= \int_0^x f(t)\, dt = \int_0^x \lambda e^{-\lambda t}\, dt = \Big[-e^{-\lambda t}\Big]_0^x = -e^{-\lambda x} + e^{0} = 1 - e^{-\lambda x} [/math]
Solve the inverse:
[math] y=1-e^{-\lambda x} \Rightarrow 1-y=e^{-\lambda x} \Rightarrow x=-\frac {\ln(1-y)}{\lambda}[/math]
[math] \Rightarrow F^{-1}(y)=-\frac {\ln(1-y)}{\lambda}[/math]
Note that 1 − U is also uniform on (0, 1) and thus −log(1 − U) has the same distribution as −logU.
Steps:
Step 1: Draw U ~U[0,1];
Step 2: [math] x=-\frac{\ln(U)}{\lambda} [/math]
MatLab Code:
>>u=rand(1,1000);
>>hist(u)            % will generate a fairly uniform diagram
% let λ=2 in this example; however, you can pick another value for λ
>>x=-(log(1-u))/2;
>>size(x)            % 1000 in size
>>figure
>>hist(x)            % exponential
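The same two steps can be written in Python (a sketch for illustration; the course uses MATLAB). With λ = 2 the sample mean should be close to 1/λ = 0.5:

```python
import math
import random

random.seed(1)            # fix the seed so the run is reproducible
lam = 2.0
# Step 1: draw u ~ U(0,1); Step 2: x = -ln(1-u)/lambda (the inverse CDF)
xs = [-math.log(1 - random.random()) / lam for _ in range(100_000)]
mean = sum(xs) / len(xs)
print(mean)               # close to 1/lambda = 0.5
```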
Example 2  Continuous Distribution:
[math] f(x) = \dfrac {\lambda } {2}e^{-\lambda \left| x-\theta \right| } \text{ for } -\infty \lt x \lt \infty , \; \lambda \gt 0 [/math]
Calculate the CDF:
[math] F(x)= \frac{1}{2} e^{-\lambda (\theta - x)} \text{ for } x \le \theta [/math]
[math] F(x) = 1 - \frac{1}{2} e^{-\lambda (x - \theta)} \text{ for } x \gt \theta [/math]
Solve for the inverse:
[math]F^{-1}(y)= \theta + \ln(2y)/\lambda \text{ for } 0 \le y \le 0.5[/math]
[math]F^{-1}(y)= \theta - \ln(2(1-y))/\lambda \text{ for } 0.5 \lt y \le 1[/math]
Algorithm:
Steps:
Step 1: Draw U ~ U[0, 1];
Step 2: Compute [math]X = F^{-1}(U)[/math], i.e. [math]X = \theta + \frac {1}{\lambda} \ln(2U)[/math] for U < 0.5, else [math]X = \theta - \frac {1}{\lambda} \ln(2(1-U))[/math]
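A Python sketch of this two-branch inverse (illustrative only; the parameter values θ = 1, λ = 2 are my choice). A quick sanity check: F^{-1}(0.5) should return θ, the median of the distribution, and F(F^{-1}(u)) should recover u:

```python
import math

def laplace_inv(u, theta, lam):
    """Inverse CDF of the double-exponential (Laplace) distribution."""
    if u < 0.5:
        return theta + math.log(2 * u) / lam
    return theta - math.log(2 * (1 - u)) / lam

def laplace_cdf(x, theta, lam):
    """The two-piece CDF derived above."""
    if x <= theta:
        return 0.5 * math.exp(-lam * (theta - x))
    return 1 - 0.5 * math.exp(-lam * (x - theta))

theta, lam = 1.0, 2.0
print(laplace_inv(0.5, theta, lam))   # 1.0 = theta, the median
# F(F^{-1}(u)) should recover u for any u in (0,1):
print(all(abs(laplace_cdf(laplace_inv(u, theta, lam), theta, lam) - u) < 1e-9
          for u in [0.1, 0.3, 0.5, 0.7, 0.9]))   # True
```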
Example 3  [math]F(x) = x^5[/math]:
Given a CDF of X: [math]F(x) = x^5[/math], transform U~U[0,1].
Sol:
Let [math]y=x^5[/math], solve for x: [math]x=y^\frac {1}{5}[/math]. Therefore, [math]F^{-1} (x) = x^\frac {1}{5}[/math]
Hence, to obtain a value of x from F(x), we first draw u from a uniform distribution, then apply the inverse function of F(x) and set
[math]x= u^\frac{1}{5}[/math]
Algorithm:
Steps:
Step 1: Draw U ~ rand[0, 1];
Step 2: X=U^(1/5);
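A short check of this example in Python (a sketch; the course uses MATLAB): since F(x) = x^5 is the CDF, the fraction of generated values at or below 0.5 should be near F(0.5) = 0.5^5 = 0.03125:

```python
import random

random.seed(0)
xs = [random.random() ** (1 / 5) for _ in range(100_000)]  # X = U^(1/5)
frac = sum(x <= 0.5 for x in xs) / len(xs)
print(frac)   # should be near 0.5**5 = 0.03125
```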
Example 4  BETA(1,β):
Given u~U[0,1], generate x from BETA(1,β)
Solution:
[math]F(x)= 1-(1-x)^\beta[/math],
[math]u= 1-(1-x)^\beta[/math]
Solve for x:
[math](1-x)^\beta = 1-u[/math],
[math]1-x = (1-u)^\frac {1}{\beta}[/math],
[math]x = 1-(1-u)^\frac {1}{\beta}[/math]
let β=3, use Matlab to construct N=1000 observations from Beta(1,3)
Matlab Code:
>> u = rand(1,1000);
>> x = 1-(1-u).^(1/3);
>> hist(x,50)
>> mean(x)
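The MATLAB run above can be mirrored in Python (an illustrative sketch). The mean of Beta(1, β) is 1/(1+β), so with β = 3 the sample mean should be near 0.25:

```python
import random

random.seed(0)
beta = 3
# inverse transform: x = 1 - (1-u)^(1/beta)
xs = [1 - (1 - random.random()) ** (1 / beta) for _ in range(100_000)]
print(sum(xs) / len(xs))   # near 1/(1+beta) = 0.25
```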
Example 5  Estimating [math]\pi[/math]:
Let's use rand() and Monte Carlo Method to estimate [math]\pi[/math]
N= total number of points
N_{c} = total number of points inside the circle
Prob[(x,y) lies in the circle] = [math]\frac {Area(circle)}{Area(square)}[/math]
If we take a square of side 2, the inscribed circle has radius 1 and area [math]\pi (\frac {2}{2})^2 =\pi[/math], while the square has area [math]2^2=4[/math], so this probability is [math]\pi/4[/math].
Thus [math]\pi \approx 4(\frac {N_c}{N})[/math]
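The estimate described above can be sketched in Python (the course uses MATLAB's rand(); the structure is the same):

```python
import random

random.seed(0)
N = 100_000
# Draw points uniformly in the square [-1,1] x [-1,1]; the inscribed
# circle has radius 1, so P(inside circle) = pi/4.
Nc = sum(1 for _ in range(N)
         if random.uniform(-1, 1) ** 2 + random.uniform(-1, 1) ** 2 <= 1)
est = 4 * Nc / N
print(est)   # approximately pi
```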
For example, UNIF(a,b)
[math]y = F(x) = (x - a)/(b - a)[/math]
[math]x = (b - a) y + a[/math]
[math]X = a + (b - a) U[/math]
where U is UNIF(0,1)
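In Python (a minimal sketch; the endpoints a = 2, b = 5 are my choice) the same transformation reads:

```python
import random

random.seed(0)
a, b = 2.0, 5.0
xs = [a + (b - a) * random.random() for _ in range(10_000)]  # X = a + (b-a)U
print(all(a <= x < b for x in xs))   # True: values lie in [a, b)
print(sum(xs) / len(xs))             # near (a+b)/2 = 3.5
```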
Limitations:
1. This method requires the inverse cdf; not all CDFs are strictly increasing and easily invertible, and the generalized inverse can be hard to work with.
2. It may be impractical since some CDF's and/or integrals are not easy to compute such as Gaussian distribution.
We learned how to prove the inverse transform theorem and how to use the uniform distribution to obtain a value of x from F(x). We can also use the uniform distribution with the inverse method to generate other distributions, and then look at a histogram of the generated values to determine which distribution the sample resembles.
Probability Distribution Function Tool in MATLAB
disttool #shows different distributions
This command allows users to explore the effect of changes of parameters on the plot of either a CDF or PDF.
Changing the values of mu and sigma changes the location, spread, and skewness of the graph.
Class 3  Tuesday, May 14
Recall the Inverse Transform Method
To sample X with CDF F(x),
1. Draw u~U(0,1)
2. X = F^{-1}(u)
Proof
First note that
[math]P(U\leq a)=a, \forall a\in[0,1][/math]
 [math]P(X\leq x)[/math]
[math]= P(F^{-1}(U)\leq x)[/math] (since [math]X= F^{-1}(U)[/math] by the inverse method)
[math]= P(F(F^{-1}(U))\leq F(x))[/math] (since [math]F [/math] is monotonically increasing)
[math]= P(U\leq F(x)) [/math] (since [math] P(U\leq a)= a[/math] for [math]U \sim U(0,1), a \in [0,1][/math], this is explained further below)
[math]= F(x) , \text{ where } 0 \leq F(x) \leq 1 [/math]
This is the c.d.f. of X.
Note: that the CDF of a U(a,b) random variable is:
 [math] F(x)= \begin{cases} 0 & \text{for }x \lt a \\[8pt] \frac{xa}{ba} & \text{for }a \le x \lt b \\[8pt] 1 & \text{for }x \ge b \end{cases} [/math]
Thus, for [math] U [/math] ~ [math]U(0,1) [/math], we have [math]P(U\leq 1) = 1[/math] and [math]P(U\leq 1/2) = 1/2[/math].
More generally, we see that [math]P(U\leq a) = a[/math].
For this reason, we had [math]P(U\leq F(x)) = F(x)[/math].
Reminder:
This is only for the uniform distribution [math] U \sim Unif [0,1] [/math]
[math] P (U \le 1) = 1 [/math]
[math] P (U \le 0.5) = 0.5 [/math]
[math] P (U \le a) = a [/math]
Note that a single point carries no probability mass (i.e. [math]u \leq 0.5[/math] is the same as [math]u \lt 0.5[/math]). More formally, this says that [math] P(X = x) = F(x) - \lim_{s \to x^-}F(s)[/math], which equals zero for any continuous random variable.
Limitations of the Inverse Transform Method
Though this method is very easy to use and apply, it does have a major disadvantage/limitation:
 We need to find the inverse cdf [math] F^{-1}(\cdot) [/math]. In some cases the inverse function does not exist, or is difficult to find.
For example, it is too difficult to find the inverse cdf of the Gaussian distribution, so we must find another method to sample from the Gaussian distribution.
Discrete Case
The same technique can be used for discrete case. We want to generate a discrete random variable x, that has probability mass function:
 [math]\begin{align}P(X = x_i) &{}= p_i \end{align}[/math]
 [math]x_0 \leq x_1 \leq x_2 \dots \leq x_n[/math]
 [math]\sum p_i = 1[/math]
Algorithm for applying Inverse Transformation Method in Discrete Case (Procedure):
1. Define a probability mass function for [math]x_{i}[/math] where i = 1,....,k. Note: k could grow infinitely.
2. Generate a uniform random number U, [math] U~ \sim~ Unif [0,1] [/math]
3. If [math]U\leq p_{0}[/math], deliver [math]X = x_{0}[/math]
4. Else, if [math]U\leq p_{0} + p_{1} [/math], deliver [math]X = x_{1}[/math]
5. Repeat the process until we reach [math]U\leq p_{0} + p_{1} + \cdots + p_{k}[/math]; deliver [math]X = x_{k}[/math]
Note that after generating a random U, the value of X can be determined by finding the interval [math](F(x_{j-1}),F(x_{j})][/math] in which U lies.
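The procedure above amounts to comparing U against running cumulative sums of the p_i. A general Python sketch (the function name is mine; the course uses MATLAB), tried on the pmf P(X=0)=0.3, P(X=1)=0.2, P(X=2)=0.5:

```python
def discrete_inverse_transform(u, xs, ps):
    """Return x_j such that F(x_{j-1}) < u <= F(x_j)."""
    cumulative = 0.0
    for x, p in zip(xs, ps):
        cumulative += p
        if u <= cumulative:
            return x
    return xs[-1]   # guard against floating-point round-off

xs = [0, 1, 2]
ps = [0.3, 0.2, 0.5]
print(discrete_inverse_transform(0.25, xs, ps))  # 0, since 0.25 <= 0.3
print(discrete_inverse_transform(0.45, xs, ps))  # 1, since 0.3 < 0.45 <= 0.5
print(discrete_inverse_transform(0.90, xs, ps))  # 2, since 0.5 < 0.90 <= 1
```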
Example 3.0:
Generate a random variable from the following probability function:
x  -2  -1  0  1  2 
f(x)  0.1  0.5  0.07  0.03  0.3 
Answer:
1. Gen U~U(0,1)
2. If U < 0.5 then output -1
else if U < 0.8 then output 2
else if U < 0.9 then output -2
else if U < 0.97 then output 0, else output 1
Example 3.1 (from class): (Coin Flipping Example)
We want to simulate a coin flip. We have U~U(0,1) and X = 0 or X = 1.
We can define the U function so that:
If U <= 0.5, then X = 0
and if 0.5 < U <= 1, then X =1.
This allows the probability of Heads occurring to be 0.5 and is a good generator of a random coin flip.
[math] U~ \sim~ Unif [0,1] [/math]
 [math]\begin{align} P(X = 0) &{}= 0.5\\ P(X = 1) &{}= 0.5\\ \end{align}[/math]
The answer is:
 [math] x = \begin{cases} 0, & \text{if } U\leq 0.5 \\ 1, & \text{if } 0.5 \lt U \leq 1 \end{cases}[/math]
 Code
>>for ii=1:1000
    u=rand;
    if u<0.5
       x(ii)=0;
    else
       x(ii)=1;
    end
  end
>>hist(x)
Note: the role of the semicolon in Matlab: Matlab will not print out the result of a line that ends in a semicolon, and will print it otherwise.
Example 3.2 (From class):
Suppose we have the following discrete distribution:
 [math]\begin{align} P(X = 0) &{}= 0.3 \\ P(X = 1) &{}= 0.2 \\ P(X = 2) &{}= 0.5 \end{align}[/math]
The cumulative distribution function (cdf) for this distribution is then:
 [math] F(x) = \begin{cases} 0, & \text{if } x \lt 0 \\ 0.3, & \text{if } x \lt 1 \\ 0.5, & \text{if } x \lt 2 \\ 1, & \text{if } x \ge 2 \end{cases}[/math]
Then we can generate numbers from this distribution like this, given [math]U \sim~ Unif[0, 1][/math]:
 [math] x = \begin{cases} 0, & \text{if } U\leq 0.3 \\ 1, & \text{if } 0.3 \lt U \leq 0.5 \\ 2, & \text{if } 0.5 \lt U\leq 1 \end{cases}[/math]
"Procedure"
1. Draw U~u (0,1)
2. if U<=0.3 deliver x=0
3. else if 0.3<U<=0.5 deliver x=1
4. else 0.5<U<=1 deliver x=2
Can you find a faster way to run this algorithm? Consider:
 [math] x = \begin{cases} 2, & \text{if } U\leq 0.5 \\ 1, & \text{if } 0.5 \lt U \leq 0.7 \\ 0, & \text{if } 0.7 \lt U\leq 1 \end{cases}[/math]
The logic for this is that U is most likely to fall into the largest interval. Thus by checking the largest interval first (in this case the one of length 0.5, for x = 2) we can improve the expected run time of this algorithm. Could this algorithm be improved further using the same logic?
 Code (as shown in class)
Use Editor window to edit the code
>>close all
>>clear all
>>for ii=1:1000
      u=rand;
      if u<=0.3
          x(ii)=0;
      elseif u<=0.5
          x(ii)=1;
      else
          x(ii)=2;
      end
  end
>>size(x)
>>hist(x)
Example 3.3: Generating a random variable from pdf
 [math] f_{x}(x) = \begin{cases} 2x, & \text{if } 0\leq x \leq 1 \\ 0, & \text{if } otherwise \end{cases}[/math]
 [math] F_{x}(x) = \begin{cases} 0, & \text{if } x \lt 0 \\ \int_{0}^{x}2sds = x^{2}, & \text{if } 0\leq x \leq 1 \\ 1, & \text{if } x \gt 1 \end{cases}[/math]
 [math]\begin{align} U = x^{2}, X = F^{-1}(U) = U^{\frac{1}{2}}\end{align}[/math]
Example 3.4: Generating a Bernoulli random variable
 [math]\begin{align} P(X = 1) = p, P(X = 0) = 1 - p\end{align}[/math]
 [math] F(x) = \begin{cases} 1-p, & \text{if } x \lt 1 \\ 1, & \text{if } x \ge 1 \end{cases}[/math]
1. Draw [math] U~ \sim~ Unif [0,1] [/math]
2. [math]
X = \begin{cases}
0, & \text{if } 0 \lt U \lt 1-p \\
1, & \text{if } 1-p \le U \lt 1
\end{cases}[/math]
Example 3.5: Generating Binomial(n,p) Random Variable
[math] \text{use } p\left( x=i+1\right) =\dfrac {n-i} {i+1}\cdot\dfrac {p} {1-p}\,p\left( x=i\right) [/math]
Step 1: Generate a random number [math]U[/math].
Step 2: [math]c = \frac {p}{1-p}[/math], [math]i = 0[/math], [math]pr = (1-p)^n[/math], [math]F = pr[/math]
Step 3: If U<F, set X = i and stop,
Step 4: [math] pr = \, {\frac {c(n-i)}{(i+1)}} {pr}, F = F +pr, i = i+1[/math]
Step 5: Go to step 3
 Note: These steps can be found in Simulation 5th Ed. by Sheldon Ross.
 Note: Another method by seeing the Binomial as a sum of n independent Bernoulli random variables.
Step 1: Generate n uniform numbers U1 ... Un.
Step 2: Let X be the number of the U<sub>i</sub> that satisfy [math]U_i \leq p[/math], i.e. [math]X = \sum_{i=1}^{n} I(U_i \leq p)[/math], where p is the probability of success.
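Both recipes can be sketched in Python (the course's own snippets are in Matlab; the function names and the optional `u` argument here are illustrative only):

```python
import random

def binomial_inverse(n, p, u=None):
    """Inverse-transform sample from Binomial(n, p), using the recursion
    P(X = i+1) = P(X = i) * c * (n - i) / (i + 1) with c = p / (1 - p)."""
    if u is None:
        u = random.random()
    c = p / (1.0 - p)
    i = 0
    pr = (1.0 - p) ** n          # P(X = 0)
    F = pr                       # running CDF
    while u >= F:                # stop as soon as U < F
        pr = pr * c * (n - i) / (i + 1)
        F += pr
        i += 1
    return i

def binomial_bernoulli(n, p):
    """Binomial(n, p) as a sum of n independent Bernoulli(p) trials."""
    return sum(random.random() <= p for _ in range(n))
```

Note that with p = 1/2 the constant c is 1, so each step of the recursion just rescales by (n - i)/(i + 1).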
Example 3.6: Generating a Poisson random variable
Let X ~ Poi(u). Write an algorithm to generate X. The pmf of a Poisson is:
 [math]\begin{align} f(x) = \frac {\, e^{-u} u^x}{x!} \end{align}[/math]
We know that
 [math]\begin{align} P_{x+1} = \frac {\, e^{-u} u^{x+1}}{(x+1)!} \end{align}[/math]
The ratio is [math]\begin{align} \frac {P_{x+1}}{P_x} = ... = \frac {u}{{x+1}} \end{align}[/math] Therefore, [math]\begin{align} P_{x+1} = \, {\frac {u}{x+1}} P_x\end{align}[/math]
Algorithm:
1) Generate U ~ U(0,1)
2) [math]\begin{align} X = 0 \end{align}[/math]
[math]\begin{align} F = P(X = 0) = e^{-u}*u^0/{0!} = e^{-u} = p \end{align}[/math]
3) If U<F, output x
Else, [math]\begin{align} p = \frac{u}{x+1} \cdot p \end{align}[/math]
[math]\begin{align} F = F + p \end{align}[/math]
[math]\begin{align} x = x + 1 \end{align}[/math]
4) Go to 3 (the same U is reused; only the running CDF F grows)
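A Python sketch of this Poisson generator (hedged: the loop returns to the comparison step with the same U, as in Ross's version of the algorithm, rather than drawing a new U on each pass):

```python
import math
import random

def poisson_inverse(mu, u=None):
    """Inverse-transform sample from Poisson(mu), using
    P(X = x+1) = P(X = x) * mu / (x + 1)."""
    if u is None:
        u = random.random()
    x = 0
    p = math.exp(-mu)     # P(X = 0)
    F = p                 # running CDF
    while u >= F:         # same U throughout; only F grows
        p = p * mu / (x + 1)
        F += p
        x += 1
    return x
```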
Acknowledgements: This is from Stat 340 Winter 2013
Example 3.7: Generating Geometric Distribution:
Consider Geo(p), where p is the probability of success, and define the random variable X as the total number of trials required to achieve the first success, x = 1, 2, 3, .... The pmf is [math]P(X=x_i) = \, p (1-p)^{x_{i}-1}[/math] and the CDF is [math]F(x)=P(X \leq x)=1-P(X\gt x) = 1-(1-p)^x[/math], since P(X > x) is the probability of at least x failures before we observe the first success. Now consider the inverse transform:
 [math] x = \begin{cases} 1, & \text{if } U\leq p \\ 2, & \text{if } p \lt U \leq 1-(1-p)^2 \\ 3, & \text{if } 1-(1-p)^2 \lt U\leq 1-(1-p)^3 \\ \vdots \\ k, & \text{if } 1-(1-p)^{k-1} \lt U\leq 1-(1-p)^k \end{cases}[/math]
Note: Unlike the continuous case, the discrete inversetransform method can always be used for any discrete distribution (but it may not be the most efficient approach)
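Since F(x) = 1 - (1-p)^x has a closed form, the case table above collapses to a single formula, X = ceil(log(1-U)/log(1-p)). A small Python sketch (the `max(1, ...)` guard for the edge case U = 0 is my own addition):

```python
import math
import random

def geometric_inverse(p, u=None):
    """Inverse-transform sample from Geo(p): the smallest x with
    1 - (1-p)^x >= U is ceil(log(1 - U) / log(1 - p))."""
    if u is None:
        u = random.random()
    return max(1, math.ceil(math.log(1.0 - u) / math.log(1.0 - p)))
```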
General Procedure
1. Draw U ~ U(0,1)
2. If [math]U \leq P_{0}[/math] deliver [math]x = x_{0}[/math]
3. Else if [math]U \leq P_{0} + P_{1}[/math] deliver [math]x = x_{1}[/math]
4. Else if [math]U \leq P_{0} + P_{1} + P_{2} [/math] deliver [math]x = x_{2}[/math]
...
Else if [math]U \leq P_{0} + ... + P_{k} [/math] deliver [math]x = x_{k}[/math]
===Inverse Transform Algorithm for Generating a Binomial(n,p) Random Variable(from textbook)===
step 1: Generate a random number U
step 2: c=p/(1-p), i=0, pr=(1-p)^{n}, F=pr.
step 3: If U<F, set X=i and stop.
step 4: pr =[c(n-i)/(i+1)]pr, F=F+pr, i=i+1.
step 5: Go to step 3.
Problems
1. We have to find [math] F^{-1} [/math]
2. For many distributions, such as Gaussian, it is too difficult to find the inverse of [math] F(x)[/math].
Flipping a coin is a discrete case of the uniform distribution, and the code above flips a coin 1000 times; the result is close to the expected value of 0.5.
Example 3.2, another discrete distribution, shows how to sample from the values 0, 1 and 2 by partitioning the unit interval according to their probabilities.
Example 3.3 uses the inverse method to assign a probability range to each random variable.
Summary of Inverse Transform Method
Problem: generate samples from various types of distributions.
Plan:
Continuous case:
 find the CDF F
 find the inverse F^{-1}
 generate a list of uniformly distributed numbers {u}
 {F^{-1}(u)} is what we want
Matlab Instruction
>>u=rand(1,1000);
>>hist(u)
>>x=-(log(1-u))/2;
>>size(x)
>>figure
>>hist(x)
Discrete case:
 generate a list of uniformly distributed numbers {u}
 set [math] X=x_i [/math] if [math] F(x_{i-1})\lt U\leq F(x_i) [/math]
 the resulting list {x_i} is what we want
Matlab Instruction
>>for ii=1:1000
      u=rand;
      if u<0.5
          x(ii)=0;
      else
          x(ii)=1;
      end
  end
>>hist(x)
AcceptanceRejection Method
Although the inverse transformation method does allow us to transform the uniform distribution, it has two limitations:
 Not all CDFs have a closed-form inverse
 For some distributions, such as the Gaussian, it is too difficult to find the inverse
To generate random samples from such distributions, we use other methods, such as the Acceptance-Rejection Method. This method is useful when the inverse transform method is impractical. The basic idea is to find an alternative probability distribution, with density function g(x), that we already know how to sample from.
Suppose we want to draw random sample from a target density function f(x), x∈S_{x}, where S_{x} is the support of f(x). If we can find some constant c(≥1) (In practice, we prefer c as close to 1 as possible) and a density function g(x) having the same support S_{x} so that f(x)≤cg(x), ∀x∈S_{x}, then we can apply the procedure for AcceptanceRejection Method. Typically we choose a density function that we already know how to sample from for g(x).
The main logic behind the AcceptanceRejection Method is that:
1. We want to generate sample points from an unknown distribution, say f(x).
2. We use [math]\,cg(x)[/math] to generate points so that we have more points than f(x) could ever generate for all x. (where c is a constant, and g(x) is a known distribution)
3. For each value of x, we accept and reject some points based on a probability, which will be discussed below.
Note: If the red line were only g(x) as opposed to [math]\,c g(x)[/math] (i.e. c=1), then [math]g(x) \geq f(x)[/math] for all values of x would hold if and only if g and f are the same function. This is because both pdfs integrate to 1, so one cannot lie strictly above the other; hence [math]g(x) \ngeqq f(x)[/math] ∀x.
Also remember that [math]\,c g(x)[/math] always generates more candidate points than we need. Thus we need an approach for accepting each candidate with the proper probability.
c must be chosen so that [math]f(x)\leqslant c g(x)[/math] for all value of x. c can only equal 1 when f and g have the same distribution. Otherwise:
Either use a software package to test if [math]f(x)\leqslant c g(x)[/math] for an arbitrarily chosen c > 0, or:
1. Find first and second derivatives of f(x) and g(x).
2. Identify and classify all local and absolute maximums and minimums, using the First and Second Derivative Tests, as well as all inflection points.
3. Verify that [math]f(x)\leqslant c g(x)[/math] at all the local maximums as well as the absolute maximums.
4. Verify that [math]f(x)\leqslant c g(x)[/math] at the tail ends by calculating [math]\lim_{x \to +\infty} \frac{f(x)}{\, c g(x)}[/math] and [math]\lim_{x \to \infty} \frac{f(x)}{\, c g(x)}[/math] and seeing that they are both < 1. Use of L'Hopital's Rule should make this easy, since both f and g are p.d.f's, resulting in both of them approaching 0.
5. Efficiency: the number of times N that steps 1 and 2 need to be called (i.e. the number of iterations needed to successfully generate X) is a random variable with a geometric distribution with success probability [math]p=P(U\leq f(Y)/(cg(Y)))[/math], so [math]P(N=n)=(1-p)^{n-1}p[/math] for [math]n\geq 1[/math]. Thus on average the number of iterations required is E(N)=1/p.
c should be close to the maximum of f(x)/g(x), not just some arbitrarily picked large number. Otherwise, the Acceptance-Rejection method will have more rejections, since the acceptance probability [math]\frac{f(x)}{c g(x)}[/math] will be close to zero. This will render our algorithm inefficient.
The expected number of iterations of the algorithm required to generate one X is c.
Note:
1. Values around x_{1} are sampled more often under cg(x) than under f(x), so there will be more samples than we actually need. When [math]\frac{f(y)}{\, c g(y)}[/math] is small, the acceptance-rejection step corrects for this: in the region above x_{1}, we should accept less and reject more.
2. Values around x_{2}: the number of samples drawn and the number we need are much closer, so in the region above x_{2} we accept more. In that region, g(x) and f(x) are comparable.
3. The constant c is needed to adjust the height of g(x) to ensure that cg(x) lies above f(x) everywhere. Beyond that, it is best to keep the number of rejected points small for maximum efficiency.
Another way to understand why the acceptance probability is [math]\frac{f(y)}{\, c g(y)}[/math] is by thinking of areas. From the graph above, the target function lies under the proposed function c g(y). Therefore, [math]\frac{f(y)}{\, c g(y)}[/math] is the proportion of the area under c g(y) that also lies under f(y). We accept sample points for which u is less than [math]\frac{f(y)}{\, c g(y)}[/math] because those sample points are guaranteed to fall in the part of the area under c g(y) that contains f(y).
There are 2 cases that are possible:
More than enough sample points: [math]c g(x) \gt f(x) [/math]
A similar or equal number of points: [math]c g(x) = f(x) [/math]
There is 1 case that is not possible:
Fewer points than needed, which would require [math] f(x) \gt c g(x) [/math]
Procedure
 Draw Y~g(.)
 Draw U~u(0,1) (Note: U and Y are independent)
 If [math]u\leq \frac{f(y)}{cg(y)}[/math] (which is [math]P(accepted|y)[/math]) then x=y, else return to Step 1
Note: Recall [math]P(U\leq a)=a[/math]. Thus by comparing u and [math]\frac{f(y)}{\, c g(y)}[/math], we can get a probability of accepting y at these points. For instance, at some points that cg(x) is much larger than f(x), the probability of accepting x=y is quite small.
ie. At X_{1}, low probability to accept the point since f(x) is much smaller than cg(x).
At X_{2}, high probability to accept the point. [math]P(U\leq a)=a[/math] in Uniform Distribution.
Note: Since U is uniform between 0 and 1, if for some y we had [math]cg(y) \lt f(y)[/math], then [math]\frac{f(y)}{g(y)} \gt c[/math] and the condition [math]U\leq \frac{f(y)}{cg(y)}[/math] would hold for every U. This is why c must satisfy [math]c\geq \frac{f(y)}{g(y)}[/math] for all y.
These notes introduce the relationship between cg(x) and f(x), prove why it holds, and show how to read the graph to decide where to accept or reject a candidate value x.
In the example, x1 is a point with a low acceptance probability and x2 a point with a high acceptance probability.
Some notes on the constant C
1. C is chosen such that [math] c g(y)\geq f(y)[/math]; that is, [math] c g(y)[/math] will always dominate [math]f(y)[/math]. Because of this,
C will always be greater than or equal to one, and will equal one if and only if the proposal distribution and the target distribution are the same. It is normally best to choose C such that [math] c g(y)[/math] touches [math] f(y)[/math] at the maximum of [math]f(y)/g(y)[/math], so the envelope fits as tightly as possible.
2. [math] \frac {1}{C} [/math] is the ratio of the area under [math] f(y)[/math] to the area under [math] c g(y)[/math], and is the acceptance rate of the points generated. For example, if [math] \frac {1}{C} = 0.7[/math] then on average, 70 percent of all points generated are accepted.
3. C is the average number of times Y is generated from g .
Theorem
Let [math]f: \R \rightarrow [0,+\infty][/math] be a welldefined pdf, and [math]\displaystyle Y[/math] be a random variable with pdf [math]g: \R \rightarrow [0,+\infty][/math] such that [math]\exists c \in \R^+[/math] with [math]f \leq c \cdot g[/math]. If [math]\displaystyle U \sim~ U(0,1)[/math] is independent of [math]\displaystyle Y[/math], then the random variable defined as [math]X := Y \vert U \leq \frac{f(Y)}{c \cdot g(Y)}[/math] has pdf [math]\displaystyle f[/math], and the condition [math]U \leq \frac{f(Y)}{c \cdot g(Y)}[/math] is denoted by "Accepted".
Proof
Recall the conditional probability formulas:
[math]\begin{align}
P(A|B)=\frac{P(A \cap B)}{P(B)}, \text{ or }P(A|B)=\frac{P(B|A)P(A)}{P(B)} \text{ for pmf}
\end{align}[/math]
[math]P(y|accepted)=\frac{P(accepted|y)P(y)}{P(accepted)}[/math]
Based on Step 1 of the procedure:
[math]P(y)=g(y)[/math]
[math]P(accepted|y)=\frac{f(y)}{cg(y)}[/math]
(the larger this value is, the larger the chance y will be accepted)
[math]
\begin{align}
P(accepted)&=\int_y\ P(accepted|y)P(y)dy\\
&=\int_y\ \frac{f(s)}{cg(s)}g(s)ds\\
&=\frac{1}{c} \int_y\ f(s) ds\\
&=\frac{1}{c}
\end{align}[/math]
Therefore:
[math]\begin{align}
P(x)&=P(y|accepted)\\
&=\frac{\frac{f(y)}{cg(y)}g(y)}{1/c}\\
&=\frac{\frac{f(y)}{c}}{1/c}\\
&=f(y)\end{align}[/math]
Here is an alternative introduction of AcceptanceRejection Method
Comments:
The Acceptance-Rejection Method is not good for all cases. One limitation of this method is that sometimes many points will be rejected. Another obvious con is that it can be very hard to pick the [math]g(y)[/math] and the constant [math]c[/math] in some cases. We have to pick the SMALLEST c such that [math]f(x) \leq cg(x)[/math], or the algorithm will not be efficient: a larger c makes [math]f(x)/cg(x)[/math] smaller, so the probability that [math]u \leq f(x)/cg(x)[/math] goes down and many points are rejected, making the algorithm inefficient.
Note: When [math]f(y)[/math] is very different than [math]g(y)[/math], it is less likely that the point will be accepted as the ratio above would be very small and it will be difficult for [math]U[/math] to be less than this small value.
An example would be when the target function ([math]f[/math]) has a spike or several spikes in its domain  this would force the known distribution ([math]g[/math]) to have density at least as large as the spikes, making the value of [math]c[/math] larger than desired. As a result, the algorithm would be highly inefficient.
AcceptanceRejection Method
Example 1 (discrete case)
We wish to generate X~Bi(2,0.5), assuming that we cannot generate this directly.
We use a discrete distribution DU[0,2] to approximate this.
[math]f(x)=Pr(X=x)={2 \choose x}(0.5)^2[/math]
{| class="wikitable"
|-
! [math]x[/math] !! 0 !! 1 !! 2
|-
| [math]f(x)[/math] || 1/4 || 1/2 || 1/4
|-
| [math]g(x)[/math] || 1/3 || 1/3 || 1/3
|-
| [math]f(x)/g(x)[/math] || 3/4 || 3/2 || 3/4
|-
| [math]f(x)/(cg(x))[/math] || 1/2 || 1 || 1/2
|}
Since we need [math]c \geq f(x)/g(x)[/math]
We need [math]c=3/2[/math]
Therefore, the algorithm is:
1. Generate [math]u,v \sim U(0,1)[/math]
2. Set [math]y= \lfloor 3u \rfloor[/math] (this uses the uniform distribution to generate DU[0,2])
3. If [math]y=0[/math] and [math]v\lt 1/2[/math], output 0
If [math]y=2[/math] and [math]v\lt 1/2[/math], output 2
Else if [math]y=1[/math], output 1
Otherwise, return to step 1
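A Python sketch of this discrete A-R algorithm (the acceptance probabilities f(y)/(c g(y)) = 1/2, 1, 1/2 come from the table above; with c = 3/2, the candidate y = 1 is always accepted):

```python
import math
import random

def binomial_2_half_AR():
    """A-R sample from Bi(2, 0.5) using the DU[0,2] proposal and c = 3/2."""
    accept = {0: 0.5, 1: 1.0, 2: 0.5}        # f(y) / (c g(y))
    while True:
        y = math.floor(3 * random.random())  # y ~ DU[0, 2]
        v = random.random()
        if v < accept[y]:
            return y
```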
An elaboration of “c”
c is the expected number of times the code runs to output 1 random variable. Remember that when [math]u \lt f(x)/(cg(x))[/math] is not satisfied, we need to go over the code again.
Proof
Let [math]f(x)[/math] be the function we wish to generate from, but we cannot use inverse transform method to generate directly.
Let [math]g(x)[/math] be the helper function
Let [math]cg(x)\geq f(x)[/math]
Since we need to generate y from [math]g(x)[/math],
[math]Pr(select y)=g(y)[/math]
[math]Pr(\text{output } y|\text{selected } y)=Pr(u\lt f(y)/(cg(y)))= f(y)/(cg(y))[/math] (since u~Unif(0,1))
[math]Pr(\text{output } y)=Pr(\text{output } y_1|\text{selected } y_1)Pr(\text{select } y_1)+ \cdots + Pr(\text{output } y_n|\text{selected } y_n)Pr(\text{select } y_n)=1/c[/math]
Consider that we are asking for expected time for the first success, it is a geometric distribution with probability of success=1/c
Therefore, [math]E(X)=1/(1/c)=c[/math]
Acknowledgements: Some materials have been borrowed from notes from Stat340 in Winter 2013.
Conditional probability is used to prove that, given acceptance, the output has the pdf of the original distribution. The example shows how to choose c for the two functions [math]g(x)[/math] and [math]f(x)[/math].
Example of AcceptanceRejection Method
Generating a random variable having p.d.f.
[math]f(x) = 20x(1  x)^3, 0\lt x \lt 1 [/math]
Since this random variable (which is beta with parameters 2, 4) is concentrated in the interval (0, 1), let us consider the acceptancerejection method with
g(x) = 1, 0 < x < 1
To determine the constant c such that f(x)/g(x) <= c, we use calculus to determine the maximum value of
[math] f(x)/g(x) = 20x(1  x)^3 [/math]
Differentiation of this quantity yields
[math]\frac{d}{dx}\left[\frac{f(x)}{g(x)}\right]=20[(1-x)^3-3x(1-x)^2][/math]
Setting this equal to 0 shows that the maximal value is attained when x = 1/4, and thus,
[math]f(x)/g(x)\lt = 20*(1/4)*(3/4)^3=135/64=c [/math]
Hence,
[math]f(x)/cg(x)=(256/27)\,x(1-x)^3[/math]
and thus the simulation procedure is as follows:
1) Generate two random numbers U1 and U2 .
2) If [math]U_{2}\lt (256/27)U_{1}(1-U_{1})^{3}[/math], set X=U_{1} and stop; otherwise return to step 1). The average number of times that step 1) will be performed is c = 135/64.
(The above example is from http://www.cs.bgu.ac.il/~mps042/acceptance.htm, example 2.)
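The two-uniform procedure above, sketched in Python (a hedged translation; the constant c = 135/64 enters only through the ratio 256/27):

```python
import random

def beta_2_4_AR():
    """A-R sample from f(x) = 20 x (1-x)^3 on (0,1) with g = Unif(0,1);
    c = 135/64, so f(y)/(c g(y)) = (256/27) y (1-y)^3."""
    while True:
        u1 = random.random()                 # candidate y ~ g
        u2 = random.random()                 # acceptance draw
        if u2 < (256.0 / 27.0) * u1 * (1.0 - u1) ** 3:
            return u1
```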
Use the derivative to find the local maximum of f(x)/g(x); this yields the best (smallest) constant c for the acceptance-rejection method.
Another Example of AcceptanceRejection Method
Generate a random variable from:
[math]f(x)=3*x^2[/math], 0< x <1
Assume g(x) to be uniform over interval (0,1), where 0< x <1
Therefore:
[math]c = max(f(x)/(g(x)))= 3[/math]
The best constant c is [math]\max(f(x)/g(x))[/math]; this choice of c makes the area between f(x) and cg(x) as small as possible. Because g(.) is uniform on (0,1), g(x) = 1.
[math]f(x)/(cg(x))= x^2[/math]
Acknowledgement: this is example 1 from http://www.cs.bgu.ac.il/~mps042/acceptance.htm
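A minimal Python sketch of this example (acceptance probability f(y)/(c g(y)) = y^2):

```python
import random

def cube_density_AR():
    """A-R sample from f(x) = 3x^2 on (0,1) with g = Unif(0,1) and c = 3."""
    while True:
        y = random.random()       # candidate from the uniform proposal
        u = random.random()
        if u <= y ** 2:           # f(y) / (c g(y)) = y^2
            return y
```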
Class 4  Thursday, May 16
 When we want to find a target distribution, denoted as [math]f(x)[/math], we need to first find a proposal distribution [math]g(x)[/math] that is easy to sample from.
 The relationship between the proposal distribution and target distribution is: [math] c \cdot g(x) \geq f(x) [/math], where c is a constant. This means that the area of f(x) is under the area of [math] c \cdot g(x)[/math].
 Chance of acceptance is less if the distance between [math]f(x)[/math] and [math] c \cdot g(x)[/math] is big, and viceversa, we use [math] c [/math] to keep [math] \frac {f(x)}{c \cdot g(x)} [/math] below 1 (so [math]f(x) \leq c \cdot g(x)[/math]). Therefore, we must find the constant [math] C [/math] to achieve this.
 In other words, a [math]C[/math] is chosen to make sure [math] c \cdot g(x) \geq f(x) [/math]. However, it will not make sense if [math]C[/math] is simply chosen to be arbitrarily large. We need to choose [math]C[/math] such that [math]c \cdot g(x)[/math] fits [math]f(x)[/math] as tightly as possible.
 The constant c cannot be a negative number.
How to find C:
[math]\begin{align}
&c \cdot g(x) \geq f(x)\\
&c\geq \frac{f(x)}{g(x)} \\
&c= \max \left(\frac{f(x)}{g(x)}\right)
\end{align}[/math]
If [math]f[/math] and [math] g [/math] are continuous, we can find the extremum by taking the derivative and solve for [math]x_0[/math] such that:
[math] 0=\frac{d}{dx}\frac{f(x)}{g(x)}_{x=x_0}[/math]
Thus [math] c = \frac{f(x_0)}{g(x_0)} [/math]
 The logic behind this:
The AcceptanceRejection method involves finding a distribution that we know how to sample from (g(x)) and multiplying g(x) by a constant c so that [math]c \cdot g(x)[/math] is always greater than or equal to f(x). Mathematically, we want [math] c \cdot g(x) \geq f(x) [/math].
And it means c has to be greater or equal to [math]\frac{f(x)}{g(x)}[/math]. So the smallest possible c that satisfies the condition is the maximum value of [math]\frac{f(x)}{g(x)}[/math]
. If c is too large, the chance of acceptance of generated values will be small, thereby losing efficiency of the algorithm. Therefore, it is best to get the smallest possible c such that [math] c g(x) \geq f(x)[/math].
 For this method to be efficient, the constant c must be selected so that the rejection rate is low. (The efficiency for this method is [math]\left ( \frac{1}{c} \right )[/math])
 It is easy to show that, out of [math]N[/math] total trials, the expected number of acceptances is [math] \frac{N}{C} [/math].
 recall the acceptance rate is 1/c. (Not rejection rate)
 Let [math]X[/math] be the number of trials for an acceptance, [math] X \sim~ Geo(\frac{1}{c})[/math]
 [math]\mathbb{E}[X] = \frac{1}{\frac{1}{c}} = c [/math]
 The number of trials needed to generate a sample size of [math]N[/math] follows a negative binomial distribution. The expected number of trials needed is then [math]cN[/math].
 So far, the only distribution we know how to sample from is the UNIFORM distribution.
Procedure:
1. Choose [math]g(x)[/math] (simple density function that we know how to sample, i.e. Uniform so far)
The easiest case is UNIF(0,1). However, in other cases we need to generate UNIF(a,b). We may need to perform a linear transformation on the UNIF(0,1) variable.
2. Find a constant c such that :[math] c \cdot g(x) \geq f(x) [/math], otherwise return to step 1.
Recall the general procedure of AcceptanceRejection Method
 Let [math]Y \sim~ g(y)[/math]
 Let [math]U \sim~ Unif [0,1] [/math]
 If [math]U \leq \frac{f(Y)}{c \cdot g(Y)}[/math] then X=Y; else return to step 1 (This is not the way to find C. This is the general procedure.)
Example: Generate a random variable from the pdf
[math] f(x) = \begin{cases} 2x, & \mbox{if }0 \leqslant x \leqslant 1 \\ 0, & \mbox{otherwise} \end{cases} [/math]
We can note that this is a special case of Beta(2,1), where,
[math]beta(a,b)=\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}x^{a-1}(1-x)^{b-1}[/math]
where [math]\Gamma(n)=(n-1)![/math] if n is a positive integer
[math]\Gamma(z)=\int _{0}^{\infty }t^{z-1}e^{-t}dt[/math]
Aside: Beta function
In mathematics, the beta function, also called the Euler integral of the first kind, is a special function defined by
[math]B(x,y)=\int_0^1 \! {t^{x-1}}{(1-t)^{y-1}}\,dt[/math]
[math]beta(2,1)= \frac{\Gamma(3)}{\Gamma(2)\Gamma(1)}x^1 (1-x)^0 = 2x[/math]
[math]g=u(0,1)[/math]
[math]y=g[/math]
[math]f(x)\leq c\cdot g(x)[/math]
[math]c\geq \frac{f(x)}{g(x)}[/math]
[math]c = \max \frac{f(x)}{g(x)} [/math]
[math]c = \max \frac{2x}{1}, 0 \leq x \leq 1[/math]
Taking x = 1 gives the highest possible c, which is c=2
Note that c is a scalar greater than 1.
cg(x) is proposal dist, and f(x) is target dist.
Note: g follows uniform distribution, it only covers half of the graph which runs from 0 to 1 on yaxis. Thus we need to multiply by c to ensure that [math]c\cdot g[/math] can cover entire f(x) area. In this case, c=2, so that makes g runs from 0 to 2 on yaxis which covers f(x).
Comment:
From the picture above, we could observe that the area under f(x)=2x is a half of the area under the pdf of UNIF(0,1). This is why in order to sample 1000 points of f(x), we need to sample approximately 2000 points in UNIF(0,1).
And in general, if we want to sample n points from a distritubion with pdf f(x), we need to scan approximately [math]n\cdot c[/math] points from the proposal distribution (g(x)) in total.
Step
 Draw y~U(0,1)
 Draw u~U(0,1)
 if [math]u \leq \frac{(2\cdot y)}{(2\cdot 1)}, u \leq y,[/math] then [math] x=y[/math]
 Else go to Step 1
Note: In the above example, we sample 2 numbers. If second number (u) is less than or equal to first number (y), then accept x=y, if not then start all over.
Matlab Code
>>close all
>>clear all
>>ii=1;            % ii: number of accepted samples
>>jj=1;            % jj: number of generated samples
>>while ii<1000
      y=rand;
      u=rand;
      jj=jj+1;
      if u<=y
          x(ii)=y;
          ii=ii+1;
      end
  end
>>hist(x)
>>jj
jj = 2024          % should be around 2000
 *Note: The reason that a for loop is not used is that we need to continue the looping until we get 1000 successful samples. We will reject some samples during the process and therefore do not know the number of y we are going to generate.
 *Note2: In this example, we used c=2, which means we accept half of the points we generate on average. Generally speaking, 1/c would be the probability of acceptance, and an indicator of the efficiency of your chosen proposal distribution and algorithm.
 *Note3: We use while instead of for when looping because we do not know how many iterations are required to generate 1000 successful samples. We can view this as a negative binomial distribution so while the expected number of iterations required is n * c, it will likely deviate from this amount. We expect 2000 in this case.
 *Note4: If c=1, we will accept all points, which is the ideal situation. However, this is essentially impossible because if c = 1 then our distributions f(x) and g(x) must be identical, so we will have to be satisfied with as close to 1 as possible.
Use Inverse Method for this Example
 [math]F(x)=\int_0^x \! 2s\,ds={x^2}-0={x^2}[/math]
 [math]y=x^2[/math]
 [math]x=\sqrt y[/math]
 [math] F^{-1}\left (\, x \, \right) =\sqrt x[/math]
 Procedure
 1: Draw [math] U~ \sim~ Unif [0,1] [/math]
 2: [math] x=F^{-1}\left (\, u\, \right) =\sqrt u[/math]
Matlab Code
>>u=rand(1,1000);
>>x=u.^0.5;
>>hist(x)
Matlab Tip: A period, ".", meaning "element-wise", describes an operation performed on each element of a vector. In the example above, u.^0.5 takes the square root of every element of u. Without the period, ^ and * are matrix operations: for example, for the vectors a = [1 2 3] and b = [2 3 4], a.*b = [2 6 12], but a*b gives an error since the matrix dimensions do not agree.
Example for AR method:
Given [math] f(x)= \frac{3}{4} (1x^2), 1 \leq x \leq 1 [/math], use AR method to generate random number
Let [math]g = U(-1,1)[/math], so [math]g(x)=\frac{1}{2}[/math]
Let y ~ g. We need [math] cg(x)\geq f(x)[/math], i.e. [math]\frac{c}{2} \geq \frac{3}{4} (1-x^2)[/math], so [math]c=\max\, 2\cdot\frac{3}{4} (1-x^2) = \frac{3}{2} [/math]
The process:
 1: Draw U1 ~ U(0,1)
 2: Draw U2 ~ U(0,1)
 3: let [math] y = 2U_1 - 1 [/math]
 4: if [math]U_2 \leq \frac { \frac{3}{4} (1-y^2)} { \frac{3}{4}} = 1-y^2[/math], then x=y (note that [math]\frac{\frac{3}{4}(1-y^2)}{\frac{3}{4}}[/math] comes from [math]f(y)/(cg(y))[/math])
 5: else: return to step 1
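The five steps above in Python (a sketch under the stated setup: g = Unif(-1,1), c = 3/2, acceptance probability 1 - y^2):

```python
import random

def semiparabola_AR():
    """A-R sample from f(x) = (3/4)(1 - x^2) on [-1, 1]."""
    while True:
        u1 = random.random()
        u2 = random.random()
        y = 2.0 * u1 - 1.0        # Unif(0,1) -> Unif(-1,1)
        if u2 <= 1.0 - y ** 2:    # f(y) / (c g(y))
            return y
```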
Example of AcceptanceRejection Method
[math] f(x) = 3x^2, 0\lt x\lt 1 [/math] [math]g(x)=1, 0\lt x\lt 1[/math]
[math]c = \max \frac{f(x)}{g(x)} = \max \frac{3x^2}{1} = 3 [/math]
[math]\frac{f(x)}{c \cdot g(x)} = x^2[/math]
1. Generate two uniform numbers in the unit interval [math]U_1, U_2 \sim~ U(0,1)[/math]
2. If [math]U_2 \leqslant {U_1}^2[/math], accept [math]U_1[/math] as the random variable with pdf [math]f[/math], if not return to Step 1
We can also use [math]g(x)=2x[/math] for a more efficient algorithm
[math]c = \max \frac{f(x)}{g(x)} = \max \frac {3x^2}{2x} = \max \frac {3x}{2} = \frac{3}{2} [/math] (attained at x = 1). Use the inverse method to sample from [math]g(x)[/math]: [math]G(x)=x^2[/math], so generate [math]U_1[/math] from [math]U(0,1)[/math] and set [math]y=\sqrt{U_1}[/math]. Then [math]\frac{f(y)}{cg(y)} = \frac{3y^2}{\frac{3}{2}\cdot 2y} = y[/math].
1. Generate two uniform numbers in the unit interval [math]U_1, U_2 \sim~ U(0,1)[/math]
2. If [math]U_2 \leq \sqrt{U_1}[/math], accept [math]y=\sqrt{U_1}[/math] as the random variable with pdf [math]f[/math]; if not, return to Step 1
 Note: the function q(x) = c * g(x) is called an envelope or majorizing function.
To obtain a better proposal g(x), we can first assume a shape q(x) and then solve for the normalizing constant by integrating.
In the previous example, we first assume q(x) = 3x. To find the normalizing constant, we solve [math]k \int_0^1 3x \,dx = 1[/math], which gives k = 2/3. So, g(x) = k*q(x) = 2x.
Possible Limitations
This method could be computationally inefficient depending on the rejection rate. We may have to sample many points before
we get the 1000 accepted points. In the example we did in class relating the [math]f(x)=2x[/math],
we had to sample around 2070 points before we finally accepted 1000 sample points.
If the form of the proposal distribution, g, is very different from target distribution, f, then c is very large and the algorithm is not computationally efficient.
Acceptance  Rejection Method Application on Normal Distribution
[math]X \sim N(\mu,\sigma^2), \text{ or } X = \sigma Z + \mu, Z \sim~ N(0,1) [/math]
[math]\vert Z \vert[/math] has probability density function
[math]f(x) = \frac{2}{\sqrt{2\pi}} e^{-x^2/2}, x \geq 0[/math]
Proposal: [math]g(x) = e^{-x}, x \geq 0[/math]
Take h(x) = f(x)/g(x) and solve h'(x) = 0 to find the x at which h(x) is maximized.
Hence x=1 maximizes h(x) => [math]c = \sqrt{2e/\pi}[/math]
Thus [math]f(y)/cg(y) = e^{-(y-1)^2/2}[/math]
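A Python sketch of this |Z|-based normal generator (the Exp(1) candidate is drawn by inverse transform; attaching a random sign to convert |Z| into Z is an extra step not spelled out above):

```python
import math
import random

def standard_normal_AR():
    """A-R sample of |Z| with f(x) = sqrt(2/pi) exp(-x^2/2), proposal
    g(x) = exp(-x), c = sqrt(2e/pi), so f(y)/(c g(y)) = exp(-(y-1)^2/2).
    A random sign then turns |Z| into Z ~ N(0,1)."""
    while True:
        y = -math.log(1.0 - random.random())   # y ~ Exp(1) via inverse transform
        u = random.random()
        if u <= math.exp(-((y - 1.0) ** 2) / 2.0):
            return y if random.random() < 0.5 else -y
```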
This example shows how to compute the constant c between f(x) and g(x).
How to transform [math]U(0,1)[/math] to [math]U(a, b)[/math]
1. Draw U from [math]U(0,1)[/math]
2. Take [math]Y=(ba)U+a[/math]
3. Now Y follows [math]U(a,b)[/math]
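The transform in one line of Python (a trivial sketch):

```python
import random

def uniform_ab(a, b):
    """If U ~ Unif(0,1), then Y = (b - a) U + a ~ Unif(a, b)."""
    return (b - a) * random.random() + a
```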
Example: Generate a random variable z from the Semicircular density [math]f(x)= \frac{2}{\pi R^2} \sqrt{R^2-x^2}, -R\leq x\leq R[/math].
> Proposal distribution: UNIF(-R, R)
> We know how to generate using [math] U \sim UNIF (0,1) [/math]. Let [math] Y= 2RU-R=R(2U-1)[/math]; therefore Y follows [math]U(-R,R)[/math]
> In order to maximize the function we must maximize the top and minimize the bottom.
Now, we need to find c:
Since c=max[f(x)/g(x)], where
[math]f(x)= \frac{2}{\pi R^2} \sqrt{R^2-x^2}[/math], [math]g(x)=\frac{1}{2R}[/math], [math]-R\leq x\leq R[/math]
Thus, we have to maximize [math]R^2-x^2[/math].
=> When x=0, it will be maximized.
Therefore, [math]c=\frac{4}{\pi}[/math]. * Note: This also means that the probability of accepting a point is [math]\pi/4[/math].
We will accept the points with probability f(y)/[cg(y)]. Since [math]\frac{f(y)}{cg(y)}=\frac{\frac{2}{\pi R^{2}} \sqrt{R^{2}-y^{2}}}{\frac{4}{\pi}\cdot\frac{1}{2R}}=\frac{\frac{2}{\pi R^{2}} \sqrt{R^{2}-R^{2}(2U-1)^{2}}}{\frac{2}{\pi R}}[/math]
 Note: Y= R(2U1)
We can also get Y= R(2U-1) by using the formula y = a+(b-a)*u to transform U~(0,1) to U~(a,b). Letting a=-R and b=R, and substituting into y = a+(b-a)*u, we get Y= R(2U-1).
Thus, [math]\frac{f(y)}{cg(y)}=\sqrt{1-(2U-1)^{2}}[/math] * this is also the probability of accepting a point
The algorithm to generate random variable x is then:
1. Draw [math]\ U[/math] from [math]\ U(0,1)[/math]
2. Draw [math]\ U_{1}[/math] from [math]\ U(0,1)[/math]
3. If [math]U_{1} \leq \sqrt{1-(2U-1)^2}[/math], set [math]x = R(2U-1)[/math];
else return to step 1.
The condition is
[math] U_{1} \leq \sqrt{1-(2U-1)^2}[/math]
[math]\ U_{1}^2 \leq 1 - (2U-1)^2[/math]
[math]\ U_{1}^2 - 1 \leq -(2U-1)^2[/math]
[math]\ 1 - U_{1}^2 \geq (2U-1)^2[/math]
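A Python sketch of the semicircular sampler (the accepted value is y = R(2U - 1), per the algorithm above):

```python
import math
import random

def semicircular_AR(R):
    """A-R sample from f(x) = (2/(pi R^2)) sqrt(R^2 - x^2) on [-R, R],
    with g = Unif(-R, R) and c = 4/pi."""
    while True:
        u = random.random()
        u1 = random.random()
        if u1 <= math.sqrt(1.0 - (2.0 * u - 1.0) ** 2):
            return R * (2.0 * u - 1.0)
```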
One more example about AR method
(In this example, we will see how to determine the value of c when c is a function with unknown parameters instead of a value)
Let [math]f(x)=xe^{-x}, x\gt 0 [/math]
Use [math]g(x)=ae^{-ax}[/math] to generate the random variable
Solution: First of all, we need to find c
[math]cg(x)\geq f(x)[/math]
[math]c\geq \frac{f(x)}{g(x)}[/math]
[math]\frac{f(x)}{g(x)}=\frac{x}{a} e^{-(1-a)x}[/math]
Take the derivative with respect to x, and set it to 0 to get the maximum:
[math]\frac{1}{a} e^{-(1-a)x} - \frac{x}{a}(1-a) e^{-(1-a)x} = 0 [/math]
[math]x=\frac {1}{1-a}[/math]
[math]\frac {f(x)}{g(x)} = \frac {e^{-1}}{a(1-a)} [/math]
[math]\frac {f(0)}{g(0)} = 0[/math]
[math]\frac {f(\infty)}{g(\infty)} = 0[/math]
therefore, [math]c= \frac {e^{-1}}{a(1-a)}[/math]
In order to minimize c, we need to find the appropriate a.
Taking the derivative of c with respect to a and setting it to zero,
we get [math]a= \frac {1}{2}[/math], so
[math]c=\frac{4}{e}[/math]
Procedure:
1. Generate u, v ~ Unif(0,1)
2. Generate y from g; since g is exponential with rate a = 1/2, let [math]y=-2\ln(u)[/math]
3. If [math]v\lt \frac{f(y)}{c\cdot g(y)}[/math], output y
Else, go to 1
Acknowledgements: The example above is from Stat 340 Winter 2013 notes.
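The example above can also be checked numerically. Below is a minimal Python sketch (the function name sample_x_exp_x is our own) of the three-step procedure: it draws the Exp(1/2) proposal by inverse transform and accepts with probability f(y)/(c·g(y)), where c = 4/e.

```python
import math
import random

def sample_x_exp_x(n, seed=None):
    """Acceptance-rejection sampling from f(x) = x*exp(-x), x > 0,
    using the proposal g(x) = 0.5*exp(-0.5*x) and c = 4/e."""
    rng = random.Random(seed)
    c = 4 / math.e
    samples = []
    while len(samples) < n:
        u, v = rng.random(), rng.random()
        y = -2 * math.log(1 - u)          # inverse transform: Exp(1/2) draw
        f = y * math.exp(-y)              # target density
        g = 0.5 * math.exp(-0.5 * y)      # proposal density
        if v < f / (c * g):               # accept with probability f/(c*g)
            samples.append(y)
    return samples

xs = sample_x_exp_x(20000, seed=1)
print(sum(xs) / len(xs))                  # close to 2, the mean of Gamma(2,1)
```

Since f here is the Gamma(2,1) density, the sample mean of the accepted points should be close to 2.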
Summary of how to find the value of c
Let [math]h(x) = \frac {f(x)}{g(x)}[/math], and then we have the following:
1. First, take the derivative of h(x) with respect to x, set it to zero, and solve for x_{1};
2. Plug x_{1} into h(x) to get the value (or a function) of c, denoted c_{1};
3. Check the endpoints of x by substituting them into h(x);
4. (If c_{1} is a value, this step can be skipped.) Since we want the smallest value of c such that [math]f(x) \leq c\cdot g(x)[/math] for all x, we want the unknown parameter that minimizes c.
So take the derivative of c_{1} with respect to the unknown parameter (say k) and set it to zero to get the value of k.
Then substitute k to get the value of c_{1}. (Double check that [math]c_1 \geq 1[/math].)
5. Pick the maximum value of h(x) to be the value of c.
For the two examples above, we compare the target density f against a simple proposal g, compute [math]c=\max \frac {f(y)}{g(y)} [/math], and if [math]v\lt \frac {f(y)}{c\cdot g(y)}[/math], output y.
Summary of when to use the Acceptance-Rejection Method
1) When the inverse CDF cannot be computed or is too difficult to compute.
2) When f(x) can be evaluated up to a normalizing constant.
3) When a constant c with [math]f(x)\leq c\cdot g(x)[/math] is available.
4) When a uniform draw is available.
Interpretation of 'C'
We can use the value of c to calculate the acceptance rate as 1/c.
For instance, if c=1.5, then about 66.7% of candidate points will be accepted (1/1.5≈0.667); we can also say the efficiency of the method is 66.7%.
Class 5  Tuesday, May 21
Recall the example in the last lecture. The following code will generate a random variable required by the question.
 Code
>>close all
>>clear all
>>ii=1;
>>R=1;                        % R is a constant; e.g. with R=4 we would have a density between -4 and 4
>>while ii<1000
      u1 = rand;
      u2 = rand;
      y = R*(2*u2-1);
      if (1-u1^2) >= (2*u2-1)^2
          x(ii) = y;
          ii = ii + 1;        % increment ii for the next pass through the while loop
      end
  end
>>hist(x,20)                  % 20 is the number of bars
MATLAB tip: hist(x,y) plots a histogram of the data in x using y bars.
Discrete Examples
 Example 1
Generate random variable [math]X[/math] according to p.m.f
[math]\begin{align}
P(x &=1) &&=0.15 \\
P(x &=2) &&=0.25 \\
P(x &=3) &&=0.3 \\
P(x &=4) &&=0.1 \\
P(x &=5) &&=0.2 \\
\end{align}[/math]
The discrete case is analogous to the continuous case. Suppose we want to generate an X that is a discrete random variable with pmf f(x)=P(X=x). Suppose also that we use the discrete uniform distribution as our target distribution, then [math] g(x)= P(X=x) =0.2 [/math] for all X.
The following algorithm then yields our X:
Step 1. Draw discrete uniform distribution of 1, 2, 3, 4 and 5, [math]Y \sim~ g[/math].
Step 2. Draw [math]U \sim~ U(0,1)[/math].
Step 3. If [math]U \leq \frac{f(Y)}{c \cdot g(Y)}[/math], then X = Y ;
Else return to Step 1.
C can be found by maximizing the ratio :[math] \frac{f(x)}{g(x)} [/math]. To do this, we want to maximize [math] f(x) [/math] and minimize [math] g(x) [/math].
 [math]c = max \frac{f(x)}{g(x)} = \frac {0.3}{0.2} = 1.5 [/math]
Note: In this case [math]f(x)=P(X=x)=0.3[/math] (highest probability from the discrete probabilities in the question)
 [math]\frac{p(x)}{cg(x)} = \frac{p(x)}{1.5*0.2} = \frac{p(x)}{0.3} [/math]
Note: The U in Steps 2 and 3 above is independent of Y. The constant c indicates the rejection rate, i.e. the efficiency of the algorithm.
In the acceptance-rejection method for a pmf with a uniform proposal over the 5 values (1,2,3,4,5), g(x) is 0.2 for every value.
Remember that we always want to choose [math] cg [/math] to be equal to or greater than [math] f [/math], but as close as possible.
Limitation: If the shape of the proposal distribution g is very different from the target distribution f, then c is very large and the algorithm is not computationally effective.
 Code for example 1
>>close all
>>clear all
>>p=[.15 .25 .3 .1 .2];       % vector holding the probabilities
>>ii=1;
>>while ii < 1000
      y=unidrnd(5);           % draws from the discrete uniform distribution on {1,...,5}
      u=rand;
      if u<= p(y)/0.3
          x(ii)=y;
          ii=ii+1;
      end
  end
>>hist(x)
unidrnd(k) draws from the discrete uniform distribution on the integers [math]1,2,3,...,k[/math]. If this function is not built into your MATLAB, a simple transformation of rand, such as floor(k*rand)+1, works like unidrnd(k).
The acceptance rate is [math]\frac {1}{c}[/math], so the lower the c, the more efficient the algorithm. Theoretically, c equals 1 is the best case because all samples would be accepted; however it would only be true when the proposal and target distributions are exactly the same, which would never happen in practice.
For example, if c = 1.5, the acceptance rate would be [math]\frac {1}{1.5}=\frac {2}{3}[/math]. Thus, in order to generate 1000 random values, on average, a total of 1500 iterations would be required.
A histogram shows the 1000 random values of f(x); the more random values we generate, the closer the empirical frequencies are to the specified probabilities.
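As a quick sanity check on the acceptance rate, the following Python sketch (the function name ar_discrete is our own) reruns Example 1 while counting proposals; with c = 1.5 the empirical acceptance rate should be close to 1/c ≈ 2/3.

```python
import random

def ar_discrete(pmf, c, n, seed=None):
    """Acceptance-rejection with a discrete uniform proposal over 1..len(pmf).
    Returns (samples, number of proposals made)."""
    rng = random.Random(seed)
    k = len(pmf)
    g = 1.0 / k                        # uniform proposal pmf
    samples, proposals = [], 0
    while len(samples) < n:
        y = rng.randint(1, k)          # proposal from the discrete uniform
        u = rng.random()
        proposals += 1
        if u <= pmf[y - 1] / (c * g):  # accept with probability p(y)/(c*g(y))
            samples.append(y)
    return samples, proposals

pmf = [0.15, 0.25, 0.3, 0.1, 0.2]
samples, proposals = ar_discrete(pmf, c=1.5, n=10000, seed=0)
print(10000 / proposals)               # empirical acceptance rate, near 2/3
```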
 Example 2
p(x=1)=0.1
p(x=2)=0.3
p(x=3)=0.6
Let g be the uniform distribution of 1, 2, or 3
g(x)= 1/3
[math]c=max(p_{x}/g(x))=0.6/(1/3)=1.8[/math]
Hence p(x)/cg(x) = p(x)/(1.8*1/3)= p(x)/0.6
1. Draw y~g
2. Draw u~U(0,1)
3. If [math]u \leq \frac{f(y)}{cg(y)}[/math], set x = y. Else go to 1.
 Code for example 2
>>close all
>>clear all
>>p=[.1 .3 .6];
>>ii=1;
>>while ii < 1000
      y=unidrnd(3);
      u=rand;
      if u<= p(y)/0.6
          x(ii)=y;
          ii=ii+1;
      end
  end
>>hist(x)
 Example 3
[math]p_{x}=\frac{e^{-3}3^{x}}{x!}, \; x\geq 0[/math]
(Poisson distribution with mean 3)
Try the first few p_{x}'s: .0498 .149 .224 .224 .168 .101 .0504 .0216 .0081 .0027
Use the geometric distribution for [math]g(x)[/math];
[math]g(x)=p(1-p)^{x}[/math], choose p=0.25
Look at [math]p_{x}/g(x)[/math] for the first few numbers: .199 .797 1.59 2.12 2.12 1.70 1.13 .647 .324 .144
We want [math]c=max(p_{x}/g(x))[/math] which is approximately 2.12
1. Generate [math]U_{1} \sim~ U(0,1); U_{2} \sim~ U(0,1)[/math]
2. [math]j = \lfloor \frac{\ln(U_{1})}{\ln(0.75)} \rfloor[/math], a Geometric(0.25) draw on {0, 1, 2, ...};
3. If [math]U_{2} \lt \frac{p_{j}}{cg(j)}[/math], set X = j, else go to step 1.
Note: In this case, f(x)/g(x) is extremely difficult to differentiate so we were required to test points. If the function is easily differentiable, we can calculate the max as if it were a continuous function then check the two surrounding points for which is the highest discrete value.
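A Python sketch of Example 3 (the function name sample_poisson3 is our own); as suggested in the note above, the rejection constant c is found numerically as the maximum of the first several ratios rather than by differentiation.

```python
import math
import random

def sample_poisson3(n, seed=None):
    """A/R sampling from Poisson(3) using a Geometric(p=0.25) proposal
    supported on {0, 1, 2, ...}."""
    rng = random.Random(seed)
    p = 0.25
    f = lambda j: math.exp(-3) * 3**j / math.factorial(j)   # target pmf
    g = lambda j: p * (1 - p)**j                             # proposal pmf
    c = max(f(j) / g(j) for j in range(50))                  # about 2.12
    samples = []
    while len(samples) < n:
        u1, u2 = 1 - rng.random(), rng.random()
        j = int(math.log(u1) / math.log(1 - p))              # geometric draw
        if u2 < f(j) / (c * g(j)):
            samples.append(j)
    return samples

xs = sample_poisson3(20000, seed=2)
print(sum(xs) / len(xs))                                     # close to 3
```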
 Example 4 (Hypergeometric & Binomial)
Suppose we are given f(x) such that it is hypergeometically distributed, given 10 white balls, 5 red balls, and select 3 balls, let X be the number of red ball selected, without replacement.
Choose g(x) such that it is binomial distribution, Bin(3, 1/3). Find the rejection constant, c
Solution:
For hypergeometric: [math]P(X=0) =\binom{10}{3}/\binom{15}{3} =0.2637, P(x=1)=\binom{10}{2} * \binom{5}{1} /\binom{15}{3}=0.4945, P(X=2)=\binom{10}{1} * \binom{5}{2} /\binom{15}{3}=0.2198,[/math]
[math]P(X=3)=\binom{5}{3}/\binom{15}{3}= 0.02198[/math]
For Binomial g(x): P(X=0) = (2/3)^3=0.2963; P(X=1)= 3*(1/3)*(2/3)^2 = 0.4444, P(X=2)=3*(1/3)^2*(2/3)=0.2222, P(X=3)=(1/3)^3=0.03704
Find the value of f/g for each X
X=0: 0.8898; X=1: 1.1127; X=2: 0.9891; X=3: 0.5934
Choose the maximum which is c=1.1127
Note that the maximum of f(x) is 0.4945 and the maximum of g(x) is 0.4444, both attained at X=1, which is why c = 0.4945/0.4444 = 1.1127 here. In general, c must be the pointwise maximum of f(x)/g(x); if c*g(x) fails to cover f(x) at every point, we would have to raise c*g(x) until it does, increasing the rejection ratio.
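The hand computation in Example 4 can be verified with a short Python sketch (the helper names are our own); math.comb evaluates the binomial coefficients in the hypergeometric and binomial pmfs.

```python
import math
import random

def hyper_pmf(x):
    """P(X = x) for the number of red balls when drawing 3 from 10 white + 5 red."""
    return math.comb(5, x) * math.comb(10, 3 - x) / math.comb(15, 3)

def binom_pmf(x):
    """Bin(3, 1/3) proposal pmf."""
    return math.comb(3, x) * (1/3)**x * (2/3)**(3 - x)

c = max(hyper_pmf(x) / binom_pmf(x) for x in range(4))   # rejection constant

def sample_hyper(n, seed=None):
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        # draw the Bin(3, 1/3) proposal as a sum of three Bernoulli(1/3) trials
        y = sum(rng.random() < 1/3 for _ in range(3))
        if rng.random() < hyper_pmf(y) / (c * binom_pmf(y)):
            out.append(y)
    return out

print(c)    # close to 1.1127, matching the hand computation above
```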
Limitation: If the shape of the proposed distribution g is very different from the target distribution f, then the rejection rate will be high (High c value). Computationally, the algorithm is always right; however it is inefficient and requires many iterations.
For example, if the proposal is a poor match, c*g(x) must be raised to the peak of f to cover all of f; then c is very large and 1/c is small.
The higher the rejection rate, the more points are rejected.
More on rejection/acceptance rates: 1/c is the acceptance rate. As c decreases (note: the minimum value of c is 1), the acceptance rate increases. In our last example, 1/c=1/1.5≈66.67%, so around 67% of the points generated are accepted. A low acceptance rate makes sampling very time-consuming.
Acceptance-Rejection Method
Problem: The CDF is not invertible or it is difficult to find the inverse.
Plan:
 Draw y~g(.)
 Draw u~Unif(0,1)
 If [math]u\leq \frac{f(y)}{cg(y)}[/math]then set x=y. Else return to Step 1
x will have the desired distribution.
Matlab Example
close all
clear all
ii=1;
R=1;
while ii<1000
    u1 = rand;
    u2 = rand;
    y = R*(2*u2-1);
    if (1-u1^2) >= (2*u2-1)^2
        x(ii) = y;
        ii = ii + 1;
    end
end
hist(x,20)
Recall that,
Suppose we have an efficient method for simulating a random variable having probability mass function {q(j),j>=0}. We can use this as the basis for simulating from the distribution having mass function {p(j),j>=0} by first simulating a random variable Y having mass function {q(j)} and then accepting this simulated value with a probability proportional to p(Y)/q(Y).
Specifically, let c be a constant such that p(j)/q(j)<=c for all j such that p(j)>0
We now have the following technique, called the acceptance-rejection method, for simulating a random variable X having mass function p(j)=P{X=j}.
Sampling from commonly used distributions
Please note that this is not as general a technique as acceptance-rejection sampling. Later, we will generalize the distributions for multidimensional purposes.
 Gamma
The CDF of the Gamma distribution [math]Gamma(t,\lambda)[/math] is (t denotes the shape, [math]\lambda[/math] denotes the rate):
[math] F(x) = \int_0^{x} \frac{\lambda^t y^{t-1}e^{-\lambda y}}{(t-1)!} \mathrm{d}y, \; \forall x \in (0,+\infty)[/math], where [math]t \in \N^+ \text{ and } \lambda \in (0,+\infty)[/math].
Neither Inverse Transformation nor Acceptance/Rejection Method can be easily applied to Gamma distribution.
However, we can use additive property of Gamma distribution to generate random variables.
 Additive Property
If [math]X_1, \dots, X_t[/math] are independent exponential random variables with rate [math] \lambda [/math] (in other words, [math] X_i\sim~ Exp (\lambda) [/math], and [math] Exp (\lambda)= Gamma (1, \lambda)[/math]), then [math]\Sigma_{i=1}^t X_i \sim~ Gamma (t, \lambda) [/math]
Side note: if [math] X\sim~ Gamma(a,\lambda)[/math] and [math] Y\sim~ Gamma(b,\lambda)[/math] are independent gamma random variables, then [math]\frac{X}{X+Y}[/math] follows a [math] Beta(a,b) [/math] distribution.
If we want to sample from the Gamma distribution, we can sample from [math]t[/math] independent exponential distributions using the Inverse Method for each [math] X_i[/math] and add them up. Note that this only works for the specific set of gamma distributions where t is a positive integer.
According to this property, a random variable that follows a Gamma distribution is the sum of i.i.d. (independent and identically distributed) exponential random variables. Now we want to generate 1000 values of [math]Gamma(20,10)[/math] random variables, so we need to obtain each one by adding 20 values of [math]X_i \sim~ Exp(10)[/math]. To achieve this, we generate a 20-by-1000 matrix whose entries follow [math]Exp(10)[/math] and add the rows together. That is, [math] x_1, x_2, \dots, x_t \sim~ Exp(\lambda) [/math] independently, and [math]x_1+x_2+...+x_t \sim~ Gamma(t,\lambda)[/math].
>>l=1;
>>u=rand(1,1000);
>>x=(-1/l)*log(u);
>>hist(x)
 Procedure
 Sample independently from a uniform distribution [math]t[/math] times, giving [math] U_1,\dots,U_t \sim~ U(0,1)[/math]
Use the Inverse Transform Method, [math] X_i = -\frac {1}{\lambda}\log(1-U_i)[/math], giving [math] X_1,\dots,X_t \sim~Exp(\lambda)[/math]
 Use the additive property,[math] X = \Sigma_{i=1}^t X_i \sim~ Gamma (t, \lambda) [/math]
 Note for Procedure
If [math]U\sim~U(0,1)[/math], then [math]U[/math] and [math]1-U[/math] have the same distribution (both follow [math]U(0,1)[/math])
This is because the range of [math]1-U[/math] is still [math](0,1)[/math], and their densities are identical over this range.
Let [math]Y=1-U[/math]. [math]Pr(Y\leq y)=Pr(1-U\leq y)=Pr(U\geq 1-y)=1-Pr(U\lt 1-y)=1-(1-y)=y[/math], thus [math]1-U\sim~U(0,1)[/math]
 Code
>>close all
>>clear all
>>lambda = 1;
>>u = rand(20, 1000);          % generates a 20x1000 matrix: 1000 draws for each X_i with t=20
>>x = (-1/lambda)*log(1-u);    % log(1-u) behaves the same as log(u) since u~U(0,1)
>>xx = sum(x);                 % sums the elements in each column
>>size(x)                      % check the size if we forget it (the answer is 20 1000)
>>hist(x(1,:))                 % the histogram of the first exponential row
>>hist(xx)
size(x) and size(u) are both 20x1000 matrices. Since u and 1-u have the same distribution when u~Unif(0,1), we can substitute u for 1-u to simplify the equation. Alternatively, the following commands do the same thing as the previous ones.
 Code
>>close all
>>clear all
>>lambda = 1;
>>xx = sum((-1/lambda)*log(rand(20, 1000)));   % one-line version; either log(u) or log(1-u) works since U~U(0,1)
>>hist(xx)
In the matrix rand(20,1000) there are 20 rows with 1000 numbers each. The code sums the independent [math]x_i[/math] down each column, and the final histogram shows the resulting Gamma distribution.
Other Sampling Method: Box Muller
 From cartesian to polar coordinates
[math] R=\sqrt{x_{1}^2+x_{2}^2}= x_{2}/\sin(\theta)= x_{1}/\cos(\theta)[/math]
[math] \tan(\theta)=x_{2}/x_{1} \rightarrow \theta=\tan^{-1}(x_{2}/x_{1})[/math]
 BoxMuller Transformation:
It is a transformation that consumes two continuous uniform random variables [math] X \sim U(0,1), Y \sim U(0,1) [/math] and outputs a bivariate normal random variable with [math] Z_1\sim N(0,1), Z_2\sim N(0,1). [/math]
Matlab
If X is a matrix,
 X(1,:) returns the first row
 X(:,1) returns the first column
 X(i,j) returns the (i,j)th entry
 sum(X,1) or sum(X) is a summation of the rows of X. The output is a row vector of the sums of each column.
 sum(X,2) is a summation of the columns of X, returning a vector.
 rand(r,c) will generate uniformly distributed random numbers in r rows and c columns.
The dot operator (.), when placed before an operator such as *, /, or ^, applies that operation to every element of a vector or matrix. For example, to square each element of a matrix A, write A.^2 as opposed to A^2 (which is a matrix product). Adding a constant, A + c, is already elementwise, and the dot operator is not required for functions that take a number as their input (such as log).
MATLAB processes loops very slowly, while it is fast with matrices and vectors, so it is preferable to use the dot operator with matrices of random numbers rather than loops where possible.
Class 6  Thursday, May 23
Announcement
1. On the day of each lecture, students from the morning section can only contribute the first half of the lecture (i.e. 8:30  9:10 am), so that the second half can be saved for the ones from the afternoon section. After the day of lecture, students are free to contribute anything.
Standard Normal distribution
If X ~ N(0,1) i.e. Standard Normal Distribution  then its p.d.f. is of the form
[math]f(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2} x^2}[/math]
 Warning : the General Normal distribution is
[math] f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2} } [/math] 
which is almost useless in this course

where [math] \mu [/math] is the mean or expectation of the distribution and [math] \sigma [/math] is standard deviation
 N(0,1) is standard normal. [math] \mu [/math] =0 and [math] \sigma [/math]=1
Let X and Y be independent standard normal.
Let [math] \theta [/math] and R denote the Polar coordinate of the vector (X, Y)
Note: R must satisfy two properties:
 1. Be a positive number (as it is a length)
 2. It must be from a distribution that has more data points closer to the origin so that as we go further from the origin, less points are generated (the two options are Chisquared and Exponential distribution)
The form of the joint distribution of R and [math]\theta[/math] will show that the best choice for distribution of R^{2} is exponential.
We cannot use the Inverse Transformation Method since F(x) does not have a closed form solution. So we will use joint probability function of two independent standard normal random variables and polar coordinates to simulate the distribution:
We know that
R^{2}= X^{2}+Y^{2} and [math] \tan(\theta) = \frac{y}{x} [/math] where X and Y are two independent standard normal
[math]f(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2} x^2}[/math]
[math]f(y) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2} y^2}[/math]
[math]f(x,y) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2} x^2} \cdot \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2} y^2}=\frac{1}{2\pi}\, e^{-\frac{1}{2} (x^2+y^2)} [/math]
 Since for independent distributions, their joint probability function is the multiplication of two independent probability functions
It can also be shown using 11 transformation that the joint distribution of R and θ is given by,
11 transformation:
Let [math]d=R^2[/math], so that
[math]x= \sqrt {d}\cos \theta [/math], [math]y= \sqrt {d}\sin \theta [/math]
then [math]\left| J\right| = \left| \dfrac {1} {2}d^{-\frac {1} {2}}\cos \theta \cdot \sqrt{d}\cos \theta +\sqrt {d}\sin \theta \cdot \dfrac {1} {2}d^{-\frac{1}{2}}\sin \theta \right| = \dfrac {1} {2}[/math]
It can be shown that the pdf of [math] d [/math] and [math] \theta [/math] is:
[math]\begin{matrix} f(d,\theta) = \frac{1}{2}e^{-\frac{d}{2}}\cdot\frac{1}{2\pi},\quad d = R^2 \end{matrix},\quad for\quad 0\leq d\lt \infty\ and\quad 0\leq \theta\leq 2\pi [/math]
Note that [math] \begin{matrix}f(r,\theta)\end{matrix}[/math] consists of two density functions, Exponential and Uniform, so assuming that r and [math]\theta[/math] are independent [math] \begin{matrix} \Rightarrow d \sim~ Exp(1/2), \theta \sim~ Unif[0,2\pi] \end{matrix} [/math]
 [math] \begin{align} R^2 = x^2 + y^2 \end{align} [/math]
 [math] \tan(\theta) = \frac{y}{x} [/math]
[math]\begin{align} f(d) = \frac{1}{2}e^{-\frac{d}{2}}, \text{ i.e. } d \sim~ Exp(1/2) \end{align}[/math]
[math]\begin{align} f(\theta) =\frac{1}{2\pi}\ \end{align}[/math]
To sample from the normal distribution, we can generate a pair of independent standard normal X and Y by:
1) Generating their polar coordinates
2) Transforming back to rectangular (Cartesian) coordinates.
Alternative Method of Generating Standard Normal Random Variables
Step 1: Generate [math]u_1 \sim Unif(0,1)[/math]
Step 2: Generate [math]Y_1 \sim Exp(1)[/math], [math]Y_2 \sim Exp(1)[/math]
Step 3: If [math]Y_2 \geq (Y_1-1)^2/2[/math], set [math]V=Y_1[/math]; otherwise, go to Step 1
Step 4: If [math]u_1 \leq 1/2[/math], then [math]X=V[/math]; otherwise, [math]X=-V[/math]
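A minimal Python sketch of the four steps above (the function name is our own), assuming both Y1 and Y2 are Exp(1) draws; Steps 1 and 4 are folded into a single random sign applied after acceptance.

```python
import math
import random

def sample_std_normal(n, seed=None):
    """Generate N(0,1) draws via the exponential acceptance-rejection method:
    accept Y1 ~ Exp(1) as |X| when Y2 >= (Y1 - 1)^2 / 2, then attach a random sign."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        y1 = -math.log(1 - rng.random())   # Exp(1)
        y2 = -math.log(1 - rng.random())   # Exp(1)
        if y2 >= (y1 - 1)**2 / 2:          # acceptance condition
            v = y1
            out.append(v if rng.random() <= 0.5 else -v)
    return out

xs = sample_std_normal(20000, seed=3)
m = sum(xs) / len(xs)
var = sum(x*x for x in xs) / len(xs) - m*m
print(m, var)                              # mean near 0, variance near 1
```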
Expectation of a Standard Normal distribution
The expectation of a standard normal distribution is 0
 Below is the proof:
[math]\operatorname{E}[X]= \;\int_{-\infty}^{\infty} x \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \, dx[/math]
[math]\phi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2} x^2}[/math]
[math]=\;\int_{-\infty}^{\infty} x \phi(x)\, dx[/math]
Since the first derivative [math]\phi'(x)[/math] is [math]-x\phi(x)[/math],
[math]=\; -\int_{-\infty}^{\infty} \phi'(x)\, dx[/math]
[math]= -\left[\phi(x)\right]_{-\infty}^{\infty}[/math]
[math]= 0[/math]
More intuitively, the integrand [math]x\phi(x)[/math] is an odd function (f(-x) = -f(x)), so when the support runs from negative infinity to infinity, the integral is 0.
 Procedure (BoxMuller Transformation Method):
Pseudorandom approaches to generating normal random variables used to be limited. Inefficient methods such as the inverse Gaussian function, sums of uniform random variables, and acceptance-rejection were used. In 1958, a new method was proposed by George Box and Mervin Muller of Princeton University. This new technique was easy to use and had accuracy comparable to the inverse transform sampling method, so it grew more valuable as computers became more computationally capable.
The BoxMuller method takes a sample from a bivariate independent standard normal distribution, each component of which is thus a univariate standard normal. The algorithm is based on the following two properties of the bivariate independent standard normal distribution:
if [math]Z = (Z_{1}, Z_{2}[/math]) has this distribution, then
1.[math]R^2=Z_{1}^2+Z_{2}^2[/math] is exponentially distributed with mean 2, i.e.
[math]P(R^2 \leq x) = 1-e^{-x/2}[/math].
2.Given [math]R^2[/math], the point [math](Z_{1},Z_{2}[/math]) is uniformly distributed on the circle of radius R centered at the origin.
We can use these properties to build the algorithm:
1) Generate random number [math] \begin{align} U_1,U_2 \sim~ \mathrm{Unif}(0, 1) \end{align} [/math]
2) Generate polar coordinates using the exponential distribution of d and uniform distribution of θ,
[math] \begin{align} R^2 = d = -2\log(U_1), & \quad r = \sqrt{d} \\ & \quad \theta = 2\pi U_2 \end{align} [/math]
[math] \begin{matrix} \ R^2 \sim~ Exp(1/2), \theta \sim~ Unif[0,2\pi] \end{matrix} [/math]
Note: If U~Unif(0,1), then [math]-\ln(1-U)[/math] has the same distribution as [math]-\ln(U)[/math]
3) Transform polar coordinates (i.e. R and θ) back to Cartesian coordinates (i.e. X and Y),
[math] \begin{align} x = R\cos(\theta) \\ y = R\sin(\theta) \end{align} [/math]
.
Alternatively,
[math] x =\cos(2\pi U_2)\sqrt{-2\ln U_1}\, [/math] and
[math] y =\sin(2\pi U_2)\sqrt{-2\ln U_1}\, [/math]
Note: In steps 2 and 3, we are using a similar technique as that used in the inverse transform method.
The BoxMuller Transformation Method generates a pair of independent Standard Normal distributions, X and Y (Using the transformation of polar coordinates).
If you want to generate a number of independent standard normal distributed numbers (more than two), you can run the BoxMuller method several times.
For example:
If you want 8 independent standard normal distributed numbers, then run the BoxMuller methods 4 times (8/2 times).
If you want 9 independent standard normal distributed numbers, then run the BoxMuller methods 5 times (10/2 times), and then delete one.
 Code
>>close all
>>clear all
>>u1=rand(1,1000);
>>u2=rand(1,1000);
>>d=-2*log(u1);
>>tet=2*pi*u2;
>>x=d.^0.5.*cos(tet);
>>y=d.^0.5.*sin(tet);
>>hist(tet)
>>hist(d)
>>hist(x)
>>hist(y)
"Remember: For the above code to work the "." needs to be after the d to ensure that each element of d is raised to the power of 0.5.
Otherwise matlab will raise the entire matrix to the power of 0.5."
Note:
the first graph is hist(tet) and it is a uniform distribution.
The second one is hist(d) and it is a exponential distribution.
The third one is hist(x) and it is a normal distribution.
The last one is hist(y) and it is also a normal distribution.
Attention:There is a "dot" between sqrt(d) and "*". It is because d and tet are vectors.
As seen in the histograms above, X and Y generated from this procedure have a standard normal distribution.
 Code
>>close all >>clear all >>x=randn(1,1000); >>hist(x) >>hist(x+2) >>hist(x*2+2)
Note: randn is random sample from a standard normal distribution.
Note: hist(x+2) will be centered at 2 instead of at 0.
hist(x*3+2) is also centered at 2. The mean doesn't change, but the variance of x*3+2 becomes nine times (3^2) the variance of x.
Comment: Box-Muller transformations are not computationally efficient. The reason for this is the need to compute sine and cosine functions. A way around this time-consuming difficulty is an indirect computation of the sine and cosine of a random angle (as opposed to the direct computation, which generates U and then computes the sine and cosine of 2πU).
Alternative Methods of generating normal distribution
1. Even though we cannot use the inverse transform method exactly, we can approximate the inverse using different functions; one method is rational approximation.
2. Central limit theorem: if we sum 12 independent U(0,1) random variables and subtract 6 (which is 12*E(u_i)), we get approximately a standard normal distribution.
3. The Ziggurat algorithm, which is known to be faster than the Box-Muller transformation; a version of this algorithm is used for the randn function in MATLAB.
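Method 2 above is easy to sketch in Python (the function name is our own): a sum of 12 independent U(0,1) variables has mean 12·0.5 = 6 and variance 12·(1/12) = 1, so subtracting 6 yields an approximate N(0,1) draw.

```python
import random

def clt_normal(n, seed=None):
    """Approximate N(0,1) draws by the central limit theorem:
    sum 12 independent U(0,1) variables and subtract 6."""
    rng = random.Random(seed)
    return [sum(rng.random() for _ in range(12)) - 6 for _ in range(n)]

xs = clt_normal(20000, seed=4)
m = sum(xs) / len(xs)
var = sum(x*x for x in xs) / len(xs) - m*m
print(m, var)    # mean near 0, variance near 1
```

Note the approximation has bounded support (-6, 6), so its extreme tails are cut off compared with a true normal.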
If Z~N(0,1) and X= μ +Zσ then X~[math] N(\mu, \sigma^2)[/math]
If Z_{1}, Z_{2}... Z_{d} are independent identically distributed N(0,1), then Z=(Z_{1},Z_{2}...Z_{d})^{T} ~N(0, I_{d}), where 0 is the zero vector and I_{d} is the identity matrix.
For the histogram, the constant is the parameter that affect the center of the graph.
Proof of Box Muller Transformation
Definition: A transformation which transforms from a twodimensional continuous uniform distribution to a twodimensional bivariate normal distribution (or complex normal distribution).
Let U_{1} and U_{2} be independent Uniform(0,1) random variables. Then [math]X_{1} = \sqrt{-2\ln U_{1}}\cos(2\pi U_{2})[/math]
[math]X_{2} = \sqrt{-2\ln U_{1}}\sin(2\pi U_{2})[/math] are independent N(0,1) random variables.
This is a standard transformation problem. The joint distribution is given by
[math]f(x_1, x_2) = f_{u_1, u_2}(g_1^{-1}(x_1,x_2), g_2^{-1}(x_1,x_2)) \cdot |J|[/math]
where J is the Jacobian of the transformation,
[math]J = \begin{vmatrix} \partial u_1/\partial x_1 & \partial u_1/\partial x_2 \\ \partial u_2/\partial x_1 & \partial u_2/\partial x_2 \end{vmatrix}[/math]
where
[math]u_1 = g_1^{-1}(x_1,x_2), \quad u_2 = g_2^{-1}(x_1,x_2)[/math]
Inverting the above transformations, we have
[math]u_1 = e^{-(x_1^2+x_2^2)/2}, \quad u_2 = \frac{1}{2\pi}\tan^{-1}(x_2/x_1)[/math]
Finally we get
[math]f(x_1,x_2) = \frac{1}{2\pi}e^{-(x_1^2+x_2^2)/2}[/math]
which factors into two standard normal pdfs.
General Normal distributions
General normal distribution is a special version of the standard normal distribution. The domain of the general normal distribution is affected by the standard deviation and translated by the mean value.
 The pdf of the general normal distribution is
[math] f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2} } [/math] 
which is almost useless in this course

where [math] \mu [/math] is the mean or expectation of the distribution and [math] \sigma [/math] is standard deviation
The special case of the normal distribution is standard normal distribution, which the variance is 1 and the mean is zero. If X is a general normal deviate, then Z = (X − μ)/σ will have a standard normal distribution.
If Z ~ N(0,1) and we want [math]X \sim N(\mu, \sigma^2)[/math], then [math]X = \mu + \sigma Z[/math], since [math]E(X) = \mu +\sigma\cdot 0 = \mu [/math] and [math]Var(X) = 0 +\sigma^2\cdot 1 = \sigma^2[/math]
If [math]Z_1,...Z_d[/math] ~ N(0,1) and are independent then [math]Z = (Z_1,..Z_d)^{T} [/math]~ [math]N(0,I_d)[/math] ie.
 Code
>>close all
>>clear all
>>z1=randn(1,1000);           % generate from the standard normal distribution
>>z2=randn(1,1000);
>>z=[z1;z2];                  % stack into a 2x1000 matrix
>>plot(z(1,:),z(2,:),'.')
If Z~N(0,Id) and X= [math]\underline{\mu} + \Sigma^{\frac{1}{2}} \,Z [/math] then [math]\underline{X}[/math] ~[math]N(\underline{\mu},\Sigma)[/math]
NonStandard Normal Distributions
Example 1: Singlevariate Normal
If X ~ Norm(0, 1) then (a + bX) has a normal distribution with a mean of [math]\displaystyle a[/math] and a standard deviation of [math]\displaystyle b[/math] (which is equivalent to a variance of [math]\displaystyle b^2[/math]). Using this information with the BoxMuller transform, we can generate values sampled from some random variable [math]\displaystyle Y\sim N(a,b^2) [/math] for arbitrary values of [math]\displaystyle a,b[/math].
 Generate a sample u from Norm(0, 1) using the BoxMuller transform.
 Set v = a + bu.
The values for v generated in this way will be equivalent to sample from a [math]\displaystyle N(a, b^2)[/math]distribution. We can modify the MatLab code used in the last section to demonstrate this. We just need to add one line before we generate the histogram:
v = a + b * x;
For instance, this is the histogram generated when b = 15, a = 125:
Example 2: Multivariate Normal
The BoxMuller method can be extended to higher dimensions to generate multivariate normals. The objects generated will be nx1 vectors, and their variance will be described by nxn covariance matrices.
[math]\mathbf{z} = N(\mathbf{u}, \Sigma)[/math] defines the n by 1 vector [math]\mathbf{z}[/math] such that:
 [math]\displaystyle u_i[/math] is the average of [math]\displaystyle z_i[/math]
 [math]\!\Sigma_{ii}[/math] is the variance of [math]\displaystyle z_i[/math]
 [math]\!\Sigma_{ij}[/math] is the covariance of [math]\displaystyle z_i[/math] and [math]\displaystyle z_j[/math]
If [math]\displaystyle z_1, z_2, ..., z_d[/math] are normal variables with mean 0 and variance 1, then the vector [math]\displaystyle (z_1, z_2,..., z_d) [/math] has mean 0 and variance [math]\!I[/math], where 0 is the zero vector and [math]\!I[/math] is the identity matrix. This fact suggests that the method for generating a multivariate normal is to generate each component individually as single normal variables.
The mean and the covariance matrix of a multivariate normal distribution can be adjusted in ways analogous to the single variable case. If [math]\mathbf{z} \sim N(0,I)[/math], then [math]\Sigma^{1/2}\mathbf{z}+\mu \sim N(\mu,\Sigma)[/math]. Note here that the covariance matrix is symmetric and positive semidefinite, so its square root always exists.
We can compute [math]\mathbf{z}[/math] in the following way:
 Generate an n by 1 vector [math]\mathbf{x} = \begin{bmatrix}x_{1} & x_{2} & ... & x_{n}\end{bmatrix}[/math] where [math]x_{i}[/math] ~ Norm(0, 1) using the BoxMuller transform.
 Calculate [math]\!\Sigma^{1/2}[/math] using singular value decomposition.
 Set [math]\mathbf{z} = \Sigma^{1/2} \mathbf{x} + \mathbf{u}[/math].
The following MatLab code provides an example, where a scatter plot of 10000 random points is generated. In this case x and y have a covariance of 0.9  a very strong positive correlation.
x = zeros(10000, 1);
y = zeros(10000, 1);
for ii = 1:10000
    u1 = rand;
    u2 = rand;
    R2 = -2 * log(u1);
    theta = 2 * pi * u2;
    x(ii) = sqrt(R2) * cos(theta);
    y(ii) = sqrt(R2) * sin(theta);
end
E = [1, 0.9; 0.9, 1];
[u s v] = svd(E);
root_E = u * (s ^ (1 / 2)) * u';
z = (root_E * [x y]');
z(1,:) = z(1,:) + 0;
z(2,:) = z(2,:) + 3;
scatter(z(1,:), z(2,:))
Note: The svd command computes the matrix singular value decomposition.
[u,s,v] = svd(E) produces a diagonal matrix s of the same dimension as E, with nonnegative diagonal elements in decreasing order, and unitary matrices u and v so that E = u*s*v'.
This code generated the following scatter plot:
In MATLAB, we can also use the function "sqrtm()" or "chol()" (Cholesky Decomposition) to calculate the square root of a matrix directly. Note that the resulting root matrices may be different, but this does not materially affect the simulation. Here is an example:
E = [1, 0.9; 0.9, 1]; r1 = sqrtm(E); r2 = chol(E);
R code for a multivariate normal distribution:
n <- 10000
r2 <- -2*log(runif(n))
theta <- 2*pi*runif(n)
x <- sqrt(r2)*cos(theta)
y <- sqrt(r2)*sin(theta)
a <- matrix(c(x, y), nrow=n, byrow=F)
e <- matrix(c(1, .9, .9, 1), nrow=2, byrow=T)
svde <- svd(e)
root_e <- svde$u %*% diag(svde$d^(1/2)) %*% t(svde$u)
z <- t(root_e %*% t(a))
z[,1] <- z[,1] + 5
z[,2] <- z[,2] + 8
par(pch=19)
plot(z, col=rgb(1, 0, 0, alpha=0.06))
Bernoulli Distribution
The Bernoulli distribution is a discrete probability distribution describing an event with exactly two possible outcomes, success or failure. The variable takes value 1 with success probability p, and value 0 with failure probability q = 1 - p.
P ( x = 0) = q = 1  p
P ( x = 1) = p
P ( x = 0) + P (x = 1) = p + q = 1
If X~Ber(p), its pmf is of the form [math]f(x)= p^{x}(1-p)^{1-x}[/math], x = 0, 1,
where p is the success probability.
The Bernoulli distribution is a special case of the binomial distribution in which the variate x has only two outcomes; the Bernoulli pmf is the binomial pmf with the variate x taking values 0 and 1.
Procedure: To simulate a coin flip, let p be the probability of heads and let X = 1 and X = 0 represent heads and tails respectively:
1. Draw U ~ Uniform(0,1)
2. If U <= p, set X = 1; else set X = 0
3. Repeat as necessary
An intuitive example is the coin flip discussed in a previous lecture: setting p = 1/2 makes heads and tails equally likely.
 Code to Generate Bernoulli(p = 0.3)
i = 1;
p = 0.3;
while (i <= 1000)
    u = rand();
    if (u <= p)
        x(i) = 1;
    else
        x(i) = 0;
    end
    i = i + 1;
end
hist(x)
However, we know that if [math]\begin{align} X_i \sim Bernoulli(p) \end{align}[/math] where each [math]\begin{align} X_i \end{align}[/math] is independent,
[math]U = \sum_{i=1}^{n} X_i \sim Binomial(n,p)[/math]
So we can sample from binomial distribution using this property.
Note: We can consider Binomial distribution as the sum of n, independent, Bernoulli distributions
 Code to Generate Binomial(n = 20,p = 0.7)
p = 0.7;
n = 20;
for k = 1:5000
    i = 1;
    while (i <= n)
        u = rand();
        if (u <= p)
            y(i) = 1;
        else
            y(i) = 0;
        end
        i = i + 1;
    end
    x(k) = sum(y == 1);
end
hist(x)
Note: We can also regard the Bernoulli distribution either as a conditional distribution or simply through its pmf [math]f(x)= p^{x}(1-p)^{1-x}[/math], x = 0, 1.
Comments on Matlab: for element-wise operations on vectors, put a dot before the operator (.*, ./, .^). For example, let V be a 2-by-4 matrix; to multiply each element by 3, the Matlab code is 3.*V.
The code blocks above are examples of generating samples from these distributions.
Class 7  Tuesday, May 28
Universality of the Uniform Distribution/Inverse Method
The inverse method is universal in the sense that we can potentially sample from any distribution where we can find the inverse of the cumulative distribution function.
Procedure:
1. Generate U ~ Unif[0, 1)
2. Set [math]x=F^{-1}(u)[/math]
3. Then X ~ f(x)
Remark
1. The preceding can be written algorithmically as:
Generate a random number U
If [math]U \lt p_0[/math], set [math]X=x_0[/math] and stop
If [math]U \lt p_0+p_1[/math], set [math]X=x_1[/math] and stop
...
2. If the [math]x_i, i \geq 0,[/math] are ordered so that [math]x_0 \lt x_1 \lt x_2 \lt \cdots[/math] and if we let F denote the distribution function of X, then [math]F(x_k)=\sum_{i=0}^{k} p_i[/math], and so X will equal [math]x_j[/math] if [math]F(x_{j-1}) \leq U \lt F(x_j)[/math]
Example 1
Let [math]X_1, X_2[/math] denote the lifetimes of two independent particles:
[math]X_1 \sim exp(\lambda_1)[/math]
[math]X_2 \sim exp(\lambda_2)[/math]
We are interested in [math]Y=min(X_1,X_2)[/math]
Design an algorithm based on the Inverse-Transform Method to generate samples according to [math]f_Y(y)[/math]
Solution:
[math]X_1 \sim exp(\lambda_1), \quad X_2 \sim exp(\lambda_2)[/math]
[math]f_{X}(x)=\lambda e^{-\lambda x}, x\geq0 [/math]
[math]F_X(x)=1-e^{-\lambda x}, x\geq 0[/math]
[math]1-F_Y(y) = P(Y\gt y) = P(min(X_1,X_2) \gt y) = P(X_1\gt y) P(X_2\gt y) = e^{-(\lambda_1 + \lambda_2) y}[/math]
[math]F_Y(y)=1-e^{-(\lambda_1 + \lambda_2) y}, y\geq 0[/math]
[math]U=1-e^{-(\lambda_1 + \lambda_2) y}[/math] => [math]y=-\frac {1}{\lambda_1 +\lambda_2} \ln(1-u)[/math]
Procedure:
Step1: Generate U~ U(0, 1)
Step 2: Set [math]y=-\frac {1}{\lambda_1 +\lambda_2} \ln(u)[/math] (valid since 1-U has the same distribution as U)
If we generalize this example from two independent particles to n independent particles we will have:
[math]X_1 \sim exp(\lambda_1)[/math]
[math]X_2 \sim exp(\lambda_2)[/math]
...
[math]X_n \sim exp(\lambda_n)[/math]
with [math]Y=min(X_1,...,X_n)[/math]. The algorithm, using the inverse-transform method, is as follows:
step1: Generate U~U(0,1)
Step 2: Set [math]y=-\frac {1}{\sum_{i=1}^n \lambda_i} \ln(1-u)[/math]
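As a quick check of this algorithm, here is a Python sketch (the course uses Matlab, but the logic is identical; the rates below are arbitrary illustrative values). Since Y = min(X_1, ..., X_n) ~ Exp(Σλ_i), the sample mean should be close to 1/Σλ_i.

```python
import math
import random

random.seed(1)

# Inverse-transform sampling of Y = min(X_1, ..., X_n), X_i ~ Exp(lambda_i):
# Y ~ Exp(sum of lambdas), so F^{-1}(u) = -ln(1 - u) / sum(lambdas).
lambdas = [1.0, 2.0, 3.0]   # hypothetical rates for illustration
rate = sum(lambdas)

n = 100000
samples = [-math.log(1.0 - random.random()) / rate for _ in range(n)]
mean = sum(samples) / n
print(round(mean, 3))       # should be close to 1/rate = 1/6
```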
Example 2
Consider U~Unif[0,1)
[math]X=\, a (1-\sqrt{1-u})[/math],
where a>0 and a is a real number
What is the distribution of X?
Solution:
We can find the cumulative distribution function of X by isolating u, since U~Unif[0,1) takes values in the range of F(X) uniformly. It then remains to differentiate the resulting form with respect to x to obtain the probability density function.
[math]X=\, a (1-\sqrt{1-u})[/math]
=>[math]1-\frac {x}{a}=\sqrt{1-u}[/math]
=>[math]u=1-(1-\frac {x}{a})^2[/math]
=>[math]u=\, {\frac {x}{a}} (2-\frac {x}{a})[/math]
So [math]F(x)={\frac {x}{a}} (2-\frac {x}{a})[/math] and [math]f(x)=\frac {dF(x)}{dx}=\frac {2}{a}-\frac {2x}{a^2}=\, \frac {2}{a} (1-\frac {x}{a})[/math], [math]0 \leq x \leq a[/math]
Example 3
Suppose F_{X}(x) = x^{n}, 0 ≤ x ≤ 1, n ∈ N > 0. Generate values from X.
Solution:
1. generate u ~ Unif[0, 1)
2. Set x = U^{1/n}
For example, when n = 20,
u = 0.6 => x = u^{1/20} = 0.974
u = 0.5 => x = u^{1/20} = 0.966
u = 0.2 => x = u^{1/20} = 0.923
Observe that for n = 20 the values of X are close to 1. This is because X can be viewed as the maximum of n independent Unif(0,1) random variables, which is increasingly likely to be close to 1 as n grows; equivalently, the exponent 1/n tends to 0 as n grows. This observation motivates Method 2 below.
Recall that
If Y = max (X_{1}, X_{2}, ... , X_{n}), where X_{1}, X_{2}, ... , X_{n} are independent,
F_{Y}(y) = P(Y ≤ y) = P(max (X_{1}, X_{2}, ... , X_{n}) ≤ y) = P(X_{1} ≤ y, X_{2} ≤ y, ... , X_{n} ≤ y) = F_{x1}(y) F_{x2}(y) ... F_{xn}(y)
Similarly if [math] Y = min(X_1,\ldots,X_n)[/math] then the cdf of [math]Y[/math] is [math]F_Y = 1 [/math][math]\prod[/math][math](1 F_{X_i})[/math]
Method 1: Following the above result, we can see that in this example F_{X}(x) = x^{n} is the cumulative distribution function of the maximum of n uniform random variables between 0 and 1 (since for U~Unif(0, 1), F_{U}(x) = x on [0, 1]).
Method 2: generate n independent samples from U~Unif(0, 1) and take the maximum of the n samples as x. However, the inverse-transform solution given above requires generating only one uniform random number instead of n, so it is a more efficient method.
More generally, given independent X_1, X_2, ..., X_n, the pdf and cdf of both Y = max(X_1, ..., X_n) and Y = min(X_1, ..., X_n) can be derived in this way.
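The two methods for Example 3 can be compared empirically. A Python sketch (the course uses Matlab, but the logic is identical; n = 20 is chosen to match the example). Both estimators should have mean n/(n+1) = 20/21.

```python
import random

random.seed(7)

n, trials = 20, 100000

# Method 1: inverse transform, one uniform per sample: x = u^(1/n)
inv = [random.random() ** (1.0 / n) for _ in range(trials)]

# Method 2: take the max of n independent uniforms (n uniforms per sample)
mx = [max(random.random() for _ in range(n)) for _ in range(trials)]

m_inv = sum(inv) / trials
m_max = sum(mx) / trials
print(round(m_inv, 3), round(m_max, 3))   # both close to 20/21 ~ 0.952
```

Method 1 does the same job with one uniform draw per sample instead of twenty.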
Example 4 (New)
This is similar to Example 1, but uses the maximum instead of the minimum.
Let X_{1},X_{2} denote the lifetime of two independent particles:
[math]\, X_1, X_2 \sim exp(\lambda)[/math]
We are interested in Z=max(X_{1},X_{2})
Design an algorithm based on the InverseTransform Method to generate samples according to f_{Z}(z)
[math]\, F_Z(z)=P[Z\leq z] = F_{X_1}(z) \cdot F_{X_2}(z) = (1-e^{-\lambda z})^2[/math]
[math] \text{thus } F^{-1}(z) = -\frac{1}{\lambda}\log(1-\sqrt z)[/math]
To sample Z:
[math]\, \text{Step 1: Generate } U \sim U[0,1)[/math]
[math]\, \text{Step 2: Let } Z = -\frac{1}{\lambda}\log(1-\sqrt U)[/math], which gives a random variate of Z.
Discrete Case:
u ~ Unif(0,1)
x <- 0, s <- [math]P_0[/math]
while u > s:
    x <- x + 1
    s <- s + [math]P_x[/math]
return x
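The discrete inverse-transform pseudocode above can be written as a short Python function (a sketch, with a hypothetical pmf on {0, 1, 2} for illustration):

```python
import random

random.seed(3)

def discrete_inverse_transform(pmf):
    """Sample x in {0, 1, ...} with P(X = x) = pmf[x] (pmf sums to 1)."""
    u = random.random()
    x, s = 0, pmf[0]
    # walk up the CDF until it exceeds u (guard against float round-off)
    while u > s and x < len(pmf) - 1:
        x += 1
        s += pmf[x]
    return x

pmf = [0.2, 0.5, 0.3]   # hypothetical pmf for illustration
n = 100000
counts = [0, 0, 0]
for _ in range(n):
    counts[discrete_inverse_transform(pmf)] += 1
freqs = [c / n for c in counts]
print([round(f, 2) for f in freqs])   # close to [0.2, 0.5, 0.3]
```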
Decomposition Method
The CDF, F, is a composition if [math]F_{X}(x)[/math] can be written as:
[math]F_{X}(x) = \sum_{i=1}^{n}p_{i}F_{X_{i}}(x)[/math] where
1) p_{i} > 0
2) [math]\sum_{i=1}^{n}[/math]p_{i} = 1.
3) [math]F_{X_{i}}(x)[/math] is a CDF
The general algorithm to generate random variables from a composition CDF is:
1) Generate U, V ~ [math]U(0,1)[/math]
2) If u < p_1, set x = [math]F_{X_{1}}^{-1}(v)[/math]
3) Else if u < p_1 + p_2, set x = [math]F_{X_{2}}^{-1}(v)[/math]
4) ....
Explanation
Each component distribution contributes [math]p_{i} F_{X_{i}}(x)[/math] to [math]F_{X}(x)[/math].
From a sampling point of view, that is equivalent to contributing [math]F_{X_{i}}(x)[/math] [math]p_{i}[/math] of the time. The logic of this is similar to that of the AcceptReject Method, but instead of rejecting a value depending on the value u takes, we instead decide which distribution to sample it from.
Simplified Version
1) Generate [math]u \sim Unif(0,1)[/math]
2) Set [math] X=0, s=P_0[/math]
3) While [math] u \gt s, [/math]
set [math] X = X+1[/math] and [math] s=s+P_x [/math]
4) Return [math] X [/math]
Examples of Decomposition Method
Example 1
[math]f(x) = \frac{5}{12}(1+(x-1)^4), \quad 0\leq x\leq 2[/math]
[math]f(x) = \frac{5}{12}+\frac{5}{12}(x-1)^4 = \frac{5}{6} \cdot \frac{1}{2}+\frac {1}{6} \cdot \frac{5}{2}(x-1)^4[/math]
Let [math]f_{x_1}= \frac{1}{2}[/math] and [math]f_{x_2} = \frac {5}{2}(x-1)^4[/math], both on [math]0\leq x\leq 2[/math]
Algorithm:
Generate U~Unif(0,1)
If [math]0\lt u\lt \frac {5}{6}[/math], then we sample from f_{x1}
Else if [math]\frac{5}{6}\lt u\lt 1[/math], we sample from f_{x2}
We can find the inverse CDF of f_{x2} and utilize the Inverse Transform Method in order to sample from f_{x2}
Sampling from f_{x1} is more straightforward since it is uniform over the interval (0,2)
Here f(x) is divided into the two densities f_{x1} and f_{x2} over the same range, and f_{x1} is simply a uniform density.
Example 2
[math]f(x)=\frac{1}{4}e^{-x}+2x+\frac{1}{12}, \quad 0\leq x \leq 3 [/math]
We can rewrite f(x) as [math]f(x)=\frac{1}{4} e^{-x}+\frac{2}{4} \cdot 4x+\frac{1}{4} \cdot \frac{1}{3}[/math]
Let f_{x1} = [math]e^{-x}[/math], f_{x2} = 4x, and f_{x3} = [math]\frac{1}{3}[/math]
Generate U~Unif(0,1)
If [math]0\lt u\lt \frac{1}{4}[/math], we sample from f_{x1}
If [math]\frac{1}{4}\leq u \lt \frac{3}{4}[/math], we sample from f_{x2}
Else if [math]\frac{3}{4} \leq u \lt 1[/math], we sample from f_{x3}
We can find the inverse CDFs of f_{x1} and f_{x2} and utilize the Inverse Transform Method in order to sample from f_{x1} and f_{x2}
We find F_{x1} = [math] 1-e^{-x}[/math] and F_{x2} = [math]2x^{2}[/math]
The inverses are [math] X = -\ln(1-u)[/math] for F_{x1} and [math] X = \sqrt{\frac{U}{2}}[/math] for F_{x2}
Sampling from f_{x3} is more straightforward since it is uniform over the interval (0,3)
In general, to write an efficient algorithm for:
[math]F_{X}(x) = p_{1}F_{X_{1}}(x) + p_{2}F_{X_{2}}(x) + ... + p_{n}F_{X_{n}}(x)[/math]
We would first calculate [math] {q_i} = \sum_{j=1}^i p_j, \forall i = 1,\dots, n[/math]
Then Generate [math] U \sim~ Unif(0,1) [/math]
If [math] U \lt q_1 [/math] sample from [math] f_1 [/math]
else if [math] u\lt q_i [/math] sample from [math] f_i [/math] for [math] 1 \lt i \lt n [/math]
else sample from [math] f_n [/math]
In summary, we divide f(x) into the components f_{x1}, f_{x2}, f_{x3} over their respective ranges, use U~U(0,1) to choose a component, and sample from it (by inverse transform where needed).
Example of Decomposition Method
F_{x}(x) = 1/3*x+1/3*x^{2}+1/3*x^{3}, 0<= x<=1
let U =F_{x}(x) = 1/3*x+1/3*x^{2}+1/3*x^{3}, solve for x.
P_{1}=1/3, F_{x1}(x)= x, P_{2}=1/3,F_{x2}(x)= x^{2}, P_{3}=1/3,F_{x3}(x)= x^{3}
Algorithm:
Generate U ~ Unif [0,1)
Generate V~ Unif [0,1)
if 0<u<1/3, x = v
else if u<2/3, x = v^{1/2}
else x = v^{1/3}
Matlab Code:
u = rand;
v = rand;
if u < 1/3
    x = v;
elseif u < 2/3
    x = sqrt(v);
else
    x = v^(1/3);
end
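For comparison, the same decomposition sampler as a Python sketch, with a check of the sample mean against E[X] = (1/3)(1/2 + 2/3 + 3/4) = 23/36:

```python
import random

random.seed(5)

# F(x) = (1/3)x + (1/3)x^2 + (1/3)x^3 on [0,1] is a mixture of F1(x) = x,
# F2(x) = x^2, F3(x) = x^3, each with weight 1/3. Pick a component with u,
# then invert only that component at v.
def sample():
    u, v = random.random(), random.random()
    if u < 1/3:
        return v                # F1^{-1}(v) = v
    elif u < 2/3:
        return v ** 0.5         # F2^{-1}(v) = sqrt(v)
    else:
        return v ** (1/3)       # F3^{-1}(v) = v^(1/3)

n = 100000
mean = sum(sample() for _ in range(n)) / n
print(round(mean, 3))           # E[X] = 23/36 ~ 0.639
```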
Example of Decomposition Method (new)
F_{x}(x) = 1/2*x+1/2*x^{2}, 0<= x<=1
let U =F_{x}(x) = 1/2*x+1/2*x^{2}, solve for x.
P_{1}=1/2, F_{x1}(x)= x, P_{2}=1/2,F_{x2}(x)= x^{2},
Algorithm:
Generate U ~ Unif [0,1)
Generate V~ Unif [0,1)
if 0<u<1/2, x = v
else x = v^{1/2}
Matlab Code:
u = rand;
v = rand;
if u < 1/2
    x = v;
else
    x = sqrt(v);
end
Extra Knowledge about Decomposition Method
There are different types and applications of Decomposition Method
1. Primal decomposition
2. Dual decomposition
3. Decomposition with constraints
4. More general decomposition structures
5. Rate control
6. Single commodity network flow
For More Details, please refer to http://www.stanford.edu/class/ee364b/notes/decomposition_notes.pdf
Fundamental Theorem of Simulation
Consider two shapes, A and B, where B is a subshape (subset) of A. We want to sample uniformly from inside the shape B. Then we can sample uniformly inside of A, and throw away all samples outside of B, and this will leave us with a uniform sample from within B. (Basis of the AcceptReject algorithm)
The advantage of this method is that we can sample from an unknown distribution using an easy one. The disadvantage is that it may reject many points, which is inefficient.
This connects to the decomposition method: each partial CDF (a component of the original CDF) is inverted separately, and each component is chosen with probability equal to its weight.
Question 2
Use the Acceptance-Rejection Method to sample from [math]f_X(x)=b \cdot x^n(1-x)^n[/math], [math]n\gt 0[/math], [math]0\lt x\lt 1[/math]
Solution: This is a beta distribution, with b chosen so that [math]\int _{0}^{1}b \cdot x^{n}(1-x)^{n}dx = 1[/math]
The density is maximized at x = 1/2, where it takes the value [math]b \cdot (1/4)^n[/math]. Choose [math]g(x)=1[/math] and set [math]c=b \cdot (1/4)^n[/math].
Algorithm:
1. Draw [math]U_1 \sim U(0,1)[/math] and [math]U_2 \sim U(0,1)[/math]
2. If [math]U_2 \leq \frac{b \cdot U_1^n (1-U_1)^n}{b \cdot (1/4)^n}=(4 U_1 (1-U_1))^n[/math], set [math]X=U_1[/math]; else return to step 1.
(Side note: for [math]n=-1/2[/math] the density is unbounded, but [math]b x^{-1/2}(1-x)^{-1/2} \leq \sqrt2 \, b x^{-1/2}[/math] for [math]0\leq x\leq 1/2[/math], since [math](1-x)^{-1/2}\leq\sqrt2[/math] there.)
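A Python sketch of this acceptance-rejection scheme (n = 2 is an arbitrary illustrative choice; note that the constant b cancels in the acceptance ratio, so it is never needed):

```python
import random

random.seed(11)

# Accept-reject for f(x) = b x^n (1-x)^n with g(x) = 1 on (0,1).
# f/cg is maximized at x = 1/2, so accept U1 when U2 <= (4 U1 (1-U1))^n.
n_exp = 2.0
samples = []
while len(samples) < 50000:
    u1, u2 = random.random(), random.random()
    if u2 <= (4.0 * u1 * (1.0 - u1)) ** n_exp:
        samples.append(u1)

mean = sum(samples) / len(samples)
print(round(mean, 3))   # the density is symmetric about 1/2
```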
Discrete Case:
Most discrete random variables do not have a closed form inverse CDF. Also, its CDF [math]F:X \rightarrow [0,1][/math] is not necessarily onto. This means that not every point in the interval [math] [0,1] [/math] has a preimage in the support set of X through the CDF function.
Let [math]X[/math] be a discrete random variable where [math]a \leq X \leq b[/math] and [math]a,b \in \mathbb{Z}[/math] .
To sample from [math]X[/math], we use the partition method below:
[math]\, \text{Step 1: Generate u from } U \sim Unif[0,1][/math]
[math]\, \text{Step 2: Set } x=a, s=P(X=a)[/math]
[math]\, \text{Step 3: While } u\gt s, x=x+1, s=s+P(X=x)[/math]
[math]\, \text{Step 4: Return } x[/math]
Class 8  Thursday, May 30, 2013
In this lecture, we will discuss algorithms to generate 3 wellknown distributions: Binomial, Geometric and Poisson. For each of these distributions, we will first state its general understanding, probability mass function, expectation and variance. Then, we will derive one or more algorithms to sample from each of these distributions, and implement the algorithms on Matlab.
The Bernoulli distribution
The Bernoulli distribution is a special case of the binomial distribution with n = 1. X ~ Bin(1, p) has the same meaning as X ~ Ber(p), where p is the probability of success and 1-p is the probability of failure (we usually define q = 1-p). The mean of the Bernoulli is p and the variance is p(1-p). Bin(n, p) is the distribution of the sum of n independent Bernoulli(p) trials, each with the same success probability p, where 0 < p < 1.
For example, let X be the event that a coin toss results in a "head" with probability p, then X~Bernoulli(p).
P(X=1) = p, P(X=0) = 1-p, and P(X=0) + P(X=1) = p + q = 1
Algorithm:
1. Generate u~Unif(0,1)
2. If u ≤ p, then x = 1
else x = 0
The answer is:
when U≤p, x=1
when U>p, x=0
3.Repeat as necessary
The Binomial Distribution
If X~Bin(n,p), then its pmf is of form:
f(x) = (nCx) p^{x}(1-p)^{(n-x)}, x = 0, 1, ..., n
Or f(x) = [math]\frac{n!}{x!(n-x)!}[/math] p^{x}(1-p)^{(n-x)}, x = 0, 1, ..., n
Mean (x) = E(x) = [math] np [/math]
Variance = [math] np(1p) [/math]
Generate n uniform random number [math]U_1,...,U_n[/math] and let X be the number of [math]U_i[/math] that are less than or equal to p.
The logic behind this algorithm is that the binomial distribution is simply a Bernoulli trial, with success probability p, repeated n times; the sum of n Bernoulli trials gives one binomial sample. In the example below, rand(20,1000) produces a 20-by-1000 matrix of Bernoulli outcomes. Summing each column adds up 20 Bernoulli outcomes to produce one binomial sample, so the output of sum is a 1-by-1000 vector of realizations from the binomial distribution.
To continue with the previous example, let X be the number of heads in a series of n independent coin tosses  where for each toss, the probability of coming up with a head is p  then X~Bin(n, p).
MATLAB tips: to sample from a binomial directly, use binornd(N,P), where N is the number of trials and P is the probability of success. Comparisons on vectors are element-wise: if a = [2 3 4], then a < 3 produces [1 0 0], and a == 3 produces [0 1 0]. We can use this to count how many uniform numbers fall below p.
Algorithm for Bernoulli is given as above
Code
>> a = [3 5 8];
>> a < 5
ans =
   1   0   0
>> rand(20,1000);
>> rand(20,1000) < 0.4;
>> A = sum(rand(20,1000) < 0.4);   % column sums ~ Bin(20, 0.4)
>> hist(A)
>> mean(A)
Note: sum(M) sums each column by default, equivalent to sum(M, 1).
>> sum(sum(rand(20,1000) < 0.4) > 8)/1000   % an estimate of P(A > 8)
Relation between the Bernoulli and binomial distributions: for instance, to count numbers ≤ 0.3, each uniform draw gives a Bernoulli indicator of whether it is ≤ 0.3, and the binomial counts how many of the draws are ≤ 0.3.
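The Bernoulli-to-binomial idea as a Python sketch (the course uses Matlab; n = 20 and p = 0.4 are illustrative values matching the session above):

```python
import random

random.seed(2)

n, p, trials = 20, 0.4, 50000

def binomial(n, p):
    """One Binomial(n, p) draw: count of n uniforms that fall below p."""
    return sum(1 for _ in range(n) if random.random() < p)

xs = [binomial(n, p) for _ in range(trials)]
mean = sum(xs) / trials
var = sum((x - mean) ** 2 for x in xs) / trials
print(round(mean, 2), round(var, 2))   # expect np = 8 and np(1-p) = 4.8
```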
The Geometric Distribution
The geometric distribution is a discrete distribution. There are two versions: the first is the distribution of the number X of Bernoulli trials (each failing with probability 1-p) needed until the first success, with X taking values in {1, 2, 3, ...}; the other is the distribution of the number Y = X - 1 of failures before the first success, with Y taking values in {0, 1, 2, 3, ...}.
For example,
If the first success occurs on the first trial (x = 1), then f(x) = p.
If the first success occurs on the second trial (x = 2), then f(x) = p(1-p).
If the first success occurs on the third trial (x = 3), then f(x) = p(1-p)^{2}, etc.
In general, if the first success occurs on trial x, then f(x) = p(1-p)^{(x-1)}
which is,
x        Pr
1        p
2        p(1-p)
3        p(1-p)^{2}
...      ...
n        p(1-p)^{(n-1)}
For example, suppose a die is thrown repeatedly until the first time a "6" appears. This is a question of geometric distribution of the number of times on the set { 1, 2, 3, ... } with p = 1/6.
Generally speaking, if X~G(p) then its pmf is of the form f(x) = (1-p)^{(x-1)}p, x = 1, 2, ...
The random variable X is the number of trials required until the first success in a series of independent Bernoulli trials.
Other properties
Probability mass function : P(X=k) = p(1-p)^{(k-1)}
Tail probability : P(X>n) = [math](1-p)^n[/math]
The CDF : P(X≤n) = 1 - [math](1-p)^n[/math]
Mean of X = 1/p, Var(X) = (1-p)/p^2
There are two ways to look at a geometric distribution.
1st Method
We look at the number of trials before the first success. This includes the last trial in which you succeeded. This will be used in our course.
pmf is of the form f(x) = (1-p)^{(x-1)}p, x = 1, 2, 3, ...
2nd Method
This involves modeling the failure before the first success. This does not include the last trial in which we succeeded.
pmf is of the form f(x) = (1-p)^{x}p, x = 0, 1, 2, ...
If [math]Y \sim Exp(\lambda)[/math] then [math]X=\lfloor Y \rfloor + 1[/math] is geometric.
Choose [math]e^{-\lambda}=1-p[/math]. Then [math]X \sim Geo(p)[/math].
[math]P(X \gt x) = (1-p)^x[/math] (because the first x trials are not successful)
Proof
[math]P(X\gt x) = P(\lfloor Y \rfloor + 1 \gt x) = P(\lfloor Y \rfloor \gt x-1) = P(Y\geq x) = e^{-\lambda x}[/math]
Since [math]p = 1-e^{-\lambda}[/math], i.e. [math]\lambda= -\log(1-p)[/math] (comparing the pdfs of the exponential and geometric distributions, [math]e^{-\lambda}[/math] plays the role of the failure probability), then
[math]P(X\gt x) = e^{-\lambda x} = e^{x\log(1-p)} = (1-p)^x[/math]
Note that [math]\lfloor Y \rfloor \gt x-1 \Leftrightarrow Y \geq x[/math] (since x is an integer)
This proof shows how the exponential distribution is used to obtain P(X>x) = (1-p)^x.
Suppose X has the exponential distribution with rate parameter [math] \lambda \gt 0 [/math]
then [math]\left \lfloor X \right \rfloor [/math] and [math]\left \lceil X \right \rceil [/math] have geometric distributions on [math] \mathcal{N} [/math] and [math] \mathcal{N}_{+} [/math] respectively, each with success probability [math] 1-e^{-\lambda} [/math]
Proof:
[math]\text{For } n \in \mathcal{N} [/math]
[math]\begin{align}
P(\left \lfloor X \right \rfloor = n)&{}= P( n \leq X \lt n+1) \\
&{}= F( n+1) - F(n) \\
&{}= (1-e^{-\lambda (n+1)}) - (1-e^{-\lambda n}) \\
&{}= (e^{-\lambda})^n \cdot (1 - e^{-\lambda}) \\
\end{align}[/math]
so [math]\left \lfloor X \right \rfloor \sim Geo(1 - e^{-\lambda})[/math]. The proof of the ceiling part follows immediately.
Algorithm:
1) Let [math]\lambda = -\log (1-p) [/math]
2) Generate [math]Y \sim Exp(\lambda )[/math]
3) Set [math]X = \left \lfloor Y \right \rfloor + 1[/math]; then [math]X\sim Geo(p)[/math]
Note: [math]\left \lfloor Y \right \rfloor \gt 2 \Leftrightarrow Y\geq 3[/math]
[math] \left \lfloor Y \right \rfloor \gt 5 \Leftrightarrow Y\geq 6[/math]
In general, [math]\left \lfloor Y \right \rfloor\gt x \Leftrightarrow Y \geq x+1[/math]
To see that [math]P(Y\geq x)=(1-p)^x[/math] for [math]Y \sim Exp(\lambda)[/math]:
pdf of Y : [math]\lambda e^{-\lambda y}[/math]
cdf of Y : [math]P(Y\lt x)=1-e^{-\lambda x}[/math]
[math]P(Y\geq x)=1-(1- e^{-\lambda x})=e^{-\lambda x}[/math]
[math] e^{-\lambda}=1-p \Leftrightarrow \lambda=-\log(1-p)[/math]
[math]P(Y\geq x)=e^{-\lambda x}=e^{x\log(1-p)}=(1-p)^x[/math]
Hence
P(X > x) = P(floor(y)+1 > x) = P(floor(y) > x-1) = P(y >= x) = (1-p)^x,
and using [math]e^{-\lambda}=1-p[/math] one can also derive
[math]E[X]=1/p, \quad Var(X)= (1-p)/p^2[/math]
Code
>> p = 0.4;
>> l = log(1 - p);
>> u = rand(1, 1000);
>> y = (1/l) * log(u);
>> x = floor(y) + 1;
>> hist(x)
Note: mean(x) ≈ E[X] = 1/p and var(x) ≈ Var(X) = (1-p)/p^2.
A specific example, for x = 5:
>> sum(x==5)/1000   % chance of succeeding on the fifth trial
ans = 0.0780
>> sum(x>10)/1000   % chance of succeeding after 10 trials
ans = 0.0320
Note that the above mean is the average amount of times you should try until you get a successful case.
EXAMPLE for geometric distribution: Consider the case of rolling a die:
X=the number of rolls that it takes for the number 5 to appear.
We have X ~ Geo(1/6), [math]f(x)=(1/6) \cdot (5/6)^{x-1}[/math], x = 1, 2, 3, ...
Now let [math]Y \sim Exp(\lambda)[/math] and set [math]x=\lfloor Y \rfloor +1[/math]
Let [math]e^{-\lambda}=5/6[/math]
[math]P(X\gt x) = P(Y\geq x)[/math] (from the class notes)
We have [math]e^{-\lambda x} = (5/6)^x[/math]
Algorithm: let [math]\lambda = -\log(5/6)[/math]
1) Generate Y, exponentially distributed with rate [math]\lambda[/math]
2) Set X = floor(Y) + 1 to generate X
[math] E[X]=6, \quad Var[X]=\frac{5/6}{(1/6)^2} = 30 [/math]
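The die example as a Python sketch of the floor-of-exponential method (the course uses Matlab, but the construction is the same):

```python
import math
import random

random.seed(9)

# X = floor(Y) + 1 with Y ~ Exp(lambda), lambda = -log(1 - p), gives X ~ Geo(p).
p = 1.0 / 6.0                        # die example: waiting for one face
lam = -math.log(1.0 - p)

n = 100000
xs = []
for _ in range(n):
    u = 1.0 - random.random()        # in (0, 1], so log is safe
    y = -math.log(u) / lam           # Y ~ Exp(lam) by inverse transform
    xs.append(math.floor(y) + 1)

mean = sum(xs) / n
print(round(mean, 2))                # E[X] = 1/p = 6
```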
GENERATING NEGATIVE BINOMIAL RV USING GEOMETRIC RV'S
Property of negative binomial Random Variable:
The negative binomial random variable is a sum of r independent geometric random variables.
Using this property we can formulate the following algorithm:
Step 1: Generate r geometric rv's each with probability p using the procedure presented above.
Step 2: Take the sum of these r geometric rv's. This RV follows NB(r,p)
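A Python sketch of this negative binomial construction (r and p below are arbitrary illustrative values):

```python
import math
import random

random.seed(4)

def geometric(p):
    """Geo(p) via the exponential relationship X = floor(Y) + 1."""
    lam = -math.log(1.0 - p)
    u = 1.0 - random.random()            # in (0, 1], so log is safe
    return math.floor(-math.log(u) / lam) + 1

def negative_binomial(r, p):
    """Sum of r independent Geo(p) draws ~ NB(r, p)."""
    return sum(geometric(p) for _ in range(r))

r, p, n = 5, 0.3, 50000
xs = [negative_binomial(r, p) for _ in range(n)]
mean = sum(xs) / n
print(round(mean, 2))                    # E[X] = r/p ~ 16.67
```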
Remark on steps 1 and 2: each geometric rv is generated by taking [math]e^{-\lambda}=1-p[/math] (for example 5/6 in the die example), drawing Y ~ Exp([math]\lambda[/math]), and setting X = floor(Y) + 1.
Poisson Distribution
If [math]\displaystyle X \sim \text{Poi}(\lambda)[/math], its pmf is of the form [math]\displaystyle \, f(x) = \frac{e^{-\lambda}\lambda^x}{x!}[/math] , where [math]\displaystyle \lambda [/math] is the rate parameter.
Definition: In probability theory and statistics, the Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time and/or space, if these events occur with a known average rate and independently of the time since the last event. The Poisson distribution can also be used for the number of events in other specified intervals such as distance, area or volume. For instance, suppose someone typically gets 4 pieces of mail per day on average. There will, however, be a certain spread: sometimes a little more, sometimes a little less, once in a while nothing at all. Given only the average rate for a certain period of observation (pieces of mail per day, phone calls per hour, etc.), and assuming that the process, or mix of processes, that produces the event flow is essentially random, the Poisson distribution specifies how likely it is that the count will be 3, or 5, or 10, or any other number, during one period of observation. That is, it predicts the degree of spread around a known average rate of occurrence. (from Wikipedia)
Understanding of Poisson distribution:
If customers arrive at a bank independently over time, with interarrival times exponentially distributed with rate [math]\lambda[/math] per unit of time, then X(t) = the number of customers in [0, t] ~ Poi[math](\lambda t)[/math]
Its mean and variance are
[math]\displaystyle E[X]=\lambda[/math]
[math]\displaystyle Var[X]=\lambda[/math]
A useful property: If [math]X_i \sim \mathrm{Pois}(\lambda_i)\, i=1,\dots,n[/math] are independent and [math]\lambda=\sum_{i=1}^n \lambda_i[/math], then [math]Y = \left( \sum_{i=1}^n X_i \right) \sim \mathrm{Pois}(\lambda)[/math]
A Poisson random variable X can be interpreted as the maximal number of i.i.d. (Independent and Identically Distributed) exponential variables(with parameter [math]\lambda[/math]) whose sum does not exceed 1.
The traditional understanding of the Poisson distribution as the total number of events in a specific interval can be understood here since the above definition simply describes the Poisson as the sum of waiting times for n events in an interval of length 1.
[math]\displaystyle\text{Let } Y_j \sim \text{Exp}(\lambda), U_j \sim \text{Unif}(0,1)[/math]
[math]Y_j = -\frac{1}{\lambda}\log(U_j) \text{ from the Inverse Transform Method}[/math]
[math]\begin{align}
X &= \max \{ n: \sum_{j=1}^{n} Y_j \leq 1 \} \\
&= \max \{ n: \sum_{j=1}^{n} -\frac{1}{\lambda}\log(U_j) \leq 1 \} \\
&= \max \{ n: \sum_{j=1}^{n} \log(U_j) \geq -\lambda \} \\
&= \max \{ n: \log(\prod_{j=1}^{n} U_j) \geq -\lambda \} \\
&= \max \{ n: \prod_{j=1}^{n} U_j \geq e^{-\lambda} \} \\
&= \min \{ n: \prod_{j=1}^{n} U_j \lt e^{-\lambda} \} - 1 \\
\end{align}[/math]
Note: From above, we can use Logarithm Rules [math]\log(a)+\log(b)=\log(ab)[/math] to generate the result.
Algorithm:
1) Set n=1, a=1
2) Generate [math]U_n \sim U(0,1)[/math] and set [math]a=aU_n [/math]
3) If [math]a \geq e^{-\lambda}[/math], then n = n+1 and go to Step 2. Else, x = n-1
This algorithm combines the inverse-transform idea with the relationship between the exponential and Poisson distributions.
MATLAB Code for generating Poisson Distribution
>> l = 2; N = 1000;
>> for ii = 1:N
       n = 1;
       a = 1;
       u = rand;
       a = a * u;
       while a > exp(-l)
           n = n + 1;
           u = rand;
           a = a * u;
       end
       x(ii) = n - 1;
   end
>> hist(x)
>> sum(x==1)/N   % estimate of P(X = 1)
>> sum(x>3)/N    % estimate of P(X > 3)
Another way to generate random variable from poisson distribution
Note: [math]P(X=x)=\frac {e^{-\lambda}\lambda^x}{x!}, \forall x \in \N[/math]
Let [math]\displaystyle p(x) = P(X=x)[/math] denote the pmf of [math]\displaystyle X[/math].
Then the ratio is [math]\frac{p(x+1)}{p(x)}=\frac{\lambda}{x+1}, \forall x \in \N[/math]
Therefore, [math]p(x+1)=\frac{\lambda}{x+1}p(x)[/math]
Algorithm:
1. Set [math]\displaystyle x=0[/math]
2. Set [math]\displaystyle F=p=e^{-\lambda}[/math]
3. Generate [math]\displaystyle U \sim~ \text{Unif}(0,1)[/math]
4. If [math]\displaystyle U\lt F[/math], output [math]\displaystyle x[/math]
Else
[math]\displaystyle p=\frac{\lambda}{x+1} p[/math]
[math]\displaystyle F=F+p[/math]
[math]\displaystyle x = x+1[/math]
Go to 4.
This is indeed the inversetransform method, with a clever way to calculate the CDF on the fly.
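This on-the-fly inverse transform can be written compactly in Python (a sketch; λ = 2 matches the earlier Matlab example):

```python
import math
import random

random.seed(6)

def poisson(lam):
    """Inverse transform, building the CDF via p(x+1) = lam/(x+1) * p(x)."""
    u = random.random()
    x = 0
    p = math.exp(-lam)   # p(0)
    F = p                # running CDF
    while u >= F:        # step to the next x until the CDF exceeds u
        p *= lam / (x + 1)
        F += p
        x += 1
    return x

lam, n = 2.0, 100000
xs = [poisson(lam) for _ in range(n)]
mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n
print(round(mean, 2), round(var, 2))   # both should be close to lam
```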
Class 9  Tuesday, June 4, 2013
Beta Distribution
The beta distribution is a continuous probability distribution with two positive shape parameters, alpha and beta, and support on the interval [0,1]. The parameters appear as exponents of the random variable and jointly control the shape of the distribution. The beta distribution is used to model random variables limited to intervals of finite length; for example, it can be used to analyze the time allocation of sunshine data and the variability of soil properties.
If X~Beta([math]\alpha, \beta[/math]) then its p.d.f. is of the form
 [math]\displaystyle \text{ } f(x) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}x^{\alpha-1}(1-x)^{\beta-1} [/math] where [math]0 \leq x \leq 1[/math] and [math]\alpha \gt 0[/math], [math]\beta \gt 0[/math]
and [math]f(x;\alpha,\beta)= 0 [/math] otherwise. Note: [math]\Gamma(\alpha)=(\alpha-1)! [/math] if [math]\alpha[/math] is a positive integer.
The mean of the beta distribution is [math]\frac{\alpha}{\alpha + \beta}[/math]. The variance is [math]\frac{\alpha\beta}{(\alpha+\beta)^2 (\alpha + \beta + 1)}[/math]
When [math] \alpha = \beta [/math], the variance of the beta distribution decreases monotonically as their common value increases.
The formula for the cumulative distribution function of the beta distribution is also called the incomplete beta function ratio (commonly denoted by Ix) and is defined as F(x) = I(x)(p,q)
To generate random variables of a Beta distribution, there are multiple cases depending on the value of [math]\alpha [/math] and [math] \beta [/math]:
Case 1: If [math]\alpha=1[/math] and [math]\beta=1[/math]
 [math]\displaystyle \text{Beta}(1,1) = \frac{\Gamma(1+1)}{\Gamma(1)\Gamma(1)}x^{1-1}(1-x)^{1-1}[/math]
 [math] = \frac{1!}{0!0!}x^{0}(1-x)^{0}[/math]
 [math] = 1 [/math]
Note: 0! = 1.
Hence, the distribution is:
 [math]\displaystyle \text{Beta}(1,1) = U (0, 1) [/math]
If the Question asks for sampling Beta Distribution, we can sample from Uniform Distribution which we already know how to sample from
Algorithm:
Generate U~Unif(0,1)
Case 2: Either [math]\alpha=1[/math] or [math]\beta=1[/math]
e.g. [math]\alpha=1[/math]
We don't make any assumption about [math]\beta[/math] except that it is positive. <br\>
 [math]\displaystyle \text{f}(x) = \frac{\Gamma(1+\beta)}{\Gamma(1)\Gamma(\beta)}x^{1-1}(1-x)^{\beta-1}=\beta(1-x)^{\beta-1}[/math]
Similarly, if [math]\beta=1 [/math]:
 [math]\displaystyle \text{f}(x) = \frac{\Gamma(\alpha+1)}{\Gamma(\alpha)\Gamma(1)}x^{\alpha-1}(1-x)^{1-1}=\alpha x^{\alpha-1}[/math]
For the [math]\beta=1[/math] case, integrating [math]f(x)[/math] gives the CDF [math]F(x) = x^{\alpha}[/math]. With this CDF it is easy to sample by the inverse transform method: [math]y = x^\alpha \Rightarrow x = y^\frac {1}{\alpha}[/math], so
[math]F^{-1}(y) = y^\frac {1}{\alpha}[/math], and with U~U(0,1) we set [math]x=u^{1/\alpha}[/math].
As cases 1 and 2 show, for particular values of alpha and beta the beta distribution simplifies to other known distributions.
Algorithm
 1. Generate U~Unif(0,1)<br\>
 2. Assign [math]x = u^\frac {1}{\alpha}[/math]
After we have simplified this example, we can use other distribution methods to solve the problem.
MATLAB Code to generate random n variables using the above algorithm
x = rand(1,n).^(1/alpha)
Case 3:<br\> To sample from a beta distribution in general, we use the property that <br\>
 if [math]Y_1[/math] follows gamma [math](\alpha,1)[/math]<br\>
 [math]Y_2[/math] follows gamma [math](\beta,1)[/math]<br\>
Note: 1. [math]\alpha[/math] and [math]\beta[/math] are shape parameters here and 1 is the scale parameter.<br\>
 then [math]Y=\frac {Y_1}{Y_1+Y_2}[/math] follows Beta [math](\alpha,\beta)[/math]<br\>
2. Exponential: [math]-\frac{1}{\lambda} \log(u)[/math] <br\> 3. Gamma: [math]-\frac{1}{\lambda} \log(u_1 \cdots u_t)[/math]<br\>
Algorithm<br\>
 1. Sample from Y1 ~ Gamma ([math]\alpha[/math],1) #[math]\alpha[/math] is the shape, and 1 is the scale. <br\>
 2. Sample from Y2 ~ Gamma ([math]\beta[/math],1) <br\>
 3. Set
 [math] Y = \frac{Y_1}{Y_1+Y_2}[/math]
Please see the following example for Matlab code.
Case 4:<br\> Use the Acceptance-Rejection Method <br\>
The beta density is
[math]\displaystyle \text{Beta}(\alpha,\beta) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}x^{\alpha-1}(1-x)^{\beta-1} [/math] where [math]0 \leq x \leq 1[/math]
Assume [math]\alpha,\beta \geq 1[/math]. Then [math]\displaystyle f(x)[/math] attains its maximum at [math]x=\frac{\alpha-1}{\alpha+\beta-2}[/math].
(Please note that we can find the maximum by taking the derivative of f(x), setting f'(x)=0, and solving for x.)
Define
[math] c=f\left(\frac{\alpha-1}{\alpha+\beta-2}\right)[/math] and choose [math]\displaystyle g(x)=1[/math].
The AR method becomes
1.Generate independent [math]\displaystyle U_1[/math] and [math]\displaystyle U_2[/math] from [math]\displaystyle UNIF[0,1][/math] until [math]\displaystyle cU_2 \leq f(U_1)[/math];
2.Return [math]\displaystyle U_1[/math].
MATLAB Code for generating Beta Distribution
>>Y1 = -sum(log(rand(10,1000)));  %Gamma(10,1): sum of 10 exponentials for each of the 1000 samples
>>Y2 = -sum(log(rand(5,1000)));   %Gamma(5,1): sum of 5 exponentials for each of the 1000 samples
%NOTE: here lambda is 1, since the scale parameter for Y1 & Y2 is 1
>>Y=Y1./(Y1+Y2);                  %Don't forget the elementwise division "./"; Y follows Beta(10,5)
>>figure
>>hist(Y1)                        %Gamma histogram
>>figure
>>hist(Y2)                        %Gamma histogram
>>figure
>>hist(Y)                         %Check that the shape fits Beta(10,5)
>>disttool                        %Check the beta plot
This is the histogram of Y, a simulated version of Beta(10,5)
This is the pdf of various beta distributions
300px
MATLAB tips: rand(10,1000) produces a 10*1000 matrix of independent U(0,1) draws, and sum(rand(10,1000)) sums each column, producing a 1*1000 row vector.
Example for the code to explain the beta distribution.
Another MATLAB Code for generating Beta Distribution using AR method
>>alpha = 3; beta = 2;
>>k = gamma(alpha+beta)/(gamma(alpha)*gamma(beta));  %normalizing constant of the beta density
>>t = (alpha-1)/(alpha+beta-2);                      %mode of the beta density
>>c = k*t^(alpha-1)*(1-t)^(beta-1);                  %c = f(mode), with g(x)=1
>>u1 = rand; u2 = rand;
>>x = k*u1^(alpha-1)*(1-u1)^(beta-1);                %f(u1)
>>while c*u2 > x
>>    u1 = rand; u2 = rand;
>>    x = k*u1^(alpha-1)*(1-u1)^(beta-1);
>>end
>>u1    %accepted sample from Beta(3,2)
Random Vector Generation
We want to sample from [math]X = (X_1, X_2, [/math]…,[math] X_d)[/math], a d-dimensional vector, from a known pdf [math]f(x)[/math] and cdf [math]F(x)[/math]. We need to take into account the following two cases:
Case 1
if the [math]x_1, x_2 \cdots, x_d[/math]'s are independent, then
[math]f(x) = f(x_1,\cdots, x_d) = f(x_1)\cdots f(x_d)[/math]
we can sample from each component [math]x_1, x_2,\cdots, x_d[/math] individually, and then form a vector.
By the property of independence, the joint pdf (or pmf) of [math]x=(x_1,x_2,\ldots,x_d)[/math] factors into the product of the marginals.
Case 2
If [math]X_1, X_2, \cdots , X_d[/math] are not independent
[math]f(x) = f(x_1, \cdots , x_d) = f(x_1) f(x_2\mid x_1) \cdots f(x_d\mid x_{d-1},\cdots ,x_1)[/math]
we need to know the conditional distributions [math]f(x_2\mid x_1), f(x_3\mid x_2, x_1),\cdots, f(x_d\mid x_{d-1}, \cdots, x_1)[/math]
This is generally a hard problem, because conditional distributions are often difficult to compute; sampling from them relies on the techniques of the earlier cases. Note that each factor, such as [math]f(x_1)[/math] or [math]f(x_2\mid x_1)[/math], is one-dimensional.
In general, one could consider the covariance matrix [math] C [/math] of random variables [math] X_1[/math],…,[math]X_d [/math].
Suppose we now have the Cholesky factor [math] G[/math] of [math] C [/math] (i.e. [math] C = GG^T [/math]). In MATLAB, we use chol(C).
For any d-tuple [math] Z := (Z_1 ,\ldots , Z_d) [/math] of independent standard normal random variables,
[math] GZ [/math] yields the desired covariance structure.
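A minimal sketch of the Cholesky idea above, in Python/NumPy rather than the course's MATLAB; the mean vector and covariance values are illustrative assumptions:

```python
import numpy as np

# Generate correlated normal variables X = mu + G*Z, where C = G*G^T
# is the Cholesky factorization of the target covariance matrix C.
rng = np.random.default_rng(0)
C = np.array([[1.0, 0.7],
              [0.7, 1.0]])   # target covariance (assumed values)
mu = np.array([2.0, -1.0])   # target mean (assumed values)

G = np.linalg.cholesky(C)                # lower-triangular G with C = G @ G.T
Z = rng.standard_normal((2, 100000))     # independent N(0,1) components
X = mu[:, None] + G @ Z                  # each column ~ N(mu, C)

print(np.cov(X))                         # empirical covariance, close to C
```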
Note (Product Rule)
1.) All cases can use this (independent or dependent): [math]f(x) = f(x_1, x_2)= f(x_1) f(x_2\mid x_1)[/math]
2.) If we determine that [math]x_1[/math] and [math] x_2[/math] are independent, then we can use [math]f(x) = f(x_1, x_2)= f(x_1)f(x_2)[/math]
 ie. If late for class=[math]x_1[/math] and sick=[math]x_2[/math], then these are dependent variables, so we can only use equation 1 ([math]f(x) = f(x_1, x_2)= f(x_1) f(x_2\mid x_1)[/math])
 ie. If late for class=[math]x_1[/math] and milk is white=[math]x_2[/math], then these are independent variables, so we can use both equations 1 and 2.
This case shows how, when [math]X = (X_1,\ldots,X_d)[/math] is a d-dimensional vector with dependent components, we use conditional distributions to define the joint probability function of X.
Example
Generate uniform random vectors
1) x = (x_{1}, …, x_{d}) from the d-dimensional rectangle
2) D = { (x_{1}, …, x_{d}) : a_{i} <= x_{i} <= b_{i} , i = 1, …, d}
Algorithm:
1) for i = 1 to d
2) U_{i} ~ U(0,1)
3) x_{i} = a_{i} + U_{i}(b_{i} - a_{i})
4) end
 Note: x_{i} = a_{i} + U_{i}(b_{i} - a_{i}) denotes X_{i} ~ U(a_{i}, b_{i})
An example of the 2D case is given below:
>>a=[1 2];
>>b=[4 6];
>>for i=1:2
      u(i) = rand();
      x(i) = a(i) + (b(i) - a(i))*u(i);
  end
>>hold on                          %retain current graph when adding new graphs
>>rectangle('Position',[1 2 3 4])  %draw the boundary of the rectangle
>>axis([0 10 0 10])                %change the size of the axes
>>plot(x(1),x(2),'.')
Code:
function x = urectangle (d,n,a,b)
for ii = 1:d;
    u(ii,:) = rand(1,n);
    x(ii,:) = a + u(ii,:)*(b-a);
    %keyboard   %makes the function stop at this step so you can evaluate the variables
end

>>x=urectangle(2, 100, 2, 5);
>>scatter(x(1,:),x(2,:))

>>x=urectangle(2, 10000, 2, 5);  %generate 10000 points (instead of 100)

>>x=urectangle(3, 10000, 2, 5);  %changed to 3-dimensional
>>scatter3(x(1,:), x(2,:), x(3,:))
>>axis square
Vector Acceptance-Rejection Method
The acceptance-rejection method can be extended to n-dimensional cases, with the same concept:
If a random vector is to be generated uniformly from G, an irregular shape in n dimensions, and W is a regular shape containing G (ideally as close to G as possible), then the acceptance-rejection method can be applied as follows:
1. Sample from the regular shape W
2. Accept sample points if they are inside G
Example:
Generate a random vector Z that is uniformly distributed over region G
G: d-dimensional unit ball, [math]G = \big\{{x: \sum_{i}{x_i}^2 \leq 1}\big\}[/math]
W: d-dimensional hypercube, [math]W = \big\{{-1 \leq x_i \leq 1}\big\}_{i=1}^d[/math]
Procedure:
Step 1: [math]U_1 \sim~ U(0,1),\cdots, U_d \sim~ U(0,1)[/math]
Step 2: [math]X_1 = 1 - 2U_1, \cdots, X_d = 1 - 2U_d, R = \sum_i X_i^2[/math]
Step 3: If [math]R \leq 1, Z=(X_1, ..... , X_d)[/math]
Else go to step 1
This is an example of the vector A/R method: the regular shape W plays the role of the proposal distribution g(x), and G corresponds to the target distribution f(x). <br\>
Since we sample uniformly from W, letting [math]A_W[/math] and [math]A_G[/math] denote the areas (volumes) of W and G, we have [math]g(x)=1/A_W[/math] and [math]f(x)=1/A_G[/math].
This is the picture of the example
matlab code:
u = rand(d,n);
z = 1 - 2*u;
R = sum(z.^2);
jj=1;
for ii=1:n
    if R(ii)<=1
        x(:,jj)=z(:,ii);
        jj=jj+1;
    end
end
output = x;
Class 10  Thursday June 6th 2013
MATLAB code for using the Acceptance/Rejection Method to sample from a d-dimensional unit ball.
1. U1~UNIF(0, 1) U2~UNIF(0, 1) ... Ud~UNIF(0, 1)
Code:
function output = Unitball(d,n)
u = rand(d,n);
z = 1 - 2*u;
R = sum(z.^2);
jj=1;
for ii=1:n
    if R(ii)<=1
        x(:,jj)=z(:,ii);
        jj=jj+1;
    end
end
output = x;
end

>> data = Unitball(d, n)
>> scatter(data(1,:), data(2,:))   %plot 2d graph

R(ii) is the sum of the squares of the elements of the ii-th column, so if it is at most 1, the point is in the unit ball. x(:,jj) selects all rows of the jj-th column; z(:,ii) selects all rows of the ii-th column. Save the function in a file named Unitball.m.
Execution:
>>[x]=Unitball(2,10000);
>>scatter(x(1,:),x(2,:));   %plot 2D circle
>>axis square;              %make the x and y axes the same size
>>size(x)
ans =
    2  7839
In scatter(x(1,:),x(2,:)), x(1,:) passes all the entries of the first row as the first argument.
Calculate the efficiency:
>>c=7839/10000   %Efficiency = points accepted / total points
c =
    0.7839
We can use the above program to calculate the proportion of points generated in the square that fall inside the circle.
Estimate [math]\displaystyle \pi[/math]
 We know the radius is 1
 Then the area of the square is [math](1-(-1))^2=4[/math]<br\>
 Then the area of the circle is [math]\pi[/math]<br\>
 [math]\pi[/math] is approximated to be [math]4\times c=4 \times 0.7839=3.1356[/math] in the above example <br\>
>> 4*size(x,2)/10000
ans =
    3.1356
>> [x]=Unitball(3,10000);
>> scatter3(x(1,:),x(2,:),x(3,:))   %plot 3d ball
>> axis square
>> size(x,2)/10000    %size(x,2) returns the number of columns of x
ans =
    0.5231
>> [x]=Unitball(5,10000);
>> size(x,2)/10000
ans =
    0.1648
3d unit ball
Note that c increases exponentially as d increases, so more and more points are rejected. This method is therefore not efficient for large values of d.
In practice, when we need to vectorize a high-quality image or genetic data, d can be very large, so the A/R method is not an efficient way to solve such problems.
Efficiency
In the above example, the efficiency of the vector A/R is equal to the ratio
[math]\frac{1}{c}=\frac{\text{volume of hyperball}}{\text{volume of hypercube}}=\frac{g(x)}{g(x)\,c} [/math], where [math]c = \max \frac{f(x)}{g(x)}[/math]
In general, the efficiency can be thought of as the total number of points accepted divided by the total number of points generated.
As the dimension increase, the efficiency of the algorithm will decrease exponentially.
For example, for approximating value of [math]\pi[/math], when [math]d \text{(dimension)} =2[/math], the efficiency is around 0.7869; when [math]d=3[/math], the efficiency is around 0.5244; when [math]d=10[/math], the efficiency is around 0.0026: it is getting close to 0.
Thus, when we want to generate high dimensional vectors, the Acceptance-Rejection Method is not efficient.
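The efficiency decay described above can be checked numerically; a minimal Python sketch (sample size and seed are arbitrary choices):

```python
import numpy as np

# Acceptance rate of the vector A/R method (unit ball inside the cube
# [-1,1]^d) for a few dimensions, plus the resulting estimate of pi in d=2.
rng = np.random.default_rng(1)
n = 100000

def acceptance_rate(d):
    x = 1 - 2 * rng.random((d, n))     # uniform points in [-1,1]^d
    inside = (x**2).sum(axis=0) <= 1   # which points fall in the unit ball
    return inside.mean()

c2 = acceptance_rate(2)
print(4 * c2)                          # estimate of pi (ball/cube ratio * 4)
for d in (3, 10):
    print(d, acceptance_rate(d))       # efficiency drops quickly with d
```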
The end of midterm coverage
Stochastic Process
The basic idea of a Stochastic Process (also called a random process) is a collection of random variables [math]\big\{X_t:t\in T\big\}[/math], where the set of values each variable can take is called the state space and T is called the index set.
Definition: In probability theory, a stochastic process /stoʊˈkæstɪk/, or sometimes random process (widely used) is a collection of random variables; this is often used to represent the evolution of some random value, or system, over time. This is the probabilistic counterpart to a deterministic process (or deterministic system). Instead of describing a process which can only evolve in one way (as in the case, for example, of solutions of an ordinary differential equation), in a stochastic or random process there is some indeterminacy: even if the initial condition (or starting point) is known, there are several (often infinitely many) directions in which the process may evolve. (from Wikipedia)
A stochastic process is nondeterministic. This means that there is some indeterminacy in the final state, even if the initial condition is known.
We can illustrate this with an example of speech: if "I" is the first word in a sentence, the set of words that could follow would be limited (eg. like, want, am), and the same happens for the third word and so on. The words then have some probabilities among them such that each of them is a random variable, and the sentence would be a collection of random variables.
Different stochastic processes have different properties. In this course, we study two stochastic process models:
1. Poisson Process - This is a continuous time counting process that satisfies a couple of properties listed in the next section. The Poisson process is understood to be a good model for events such as incoming phone calls, number of traffic accidents, and goals during a game of hockey or soccer. It is also an example of a birth-death process.
2. Markov Process - This is a stochastic process that satisfies the Markov property, which can be understood as the memoryless property. The property states that the jump to a future state only depends on the current state of the process, and not on the process's history. This model is used to model random walks exhibited by particles, the health state of a life insurance policyholder, decision making by a memoryless mouse in a maze, etc.
In a stochastic process, even if we know the initial conditions, we can only describe subsequent variables probabilistically; the eventual outcome is unpredictable.
Example
A stochastic process always has a state space and an index set that limit its range.
The state space is the set of English words, and [math]x_t[/math] are words that are said.
Another example involves the stock market: the set of all nonnegative numbers is the state space, and [math]x_t[/math] are stock prices.
The state space is the set of car models, and [math]x_t[/math] is the car model observed at time t.
Poisson Process
The Poisson process is a counting process that arises when we count the number of occurrences of events over time; its state space is discrete, while time is continuous.
e.g. traffic accidents, arrival of emails. Emails arrive at random times [math]T_1, T_2[/math], ...
Let [math]N_t[/math] denote the number of arrivals in the time interval [math](0,t][/math]<br\> Let [math]N(a,b][/math] denote the number of arrivals in the time interval (a,b]<br\> By definition, [math]N(a,b]=N_b-N_a[/math]<br\> The two random variables [math]N(a,b][/math] and [math]N(c,d][/math] are independent if [math](a,b][/math] and [math](c,d][/math] do not intersect.<br\>
Definition: An arrival counting process [math]N=(N_t)[/math] is called (Homogeneous) Poisson Process (PP) with rate [math]\lambda \gt 0[/math] if
A. The numbers of points in nonoverlapping intervals are independent.
B. The number of points in the interval [math](a,b][/math] has a Poisson distribution with mean [math]\lambda (b-a)[/math], where (b-a) represents the length of the interval.
In particular, observe that if [math]N=(N_t)[/math] is a Poisson process of rate [math]\lambda\gt 0[/math], then the moments are [math]E[N_t] = \lambda t[/math] and [math]Var[N_t] = \lambda t[/math].
How to generate a multivariate normal with the built-in function "randn" (example):
(please check the general idea at the end of lecture 6 course note.)
s=[1 0.7;0.7 1];
z=randn(length(mu),n);   %Here length(mu)=2 since s is a 2*2 matrix
A=chol(s);               %chol only works for a positive definite s; otherwise we need another decomposition
x=mu'*ones(1,n) + A'*z;  %ones(1,n) expands mu from a 2*1 vector to a 2*n matrix
                         %note the transpose: chol returns upper triangular A with A'*A = s
and if we want to use boxmuller to generate a multivariate normal, we could use the code in lecture 6:
d = length(mu);
R = chol(Sigma);
U1=rand(n,d);
U2=rand(n,d);
D=-2*log(U1);
tet=2*pi*U2;
Z=sqrt(D).*cos(tet);   %since x and y in lecture 6 are independent normals, we can use either cos or sin
X = Z*R + ones(n,1)*mu';
Central Limit Theorem
We can use simulation to test results in probability and statistics. For example, with the central limit theorem, if we sample from a sufficiently large number of independently distributed random variables, the mean will be approximately normally distributed. We illustrate with an example using 1000 observations each of 20 independent exponential random variables.
Definition:
Given certain conditions, the mean of a sufficiently large number of independent random variables, each with a well defined mean and variance, will be approximately normally distributed.
>> X = exprnd(20,20,1000);   % 20*1000 matrix: 1000 instances of 20 exponential random numbers with mean 20
>> hist(X(1,:))
>> hist(sum(X(1:2,:)))
...
>> hist(sum(X(1:20,:)))      % approaches the normal distribution
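The same experiment can be sketched in Python (NumPy's `exponential` takes the mean as `scale`; the seed and sizes are arbitrary): the mean of 20 i.i.d. exponential(mean 20) variables should be approximately normal with mean 20 and variance 20^2/20 = 20.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.exponential(scale=20, size=(20, 1000))  # 1000 samples of 20 exponentials
means = X.mean(axis=0)                          # one sample mean per column

print(means.mean())   # close to 20
print(means.var())    # close to 20^2/20 = 20
```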
Class 11  Tuesday，June 11, 2013
Poisson Process
A discrete stochastic variable X is said to have a Poisson distribution with parameter λ > 0 if
 [math]\!f(n)= \frac{\lambda^n e^{-\lambda}}{n!} \qquad n= 0,1,\ldots,[/math].
Properties of Homogeneous Poisson Process
(a) Independence: The numbers of arrivals in nonoverlapping intervals are independent
(b) Homogeneity or Uniformity: The number of arrivals in each interval (a,b] has a Poisson distribution with mean [math]\lambda (b-a)[/math]
(c) Individuality: for a sufficiently short time period of length h, the probability of 2 or more events occurring in the interval is close to 0
Notation
N_{t} denotes the number of arrivals up to t, i.e. in (0,t]
N(a,b] = N_{b} - N_{a} denotes the number of arrivals in (a, b].
For a small interval (t,t+h], where h is small
1. The number of arrivals in this interval is independent of the number of arrivals up to t (N_{t})
2. [math] P (N(t,t+h]=1\mid N_{t} ) = P (N(t,t+h]=1) =\frac{e^{-\lambda h} (\lambda h)^1}{1!} =e^{-\lambda h} \lambda h \approx \lambda h [/math] since [math]e^{-\lambda h} \approx 1[/math] when h is small.
[math]\lambda h[/math] can be thought of as the probability of observing an arrival in the interval t to t+h.
Similarly, the probability of not observing an arrival in this interval is approximately [math]1-\lambda h[/math].
 Note: Recall that an exponential random variable is the waiting time until one event of interest occurs.
Review of Poisson  Example
Let X be the r.v of the number of accidents in an hour, following the Poisson distribution with mean 1.8.
[math]P(X=0)=e^{-1.8} [/math]
[math]P(X=4)=\frac {e^{-1.8}(1.8)^4}{4!} [/math]
[math]P(X\geq1) = 1 - P(X=0) = 1- e^{-1.8}[/math]
[math]P(N_3 \gt 3 \mid N_2 = 1)=P(N_1 \gt 2)[/math]
When we use the inverse transform method, the interarrival times of a Poisson process are exponentially distributed, so we can generate them via the inverse of the exponential CDF; the interval h is assumed to be very small.
Generating a Homogeneous Poisson Process
Suppose we want to generate the first n events of a Poisson process with rate [math]{\lambda}[/math]. We showed that the times between successive events are independent exponential random variables, each with rate [math]{\lambda}[/math]. Therefore, we can first generate n random numbers U_{1}, U_{2},...,U_{n} and use the inverse transform method to find the corresponding exponential variables X_{1}, X_{2},...,X_{n}. Then X_{i} can be interpreted as the time between the (i-1)st and the i-th event of the process.
The actual time of the jth event is the sum of the first j interarrival times. Therefore, the generated values of the first n event times are [math]\sum_{i=1}^{j} X_i[/math], j = 1...n.
Homogeneous poisson process refers to the rate of occurrences remaining constant for all periods of time.
U_{n}~U(0,1)
[math]T_n-T_{n-1}=-\frac {1}{\lambda} \log(U_n)[/math]
The waiting time between each occurrence follows the exponential distribution with parameter [math]\lambda [/math]. [math]T_n[/math] represents the time elapsed at the n^{th} occurrence.
1) Set T_{0} = 0 ,and n = 1
2) U_{n} follow U(0,1)
3) T_{n} = T_{n-1} - [math] \frac {1}{\lambda} [/math] log (U_{n}) (Declare an arrival)
4) if T_{n} >T stop;
else n = n + 1, go to step 2
Here h is a small interval, and by uniformity we assume every point in this range is equally likely (because h is small); we test whether T_{n} is smaller than T. The quantity [math]-\frac {1}{\lambda} \log(U_n)[/math] represents the waiting time until the next arrival.
Higher Dimensions:
The Poisson distribution arises as the distribution of counts of occurrences of events in (multidimensional) intervals of a multidimensional Poisson process, in a way directly equivalent to the one-dimensional case. Thus, if D is any region of the multidimensional space whose area or volume |D| is finite, and N(D) is the count of the number of events in D, then
[math] P(N(D)=k)=\frac{(\lambda |D|)^k e^{-\lambda |D|}}{k!} .[/math]
To sample from higher dimensional Poisson process:
1. Generate a random number N that is Poisson distributed with parameter [math]{\lambda}A_d[/math], where A_{d} is the area (or volume) of the bounded region (i.e. A_{2} is the area of the region, A_{3} is the volume of the 3d region).
2. Given N=n, generate n random (uniform) points in the region.
At the end, it generates n (cumulative) arrival times, up to time TT.
MatLab Code
T(1)=0;   % MATLAB does not allow 0 as an index, so we start with T(1)
ii=1;
l=2;
TT=5;
while T(ii)<=TT
    u=rand;
    ii=ii+1;
    T(ii)=T(ii-1) - (1/l)*log(u);
end
plot(T)
The following plot is using TT = 50.
The number of points generated every time on average should be [math]\lambda[/math] * TT.
The maximum value of the points should be TT.
When TT is big, the plot of the graph looks linear; when TT is small, say 5, the plot looks like a discrete step pattern.
Markov chain
A Markov Chain is a stochastic process where:
1) Each stage has a fixed number of states,
2) the (conditional) probabilities at each stage only depend on the previous state.
Source: "http://math.scu.edu/~cirving/m6_chapter8_notes.pdf"
A Markov Chain is said to be irreducible if for each pair of states i and j there is a positive probability, starting in state i, that the process will ever enter state j.
A Markov Chain is a simplifying assumption. For instance, a student's GPA in university depends on their GPAs in high school, middle school, elementary school, etc., but in a job interview after graduation, the interviewer will most likely ask only about the university GPA, assuming that what happened before is summarized and adequately represented by it; it is not necessary to look back to elementary school. A Markov Chain works the same way: we assume that everything that occurred earlier in the process is only important for determining where we are now, and the future depends only on the present, not on how we got here. So the [math]n^{th}[/math] random variable depends only on the [math](n-1)^{th}[/math] term and not on all the previous ones. A Markov process exhibits the memoryless property.
Examples of Markov Chain applications in various fields:
 Physics: The movement of a particle (memoryless random walk)
 Finance: The volatility of prices of financial securities or commodities
 Actuarial Science: The pricing and valuation of multiplestate and multiplelives insurance and annuity contracts
 Technology: The Google link analysis algorithm "PageRank"
Product Rule (Stochastic Process):
[math]f(x_1,x_2,...,x_n)=f(x_1)f(x_2\mid x_1)f(x_3\mid x_2,x_1)...f(x_n\mid x_{n-1},x_{n-2},....)[/math]
In Markov Chain
[math] f(x_1,x_2,...,x_n)=f(x_1)f(x_2\mid x_1)f(x_3\mid x_2)...f(x_n\mid x_{n-1}) [/math]
Concept: The current state of a subject depends on the past only through the previous state. In other words, if an event occurring tomorrow follows a Markov process, it depends only on today; yesterday (the past) is negligible, since its information is believed to have been captured and reflected in the current state.
A Markov Chain is a Stochastic Process for which the distribution of [math]x_t[/math] depends only on [math]x_{t1}[/math].
Given [math] x_t[/math], [math]x_{t-1}[/math] and [math]x_{t+1} [/math] are independent. The process of getting [math] x_n [/math] is drawn as follows. The distribution of [math]x_n[/math] only depends on the value of [math]x_{n-1}[/math].
[math] x_1 \rightarrow x_2\rightarrow...\rightarrow x_n[/math]
Formal Definition:
The process [math] \{x_n: n \in T\} [/math] is a markov chain if:
[math] Pr(x_n\mid x_{n-1},\ldots,x_1) = Pr(x_n\mid x_{n-1}) \ \ \forall n\in T [/math] and [math] \forall x\in X[/math]
CONTINUOUS TIME MARKOV PROCESS
A continuous time Markov process is one where the time spent in each state is not discrete, but can take on positive real values; in other words, the index set is the positive real numbers. If the process is homogeneous, then the time spent in a state has an exponential distribution. In this case we have a transition rate matrix that captures the rate at which we move between two states. An example is the homogeneous Poisson process.
Transition Matrix
Transition Probability: [math] P_{ij} = P(X_{t+1} =j \mid X_t =i) [/math] is the one-step transition probability from state i to state j.
Transition Probability: [math] P_{ij}(k) = P(X_{t+k} =j \mid X_t =i) [/math] is the k-step transition probability from state i to state j.
The matrix P whose elements are transition Probabilities [math] P_{ij} [/math] is a onestep transition matrix.
Example:
[math]\begin{align}
P_{ab} &=P(X_{t+1} =b \mid X_{t} =a) = 0.3 \\
P_{aa} &=P(X_{t+1} =a \mid X_{t} =a) = 0.7 \\
P_{ba} &=P(X_{t+1} =a \mid X_{t} =b) = 0.2 \\
P_{bb} &=P(X_{t+1} =b \mid X_{t} =b) = 0.8 \\
\end{align}[/math]
[math] P= \left [ \begin{matrix} 0.7 & 0.3 \\ 0.2 & 0.8 \end{matrix} \right] [/math]
The above matrix can be drawn into a state transition diagram
Properties of Transition Matrix:
1. [math] 1 \geq P_{ij} \geq 0 [/math]
2. [math]\sum_{j}^{}{P_{ij}=1}[/math] which means the rows of P should sum to 1.
In general, one would consider a (finite) set of elements [math] \Omega [/math] such that:
[math] \forall x \in \Omega [/math], the probability of the next state is given according to the distribution [math] P(x,\cdot) [/math]
This means our model can be simulated as a sequence of random variables [math] (X_0, X_1, X_2, \ldots ) [/math] with state space [math] \Omega [/math] and transition matrix [math] P = [P_{ij}] [/math] where [math] \forall t \in \N, 0 \leq s \leq t+1, x_s \in \Omega, [/math]
we have to following property (Markov property):
[math] P(X_{t+1}= x_{t+1} \vert \cap^{t}_{s=0} X_s = x_s) = P(X_{t+1} =x_{t+1} \vert X_t =x_t) = P(x_t,x_{t+1}) [/math]
And [math] \forall x \in \Omega, \sum_{y\in\Omega} P(x,y) =1; \; \forall x,y\in\Omega, P_{xy} = P(x,y) \geq 0 [/math]
Moreover, if [math] \forall x,y \in \Omega, \exists k, P^k (x,y) \gt 0 [/math] and [math] |\Omega| \lt \infty [/math] (i.e. any two states can be reached from one another in some number of steps),
then one might consider the periodicity of the chain and derive a notion of cyclic behavior.
Examples of Transition Matrix
1.There are four states: 0,1,2, and 3.
Each row adds up to 1, and all the entries are between 0 and 1(since the transition probability is always between 0 and 1).
This matrix means that
 at state 0, can only go to state 1, since probability is 1.
 at state 1, can go to state 0 with a probability of 1/3 and state 2 with a probability of 2/3
 at state 2, can go to state 1 with a probability of 2/3 and state 3 with a probability of 1/3
 at state 3, can only go to state 2, since probability is 1.
2. Consider a Markov chain with state space {0, 1} and transition matrix [math] P= \left [ \begin{matrix} 1/3 & 2/3 \\ 3/4 & 1/4 \end{matrix} \right] [/math]. Assuming that the chain starts in state 0 at time n = 0, what is the probability that it is in state 1 at time n = 2?
[math]\begin{align}
P(X_{2}=1\mid X_{0}=0) &= P(X_{1}=0,X_{2}=1\mid X_{0}=0)+P(X_{1}=1,X_{2}=1\mid X_{0}=0) \\
&= P(X_{1}=0\mid X_{0}=0)\,P(X_{2}=1\mid X_{1}=0)+P(X_{1}=1\mid X_{0}=0)\,P(X_{2}=1\mid X_{1}=1) \\
&= \frac{1}{3}\cdot\frac{2}{3}+\frac{2}{3}\cdot\frac{1}{4} = \frac{7}{18}
\end{align}[/math]
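The two-step probability can also be read off the (0,1) entry of [math]P^2[/math]; a minimal Python sketch:

```python
import numpy as np

# Transition matrix from the example above; P^2 gives two-step probabilities.
P = np.array([[1/3, 2/3],
              [3/4, 1/4]])
P2 = P @ P
print(P2[0, 1])   # P(X_2 = 1 | X_0 = 0) = 7/18
```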
Midterm Review
Multiplicative Congruential Algorithm
x_{k+1}= (ax_{k}+c) mod m
Where a, c, m and x_{1} (the seed) are values we must choose before running the algorithm. While there is no single best choice for each, it is best for m to be large and prime.
Examples:
X_{0} = 10, a = 2, c = 1, m = 13
X_{1} = (2 * 10 + 1) mod 13 = 8
X_{2} = (2 * 8 + 1) mod 13 = 4
... and so on
X_{0} = 44, a = 13, c = 17, m = 211
X_{1} = (13 * 44 + 17) mod 211 = 167
X_{2} = (13 * 167 + 17) mod 211 = 78
X_{3} = (13 * 78 + 17) mod 211 = 187
... and so on
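A minimal Python sketch of the congruential generator, reproducing both sequences above:

```python
# x_{k+1} = (a*x_k + c) mod m, starting from seed x_0.
def lcg(x, a, c, m, n):
    seq = []
    for _ in range(n):
        x = (a * x + c) % m
        seq.append(x)
    return seq

print(lcg(10, 2, 1, 13, 2))     # [8, 4]
print(lcg(44, 13, 17, 211, 3))  # [167, 78, 187]
```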
Inverse Transformation Method
For continuous cases:
1. U~U(0,1)
2. X=F^{-1}(u)
Note:
For the Uniform(0,1) distribution, [math]P(U\leq a)=a [/math]
proof: [math]P(X\lt =x) = P(F^{-1}(u)\lt =x)=P(u\lt =F(x)) = F(x)[/math]
For discrete cases:
1. U~U(0,1)
2. x=x_{i} if F(x_{i-1})[math]\lt [/math]u[math]\leq[/math]F(x_{i})
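The discrete step above can be sketched in Python; the example distribution on {1, 2, 3} is an illustrative assumption:

```python
import random

# Inverse transform for a discrete distribution: return x_i when
# F(x_{i-1}) < u <= F(x_i), where F is the cumulative sum of the probs.
def discrete_inverse_transform(values, probs):
    u = random.random()
    F = 0.0
    for x, p in zip(values, probs):
        F += p
        if u <= F:
            return x
    return values[-1]   # guard against floating-point round-off

random.seed(0)
samples = [discrete_inverse_transform([1, 2, 3], [0.2, 0.5, 0.3])
           for _ in range(100000)]
print(samples.count(2) / len(samples))   # close to 0.5
```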
Acceptance-Rejection Method
cg(x)>=f(x)
[math]c=max\frac{f(x)}{g(x)}[/math]
[math]\frac{1}{c}[/math] is the efficiency of the method/probability of acceptance
1. Y~g(x)
2. U~U(0,1)
3. If [math]U\lt =\frac{f(y)}{c*g(y)}[/math] then X=Y
else go to step 1
Proof that this method generates the desired distribution: [math]P(y\mid \text{accepted})=\frac{P(\text{accepted}\mid y)P(y)}{P(\text{accepted})}=\frac{\frac{f(y)}{cg(y)} g(y)}{\int_{y}^{ } \frac{f(y)}{cg(y)} g(y) dy}=\frac{\frac{f(y)}{c}}{\frac{1}{c}\cdot 1}=f(y)[/math]
Multivariate
f(x_{1},...,x_{n})=f(x_{1})f(x_{2}|x_{1})...f(x_{n}|x_{n-1},...,x_{1}); in general we need knowledge of the conditional distributions. If x_{1},...,x_{n} are independent, then f(x_{1},...,x_{n})=f(x_{1})f(x_{2})...f(x_{n}).
Vector A/R Method
This method is not efficient for high dimensional vectors
 Sample uniformly from a space W that contains the sample space G of interest
 Accept the point if it is inside G
g(x)=[math]\frac{1}{A_W}[/math], where A_{W} is the area of W.
f(x)=[math]\frac{1}{A_G}[/math], where A_{G} is the area of G.
[math]\frac{A_G}{A_W}[/math] = [math]\frac{1}{c}[/math] is the efficiency of the algorithm, i.e. the number of points accepted over the total number of points generated. This acceptance rate drops greatly for each dimension added.
Note that [math]\frac{A_G}{A_W}[/math] decreases exponentially as the dimension goes up, so the method is not efficient in high dimensions.
Common distribution
Exponential
Models the waiting time until the first arrival.
X~Exp[math](\lambda) [/math]
[math] f (x) = \lambda e^{-\lambda x}[/math] , [math]x\gt 0 [/math]
1. U~Unif(0,1)
2. The inverse transform gives x = [math]-\frac{1}{\lambda} \log(U)[/math]
Normal
BoxMuller method
1.[math]U_{1},U_{2}\sim~ U(0,1)[/math]
2.[math]R^{2}=-2\log(U_{1}), R^{2}\sim~ Exp(1/2)[/math]
[math]\theta = 2\pi U_{2},\theta\sim~ U(0,2\pi) [/math]
3.[math]X=Rcos(\theta), Y=Rsin(\theta), X,Y\sim~ N(0,1)[/math]
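A minimal Python sketch of the Box-Muller steps above (seed and sample size are arbitrary):

```python
import math
import random

# R^2 ~ Exp(1/2) via R = sqrt(-2 log U1), theta ~ U(0, 2*pi);
# then X = R*cos(theta) and Y = R*sin(theta) are independent N(0,1).
def box_muller():
    u1, u2 = random.random(), random.random()
    r = math.sqrt(-2 * math.log(u1))
    theta = 2 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

random.seed(1)
xs = [box_muller()[0] for _ in range(100000)]
mean = sum(xs) / len(xs)
var = sum(x * x for x in xs) / len(xs) - mean**2
print(mean, var)   # close to 0 and 1
```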
To obtain any normal distribution X once a set of standard normal samples Z is generated, apply the following transformations:
[math]Z\sim N(0,1)\rightarrow X \sim N(\mu,\sigma ^{2})[/math]
[math]X=\mu+\sigma Z[/math]
In the multivariate case,
[math]\underline{Z}\sim N(\underline{0},I)\rightarrow \underline{X} \sim N(\underline{\mu},\Sigma)[/math]
[math]\underline{X} = \underline{\mu} +\Sigma ^{1/2} \underline{Z}[/math]
Gamma
Gamma(t,λ)
t: The number of exponentials and the shape parameter
λ: The mean of the exponentials and the scale parameter
Also, Gamma(t,λ) can be expressed as a sum of t independent Exp(λ) random variables.
 [math]x=-\frac {1}{\lambda}\log(u_1)-\frac {1}{\lambda}\log(u_2)-\cdots-\frac {1}{\lambda}\log(u_t)[/math]
 [math]=-\frac {1}{\lambda}[\log(u_1)+\log(u_2)+\cdots+\log(u_t)][/math]
 [math]=-\frac {1}{\lambda}\log(\prod_{j=1}^{t} u_j)[/math]
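A minimal Python sketch of sampling Gamma(t, λ) as a sum of t exponentials, following the formula above (t = 5, λ = 1 are illustrative choices):

```python
import math
import random

# x = -(1/lambda) * log(u_1 * ... * u_t) is a Gamma(t, lambda) sample.
def gamma_sample(t, lam):
    prod = 1.0
    for _ in range(t):
        prod *= random.random()
    return -math.log(prod) / lam

random.seed(2)
samples = [gamma_sample(5, 1.0) for _ in range(100000)]
print(sum(samples) / len(samples))   # mean of Gamma(5,1) is t/lambda = 5
```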
Bernoulli
A Bernoulli random variable can only take two possible values: 0 and 1. 1 represents "success" and 0 represents "failure." If p is the probability of success, we have pdf
[math] f(x)= p^x (1p)^{1x}, x=0,1 [/math]
To generate a Bernoulli random variable we use the following procedure:
sample u~U(0,1)
if u <= p, then x=1
else x=0
where 1 stands for success and 0 stands for failure.
Binomial
The sum of n independent Bernoulli trials
<br\>
X~ Bin(n,p)
1. U1, U2, ... Un ~ U(0,1)
2. [math] X= \sum^{n}_{1} I(U_i \leq p) [/math] ,where [math]I(U_i \leq p)[/math] is an indicator for a successful trial.
Return to 1
I is an indicator variable: if U <= p then I(U<=p)=1; otherwise I(U>p)=0.
Repeat this N times if you need N samples.
The theory behind the algorithm is the fact that the sum of n independent and identically distributed Bernoulli trials, Ber(p), follows a binomial Bin(n,p) distribution.
Example:
Suppose we roll a die, where success = landing on 5 and failure = otherwise.
p=1/6, 1-p=5/6, rolling 10 times, n=10
simulate this binomial distribution.
1) Generate [math] U_1....U_{10} [/math] ~ [math] U(0,1) [/math]
2) [math] X= \sum^{10}_{1} I(U_i \leq \frac{1}{6}) [/math]
3)Return to one.
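The indicator-sum algorithm for the die example can be sketched in Python (an illustrative sketch; names are ours):

```python
import random

def binomial_sample(n, p, rng):
    """One Bin(n, p) draw: X = number of U_i <= p among n uniforms."""
    return sum(1 for _ in range(n) if rng.random() <= p)

rng = random.Random(0)
# die example from the notes: success = rolling a 5, p = 1/6, n = 10 rolls
xs = [binomial_sample(10, 1/6, rng) for _ in range(50000)]
x_mean = sum(xs) / len(xs)   # should be near n*p = 10/6
```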
Beta Distribution
[math]\displaystyle \text{Beta}(\alpha,\beta): f(x;\alpha,\beta) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}x^{\alpha-1}(1-x)^{\beta-1} [/math] where [math]0 \leq x \leq 1[/math] and [math]\alpha\gt 0[/math], [math]\beta\gt 0[/math]
and
[math]f(x;\alpha,\beta)= 0 [/math] otherwise
 [math]\displaystyle \text{Beta}(1,1) = U (0, 1) [/math]
generate uniform directly
 [math]\displaystyle \text{Beta}(\alpha,1): {f}(x) = \frac{\Gamma(\alpha+1)}{\Gamma(\alpha)\Gamma(1)}x^{\alpha-1}(1-x)^{1-1}=\alpha x^{\alpha-1}[/math]
use inverse method to generate
Algorithm<br\>
 1. Sample from Y1 ~ Gamma ([math]\alpha[/math],1) where [math]\alpha[/math] is the shape, and 1 is the scale.<br\>
 2. Sample from Y2 ~ Gamma ([math]\beta[/math],1) where [math]\beta[/math] is the shape, and 1 is the scale.<br\>
 3. Set [math] Y = \frac{Y_1}{Y_1+Y_2}. [/math] Then Y ~ [math]\text{Beta}( \alpha , \beta )[/math]; here we suppose [math]\alpha , \beta[/math] are integers.
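The three steps above can be sketched in Python, using the sum-of-exponentials Gamma generator for integer shapes (an illustrative sketch; function names are ours):

```python
import math
import random

def gamma_int(shape, rng):
    """Gamma(shape, 1) for integer shape: a sum of shape iid Exp(1) draws."""
    return sum(-math.log(1.0 - rng.random()) for _ in range(shape))

def beta_sample(a, b, rng):
    """Beta(a, b) via Y = Y1 / (Y1 + Y2) with Y1 ~ Gamma(a,1), Y2 ~ Gamma(b,1)."""
    y1, y2 = gamma_int(a, rng), gamma_int(b, rng)
    return y1 / (y1 + y2)

rng = random.Random(0)
ys = [beta_sample(2, 3, rng) for _ in range(50000)]
y_mean = sum(ys) / len(ys)   # should be near a/(a+b) = 0.4
```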
Geometric
This distribution models the number of trials up to and including the first success.
X~Geo(p)
[math]f(x)=p(1-p)^{x-1}, x=1, 2, \ldots[/math]
If Y~Exp[math](\lambda)[/math] where [math]\lambda=-\log(1-p)[/math]
then [math]X=\lfloor Y \rfloor +1[/math] is Geo(p)
Proof:
[math]F(x)=P(X\leq x)[/math]
[math] =1-P(X\gt x)[/math]
[math] =1-P(\lfloor Y \rfloor +1\gt x)[/math] since [math] \lfloor Y \rfloor +1\gt x[/math] is equivalent to [math] Y\geq x [/math] for integer x
[math] =1-P(Y\geq x)[/math]
[math] =1-e^{-\lambda x}[/math]
[math] =1-(1-p)^x[/math], which is the CDF of Geo(p)
The above method can also be viewed with the inverse method since it is not rejecting any points. The following uses inverse method to generate Geo(p).
[math]F(x)=P(X \leq x)[/math]
[math]F(x)=1-P(X\gt x)[/math]
[math]P(X \leq x)=1-(1-p)^x[/math] since [math]P(X\gt x)=(1-p)^x[/math]
[math]y=1-(1-p)^x[/math]
[math]y-1=-(1-p)^x[/math]
[math]1-y=(1-p)^x[/math]
[math]1-y=(e^{-\lambda})^x=e^{-\lambda x}[/math] since [math]1-p=e^{-\lambda}[/math]
[math]\log(1-y)=-\lambda x[/math]
[math]x=-\frac{1}{\lambda}\log(1-y)[/math]
[math]F^{-1}(x)=-\frac{1}{\lambda}\log(1-x)[/math]
Algorithm:
1. Generate U ~ U(0,1)
2. Generate [math]Y=-\frac{1}{\lambda}\log(1-U)[/math] where [math]Y \sim Exp(\lambda) [/math]
3. Set [math]X=\lfloor Y \rfloor+1[/math]
Note:
If X~Unif(0,1), Y = floor(3X) = [math]\lfloor 3X \rfloor \Rightarrow[/math] Y ~ DU[0,2] (DU means discrete uniform)
If U~Unif(0,1), Y = floor(5U)-2 = [math]\lfloor 5U \rfloor - 2 \Rightarrow[/math] Y ~ DU[-2,2]
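The exponential-plus-floor construction for Geo(p) can be sketched in Python (an illustrative sketch; names are ours):

```python
import math
import random

def geometric_sample(p, rng):
    """Geo(p) via Y ~ Exp(lam) with lam = -log(1-p), then X = floor(Y) + 1."""
    lam = -math.log(1.0 - p)
    y = -math.log(1.0 - rng.random()) / lam
    return math.floor(y) + 1

rng = random.Random(0)
xs = [geometric_sample(1/6, rng) for _ in range(50000)]
x_mean = sum(xs) / len(xs)   # should be near 1/p = 6
```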
Poisson
This distribution models the number of times an event occurs in a given time period.
X~Poi[math](\lambda)[/math]
X is the maximum number of iid Exp([math]\lambda[/math]) whose sum is less than or equal to 1.
[math]X = \max\{n: \sum\limits_{i=1}^n Y_i \leq 1, Y_i \sim Exp(\lambda)\}[/math]
[math] = \max\{n: \sum\limits_{i=1}^n -\frac{1}{\lambda} \log(U_i)\leq 1 , U_i \sim U[0,1]\}[/math]
[math] = \max\{n: \prod\limits_{i=1}^n U_i \geq e^{-\lambda}, U_i \sim U[0,1]\}[/math]
Algorithm<br\>
 1. Set n=1, a=1<br\>
 2. Generate [math]U_n \sim U[0,1][/math] and set [math]a=aU_n[/math]<br\>
 3. If [math]a \geq e^{-\lambda}[/math] then: n=n+1 and go to Step 2. Else set X=n-1.
An alternate way to write an algorithm for Poisson is as followings:
1) x = 0, p = P(X=0) = e^{-λ}, F = p
2) Generate U ~ U(0,1)
3) If U < F, then output x and stop
4) Else p = (λ/(x+1)) * p, F = F + p, x = x+1
5) Go to step 3
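The product-of-uniforms algorithm above can be sketched in Python (an illustrative sketch; names are ours):

```python
import math
import random

def poisson_sample(lam, rng):
    """Poi(lam): X = max{n : U_1 * ... * U_n >= e^(-lam)} (X = 0 allowed)."""
    threshold = math.exp(-lam)
    n, a = 0, 1.0
    while True:
        a *= rng.random()
        if a < threshold:     # product dropped below e^(-lam): stop
            return n
        n += 1

rng = random.Random(0)
xs = [poisson_sample(3.0, rng) for _ in range(50000)]
x_mean = sum(xs) / len(xs)   # should be near lam = 3
```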
Acknowledgments: from Spring 2012 stat 340 coursenotes
Class 13  Tuesday June 18th 2013
nstep transition matrix
Onestep transition probability:
The probability of X_{n+1} being in state j given that X_{n} is in state i is called the
one-step transition probability and is denoted by [math]P_{i,j}^{n,n+1}[/math]. That is
[math]P_{i,j}^{n,n+1} = Pr(X_{n+1}=j | X_n=i)[/math]
The two-step transition probability of moving from state a to state a:
[math]P_2 (a,a)=Pr (X_{m+2}=a | X_m=a)=Pr(X_{m+1}=a | X_m=a)Pr(X_{m+2}=a | X_{m+1}=a)+ Pr(X_{m+1}=b | X_m=a)Pr(X_{m+2}=a | X_{m+1}=b)[/math]
[math] =0.7(0.7)+0.3(0.2)=0.55 [/math]
An nstep transition matrix is a matrix [math] P_n [/math] whose elements are the probability of moving to state j from state i in n steps.
[math]P_n (i,j)=Pr(X_{m+n}=j | X_m=i)[/math] is the n-step transition probability.
In general [math]P_n = P^n[/math] with [math]P_n(i,j) \geq 0[/math] and [math]\sum_{j} P_n(i,j) = 1[/math]
Example from previous class:
[math] P= \left [ \begin{matrix} 0.7 & 0.3 \\ 0.2 & 0.8 \end{matrix} \right] [/math]
The two step transition probability matrix is:
[math] P P= \left [ \begin{matrix} 0.7 & 0.3 \\ 0.2 & 0.8 \end{matrix} \right] \left [ \begin{matrix} 0.7 & 0.3 \\ 0.2 & 0.8 \end{matrix} \right] [/math]=[math]\left [ \begin{matrix} 0.7(0.7)+0.3(0.2) & 0.7(0.3)+0.3(0.8) \\ 0.2(0.7)+0.8(0.2) & 0.2(0.3)+0.8(0.8) \end{matrix} \right] [/math]=[math]\left [ \begin{matrix} 0.55 & 0.45 \\ 0.3 & 0.7 \end{matrix} \right] [/math]<br\>
[math]P_2 = P_1 P_1 [/math]<br\>
[math]P_n = P_1^n [/math]<br\>
The equation above is a special case of the ChapmanKolmogorov equations.
It is true because of the Markov property or the memoryless property of Markov chains, where the probabilities of going forward to the next state
only depends on your current state, not your previous states. By intuition, we can multiply the 1step transition
matrix ntimes to get a nstep transition matrix.
Example:
We can see how [math]P_n = P^n[/math] from the following:
[math]\mu_1=\mu_0\cdot P[/math]
[math]\mu_2=\mu_1\cdot P[/math]
[math]\mu_3=\mu_2\cdot P[/math]
Therefore,
[math]\mu_3=\mu_0\cdot P^3[/math]
[math]P_n(i,j)[/math] is called the n-step transition probability.
[math]\mu_0 [/math] is called the initial distribution.
[math]\mu_n = \mu_0 P^n [/math]
Example:
Consider a twostate Markov chain {[math]X_t; t = 0, 1, 2,...[/math]} with states {1,2} and transition probability matrix
[math] P= \left [ \begin{matrix} 1/2 & 1/2 \\ 1/3 & 2/3 \end{matrix} \right] [/math]
Given [math] X_0 = 1 [/math]. Compute the following:
a)[math] P(X_1=1  X_0=1) = P(1,1) = 1/2 [/math]
b)[math] P(X_2=1, X_1=1 X_0=1) = P(X_2=1X_1=1)*P(X_1=1X_0=1)= 1/2 * 1/2 = 1/4 [/math]
c)[math] P(X_2=1X_0=1)= P_2(1,1) = 5/12 [/math]
d)[math] P^2=P*P= \left [ \begin{matrix} 5/12 & 7/12 \\ 7/18 & 11/18 \end{matrix} \right] [/math]
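The two-step computation in parts (c) and (d) can be checked in Python (an illustrative sketch using a small helper of our own):

```python
def matmul(a, b):
    """Multiply two square matrices stored as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[1/2, 1/2],
     [1/3, 2/3]]
P2 = matmul(P, P)   # two-step transition matrix
# P2[0][0] = P(X2 = 1 | X0 = 1) = 5/12
```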
Marginal Distribution of Markov Chain
We represent the probability of all states at time t with a vector [math]\underline{\mu_t}[/math]
[math]\underline{\mu_t}~=(\mu_t(1), \mu_t(2),...\mu_t(n))[/math] where [math]\mu_t(1)[/math] is the probability of being in state 1 at time t.
and in general, [math]\mu_t(i)[/math] is the probability of being in state i at time t.
For example, if there are two states a and b, then [math]\underline{\mu_5}[/math]=(0.1, 0.9) means that the chance of being in state a at time 5 is 0.1 and the chance of being in state b at time 5 is 0.9.
If we generate a chain for many times, the frequency of states at each time shows marginal distribution of the chain at that time.
The vector [math]\underline{\mu_0}[/math] is called the initial distribution.
[math] P_2~=P_1 P_1 [/math] (as verified above)
In general,
[math] P_n~=(P_1)^n [/math] **Note that [math]P_1[/math] is equal to the matrix P
[math]\mu_n~=\mu_0 P_n[/math]
where [math]\mu_0[/math] is the initial distribution,
and [math]\mu_{m+n}=\mu_m P_n[/math]
n can be negative, if P is invertible.
Example: simulating the chain four times (one run per column) gives the states at each time step:
 t   run1 run2 run3 run4
 0    a    a    b    a
 1    a    b    a    a
 2    b    a    a    b
 3    b    a    b    b
 4    a    a    a    b
 5    a    b    b    a
[math]\mu_4~=(3/4, 1/4)[/math] (at time 4, three of the four runs are in state a)
If we simulate a chain many times, the frequency of states at time t shows the marginal distribution at time t.
Marginal Distribution
[math]\mu_1~ = \mu_0P[/math]
[math]\mu_2~ = \mu_1P = \mu_0PP = \mu_0P^2[/math]
In general, [math]\mu_n~ = \mu_0P^n[/math]
Property: if [math]\mu_n[/math] keeps changing as n grows (i.e., it never settles to a fixed vector), then we say P does not converge.
Stationary Distribution
[math]\pi[/math] is stationary distribution of the chain if [math]\pi[/math]P = [math]\pi[/math]
where [math]\pi[/math] is a probability vector [math]\pi[/math]=([math]\pi[/math]_{i}  [math]i \in X[/math]) such that all the entries are nonnegative and sum to 1.
In other words, if X_{0} is drawn from [math]\pi[/math], then marginally, X_{n} is also drawn from the same distribution [math]\pi[/math] for every n≥0.
Comments:
As n gets bigger and bigger, [math]\mu_n[/math] may stop changing, so the quantity [math]\pi_i[/math] can also be interpreted as the limiting probability that the chain is in state [math]i[/math].
1. [math]\pi[/math] may not exist and even if it exists, it may not always be unique.
2. If [math]\pi[/math] exists and is unique, then [math]\pi[/math]_{i} is called the longrun proportion of the process in state i and the stationary distribution is also the limiting distribution of the process.
MatLab Code
In Matlab, you can find the stationary distribution by:
 >> p=[.7 .3;.2 .8]   % Input the matrix P
 p =
     0.7000    0.3000
     0.2000    0.8000
 >> p^2               % one state to another state by 2 steps transition
 ans =
     0.5500    0.4500
     0.3000    0.7000
 >> mu=[.9 .1]
 mu =
     0.9000    0.1000
 >> mu*p              % enter mu=mu*p, repeat multiple times until the vector mu remains unchanged
 ans =
     0.6500    0.3500
 >> mu=mu*p           % (after repeating the step several times)
 mu =
     0.4002    0.5998
 >> mu=mu*p           % the vector mu converges to the stationary distribution
 mu =
     0.4000    0.6000
 >> p^100             % the limiting distribution of the chain, which finally gives a stable matrix
 ans =
     0.4000    0.6000
     0.4000    0.6000
Alternate Method of Computing the Stationary Distribution
Recall that if [math]\lambda v = A v[/math], then [math]\lambda[/math] is the eigenvalue of [math]A[/math] corresponding to the eigenvector [math]v[/math]
By definition of stationary distribution, [math]\pi = \pi P[/math]
Taking the transpose, [math]\pi^T = (\pi P)^T [/math]
then [math]I \pi^T = P^T \pi^T \Rightarrow (P^T-I) \pi^T = 0 [/math]
So [math]\pi^T [/math] is an eigenvector of [math]P^T[/math] with corresponding eigenvalue 1.
This gives the transpose method for computing the stationary distribution: take the eigenvector of the transpose of the transition matrix corresponding to eigenvalue 1, and normalize it so that all elements are nonnegative and sum to one, satisfying the definition of a stationary distribution. The normalized vector is still an eigenvector, since scaling an eigenvector keeps it within the eigenspace; taking its transpose gives the stationary distribution.
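The transpose method can be sketched in Python with NumPy's eigendecomposition (an illustrative sketch; the example matrix is the two-state P from the earlier Matlab session):

```python
import numpy as np

# stationary distribution as the eigenvector of P^T for eigenvalue 1
P = np.array([[0.7, 0.3],
              [0.2, 0.8]])
vals, vecs = np.linalg.eig(P.T)
i = int(np.argmin(np.abs(vals - 1.0)))   # locate the eigenvalue closest to 1
pi = np.real(vecs[:, i])
pi = pi / pi.sum()                       # normalize so the entries sum to 1
# pi should be [0.4, 0.6] for this P
```

Dividing by the sum also fixes the arbitrary sign of the eigenvector, since a negated eigenvector has a negated sum.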
Generating Random Initial distribution
[math]\mu=rand(1,n)[/math]
[math]\mu=\mu/\sum_{i}\mu_i[/math]
Doubly Stochastic Matrices
We say that the transition matrix [math]\, P=(p_{ij})[/math] is doubly stochastic if both rows and columns sum to 1, i.e.,
[math]\, \sum_{i} p_{ij} = \sum_{j} p_{ij} = 1 [/math]
It is easy to show that the stationary distribution of an nxn doubly stochastic matrix P is:
[math] (\frac{1}{n}, \ldots , \frac{1}{n}) [/math]
Properties of Markov Chain
A Markov chain is a random process usually characterized as memoryless: the next state depends only on the current state and not on the sequence of events that preceded it. This specific kind of "memorylessness" is called the Markov property. Markov chains have many applications as statistical models of realworld processes.
1. Reducibility
State [math]j[/math] is said to be accessible from State [math]i[/math] (written [math]i \rightarrow j[/math]) if a system started in State [math]i[/math] has a nonzero probability of transitioning into State [math]j[/math] at some point. Formally, State [math]j[/math] is accessible from State [math]i[/math] if there exists an integer [math]n_{ij} \geq 0[/math] such that
[math]P(X_{n_{ij}} =j \vert X_0 =i) \gt 0[/math]
This integer is allowed to be different for each pair of states, hence the subscripts in [math]n_{ij}[/math]. By allowing n to be zero, every state is defined to be accessible from itself.
2. Periodicity
State [math]i[/math] has period [math]k[/math] if any return to State [math]i[/math] must occur in multiples of [math]k[/math] time steps. Formally, the period of a state is defined as
[math]k= \gcd\{n:P(X_n =i \vert X_0 =i)\gt 0\} [/math]
3. Recurrence
State [math]i[/math] is said to be transient if, given that we start in State [math]i[/math], there is a nonzero probability that we will never return to [math]i[/math]. Formally, let the random variable [math]T_i[/math] be the first return time to State [math]i[/math] (the "hitting time"):
[math]T_i = \min\{n \geq 1:X_n=i \vert X_0=i\}[/math]
Class 14  Thursday June 20th 2013
Example: Find the stationary distribution of [math] P= \left[ {\begin{array}{ccc} \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\[6pt] \frac{1}{4} & \frac{3}{4} & 0 \\[6pt] \frac{1}{2} & 0 & \frac{1}{2} \end{array} } \right][/math]
[math]\pi=\pi p[/math]
Solve the system of linear equations to find a stationary distribution
[math]\pi=(\frac{1}{3},\frac{4}{9}, \frac{2}{9})[/math]
[math]\lambda u=A u[/math]
[math]\pi[/math] can be considered as an eigenvector of P with eigenvalue = 1. But the vector u here needs to be a column vector. So we need to transform [math]\pi[/math] into a column vector.
[math]\pi^T = P^T \pi^T[/math]
Then [math]\pi^T[/math] is an eigenvector of [math]P^T[/math] with eigenvalue 1.
MatLab tips:[V D]=eig(A), where D is a diagonal matrix of eigenvalues and V is a matrix of eigenvectors of matrix A
Limiting distribution
A Markov chain has limiting distribution [math]\pi[/math] if
[math]\lim_{n\to \infty} P^n= \left[ {\begin{array}{ccc} \pi \\ \vdots \\ \pi \\ \end{array} } \right][/math]
That is, [math]\pi_j=\lim_{n\to \infty}[P^n]_{ij}[/math] exists and is independent of i.
A Markov Chain is convergent if and only if its limiting distribution exists.
If the limiting distribution [math]\pi[/math] exists, it must be equal to the stationary distribution.
In general, there are chains with stationary distributions that do not converge: they have a stationary distribution but no limiting distribution.
Example:
[math] P= \left [ \begin{matrix} \frac{4}{5} & \frac{1}{5} & 0 & 0 \\[6pt] \frac{1}{5} & \frac{4}{5} & 0 & 0 \\[6pt] 0 & 0 & \frac{4}{5} & \frac{1}{5} \\[6pt] 0 & 0 & \frac{1}{10} & \frac{9}{10} \\[6pt] \end{matrix} \right] [/math]
This chain has no limiting distribution: [math]P^n[/math] converges, but its rows are not identical, so the limit depends on the starting state.
Suppose the limiting distribution [math] \pi [/math] exists for the stochastic matrix P, so that [math] \pi = \pi P [/math].
WLOG assume [math]P^T[/math] is diagonalizable (if not, consider the Jordan form; the computation below is essentially the same).
Let [math] P^T = U \Sigma U^{-1} [/math] be the eigenvalue decomposition of [math] P^T [/math], where [math]\Sigma = diag(\lambda_1,\ldots,\lambda_n)[/math] with [math] \lambda_1 = 1 \gt |\lambda_2| \geq \ldots \geq |\lambda_n| [/math].
Write any initial distribution in the eigenvector basis: [math] \mu_0^T = \sum_i a_i u_i [/math], where [math] a_i \in \mathcal{R} [/math] and the [math] u_i [/math] are eigenvectors of [math] P^T [/math], with [math] u_1 [/math] the eigenvector for eigenvalue 1 (proportional to [math] \pi^T [/math]).
Then [math] \mu_k^T = (P^T)^k \mu_0^T = U \Sigma^k U^{-1} \mu_0^T = \sum_i a_i \lambda_i^k u_i [/math].
Since [math] |\lambda_i| \lt 1 [/math] for [math] i \geq 2 [/math], every term except the first vanishes, so [math] \lim_{k \rightarrow \infty} \mu_k^T = a_1 u_1 \propto \pi^T [/math].
MatLab Code
 >> P=[1/3, 1/3, 1/3; 1/4, 3/4, 0; 1/2, 0, 1/2]   % We input a matrix P. This is the same matrix as last class.
 P =
     0.3333    0.3333    0.3333
     0.2500    0.7500         0
     0.5000         0    0.5000
 >> P^2
 ans =
     0.3611    0.3611    0.2778
     0.2708    0.6458    0.0833
     0.4167    0.1667    0.4167
 >> P^3
 ans =
     0.3495    0.3912    0.2593
     0.2934    0.5747    0.1319
     0.3889    0.2639    0.3472
 >> P^10      % the rows are becoming nearly identical
 ans =
     0.3341    0.4419    0.2240
     0.3314    0.4507    0.2179
     0.3360    0.4358    0.2282
 >> P^100     % The stationary distribution is [0.3333 0.4444 0.2222] since the values keep unchanged.
 ans =
     0.3333    0.4444    0.2222
     0.3333    0.4444    0.2222
     0.3333    0.4444    0.2222
 >> [vec val]=eigs(P')   % We can find the eigenvalues and eigenvectors from the transpose of matrix P.
 vec =
     0.5571    0.2447    0.8121
     0.7428   -0.7969   -0.3324
     0.3714    0.5523   -0.4797
 val =
     1.0000         0         0
          0    0.6477         0
          0         0   -0.0643
 >> a=vec(:,1)   % The eigenvector can be multiplied by (-1) since λV=AV can be written as λ(-V)=A(-V)
 a =
     0.5571
     0.7428
     0.3714
 >> sum(a)
 ans =
     1.6713
 >> a/sum(a)     % normalize to get the stationary distribution
 ans =
     0.3333
     0.4444
     0.2222
That is, [math]\pi_j = \lim_{n\to\infty}[P^n]_{ij}[/math] exists and is independent of i.
Example: Find the stationary distribution of P= [math]\left[ {\begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{array} } \right][/math]
[math]\pi=\pi~P[/math]
[math]\pi=[\pi_0, \pi_1, \pi_2][/math]
The system of equations is:
[math]0\pi_0+0\pi_1+1\pi_2 = \pi_0 \Rightarrow \pi_2 = \pi_0[/math]
[math]1\pi_0+0\pi_1+0\pi_2 = \pi_1 \Rightarrow \pi_1 = \pi_0[/math]
[math]0\pi_0+1\pi_1+0\pi_2 = \pi_2[/math]
[math]\pi_0+\pi_1+\pi_2 = 1[/math]
[math]\pi_0+\pi_0+\pi_0 = 3\pi_0 = 1[/math], which gives [math]\pi_0 = 1/3[/math]
Also, [math]\pi_1 = \pi_2 = 1/3[/math]
So, [math]\pi = [\frac{1}{3}, \frac{1}{3}, \frac{1}{3}][/math]
Since P is doubly stochastic (its rows and columns all sum to 1), all entries of [math]\pi[/math] are equal.
MatLab Code
 >> P=[0, 1, 0; 0, 0, 1; 1, 0, 0]
 P =
      0     1     0
      0     0     1
      1     0     0
 >> pii=[1/3, 1/3, 1/3]
 pii =
     0.3333    0.3333    0.3333
 >> pii*P
 ans =
     0.3333    0.3333    0.3333
 >> P^1000
 ans =
      0     1     0
      0     0     1
      1     0     0
 >> P^10000
 ans =
      0     1     0
      0     0     1
      1     0     0
 >> P^10002
 ans =
      1     0     0
      0     1     0
      0     0     1
 >> P^10003
 ans =
      0     1     0
      0     0     1
      1     0     0
 >> % P^10000 = P^10003
 >> % This chain has a stationary distribution but no limiting distribution: it does not converge, it cycles.
The first condition of limiting distribution is satisfied; however, the second condition where [math]\pi[/math]_{j} has to be independent of i (i.e. all rows of the matrix are the same) is not met.
This example shows the distinction between having a stationary distribution and convergence (having a limiting distribution). Note: [math]\pi=(1/3,1/3,1/3)[/math] is the stationary distribution, as [math]\pi=\pi P[/math]. However, upon repeatedly multiplying P by itself (computing [math]P^n[/math] as n goes to infinity) one finds that the results cycle (with period 3) through the same sequence of matrices. The chain has a stationary distribution, but does not converge to it. Thus, there is no limiting distribution.
Another example:
Find the stationary distribution of P= [math]\left[ {\begin{array}{ccc}
1/2 & 0 & 1/2 \\
1 & 0 & 0 \\
0 & 1 & 0 \end{array} } \right][/math]
[math]\pi=\pi~P[/math]
[math]\pi=[\pi_0, \pi_1, \pi_2][/math]
The system of equations is:
[math]\frac{1}{2}\pi_0+1\pi_1+0\pi_2= \pi_0 \Rightarrow 2\pi_1 = \pi_0[/math]
[math]0\pi_0+0\pi_1+1\pi_2= \pi_1 \Rightarrow \pi_1=\pi_2[/math]
[math]\frac{1}{2}\pi_0+0\pi_1+0\pi_2 = \pi_2 \Rightarrow \pi_2 = \frac{1}{2}\pi_0[/math]
[math]\pi_0+\pi_1+\pi_2 = 1[/math]
[math]2\pi_1+\pi_1+\pi_1 = 4\pi_1 = 1[/math], which gives [math]\pi_1=\frac {1}{4}[/math]
Also, [math]\pi_1 = \pi_2 = \frac {1}{4}[/math]
So, [math]\pi = [\frac{1}{2}, \frac{1}{4}, \frac{1}{4}][/math]
The definition of stationary distribution is that [math]\pi[/math] is the stationary distribution of the chain if [math]\pi=\pi~P[/math], where [math]\pi[/math] is a probability vector; then X_{n} ~ [math]\pi[/math] for every n[math]\geq[/math]0.
However, X_{n} ~ [math]\pi[/math] for every n[math]\geq[/math]0 does not mean the states at different times are independent; they are identically distributed, not independently distributed.
The limiting distribution of the chain exists when [math]P^n[/math] converges as [math]n \to \infty[/math] to a matrix with identical rows. If [math]\lim_{n\to \infty}P^n[/math] exists, we say the Markov chain converges; otherwise, it does not converge.
Example: Find the stationary distribution of P= [math]\left[ {\begin{array}{ccc} 1/3 & 1/3 & 1/3 \\ 1/4 & 3/4 & 0 \\ 1/2 & 0 & 1/2 \end{array} } \right][/math]
Solution: [math]\pi=\left[ {\begin{array}{ccc} \pi_0 & \pi_1 & \pi_2 \end{array} } \right][/math]
Using the stationary distribution property [math]\pi=\pi~P[/math] we get,
[math]\pi_0=\frac{1}{3}\pi_0+\frac{1}{4}\pi_1+\frac{1}{2}\pi_2 [/math]
[math]\pi_1=\frac{1}{3}\pi_0+\frac{3}{4}\pi_1+0\pi_2 [/math]
[math]\pi_2=\frac{1}{3}\pi_0+0\pi_1+\frac{1}{2}\pi_2 [/math]
And since [math]\pi[/math] is a probability vector,
[math] \pi_{0}~ + \pi_{1} + \pi_{2} = 1 [/math]
Solving the 4 equations for the 3 unknowns gets,
[math]\pi_{0}~=1/3[/math], [math]\pi_{1}~=4/9[/math], and [math]\pi_{2}~=2/9[/math]
Therefore [math]\pi=\left[ {\begin{array}{ccc}
1/3 & 4/9 & 2/9 \end{array} } \right][/math]
Example 2: Find the stationary distribution of P= [math]\left[ {\begin{array}{ccc} 1/3 & 1/3 & 1/3 \\ 1/4 & 1/2 & 1/4 \\ 1/6 & 1/3 & 1/2 \end{array} } \right][/math]
Solution: [math]\pi=\left[ {\begin{array}{ccc} \pi_0 & \pi_1 & \pi_2 \end{array} } \right][/math]
Using the stationary distribution property [math]\pi=\pi~P[/math] we get,
[math]\pi_0=\frac{1}{3}\pi_0+\frac{1}{4}\pi_1+\frac{1}{6}\pi_2 [/math]
[math]\pi_1=\frac{1}{3}\pi_0+\frac{1}{2}\pi_1+\frac{1}{3}\pi_2 [/math]
[math]\pi_2=\frac{1}{3}\pi_0+\frac{1}{4}\pi_1+\frac{1}{2}\pi_2 [/math]
And since [math]\pi[/math] is a probability vector,
[math] \pi_{0}~ + \pi_{1} + \pi_{2} = 1 [/math]
Solving the 4 equations for the 3 unknowns gets,
[math]\pi_{0}=\frac {6}{25}[/math], [math]\pi_{1}~=\frac {2}{5}[/math], and [math]\pi_{2}~=\frac {9}{25}[/math]
Therefore [math]\pi=\left[ {\begin{array}{ccc}
\frac {6}{25} & \frac {2}{5} & \frac {9}{25} \end{array} } \right][/math]
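The linear system in Example 2 can be solved numerically in Python (an illustrative sketch: we drop one redundant stationarity equation, which is valid because the rows of [math]P^T-I[/math] sum to zero, and replace it with the normalization condition):

```python
import numpy as np

# solve pi = pi P together with sum(pi) = 1 as a linear system:
# (P^T - I) pi^T = 0, with one equation replaced by the normalization row
P = np.array([[1/3, 1/3, 1/3],
              [1/4, 1/2, 1/4],
              [1/6, 1/3, 1/2]])
A = P.T - np.eye(3)
A[-1, :] = 1.0                       # replace the last equation by sum(pi) = 1
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
# pi should be [6/25, 2/5, 9/25] = [0.24, 0.40, 0.36]
```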
The above two examples solve for the stationary distribution of the matrix P; since these chains are ergodic, the stationary distribution is also the limiting distribution, as mentioned earlier.
Ergodic Chain
A Markov chain is called an ergodic chain if it is possible to go from every state to every state (not necessarily in one move). For instance, note that we can claim a Markov chain is ergodic if it is possible to somehow start at any state i and end at any state j in the matrix. We could have a chain with states 0, 1, 2, 3, 4 where it is not possible to go from state 0 to state 4 in just one step. However, it may be possible to go from 0 to 1, then from 1 to 2, then from 2 to 3, and finally 3 to 4 so we can claim that it is possible to go from 0 to 4 and this would satisfy a requirement of an ergodic chain. The example below will further explain this concept.
Note: for a regular chain there is a finite number N such that every state can be reached from every state in exactly N steps.
Example
[math] P= \left[ \begin{matrix}
\frac{1}{3} \; & \frac{1}{3} \; & \frac{1}{3} \\ \\
\frac{1}{4} \; & \frac{3}{4} \; & 0 \\ \\
\frac{1}{2} \; & 0 \; & \frac{1}{2}
\end{matrix} \right] [/math]
[math] \pi=\left[ \begin{matrix}
\frac{1}{3} & \frac{4}{9} & \frac{2}{9}
\end{matrix} \right] [/math]
There are three states in this example.
In this case, state a can go to state a, b, or c; state b can go to state a, b, or c; and state c can go to state a, b, or c so it is possible to go from every state to every state. (Although state b cannot directly go into c in one move, it must go to a, and then to c.).
A kbyk matrix indicates that the chain has k states.
 Ergodic Markov chains are irreducible.
 A Markov chain is called a regular chain if some power of the transition matrix has only positive elements.
 Any transition matrix that has no zeros determines a regular Markov chain
 However, it is possible for a regular Markov chain to have a transition matrix that has zeros.
For example, recall the matrix of the Land of Oz
[math]P = \left[ \begin{matrix}
& R & N & S \\
R & 1/2 & 1/4 & 1/4 \\
N & 1/2 & 0 & 1/2 \\
S & 1/4 & 1/4 & 1/2 \\
\end{matrix} \right][/math]
Theorem
An ergodic Markov chain has a unique stationary distribution [math]\pi[/math]. The limiting distribution exists and is equal to [math]\pi[/math]
Note: Ergodic Markov Chain is irreducible, aperiodic and positive recurrent.
Example: Consider the Markov chain with transition matrix [math]\left[\begin{matrix}0 & 1 \\ 1 & 0\end{matrix}\right][/math]. Solving [math]\pi P = \pi[/math] gives the stationary distribution [math]\pi=[0.5, 0.5][/math], but (as seen in the assignment) the chain does not converge, i.e. there is no limiting distribution: the chain is not aperiodic and its powers cycle, [math]P^2=\left[\begin{matrix}1 & 0 \\ 0 & 1\end{matrix}\right][/math] and [math]P^3=\left[\begin{matrix}0 & 1 \\ 1 & 0\end{matrix}\right][/math].
Another Example
[math]P=\left[ {\begin{array}{ccc}
\frac{1}{4} & \frac{3}{4} \\[6pt]
\frac{1}{5} & \frac{4}{5} \end{array} } \right][/math]
This matrix means that there are two points in the space, let's call them a and b
Starting from a, the probability of staying in a is 1/4
Starting from a, the probability of going from a to b is 3/4
Starting from b, the probability of going from b to a is 1/5
Starting from b, the probability of staying in b is 4/5
Solve the equation [math] \pi = \pi P [/math]
[math] \pi_0 = .25 \pi_0 + .2 \pi_1 [/math]
[math] \pi_1 = .75 \pi_0 + .8 \pi_1 [/math]
[math] \pi_0 + \pi_1 = 1 [/math]
Solving this system of equations we get:
[math] \pi_0 = \frac{4}{15} \pi_1 [/math]
[math] \pi_1 = \frac{15}{19} [/math]
[math] \pi_0 = \frac{4}{19} [/math]
[math] \pi = [\frac{4}{19}, \frac{15}{19}] [/math]
[math] \pi [/math] is the long run distribution
We can use the stationary distribution to compute the expected waiting time to return to state 'a'
given that we start at state 'a' and so on.. Formula for this will be : [math] E[T_{i,i}]=\frac{1}{\pi_i}[/math]
In the example above this means that the expected waiting time for the Markov process to return to
state 'a', given that we start at state 'a', is 19/4.
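The long-run interpretation above can be checked by simulation in Python (an illustrative sketch; the step count and seed are arbitrary choices of ours):

```python
import random

# simulate the two-state chain above and check the long-run fraction of time
# spent in state 'a' against pi_0 = 4/19 (so E[T_aa] = 1/pi_0 = 19/4)
P = [[0.25, 0.75],
     [0.20, 0.80]]

def long_run_fraction(P, steps, seed=0):
    rng = random.Random(seed)
    state, visits_a = 0, 0
    for _ in range(steps):
        # move to state 0 with probability P[state][0], else to state 1
        state = 0 if rng.random() < P[state][0] else 1
        visits_a += (state == 0)
    return visits_a / steps

freq = long_run_fraction(P, 200000)
# freq should be near 4/19 ~ 0.2105, so 1/freq estimates E[T_aa] ~ 4.75
```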
Remark: if [math]\pi[/math] satisfies detailed balance, [math]\pi_i P_{ij} = P_{ji} \pi_j[/math], this gives another way to find the stationary distribution.
MatLab Code
In the following, P is the transition matrix. eye(n) refers to the n by n Identity matrix. L is the Laplacian matrix, L = (I  P). The Laplacian matrix will have at least 1 zero Eigenvalue. For every 0 in the diagonal, there is a component. If there is exactly 1 zero Eigenvalue, then the matrix is connected and has only 1 component. The number of zeros in the Laplacian matrix is the number of parts in your graph/process. If there is more than one zero on the diagonal of this matrix, this means there is a disconnect in the graph.
>> P=[1/3, 1/3, 1/3; 1/4, 3/4, 0; 1/2, 0, 1/2] P = 0.3333 0.3333 0.3333 0.2500 0.7500 0 0.5000 0 0.5000 >> eye(3) %%returns 3x3 identity matrix ans = 1 0 0 0 1 0 0 0 1 >> L=(eye(3)P) L = 0.6667 0.3333 0.3333 0.2500 0.2500 0 0.5000 0 0.5000 >> [vec val]=eigs(L) vec = 0.7295 0.2329 0.5774 0.2239 0.5690 0.5774 0.6463 0.7887 0.5774 val = 1.0643 0 0 0 0.3523 0 0 0 0.0000 %% Only one value of zero on the diagonal means the chain is connected >> P=[0.8, 0.2, 0, 0;0.2, 0.8, 0, 0; 0, 0, 0.8, 0.2; 0, 0, 0.1, 0.9] P = 0.8000 0.2000 0 0 0.2000 0.8000 0 0 0 0 0.8000 0.2000 0 0 0.1000 0.9000 >> eye(4) ans = 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 >> L=(eye(4)P) L = 0.2000 0.2000 0 0 0.2000 0.2000 0 0 0 0 0.2000 0.2000 0 0 0.1000 0.1000 >> [vec val]=eigs(L) vec = 0.7071 0 0.7071 0 0.7071 0 0.7071 0 0 0.8944 0 0.7071 0 0.4472 0 0.7071 val = 0.4000 0 0 0 0 0.3000 0 0 0 0 0.0000 0 0 0 0 0.0000 %% Two values of zero on the diagonal means there are two 'islands' of chains
[math]\Pi[/math] satisfies detailed balance if [math]\Pi_i P_{ij}=P_{ji} \Pi_j[/math]. Detailed balance guarantees that [math]\Pi[/math] is a stationary distribution.
Adjacency matrix  a matrix [math]A[/math] that dictates which states are connected and way of portraying which vertices in the matrix are adjacent. If we compute [math]A^2[/math], we can know which states are connected with paths of length 2.
A Markov chain is called an irreducible chain if it is possible to go from every state to every state (not necessarily in one move).
Theorem: An ergodic Markov chain has a unique stationary distribution [math]\pi[/math]. The limiting distribution exists and is equal to [math]\pi[/math].
Markov process satisfies detailed balance if and only if it is a reversible Markov process
where P is the matrix of Markov transition.
Satisfying the detailed balance condition guarantees that [math]\pi[/math] is a stationary distribution.
[math] \pi [/math] satisfies detailed balance if [math] \pi_i P_{ij} = P_{ji} \pi_j [/math] for all states i and j.
Example in the class: [math]P= \left[ {\begin{array}{ccc} \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\[6pt] \frac{1}{4} & \frac{3}{4} & 0 \\[6pt] \frac{1}{2} & 0 & \frac{1}{2} \end{array} } \right][/math]
and [math]\pi=(\frac{1}{3},\frac{4}{9}, \frac{2}{9})[/math]
[math]\pi_1 P_{1,2} = 1/3 \times 1/3 = 1/9,\, P_{2,1} \pi_2 = 1/4 \times 4/9 = 1/9 \Rightarrow \pi_1 P_{1,2} = P_{2,1} \pi_2 [/math]
[math]\pi_2 P_{2,3} = 4/9 \times 0 = 0,\, P_{3,2} \pi_3 = 0 \times 2/9 = 0 \Rightarrow \pi_2 P_{2,3} = P_{3,2} \pi_3[/math]
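The pairwise checks above can be automated in Python (an illustrative sketch; the helper function is ours):

```python
# check the detailed-balance condition pi_i * P[i][j] == pi_j * P[j][i]
# for the in-class example
P = [[1/3, 1/3, 1/3],
     [1/4, 3/4, 0.0],
     [1/2, 0.0, 1/2]]
pi = [1/3, 4/9, 2/9]

def satisfies_detailed_balance(pi, P, tol=1e-12):
    n = len(pi)
    return all(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) <= tol
               for i in range(n) for j in range(n))

balanced = satisfies_detailed_balance(pi, P)   # True for this chain
```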
Remark: detailed balance, [math] \pi_i P_{ij} = P_{ji} \pi_j[/math], gives another way to calculate the stationary distribution.
[math]\pi[/math] is stationary but is not limiting.
Class 15  Tuesday June 25th 2013
Announcement
Note to all students, the first half of today's lecture will cover the midterm's solution; however please do not post the solution on the Wikicoursenote.
Detailed balance
Let [math]\displaystyle P[/math] be the transition probability matrix of a Markov chain. If there exists a distribution vector [math]\displaystyle \underline{\pi} = [\pi_1 \pi_2 ... \pi_n][/math] such that [math]\pi_i \cdot P_{ij}=P_{ji} \cdot \pi_j, \; \forall i,j[/math], then the Markov chain is said to have detailed balance. A detailed balanced Markov chain must have [math]\displaystyle \underline{\pi}[/math] given above as a stationary distribution, that is [math]\displaystyle \underline{\pi} = \underline{\pi} P[/math], where [math]\displaystyle \underline{\pi}[/math] is a 1 by n matrix and [math]\displaystyle P[/math] is a n by n matrix.
Proof:
[math]\; [\pi P]_j = \sum_i \pi_i P_{ij} =\sum_i P_{ji}\pi_j =\pi_j\sum_i P_{ji} =\pi_j ,\forall j[/math]
 Note: the middle step applies detailed balance term by term; summing over i and using [math]\sum_i P_{ji}=1[/math] works for every component j, so in general [math]\pi=\pi P[/math].
Hence [math]\pi[/math] is a stationary distribution of the chain with transitions [math]P(X_{n+1}=j|X_n=i)[/math], for every n.
In other terms, [math] P_{ij} = P(X_n = j | X_{n-1} = i) [/math], where [math]\pi_j[/math] is the equilibrium probability of being in state j and [math]\pi_i[/math] is the equilibrium probability of being in state i. Detailed balance together with [math]P(X_{n-1} = i) = \pi_i[/math] is equivalent to [math]P(X_{n-1} = i, X_n = j)[/math] being symmetric in i and j.
Keep in mind that the detailed balance is a sufficient but not required condition for a distribution to be stationary. i.e. A distribution satisfying the detailed balance is stationary, but a stationary distribution does not necessarily satisfy the detailed balance.
In the proof above, the row sums of P equal 1, which is what gives [math]\pi P=\pi[/math].
PageRank (http://en.wikipedia.org/wiki/PageRank)
 PageRank is a linkanalysis algorithm developed by Larry Page from Google in 1996; used for measuring a website's importance, relevance and popularity.
PageRank operates on a graph whose nodes are web pages and whose edges are the links between them.
 Many social media sites use this (such as Facebook and Twitter)
It can also be used to identify criminals (e.g. thieves, hackers, terrorists, etc.) by analyzing the links between them.
This is what made Google the search engine of choice over Yahoo, Bing, etc. What made Google's search engine a huge success is not its search function, but rather the algorithm it used to rank the pages. For example, if a query returns 100 million results, how do you list them by relevance and importance so that users can easily find what they are looking for? Most users will not go past the first few pages of results. It is this ability to rank pages that has allowed Google to remain more popular than Yahoo, Bing, AskJeeves, etc.
The order of importance
1. A web page is important if it has many other pages linked to it
2. The more important a web page is, the more weight should be assigned to its outgoing links
3. If a webpage has many outgoing links then its links have less value (ex: if a page links to everyone, like 411, it is not as important as pages that have incoming links)
File:diagram.jpg
[math]L=
\left[ {\begin{matrix}
0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 \end{matrix} } \right][/math]
ie: According to the above example
Page 3 is the most important since it has the most links pointing to it; therefore more weight should be placed on its outgoing links.
Page 4 comes after page 3 since it has the second most links pointing to it
Page 2 comes after page 4 since it has the third most links pointing to it
Page 1 and page 5 are the least important since no links point to them
As pages 1 and 2 have the most outgoing links, their links carry less value compared to those of the other pages.
[math]L_{ij} = 1[/math] if j has a link to i;
[math]L_{ij} = 0[/math] otherwise.
[math]C_{j}[/math]: the number of outgoing links of page [math]j[/math]:
[math]C_j=\sum_i L_{ij}[/math]
(i.e. sum of entries in column j)
[math]P_j[/math] is the rank of page [math]j[/math].
Suppose we have [math]N[/math] pages, [math]P[/math] is a vector containing ranks of all pages.
 [math]P[/math] is a [math]N \times 1[/math] vector.
The number of incoming links of page [math]i[/math] is [math]\sum_j L_{ij}[/math]
(i.e. the sum of the entries in row i; note that this is not the same as [math]C_i[/math], which is the sum of the entries in column i)
In row i, a 1 in column j means that page j points to page i (for example, a 1 in the third column means page 3 points to that page).
A PageRank results from a mathematical algorithm based on the webgraph, created by all World Wide Web pages as nodes and hyperlinks as edges, taking into consideration authority hubs such as cnn.com or usa.gov. The rank value indicates the importance of a particular page. A hyperlink to a page counts as a vote of support. (This would be represented in our diagram as an arrow pointing towards the page. Hence in our example, Page 3 is the most important, since it has the most 'votes of support'.) The PageRank of a page is defined recursively and depends on the number and PageRank metric of all pages that link to it ("incoming links"). A page that is linked to by many pages with high PageRank receives a high rank itself. If there are no links to a web page, then there is no support for that page (in our example, this would be Page 1 and Page 5). (source: http://en.wikipedia.org/wiki/PageRank#Description)
For those interested in PageRank, here is the original paper by Google cofounders Brin and Page: http://infolab.stanford.edu/pub/papers/google.pdf
Note: Page and Brin made an error in one of the formulas in the above paper.
Example of Page Rank Application in Real Life
Page Rank checker: a free service to check the Google™ PageRank of a page instantly via an online PR checker, or by adding a PageRank-checking button to the web pages (http://www.prchecker.info/check_page_rank.php)
Google Matrix: G = d * [ (Hyperlink Matrix H) + (Dangling Nodes Matrix A) ] + ((1-d)/N) * (NxN Matrix U of all 1's)
(source: https://googledrive.com/host/0B2GQktuwcTiaWw5OFVqT1k3bDA/)
Class 16  Thursday June 27th 2013
Page Rank
[math]L_{ij}[/math] equals 1 if j has a link to i, and equals 0 otherwise.
[math]C_j[/math] :The number of outgoing links for page j, where [math]c_j=\sum_i L_{ij}[/math]
P is an N by 1 vector containing the ranks of all N pages; the rank of page i is [math]P_i[/math]
[math]P_i= (1-d) + d\cdot \sum_j \frac {L_{ij}P_j}{c_j}[/math] where [math]0 \lt d \lt 1[/math] is a constant (the original PageRank paper uses d = 0.85; in class we use d = 0.8).
(Note: the constant term (1-d) ensures that no page gets rank 0; this is why we use d.)
The reason we multiply [math]\sum_j \frac {L_{ij}P_j}{c_j}[/math] by d (and add the constant term (1-d)) is to avoid extreme cases where one of the two terms completely dominates the other.
Interpretation of the formula:
1) The sum of L_{ij} over j counts the incoming links of page i.
2) Each incoming link is weighted by the page rank P_{j} of the page that contains it, i.e. if a high-rank page points to page i, then this link carries more weight than links from lower-rank pages.
3) Each term is also weighted by the inverse of the number of outgoing links c_{j} of the page containing the link to i, i.e. if a page has more outgoing links than other pages, then each of its links carries less weight.
4) finally, we take a linear combination of the page rank obtained from above and a constant 1. This ensures that every page has a rank greater than zero.
Note that this is a system of N equations with N unknowns.
[math]c_j[/math] is the number of outgoing links of page j; the fewer outgoing links a page has, the more weight each of its links carries.
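The system of N equations above can be solved by simple fixed-point iteration. Here is a Python sketch (the language choice and the iterative scheme are ours, not from the lecture), applied to the 5-page link matrix from the diagram earlier in these notes with d = 0.8; terms with [math]c_j = 0[/math] (pages with no outgoing links) are skipped:

```python
# Solve P_i = (1-d) + d * sum_j L_ij * P_j / c_j by fixed-point iteration.
d = 0.8
L = [[0, 0, 0, 0, 0],
     [1, 0, 0, 0, 0],
     [1, 1, 0, 1, 0],
     [0, 1, 0, 0, 1],
     [0, 0, 0, 0, 0]]
N = len(L)
c = [sum(L[i][j] for i in range(N)) for j in range(N)]  # outgoing-link counts

P = [1.0] * N
for _ in range(100):  # the update is a contraction (factor d), so this converges
    P = [(1 - d) + d * sum(L[i][j] * P[j] / c[j] for j in range(N) if c[j] > 0)
         for i in range(N)]

print([round(x, 4) for x in P])
# ≈ [0.2, 0.28, 0.7696, 0.472, 0.2]: page 3 ranks highest, then page 4,
# then page 2, with pages 1 and 5 (no incoming links) lowest, matching
# the ordering described for this diagram above
```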
Let D be a diagonal N by N matrix such that [math] D_{ii}[/math] = [math]c_i[/math]
[math]D= \left[ {\begin{matrix} c_1 & 0 & ... & 0 \\ 0 & c_2 & ... & 0 \\ 0 & 0 & ... & 0 \\ 0 & 0 & ... & c_N \end{matrix} } \right][/math]
Then P = (1-d) e + dLD^{-1}P
where e = [1 1 ....]^{T}, i.e. an N by 1 vector.
We assume that the ranks of all N pages sum to N. (The sum of the ranks could be any number; only the proportions between the ranks matter.)
i.e. e^{T} P = N, so [math]~\frac{e^{T}P}{N} = 1[/math]
D^{-1} will be:
D^{-1}[math]= \left[ {\begin{matrix} \frac {1}{c_1} & 0 & ... & 0 \\ 0 & \frac {1}{c_2} & ... & 0 \\ 0 & 0 & ... & 0 \\ 0 & 0 & ... & \frac {1}{c_N} \end{matrix} } \right][/math]
[math]P=~(1-d)e+dLD^{-1}P[/math] where [math]e=\begin{bmatrix} 1\\ 1\\ ...\\ 1 \end{bmatrix}[/math]
[math]P=(1-d)~\frac{ee^{T}P}{N}+dLD^{-1}P[/math]
[math]P=[(1-d)~\frac{ee^T}{N}+dLD^{-1}]P[/math]
[math]P=AP[/math]
N is the number of pages,
L is an N by N matrix,
D^{-1} is an N by N matrix,
P is an N by 1 vector,
d is a constant between 0 and 1
P=AP
P is an eigenvector of A with corresponding eigenvalue equal to 1.
P^{T}=P^{T}A^{T}
Notice that all entries in A^{T} are nonnegative and each of its rows sums to 1. Hence A^{T} satisfies the definition of a transition probability matrix.
P^{T} is the stationary distribution of a Markov Chain with transition probability matrix A^{T}.
We can consider A^{T} to be the matrix describing all possible movements following links on the internet, and P^{T} as the probability of being on any given webpage if we were on the internet long enough.
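The eigenvector computation that the MATLAB code in the examples below performs with eigs can also be sketched in Python using power iteration: repeatedly applying A converges to the eigenvector with eigenvalue 1 (a sketch under our own choices of language and method; the link matrix is the one from Example 2, with d = 0.8 as in class):

```python
# Power-iteration sketch of P = A P with A = (1-d) e e^T / N + d L D^{-1}.
d = 0.8
L = [[0, 0, 1],
     [1, 0, 1],
     [0, 1, 0]]
N = len(L)
c = [sum(L[i][j] for i in range(N)) for j in range(N)]  # column sums = outgoing links

# Build A; a column with c_j = 0 is left as zero, mimicking pinv(D) vs inv(D)
A = [[(1 - d) / N + (d * L[i][j] / c[j] if c[j] > 0 else 0.0)
      for j in range(N)] for i in range(N)]

p = [1.0 / N] * N                  # any starting vector works
for _ in range(200):               # power iteration: p <- A p, renormalized
    p = [sum(A[i][j] * p[j] for j in range(N)) for i in range(N)]
    s = sum(p)
    p = [x / s for x in p]

print([round(x, 4) for x in p])    # ≈ [0.2201, 0.3962, 0.3836], matching the MATLAB run
```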
Damping Factor "d"
The PageRank assumes that any imaginary user who is randomly clicking on links will eventually stop clicking. The probability, at any step, that the person will keep on clicking is a damping factor, "d". After many studies, the approximation of "d" is 0.85. Other values for "d" have been used in class and may appear on assignments/exams.
Examples
Example 1
File:eg1.jpg
[math]L=
\left[ {\begin{matrix}
0 & 0 & 1 \\
1 & 0 & 0 \\
0 & 1 & 0 \end{matrix} } \right][/math]
[math]c= \left[ {\begin{matrix} 1 & 1 & 1 \end{matrix} } \right][/math]
[math]D= \left[ {\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix} } \right][/math]
MATLAB Code:
 d=0.8
 N=3
 A=(1-d)*ones(N)/N+d*L*pinv(D)   % pinv: Moore-Penrose inverse (pseudoinverse)
 [vec val]=eigs(A)               % eigendecomposition
 a=vec(:,1)                      % the eigenvector corresponding to eigenvalue 1
 a=a/sum(a)                      % normalize a
We use the pinv(D) function [pseudoinverse] instead of the inv(D) function because, if D is non-invertible, pinv does not crash the program. Alternatively, to show that A transpose is a transition matrix with stationary distribution a, compute (transpose(A))^200: its rows converge to the same vector as a=a/sum(a).
NOTE: Changing the value of d, does not change the ranking order of the pages.
By looking at each entry after normalizing a, we can tell the ranking order of each page.
c = [1 1 1] since the 3 pages form a single cycle and each page has exactly one outgoing link. Hence, D is the 3x3 identity matrix.
Example 2
[math]L=
\left[ {\begin{matrix}
0 & 0 & 1 \\
1 & 0 & 1 \\
0 & 1 & 0 \end{matrix} } \right][/math]
[math]c= \left[ {\begin{matrix} 1 & 1 & 2 \end{matrix} } \right][/math]
[math]D= \left[ {\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{matrix} } \right][/math]
Matlab code:
 >> L=[0 0 1;1 0 1;0 1 0];
 >> C=sum(L);
 >> D=diag(C);
 >> d=0.8;
 >> N=3;
 >> A=(1-d)*ones(N)/N+d*L*pinv(D);
 >> [vec val]=eigs(A);     % val has eigenvalue 1.0000 first; the other two eigenvalues are a complex pair of modulus less than 1
 >> a=vec(:,1)
 a =
     0.3707
     0.6672
     0.6461
 >> a=a/sum(a)             % normalize a
 a =
     0.2201
     0.3962
     0.3836
NOTE: Page 2 is the most important page because it has 2 incoming links. Similarly, page 3 is more important than page 1 because page 3 has an incoming link from page 2, the highest-ranked page.
This example is similar to the first example, but here page 3 also links back to page 2, so page 3 has two outgoing links and the third diagonal entry of D is 2. We then use the same code to solve p=Ap.
Example 3
[math]L= \left[ {\begin{matrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{matrix} } \right][/math]
[math]c= \left[ {\begin{matrix} 1 & 2 & 1 \end{matrix} } \right][/math]
[math]D= \left[ {\begin{matrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{matrix} } \right][/math]
[math]d=0.8[/math]
[math]N=3[/math]
In this example, page 2 has 2 outgoing links.
Another Example:
Consider: 1 <--> 2 --> 3
L= [0 1 0; 1 0 0; 0 1 0]; c=sum(L)=[1,2,0]; D= [1 0 0; 0 2 0; 0 0 0]
Since page 3 has no outgoing links, D is singular, which is why pinv(D) must be used instead of inv(D).
Example 4
1 <--> 2 --> 3 <--> 4
[math]L= \left[ {\begin{matrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{matrix} } \right][/math]
[math]c= \left[ {\begin{matrix} 1 & 2 & 1 & 1 \end{matrix} } \right][/math]
[math]D=
\left[ {\begin{matrix}
1 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \end{matrix} } \right][/math]
Matlab code
 >> L=[0 1 0 0;1 0 0 0;0 1 0 1;0 0 1 0];
 >> C=sum(L);
 >> D=diag(C);
 >> d=0.8;
 >> N=4;
 >> A=(1-d)*ones(N)/N+d*L*pinv(D)
 A =
     0.0500    0.4500    0.0500    0.0500
     0.8500    0.0500    0.0500    0.0500
     0.0500    0.4500    0.0500    0.8500
     0.0500    0.0500    0.8500    0.0500
 >> [vec val]=eigs(A);    % val has eigenvalue 1.0000 first
 >> a=vec(:,1)
 a =
     0.1817
     0.2336
     0.7009
     0.6490
 >> a=a/sum(a)
 a =
     0.1029
     0.1324
     0.3971
     0.3676
NOTE: The ranking of each page is as follows: Page 3, Page 4, Page 2 and Page 1. Page 3 is the highest since it has the most incoming links. All of the other pages only have one incoming link but since Page 3, highest ranked page, links to Page 4, Page 4 is the second highest ranked. Lastly, since Page 2 links into Page 3 it is the next highest rank.
Page 2 has 2 outgoing links. Among pages with the same number of incoming links, a page linked to by a higher-ranked page is ranked higher: if the highest-ranked page links into a page P, then P tends to be ranked second, and so on.
Example 5
[math]L= \left[ {\begin{matrix} 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \end{matrix} } \right][/math]
[math]c= \left[ {\begin{matrix} 3 & 1 & 1 & 3 \end{matrix} } \right][/math]
[math]D= \left[ {\begin{matrix} 3 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 3 \end{matrix} } \right][/math]
Matlab code:
 >> L=[0 1 0 1; 1 0 1 1; 1 0 0 1; 1 0 0 0];
 >> d = 0.8;
 >> N = 4;
 >> C = sum(L);
 >> D = diag(C);
 >> A=(1-d)*ones(N)/N+d*L*pinv(D);
 >> [vec val]=eigs(A);
 >> a=vec(:,1);
 >> a=a/sum(a)
 a =
     0.3492
     0.3263
     0.1813
     0.1431
Example 6
[math]L=
\left[ {\begin{matrix}
0 & 1 & 0 & 0 & 1\\
1 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0\\
0 & 1 & 1 & 0 & 1\\
0 & 0 & 0 & 1 & 0 \end{matrix} } \right][/math]
Matlab Code
 >> d=0.8;
 >> L=[0 1 0 0 1;1 0 0 0 0;0 1 0 0 0;0 1 1 0 1;0 0 0 1 0];
 >> c=sum(L)
 c =
      1     3     1     1     2
 >> D=diag(c);
 >> N=5;
 >> A=(1-d)*ones(N)/N+d*L*pinv(D)
 A =
     0.0400    0.3067    0.0400    0.0400    0.4400
     0.8400    0.0400    0.0400    0.0400    0.0400
     0.0400    0.3067    0.0400    0.0400    0.0400
     0.0400    0.3067    0.8400    0.0400    0.4400
     0.0400    0.0400    0.0400    0.8400    0.0400
 >> [vec val]=eigs(A);    % val has eigenvalue 1.0000 first
 >> a=vec(:,1);
 >> a=a/sum(a)
 a =
     0.1933
     0.1946
     0.0919
     0.2668    % the most important page
     0.2534
For this matrix, page 1 links to page 2; page 2 links to pages 1, 3 and 4; page 3 links to page 4; page 4 links to page 5; and page 5 links to pages 1 and 4 (so pages 1 and 2 link to each other, as do pages 4 and 5). Hence c = [1 3 1 1 2], and the resulting ranking, from most to least important, is page 4, page 5, page 2, page 1, page 3.
Class 17  Tuesday July 2nd 2013
Markov Chain Monte Carlo (MCMC)
Definition: Markov Chain Monte Carlo (MCMC) methods are a class of algorithms for sampling from probability distributions based on constructing a Markov chain that has the desired distribution as its equilibrium distribution. The state of the chain after a large number of steps is then used as a sample of the desired distribution. The quality of the sample improves as a function of the number of steps. (http://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo)
One of the main purposes of MCMC is to simulate dependent samples; the other methods learned in class only allow us to simulate i.i.d. random variables. When we cannot sample directly from the joint distribution of dependent random variables, we use MCMC to sample from it. Note again that the resulting samples are not i.i.d.
Markov Chain Monte Carlo basic ideas: given a probability distribution [math]\pi[/math] on a set [math]\Omega[/math], the problem is to generate random elements of [math]\Omega[/math] with distribution [math]\pi[/math] . MCMC does that by constructing a Markov Chain with stationary distribution [math]\pi[/math] and simulating the chain.
Why? After a large number of iterations, the Markov Chain will reach its stationary distribution. Simply by choosing the value of the Markov chain at that large step, we are effectively sampling from the desired distribution.
The MCMC method is usually used in continuous cases but a discrete example is given below.
Motivation example
 Suppose we want to generate a random variable X according to distribution [math]\pi=(\pi_1, \pi_2, ... , \pi_m)[/math]
X can take m possible different values from [math]{1,2,3,\cdots,m}[/math]
 We want to generate [math]\{X_t: t=0, 1, \cdots, m\}[/math] according to [math]\pi[/math]
Now we have m=6, [math]\pi=[0.1,0.1,0.1,0.2,0.3,0.2][/math], [math]X \in \{1,2,3,4,5,6\}[/math]
Suppose [math]X_t=i[/math]. Consider an arbitrary probability transition matrix Q with entry [math]q_{ij}[/math] being the probability of proposing a move to state j from state i. ([math]q_{ij}[/math] cannot be zero.)
[math] \mathbf{Q} =
\begin{bmatrix}
q_{11} & q_{12} & \cdots & q_{1m} \\
q_{21} & q_{22} & \cdots & q_{2m} \\
\vdots & \vdots & \ddots & \vdots \\
q_{m1} & q_{m2} & \cdots & q_{mm}
\end{bmatrix}
[/math]
We generate Y = j according to the ith row of Q. Note that the ith row of Q is a probability vector that shows the probability of moving to any state j from the current state i, i.e.[math]P(Y=j)=q_{ij}[/math]
In the following algorithm:
[math]q_{ij}[/math] is the [math]ij^{th}[/math] entry of matrix Q. It is the probability of Y=j given that [math]x_t = i[/math].
[math]r_{ij}[/math] is the probability of accepting Y as [math]x_{t+1}[/math].
How to get the acceptance probability?
If [math]\pi [/math] is the stationary distribution, then it must satisfy the detailed balance condition: [math]\pi_i P_{ij}[/math] = [math]\pi_j P_{ji}[/math]
Since [math]P_{ij}[/math] = [math]q_{ij} r_{ij}[/math], we have [math]\pi_i q_{ij} r_{ij}[/math] = [math]\pi_j q_{ji} r_{ji}[/math].
We want to find a general solution: [math]r_{ij} = a(i,j) \pi_j q_{ji}[/math], where a(i,j) = a(j,i).
Recall [math]r_{ij}[/math] is the probability of acceptance, thus it must be that
1. [math]r_{ij} = a(i,j) \pi_j q_{ji} \leq 1[/math], which gives: [math]a(i,j) \leq 1/(\pi_j q_{ji})[/math]
2. [math]r_{ji} = a(j,i) \pi_i q_{ij} \leq 1[/math], which (using a(j,i) = a(i,j)) gives: [math]a(i,j) \leq 1/(\pi_i q_{ij})[/math]
So we choose a(i,j) as large as possible while satisfying the two conditions above:
[math]a(i,j) = \min \{\frac{1}{\pi_j q_{ji}},\frac{1}{\pi_i q_{ij}}\} [/math]
Thus, [math] r_{ij} = \min \{\frac{\pi_j q_{ji}}{\pi_i q_{ij}}, 1\} [/math]
Algorithm:
* Suppose the current state is [math]x_t = i[/math]. Generate a proposal Y according to the i-th row of Q, i.e. [math]P(Y=j) = q_{ij} [/math].
* Compute [math] r_{ij} = \min \{\frac{\pi_j q_{ji}}{\pi_i q_{ij}}, 1\} [/math] (note that [math]\frac{\pi_j q_{ji}}{\pi_i q_{ij}}[/math] is a positive ratio).
* Set [math] x_{t+1} = \begin{cases} Y, & \text{with probability } r_{ij} \\ x_t, & \text{otherwise} \end{cases} [/math]
* Go back to the first step.
We can compare this with the Acceptance-Rejection method we learned before; the accept/reject step is implemented as:
* Generate [math]U \sim Uniform(0,1)[/math]
* If [math]U \lt r_{ij}[/math], then accept Y as [math]x_{t+1}[/math]; otherwise set [math]x_{t+1} = x_t[/math].
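The algorithm above can be sketched in a few lines of Python (Python rather than the course's MATLAB; the uniform proposal q_{ij} = 1/m is our simplifying assumption, and since it is symmetric the ratio [math]\frac{\pi_j q_{ji}}{\pi_i q_{ij}}[/math] reduces to [math]\pi_j / \pi_i[/math]), applied to the motivating distribution [math]\pi=[0.1,0.1,0.1,0.2,0.3,0.2][/math]:

```python
import random

# Discrete Metropolis sampler targeting pi on states 1..6 (0-indexed here),
# with a uniform proposal q_ij = 1/m (a symmetric Q, so r_ij = min(pi_j/pi_i, 1)).
pi = [0.1, 0.1, 0.1, 0.2, 0.3, 0.2]
m = len(pi)

random.seed(0)
x = 0                                # current state x_t
counts = [0] * m
n_steps = 200_000
for _ in range(n_steps):
    y = random.randrange(m)          # propose Y = j with probability q_ij = 1/m
    r = min(pi[y] / pi[x], 1.0)      # acceptance probability r_ij
    if random.random() < r:          # U ~ Uniform(0,1); accept if U < r_ij
        x = y                        # x_{t+1} = Y
    counts[x] += 1                   # otherwise x_{t+1} = x_t

print([round(c / n_steps, 2) for c in counts])
# each long-run frequency should be close to the corresponding entry of pi
```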
Metropolis Algorithm
Proposition: Metropolis works:
The [math]P_{ij}[/math]'s from Metropolis Algorithm satisfy detailed balance property w.r.t [math]\pi[/math] . i.e. [math]\pi_i P_{ij} = \pi_j P_{ji}[/math]. The new Markov Chain has a stationary distribution [math]\pi[/math].
Remarks:
1. We only need to know ratios of values of [math]\pi_i[/math]'s.
2. The MC might converge to [math]\pi[/math] exponentially slowly
This algorithm generates [math]\{x_t: t=0,...,m\}[/math]. In the long run, the marginal distribution of [math] x_t [/math] is [math]\underline{\Pi} [/math]
[math]\{x_t: t = 0, 1,...,m\}[/math] is a Markov chain with probability transition matrix (PTM), P.
This is a Markov Chain since [math] x_{t+1} [/math] only depends on [math] x_t [/math], where
[math] P_{ij}= \begin{cases}
q_{ij} r_{ij}, & \text{if }i \neq j \\[6pt]
1  \displaystyle\sum_{k \neq i} q_{ik} r_{ik}, & \text{if }i = j \end{cases} [/math]
[math]q_{ij}[/math] is the probability of proposing state j from state i; [math]r_{ij}[/math] is the probability of accepting state j as the next state.
Detailed Balance:
A good way to think of the detailed balance equations is that they balance the probability flux from state i to state j with that from state j to state i.
We need to show that [math]\underline{\Pi}[/math] is the stationary distribution of this Markov chain, i.e. [math]\displaystyle \underline{\Pi} = \underline{\Pi}P[/math]
Recall that if a Markov chain satisfies the detailed balance property, i.e. [math]\displaystyle \pi_i P_{ij} = \pi_j P_{ji} \, \forall i,j[/math], then [math]\underline{\Pi}[/math] is the stationary distribution of the chain.
Proof:
First, suppose that [math]\frac{\pi_j q_{ji}}{\pi_i q_{ij}}\lt 1[/math]
LHS:
[math]\pi_i P_{ij} = \pi_i q_{ij} r_{ij} = \pi_i q_{ij} \cdot \min(\frac{\pi_j q_{ji}}{\pi_i q_{ij}},1) = \cancel{\pi_i q_{ij}} \cdot \frac{\pi_j q_{ji}}{\cancel{\pi_i q_{ij}}} = \pi_j q_{ji}[/math]
RHS:
Note that by our assumption, since [math]\frac{\pi_j q_{ji}}{\pi_i q_{ij}}\lt 1[/math], its reciprocal [math]\frac{\pi_i q_{ij}}{\pi_j q_{ji}} \geq 1[/math]
So [math]\displaystyle \pi_j P_{ji} = \pi_ j q_{ji} r_{ji} = \pi_ j q_{ji} \cdot \min(\frac{\pi_i q_{ij}}{\pi_j q_{ji}},1) = \pi_j q_{ji} \cdot 1 = \pi_ j q_{ji}[/math]
Hence LHS=RHS
Now assume instead that [math]\frac{\pi_j q_{ji}}{\pi_i q_{ij}} \geq 1[/math]
LHS:
[math]\pi_i P_{ij} = \pi_i q_{ij} r_{ij} = \pi_i q_{ij} \cdot \min(\frac{\pi_j q_{ji}}{\pi_i q_{ij}},1) =\pi_i q_{ij} \cdot 1 = \pi_i q_{ij}[/math]
RHS:
Note that by our assumption, since [math]\frac{\pi_j q_{ji}}{\pi_i q_{ij}}\geq 1[/math], its reciprocal [math]\frac{\pi_i q_{ij}}{\pi_j q_{ji}} \leq 1 [/math]
So [math]\displaystyle \pi_j P_{ji} = \pi_ j q_{ji} r_{ji} = \pi_ j q_{ji} \cdot \min(\frac{\pi_i q_{ij}}{\pi_j q_{ji}},1) = \cancel{\pi_j q_{ji}} \cdot \frac{\pi_i q_{ij}}{\cancel{\pi_j q_{ji}}} = \pi_i q_{ij}[/math]
Hence LHS=RHS[math]\square[/math]
Note:
1) If we instead assumed [math]\displaystyle \frac{\pi_i q_{ij}}{\pi_j q_{ji}} \geq 1[/math], the proof is similar with LHS= RHS = [math] \pi_i q_{ij} [/math]
2) If [math]\displaystyle i = j[/math], then detailed balance is satisfied trivially.
Note that the two cases above, [math]\frac{\pi_j q_{ji}}{\pi_i q_{ij}}\lt 1[/math] and [math]\frac{\pi_j q_{ji}}{\pi_i q_{ij}}\geq 1[/math], cover all possibilities, so the steps above establish detailed balance in general.
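As a numerical companion to this proof (not from the lecture), one can build the full Metropolis transition matrix P from the target [math]\pi[/math] of the motivating example and an arbitrary hypothetical positive proposal matrix Q, then check detailed balance directly:

```python
# Build the Metropolis transition matrix P and verify pi_i P_ij = pi_j P_ji.
pi = [0.1, 0.1, 0.1, 0.2, 0.3, 0.2]
m = len(pi)

# A hypothetical strictly positive, non-symmetric proposal matrix:
# Q[i][j] proportional to (i + j + 1); each row sums to 6*i + 21, so normalize.
Q = [[(i + j + 1) / (6 * i + 21) for j in range(m)] for i in range(m)]

# P_ij = q_ij * r_ij for i != j, and P_ii = 1 - sum_{k != i} q_ik * r_ik
P = [[0.0] * m for _ in range(m)]
for i in range(m):
    for j in range(m):
        if i != j:
            r = min(pi[j] * Q[j][i] / (pi[i] * Q[i][j]), 1.0)  # r_ij
            P[i][j] = Q[i][j] * r
    P[i][i] = 1.0 - sum(P[i][j] for j in range(m) if j != i)

# detailed balance holds entrywise...
for i in range(m):
    for j in range(m):
        assert abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < 1e-12

# ...and hence pi is stationary: pi P = pi
pi_P = [sum(pi[i] * P[i][j] for i in range(m)) for j in range(m)]
assert all(abs(a - b) < 1e-12 for a, b in zip(pi_P, pi))
print("detailed balance verified for the Metropolis chain")
```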