<div style = "align:left; background:#00ffff; font-size: 150%">
If you use ideas, plots, text, code and other intellectual property developed by someone else in your 'wikicoursenote' contribution, you have to cite the original source. If you copy a sentence or a paragraph from work done by someone else, in addition to citing the original source you have to use quotation marks to identify the scope of the copied material. Evidence of copying or plagiarism will cause a failing mark in the course.

Example of citing the original source

Assumptions Underlying Principal Component Analysis can be found here<ref>http://support.sas.com/publishing/pubcat/chaps/55129.pdf</ref>
</div>

==Important Notes==
<span style="color:#ff0000;font-size: 200%">To make a distinction between the material covered in class and additional material that you have added to the course, use the following convention. For anything that is not covered in the lecture, write:</span>

<div style = "align:left; background:#F5F5DC; font-size: 120%">
In the news recently was a story that captures some of the ideas behind PCA. Over the past two years, Scott Golder and Michael Macy, researchers from Cornell University, collected 509 million Twitter messages from 2.4 million users in 84 different countries. The data they used were words collected at various times of day, and they classified the data into two categories: positive emotion words and negative emotion words. They were then able to study this data to evaluate subjects' moods at different times of day, while the subjects were in different parts of the world. They found that the subjects generally exhibited positive emotions in the mornings and late evenings, and negative emotions mid-day. They were able to "project their data onto a smaller dimensional space" using PCA. Their paper, "Diurnal and Seasonal Mood Vary with Work, Sleep, and Daylength Across Diverse Cultures," is available in the journal Science.<ref>http://www.pcworld.com/article/240831/twitter_analysis_reveals_global_human_moodiness.html</ref>

Assumptions Underlying Principal Component Analysis can be found here<ref>http://support.sas.com/publishing/pubcat/chaps/55129.pdf</ref>
</div>
 
== Introduction, Class 1 - Tuesday, May 7 ==

<!-- br tag for spacing-->
 
Lecture: <br />
001: T/Th 8:30-9:50am MC1085 <br />
002: T/Th 1:00-2:20pm DC1351 <br />

Tutorial: <br />
2:30-3:20pm Mon M3 1006 <br />

Office Hours: <br />
Friday at 10am, M3 4208 <br />
  
 
=== Midterm ===
Monday June 17, 2013 from 2:30pm-3:20pm

=== Final ===
Saturday August 10, 2013 from 7:30pm-10:00pm
  
 
=== TA(s): ===
 
=== Four Fundamental Problems ===

<!-- br tag for spacing-->
1. Classification: Given an input object X, we have a function which will take this input X and identify which 'class (Y)' it belongs to (Discrete Case) <br />
   <font size="3">i.e. taking values from x, we could predict y.</font>
(For example, if you have 40 images of oranges and 60 images of apples (represented by x), you can estimate a function that takes the images and states what type of fruit it is - note Y is discrete in this case.) <br />
2. Regression: Same as classification but in the continuous case, except y is non-discrete. Results from regression are often used for prediction, forecasting, etc. (Examples: stock prices, height, weight, etc.) <br />
(A simple practice might be investigating the hypothesis that higher levels of education cause higher levels of income.) <br />
3. Clustering: Use common features of objects in the same class or group to form clusters. (In this case, x is given, y is unknown; for example, clustering by province to measure the average height of Canadian men.) <br />
4. Dimensionality Reduction (also known as Feature extraction, Manifold learning): Used when we have a variable in high dimension space and we want to reduce the dimension <br />
  
 
=== Applications ===
 
*Other course material on: http://wikicoursenote.com/wiki/
*Log on to both Learn and wikicoursenote frequently.
*Email all questions and concerns to UWStat340@gmail.com. Do not use your personal email address! Do not email the instructor or TAs about the class directly to their personal accounts!

'''Wikicourse note (complete at least 12 contributions to get 10% of final mark):'''
When applying for an account in the wikicourse note, please use your quest account as your login name and your uwaterloo email as the registered email. This is important as the quest id will be used to identify the students who make the contributions.

Example:<br/>
User: questid<br/>

'''As a technical/editorial contributor''': Make contributions within 1 week and do not copy the notes on the blackboard.

''All contributions are now considered general contributions; you must contribute to 50% of lectures for full marks.''

*A general contribution can be correctional (fixing mistakes) or technical (expanding content, adding examples, etc.), but at least half of your contributions should be technical for full marks.

Do not submit copyrighted work without permission; cite original sources.
Each time you make a contribution, check mark the table. Marks are calculated on an honour system, although there will be random verifications. If you are caught claiming to contribute but have not, you will not be credited.

'''Wikicoursenote contribution form''' : https://docs.google.com/forms/d/1Sgq0uDztDvtcS5JoBMtWziwH96DrBz2JiURvHPNd-xs/viewform

- you can submit your contributions multiple times.<br />
- you will be able to edit the response right after submitting<br />
- send email to make changes to an old response : uwstat340@gmail.com<br />

- Markov Chain Monte Carlo
  
=== Tentative Marking Scheme ===
{| class="wikitable"
|-
! Item
! Value
|-
| Assignments (~6)
| 30%
|-
| WikiCourseNote
| 10%
|-
| Midterm
| 20%
|-
| Final
| 40%
|}

'''The final exam is going to be closed book and only non-programmable calculators are allowed.'''<br />
'''A passing mark must be achieved in the final to pass the course.'''

==Class 2 - Thursday, May 9==
===Generating Random Numbers===
==== Introduction ====
Simulation is the imitation of a process or system over time. Computational power has introduced the possibility of using simulation studies to analyze models used to describe a situation.

In order to perform a simulation study, we should:
<br>1. Use a computer to generate (pseudo*) random numbers (rand in MATLAB).<br>
2. Use these numbers to generate values of random variables from distributions: for example, set a variable in terms of uniform u ~ U(0,1).<br>
3. Using the concept of discrete events, we show how the random variables can be used to generate the behavior of a stochastic model over time. (Note: A stochastic model is the opposite of a deterministic model; there are several directions in which the process can evolve.)<br>
4. After continually generating the behavior of the system, we can obtain estimators and other quantities of interest.<br>

The building block of a simulation study is the ability to generate a random number. This random number is a value from a random variable distributed uniformly on (0,1). There are many different methods of generating a random number: <br>
<br><font size="3">Physical Method: Roulette wheel, lottery balls, dice rolling, card shuffling etc. <br>
<br>Numerically/Arithmetically: Use of a computer to successively generate pseudorandom numbers. The sequence of numbers can appear to be random; however, they are deterministically calculated with an equation, which is what defines pseudorandom. <br></font>

(Source: Ross, Sheldon M. Simulation. San Diego: Academic, 1997. Print.)

*We use the prefix pseudo because the computer generates random numbers based on algorithms, which suggests that the generated numbers are not truly random. Therefore the term pseudo-random numbers is used.

In general, a deterministic model produces specific results given certain inputs by the model user, contrasting with a '''stochastic''' model which encapsulates randomness and probabilistic events.
[[File:Det_vs_sto.jpg]]

A computer cannot generate truly random numbers because computers can only run algorithms, which are deterministic in nature. They can, however, generate '''Pseudo Random Numbers'''.

'''Pseudo Random Numbers''' are numbers that seem random but are actually determined by a relative set of original values. They form a chain of numbers pre-set by a formula or an algorithm, with the value jumping from one to the next, making it look like a series of independent random events. The flaw of this method is that eventually the chain returns to its initial position and the pattern starts to repeat; but if we make the number set large enough, we can prevent the numbers from repeating too early. Although pseudo random numbers are deterministic, they form a sequence of values that has the appearance of being independent uniform random variables. Being deterministic, pseudo random numbers are valuable and beneficial due to the ease with which they can be generated and manipulated.

When people repeat the test many times, the results will be close to the expected values, which makes the trials look deterministic. However, for each trial the result is random. So, it looks like pseudo random numbers.

==== Mod ====
 
Let <math>n \in \N</math> and <math>m \in \N^+</math>, then by Division Algorithm,
<math>\exists q, \, r \in \N \;\text{with}\; 0\leq r < m, \; \text{s.t.}\; n = mq+r</math>,
where <math>q</math> is called the quotient and <math>r</math> the remainder. Hence we can define a binary function
<math>\mod : \N \times \N^+ \rightarrow \N </math> given by <math>r:=n \mod m</math>, which returns the remainder after division by m.
 
<br />
Generally, mod means taking the remainder after division by m.
<br />
We say that n is congruent to r mod m if n = mq + r, where q is an integer.
Values are between 0 and m-1 <br />
if y = ax + b with <math>0 \leq b < a</math>, then <math>b:=y \mod a</math>. <br />
  
'''Example 1:'''<br />
<math>30 = 4 \cdot  7 + 2</math><br />
<math>2 := 30\mod 7</math><br />
<br />
<math>25 = 8 \cdot  3 + 1</math><br />
<math>1 := 25\mod 3</math><br />
<br />
<math>-3=5\cdot (-1)+2</math><br />
<math>2:=-3\mod 5</math><br />
<br />
'''Example 2:'''<br />
If <math>23 = 3 \cdot  6 + 5</math> <br />
Then equivalently, <math>5 := 23\mod 6</math><br />
<br />
If <math>31 = 31 \cdot  1</math> <br />
Then equivalently, <math>0 := 31\mod 31</math><br />
<br />
If <math>-37 = 40\cdot (-1)+ 3</math> <br />
Then equivalently, <math>3 := -37\mod 40</math><br />

'''Example 3:'''<br />
<math>77 = 3 \cdot  25 + 2</math><br />
<math>2 := 77\mod 3</math><br />
<br />
<math>25 = 25 \cdot  1 + 0</math><br />
<math>0 := 25\mod 25</math><br />
<br />

'''Note:''' <math>\mod</math> here is different from the modulo congruence relation in <math>\Z_m</math>, which is an equivalence relation instead of a function.

The modulo operation is useful for determining whether one integer divided by another leaves a non-zero remainder. The integers should satisfy <math>n = mq + r</math>, where <math>m</math>, <math>r</math>, <math>q</math>, and <math>n</math> are all integers, with <math>0 \leq r < m</math>. The same rules apply when any of <math>m</math>, <math>r</math>, <math>q</math>, or <math>n</math> is a negative integer; see the third example.
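The examples above can be verified with MATLAB's built-in mod function (an added illustration; mod returns the remainder between 0 and m-1, even for negative n):
<pre style="font-size:16px">
>>mod(30,7)     % returns 2
>>mod(25,3)     % returns 1
>>mod(-3,5)     % returns 2, since -3 = 5*(-1) + 2
>>mod(-37,40)   % returns 3, since -37 = 40*(-1) + 3
</pre>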
==== Mixed Congruential Algorithm ====
We define the Linear Congruential Method to be <math>x_{k+1}=(ax_k + b) \mod m</math>, where <math>x_k, a, b, m \in \N, \;\text{with}\; a, m \neq 0</math>. Given a '''seed''' (i.e. an initial value <math>x_0 \in \N</math>), we can obtain values for <math>x_1, \, x_2, \, \cdots, x_n</math> inductively. The Multiplicative Congruential Method, invented by Berkeley professor D. H. Lehmer, may also refer to the special case where <math>b=0</math>, and the Mixed Congruential Method is the case where <math>b \neq 0</math>.<br /> The title "mixed" arises from the fact that it has both a multiplicative and an additive term.

An interesting fact about the '''Linear Congruential Method''' is that it is one of the oldest and best-known pseudo random number generator algorithms. It is very fast and requires minimal memory to retain state. However, this method should not be used for applications that require high randomness, such as Monte Carlo simulation and cryptographic applications. (Monte Carlo simulation will consider possibilities for every choice of consideration, and it shows the extreme possibilities. This method is not precise enough.)<br />

[[File:Linear_Congruential_Statment.png‎|600px]] "Source: STAT 340 Spring 2010 Course Notes"

'''First consider the following algorithm'''<br />
<math>x_{k+1}=x_{k} \mod m</math> <br />

Example: if <math>x_{0}=5</math> and <math>x_{n}=3x_{n-1} \mod 150</math>, find <math>x_{1},x_{8},x_{9}</math>. <br />
<math>x_{n}=(3^n \cdot 5) \mod 150</math> <br />
<math>x_{1}=15,\, x_{8}=105,\, x_{9}=15</math> <br />
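A short MATLAB loop (added as a sketch) confirms these values:
<pre style="font-size:16px">
>>x = 5;                  % seed x_0 = 5
>>for n = 1:9
      x = mod(3*x, 150);  % x_n = 3*x_{n-1} mod 150
      if n==1 || n==8 || n==9
          disp([n x])     % prints x_1 = 15, x_8 = 105, x_9 = 15
      end
  end
</pre>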

'''Example'''<br />
<math>\text{Let }x_{0}=10,\,m=3</math><br />

:<math>\begin{align}
x_{1} &{}= 10 \mod{3} = 1 \\
x_{2} &{}= 1 \mod{3} = 1 \\
x_{3} &{}= 1 \mod{3} = 1 \\
\end{align}</math>
<math>\ldots</math><br />

Excluding <math>x_{0}</math>, this example generates a series of ones. In general, excluding <math>x_{0}</math>, the algorithm above will always generate a series of the same number less than <math>m</math>. Hence, it has a period of 1. The '''period''' is the length of a sequence before it repeats. We want a large period with a sequence that looks random. We can modify this algorithm to form the Multiplicative Congruential Algorithm. <br />

<math>x_{k+1}=(a \cdot x_{k} + b) \mod m</math> (a little tip: <math>(a \cdot b) \mod c = \big((a \mod c) \cdot (b \mod c)\big) \mod c</math>)<br/>
  
 
'''Example'''<br />

This example generates a sequence with a repeating cycle of two integers.<br />

(If we choose the numbers properly, we could get a sequence of "random" numbers. How do we find the values of <math>a, b,</math> and <math>m</math>? At the very least, <math>m</math> should be a very '''large''', preferably prime, number. The larger <math>m</math> is, the higher the possibility of getting a sequence of "random" numbers. This is easier to solve in Matlab: the command rand() generates random numbers which are uniformly distributed on the interval (0,1). Matlab uses <math>a=7^5, b=0, m=2^{31}-1</math> – recommended in a 1988 paper, "Random Number Generators: Good Ones Are Hard To Find" by Stephen K. Park and Keith W. Miller. (The important part is that <math>m</math> should be '''large and prime'''.)<br />

Note: <math>\frac {x_{n+1}}{m-1}</math> is an approximation to the value of a U(0,1) random variable.<br />
 
 
   
 
   
 
'''MatLab Instruction for Multiplicative Congruential Algorithm:'''<br />

''(Note: <br />
1. Keep repeating this command over and over again and you will seem to get random numbers – this is how the command rand works in a computer. <br />
2. There is a function in MATLAB called '''RAND''' to generate a random number between 0 and 1. <br />
For example, in MATLAB, we can use '''rand(1,1000)''' to generate 1000 numbers between 0 and 1. This is a vector with 1 row and 1000 columns, with each entry a random number between 0 and 1.<br />
3. If we would like to generate 1000 or more numbers, we could use a '''for''' loop.)<br /><br />
  
 
''(Note on MATLAB commands: <br />
2. close all: closes all figures.<br />
3. who: displays all defined variables.<br />
4. clc: clears screen.<br />
5. ; : prevents the results from printing.<br />
6. disttool: displays a graphing tool.)<br /><br />
  
 
This algorithm involves three integer parameters <math>a, b,</math> and <math>m</math>, and an initial value <math>x_0</math> called the '''seed'''. A sequence of numbers is defined by <math>x_{k+1} = ax_k+ b \mod m</math>. <br />

Note: For some bad choices of <math>a</math> and <math>b</math>, the histogram may not look uniformly distributed.<br />

Note: In MATLAB, hist(x) will generate a graph representing the distribution. Use this function after you run the code to check the real sample distribution.

'''Example''': <math>a=13, b=0, m=31</math><br />
The first 30 numbers in the sequence are a permutation of the integers from 1 to 30, and then the sequence repeats itself, so '''it is important to choose <math>m</math> large''' to decrease the probability of each number repeating itself too early. Values are between <math>0</math> and <math>m-1</math>. If the values are normalized by dividing by <math>m-1</math>, then the results are '''approximately''' numbers uniformly distributed in the interval [0,1]. There is only a finite number of values (30 possible values in this case). In MATLAB, you can use the function "hist(x)" to see if it looks uniformly distributed. We saw that the values between 0-30 had the same frequency in the histogram, so we can conclude that they are uniformly distributed. <br />

If <math>x_0=1</math>, then <br />
x_{2} &{}= 3 \times 1 + 2 \mod{4} = 1 \\
\end{align}</math><br />

Another example: <math>a=3,\, b=2,\, m=5,\, x_{0}=1</math>,
etc.
 
<hr/>
<p style="color:red;font-size:16px;">FAQ:</P>
1. Why is it 1 to 30 instead of 0 to 30 in the example above?<br>
''<math>b = 0</math>, so in order to have <math>x_k</math> equal to 0, <math>x_{k-1}</math> must be 0 (since <math>a=13</math> is relatively prime to 31). However, the seed is 1. Hence, we will never observe 0 in the sequence.''<br>
Alternatively, {0} and {1,2,...,30} are two orbits of the left multiplication by 13 in the group <math>\Z_{31}</math>.<br>

'''Examples: [From Textbook]'''<br />
If <math>x_0=3</math> and <math>x_n=(5x_{n-1}+7)\mod 200</math>, find <math>x_1,\cdots,x_{10}</math>.<br />
'''Solution:'''<br />
'''Comments:'''<br />

Matlab code:
<pre style="font-size:16px">
a=5;
b=7;
m=200;
x(1)=3;
for ii=2:1000
    x(ii)=mod(a*x(ii-1)+b,m);
end
size(x);
hist(x)
</pre>

Typically, it is good to choose <math>m</math> such that <math>m</math> is large and prime. Careful selection of the parameters <math>a</math> and <math>b</math> also helps generate relatively "random" output values, where it is harder to identify patterns. For example, when we used a composite (non-prime) number such as 40 for <math>m</math>, our results were not satisfactory in producing an output resembling a uniform distribution.<br />

The computed values are between 0 and <math>m-1</math>. If the values are normalized by dividing by '''<math>m-1</math>''', the result is numbers uniformly distributed on the interval <math>\left[0,1\right]</math> (similar to computing from a uniform distribution).<br />

From the example shown above, if we want to create a large group of random numbers, it is better to have a large, prime <math>m</math> so that the generated random values will not repeat after several iterations. Note: the period for this example is 8: from <math>x_2</math> to <math>x_9</math>.<br />

There has been research on how to choose a uniform sequence. Many programs give you the option to choose the seed. Sometimes the seed is chosen by the CPU.<br />

<span style="background:#F5F5DC">Theorem (extra knowledge)</span><br />
Let c be a non-zero constant. Then for any seed x<sub>0</sub>, an LCG will have the largest possible period if and only if<br />
(i) m and c are coprime;<br />
(ii) (a-1) is divisible by all prime factors of m;<br />
(iii) if m is divisible by 4, then a-1 is also divisible by 4.<br />

We want our LCG to have a large cycle.
We call a cycle with m elements the maximal period.
We can make it bigger by making m big and prime.
Recall: any number you can think of can be broken into a product of prime factors.
Define coprime: Two numbers X and Y are coprime if they do not share any prime factors.

Example:<br />
<font size="3">X<sub>n</sub>=(15X<sub>n-1</sub> + 4) mod 7</font><br />
(i) m=7, c=4 -> coprime;<br />
(ii) a-1=14 and a-1 is divisible by 7;<br />
(iii) does not apply.<br />
(The extra knowledge stops here)
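To see the theorem in action, the following MATLAB sketch (added here as an illustration) counts the period of the small LCG above by iterating until the seed reappears:
<pre style="font-size:16px">
% Count the period of x_n = (15*x_{n-1} + 4) mod 7, starting from seed 0.
% Conditions (i)-(iii) hold, so the period should be the full m = 7.
a = 15; c = 4; m = 7;
x0 = 0;
x = mod(a*x0 + c, m);
period = 1;
while x ~= x0
    x = mod(a*x + c, m);
    period = period + 1;
end
period    % displays 7
</pre>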

In this part, I learned how to use R code to figure out the relationship between two integers under division, and their remainder. And when we use R to calculate with random variables over a range such as (1:1000), the graph of the distribution looks like a uniform distribution.
 
<div style="border:1px solid #cccccc;border-radius:10px;box-shadow: 0 5px 15px 1px rgba(0, 0, 0, 0.6), 0 0 200px 1px rgba(255, 255, 255, 0.5);padding:20px;margin:20px;background:#FFFFAD;">
 
<div style="border:1px solid #cccccc;border-radius:10px;box-shadow: 0 5px 15px 1px rgba(0, 0, 0, 0.6), 0 0 200px 1px rgba(255, 255, 255, 0.5);padding:20px;margin:20px;background:#FFFFAD;">
<h2 style="text-align:center;">Summary of Multiplicative Congruential Algorithm</h2>
+
<h4 style="text-align:center;">Summary of Multiplicative Congruential Algorithm</h4>
 
<p><b>Problem:</b> generate Pseudo Random Numbers.</p>
 
<p><b>Problem:</b> generate Pseudo Random Numbers.</p>
 
<b>Plan:</b>  
 
<b>Plan:</b>  
 
<ol>
 
<ol>
<li>find integer: <i>a b m</i>(large prime) </i>x<sub>0</sub></i>(the seed) .</li>
+
<li>find integer: <i>a b m</i>(large prime) <i>x<sub>0</sub></i>(the seed) .</li>
<li><math>x_{x+1}=(ax_{k}+b)</math>mod m</li>
+
<li><math>x_{k+1}=(ax_{k}+b)</math>mod m</li>
 
</ol>
 
</ol>
 
<b>Matlab Instruction:</b>
 
<b>Matlab Instruction:</b>
Line 358: Line 461:
 
</pre>
 
</pre>
 
</div>
 
</div>
Another algorithm for generating pseudo random numbers is the multiply-with-carry (MWC) method. Its simplest form is similar to the linear congruential generator. It differs in that the parameter b changes in the MWC algorithm. It is as follows: <br>

1.) x<sub>k+1</sub> = ax<sub>k</sub> + b<sub>k</sub> mod m <br>
2.) b<sub>k+1</sub> = floor((ax<sub>k</sub> + b<sub>k</sub>)/m) <br>
3.) set k to k + 1 and go to step 1
[http://www.javamex.com/tutorials/random_numbers/multiply_with_carry.shtml Source]
 
=== Inverse Transform Method ===
Now that we know how to generate random numbers, we can use them to sample from distributions such as the exponential. However, to easily use this method, the probability distribution consumed must have a cumulative distribution function (cdf) <math>F</math> with a tractable (that is, easily found) inverse <math>F^{-1}</math>.<br />
  
 
'''Theorem''': <br />
follows the distribution function <math>F\left(\cdot\right)</math>,
where <math>F^{-1}\left(u\right):=\inf F^{-1}\big(\left[u,+\infty\right)\big) = \inf\{x\in\R | F\left(x\right) \geq u\}</math> is the generalized inverse.<br />
'''Note''': <math>F</math> need not be invertible everywhere on the real line, but if it is, then the generalized inverse is the same as the inverse in the usual case. We only need it to be invertible on the range of F(x), [0,1].
  
 
'''Proof of the theorem:'''<br />
The generalized inverse satisfies the following: <br />

:<math>P(X\leq x)</math> <br />
<math>= P(F^{-1}(U)\leq x)</math> (since <math>X= F^{-1}(U)</math> by the inverse method)<br />
<math>= P(F(F^{-1}(U))\leq F(x))</math> (since <math>F</math> is monotonically increasing) <br />
<math>= P(U\leq F(x))</math> (since <math> P(U\leq a)= a</math> for <math>U \sim U(0,1), a \in [0,1]</math>)<br />
<math>= F(x), \text{ where } 0 \leq F(x) \leq 1 </math>  <br />

This is the c.d.f. of X.  <br />
<br />
 
That is <math>F^{-1}\left(u\right) \leq x \Leftrightarrow u \leq F\left(x\right)</math><br />

Therefore, in order to generate a random variable X~F, we can generate U according to U(0,1) and then make the transformation x=<math> F^{-1}(U) </math> <br />

Note that we can apply the inverse on both sides in the proof of the inverse transform only if the cdf of X is monotonic. A monotonic function is one that is either increasing for all x, or decreasing for all x. Of course, this holds true for all CDFs, since they are non-decreasing by definition. <br />
  
In short, what the theorem tells us is that we can use a random number U from U(0,1) to randomly sample a point on the CDF of X, then apply the inverse of the CDF to map the given probability to its domain, which gives us the random variable X.<br/>

'''Inverse Transform Algorithm for Generating Binomial(n,p) Random Variable'''<br>
Step 1: Generate a random number <math>U</math>.<br>
Step 2: <math>c = \frac {p}{(1-p)}</math>, <math>i = 0</math>, <math>pr = (1-p)^n</math>, <math>F = pr</math><br>
Step 3: If U<F, set X = i and stop.<br>
Step 4: <math> pr = \, {\frac {c(n-i)}{(i+1)}} {pr}, F = F +pr, i = i+1</math><br>
Step 5: Go to step 3.<br>
*Note: These steps can be found in Simulation 5th Ed. by Sheldon Ross.
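A direct MATLAB translation of these steps (added as a sketch; the values of n and p are illustrative):
<pre style="font-size:16px">
% Inverse transform sampling for Binomial(n,p), following the steps above.
n = 10; p = 0.3;         % illustrative parameters
U = rand;                % Step 1
c = p/(1-p); i = 0;      % Step 2
pr = (1-p)^n; F = pr;
while U >= F             % Steps 3-5: accumulate the cdf until it exceeds U
    pr = (c*(n-i)/(i+1))*pr;
    F = F + pr;
    i = i + 1;
end
X = i                    % the sampled Binomial(n,p) value
</pre>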
 
  
'''Example 1 - Exponential''': <math> f(x) = \lambda e^{-\lambda x}</math><br/>
Calculate the CDF:<br />
<math> F(x)= \int_0^x f(t) dt = \int_0^x \lambda e ^{-\lambda t}\ dt</math>
<math> = \frac{\lambda}{-\lambda}\, e^{-\lambda t}\,\Big|_0^x </math>
<math> = -e^{-\lambda x} + e^0 =1 - e^{- \lambda x} </math><br />
Solve the inverse:<br />
<math> y=1-e^{- \lambda x}  \Rightarrow  1-y=e^{- \lambda x} \Rightarrow  x=-\frac {\ln(1-y)}{\lambda}</math><br />
<math> y=-\frac {\ln(1-x)}{\lambda}  \Rightarrow  F^{-1}(x)=-\frac {\ln(1-x)}{\lambda}</math><br />
Note that 1 − U is also uniform on (0, 1), and thus −log(1 − U) has the same distribution as −log U. <br />
Steps: <br />
Step 1: Draw U ~ U[0,1];<br />
Step 2: <math>  x=\frac{-\ln(U)}{\lambda} </math> <br /><br />
  
'''Example 2 - Normal distribution''':<br />
Let <math>Z \sim N(0,1)</math> and <math>Y = Z^2</math>. Then
:<math> G(y) = P(Y \leq y) = P(-\sqrt{y} \leq Z \leq \sqrt{y}) = \int_{-\sqrt{y}}^{\sqrt{y}} \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2}\, dz = 2\int_0^{\sqrt{y}} \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2}\, dz </math>
This is the cdf of <math>Y = Z^2</math>. Its pdf is <math>g(y) = G'(y)</math>, which is the pdf of <math>\chi^2(1)</math>.
'''MatLab Code''':<br />
  
 
<pre style="font-size:16px">
>>u=rand(1,1000);
>>hist(u)        % this will generate a fairly uniform diagram
</pre>
[[File:ITM_example_hist(u).jpg|300px]]
<pre style="font-size:16px">
% let lambda=2 in this example; however, you can choose another value for lambda
>>x=(-log(1-u))/2;
>>size(x)        % 1000 in size
>>hist(x)
</pre>
[[File:ITM_example_hist(x).jpg|300px]]
  
'''Example 2 - Continuous Distribution''':<br />

<math> f(x) = \dfrac {\lambda } {2}\, e^{-\lambda \left| x-\theta \right| }, \quad -\infty < x < \infty ,\; \lambda >0 </math><br/>

Calculate the CDF:<br />
<math> F(x)= \frac{1}{2} e^{-\lambda (\theta - x)}, \text{ for } x \le \theta </math><br/>
<math> F(x) = 1 - \frac{1}{2} e^{-\lambda (x - \theta)}, \text{ for } x > \theta </math><br/>

Solve for the inverse:<br />
<math>F^{-1}(y)= \theta + \frac{\ln(2y)}{\lambda}, \text{ for } 0 \le y \le 0.5</math><br/>
<math>F^{-1}(y)= \theta - \frac{\ln(2(1-y))}{\lambda}, \text{ for } 0.5 < y \le 1</math><br/>

Algorithm:<br />
Steps: <br />
Step 1: Draw U ~ U[0, 1];<br />
Step 2: Compute <math>X = F^{-1}(U)</math>, i.e. <math>X = \theta  + \frac {1}{\lambda} \ln(2U)</math> for U < 0.5; else <math>X = \theta -\frac {1}{\lambda} \ln(2(1-U))</math>
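A MATLAB sketch of this algorithm (an added illustration; the values of theta and lambda are arbitrary):
<pre style="font-size:16px">
% Inverse transform sampling for the double exponential distribution above.
theta = 0; lambda = 2;          % illustrative parameters
u = rand(1,1000);
x = zeros(1,1000);
x(u < 0.5)  = theta + log(2*u(u < 0.5))/lambda;        % branch for U < 0.5
x(u >= 0.5) = theta - log(2*(1-u(u >= 0.5)))/lambda;   % branch for U >= 0.5
hist(x,50)   % should look symmetric and peaked at theta
</pre>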
  

'''Example 3 - <math>F(x) = x^5</math>''':<br/>
Given a CDF of X: <math>F(x) = x^5</math>, transform U~U[0,1]. <br />
Sol:
Let <math>y=x^5</math>, solve for x: <math>x=y^\frac {1}{5}</math>. Therefore, <math>F^{-1} (x) = x^\frac {1}{5}</math><br />
Hence, to obtain a value of x from F(x), we first set 'u' as a uniform distribution, then obtain the inverse function of F(x), and set
<math>x= u^\frac{1}{5}</math><br /><br />
Algorithm:<br />
Steps: <br />
Step 1: Draw U ~ rand[0, 1];<br />
Step 2: X=U^(1/5);<br />
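In MATLAB this is a one-liner on a vector of uniforms (an added sketch; note the element-wise power):
<pre style="font-size:16px">
>>u = rand(1,1000);
>>x = u.^(1/5);    % element-wise fifth root, so X has CDF F(x) = x^5
>>hist(x)
</pre>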
  
'''Example 4 - BETA(1,β)''':<br/>
Given u~U[0,1], generate x from BETA(1,β)<br />
Solution:
<math>F(x)= 1-(1-x)^\beta</math>,
<math>u= 1-(1-x)^\beta</math><br />
Solve for x:
<math>(1-x)^\beta = 1-u</math>,
<math>1-x = (1-u)^\frac {1}{\beta}</math>,
<math>x = 1-(1-u)^\frac {1}{\beta}</math><br />
Let β=3 and use Matlab to construct N=1000 observations from Beta(1,3):<br />
'''MatLab Code''':<br />
<pre style="font-size:16px">
>> u = rand(1,1000);
>> x = 1-(1-u).^(1/3);   % element-wise power, since u is a vector
>> hist(x,50)
>> mean(x)
</pre>
  

'''Example 5 - Estimating <math>\pi</math>''':<br/>
Let's use rand() and the Monte Carlo Method to estimate <math>\pi</math>. <br />
N = total number of points <br />
N<sub>c</sub> = total number of points inside the circle<br />
Prob[(x,y) lies in the circle] = <math>\frac {Area(circle)}{Area(square)}</math><br />
If we take a square of size 2, the circle will have area <math>\pi (\frac {2}{2})^2 =\pi</math>.<br />
Thus <math>\pi= 4(\frac {N_c}{N})</math><br />

'''Matlab Code''':
<pre style="font-size:16px">
>>N=10000;
>>Nc=0;
>>a=0;
>>b=2;
>>for t=1:N
      x=a+(b-a)*rand();
      y=a+(b-a)*rand();
      if (x-1)^2+(y-1)^2<=1
          Nc=Nc+1;
      end
  end
>>4*(Nc/N)
  ans = 3.1380
</pre>

  <font size="3">For example, '''UNIF(a,b)'''<br />
  <math>y = F(x) = (x - a)/ (b - a) </math>
  <math>x = (b - a ) * y + a</math>
  <math>X = a + ( b - a) * U</math><br />
  where U is UNIF(0,1)</font>
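As a sketch, sampling from UNIF(a,b) in MATLAB via this transformation (the endpoints a and b are illustrative):
<pre style="font-size:16px">
>>a = 3; b = 7;        % illustrative endpoints
>>u = rand(1,1000);    % U ~ UNIF(0,1)
>>x = a + (b-a)*u;     % X ~ UNIF(a,b) by the inverse transform
>>hist(x)
</pre>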
  
'''Limitations:'''<br />
1. This method is flawed since not all functions are invertible or monotonic: the generalized inverse is hard to work with.<br />
2. It may be impractical since some CDFs and/or integrals are not easy to compute, such as for the Gaussian distribution.<br />

We learned how to prove the transformation from the cdf to the inverse cdf, and how to use the uniform distribution to obtain a value of x from F(x).
We can also use the uniform distribution in the inverse method to determine other distributions.
The probability of getting a point for a circle over the triangle is a closed uniform distribution; each point in the circle and over the triangle is almost the same. Then, we can look at the graph to determine what kind of distribution the graph resembles.
  
==== Probability Distribution Function Tool in MATLAB ====
<pre style="font-size:16px">
disttool        % shows different distributions
</pre>

This command allows users to explore different types of distributions and see how changes in the parameters affect the plot of either the CDF or the PDF.

[[File:Disttool.jpg|450px]]
Changing the values of mu and sigma changes the skew of the graph.

== Class 3 - Tuesday, May 14 ==
=== Recall the Inverse Transform Method ===
Let U~Unif(0,1), then the random variable X = F<sup>-1</sup>(U) has distribution F. <br />
To sample X with CDF F(x): <br />

1) Draw <math> U \sim~ Unif [0,1] </math><br />
'''2) X = F<sup>-1</sup>(U)'''<br />
'''Note''': the CDF of a U(a,b) random variable is:
:<math>
  F(x)= \begin{cases}
  0 & \text{for }x < a \\[8pt]
  \frac{x-a}{b-a} & \text{for }a \le x < b \\[8pt]
  1 & \text{for }x \ge b
  \end{cases}
</math>

Thus, for <math> U </math> ~ <math>U(0,1) </math>, we have <math>P(U\leq 1) = 1</math> and <math>P(U\leq 1/2) = 1/2</math>.<br />
More generally, we see that <math>P(U\leq a) = a</math>.<br />
For this reason, we had <math>P(U\leq F(x)) = F(x)</math>.<br />

'''Reminder:''' <br />
'''This is only for uniform distribution <math> U~ \sim~ Unif [0,1] </math> '''<br />
<math> P (U \le 1) = 1 </math> <br />
<math> P (U \le 0.5) = 0.5 </math> <br />
<math> P (U \le a) = a </math> <br />

[[File:2.jpg]]            <math>P(U\leq a)=a</math>

Note that at a single point there is no probability mass (i.e. <math>u</math> <= 0.5 is the same as <math>u</math> < 0.5).
More formally, this says that <math> P(X = x) = F(x)- \lim_{s \to x^-}F(s)</math>, which equals zero for any continuous random variable.
  
==== Limitations of the Inverse Transform Method ====

Though this method is very easy to use and apply, it does have a major disadvantage/limitation:

*  We need to find the inverse cdf <math> F^{-1}(\cdot) </math>. In some cases the inverse function does not exist, or is difficult to find because it requires a closed form expression for F(x).

For example, it is too difficult to find the inverse cdf of the Gaussian distribution, so we must find another method to sample from the Gaussian distribution.

In conclusion, we need to find another way of sampling from more complicated distributions.

=== Discrete Case ===
The same technique can be used for the discrete case. We want to generate a discrete random variable x that has probability mass function: <br/>

:<math>\begin{align}P(X = x_i) &{}= p_i \end{align}</math>
:<math>x_0 \leq x_1 \leq x_2 \dots \leq x_n</math>
:<math>\sum p_i = 1</math>

Algorithm for applying Inverse Transformation Method in Discrete Case (Procedure):<br>
1. Define a probability mass function for <math>x_{i}</math>, where i = 1,....,k. Note: k could grow infinitely. <br>
2. Generate a uniform random number U, <math> U~ \sim~ Unif [0,1] </math><br>
3. If <math>U\leq p_{o}</math>, deliver <math>X = x_{o}</math><br>
4. Else, if <math>U\leq p_{o} + p_{1} </math>, deliver <math>X = x_{1}</math><br>
5. Repeat the process until <math>U\leq p_{o} + p_{1} + ......+ p_{k}</math>; deliver <math>X = x_{k}</math><br>
  
Then we can generate numbers from this distribution like this, given <math>U \sim~ Unif[0, 1]</math>:
+
Note that after generating a random U, the value of X can be determined by finding the interval <math>[F(x_{j-1}),F(x_{j})]</math> in which U lies. <br />
  
:<math>
+
In summary:
x = \begin{cases}
+
Generate a discrete r.v.x that has pmf:<br />
0, & \text{if } U\leq 0.3 \\
+
  P(X=xi)=Pi,   x0<x1<x2<... <br />
1, & \text{if } 0.3 < U \leq 0.5 \\
+
1. Draw U~U(0,1);<br />
2, & \text{if } 0.5 <U\leq 1
+
2. If F(x(i-1))<U<F(xi), x=xi.<br />
\end{cases}</math>
 
  
"Procedure"<br />
 
1. Draw U~u (0,1)<br />
 
2. if U<=0.3 deliver x=0<br />
 
3. else if 0.3<U<=0.5 deliver x=1<br />
 
4. else 0.5<U<=1 deliver x=2
 
  
  
'''Example 3.0:''' <br />
Generate a random variable from the following probability function:<br />
{| class="wikitable"
|-
| x
| -2
| -1
| 0
| 1
| 2
|-
| f(x)
| 0.1
| 0.5
| 0.07
| 0.03
| 0.3
|}

Answer:<br />
1. Generate U~U(0,1)<br />
2. If U < 0.5 then output -1<br />
else if U < 0.8 then output 2<br />
else if U < 0.9 then output -2<br />
else if U < 0.97 then output 0, else output 1<br />
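Below is a minimal Matlab sketch of this answer (the sample size of 1000 is an illustrative choice):

<pre style="font-size:16px">
>>for ii=1:1000
    u=rand;
      if u<0.5
          x(ii)=-1;      % p=0.5
      elseif u<0.8
          x(ii)=2;       % p=0.3
      elseif u<0.9
          x(ii)=-2;      % p=0.1
      elseif u<0.97
          x(ii)=0;       % p=0.07
      else
          x(ii)=1;       % p=0.03
      end
  end
>>hist(x)
</pre>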
'''Example 3.1 (from class):''' (Coin Flipping Example)<br />
We want to simulate a coin flip. We have U~U(0,1) and X = 0 or X = 1.

We can define X as a function of U so that:

if <math>U\leq 0.5</math>, then X = 0,

and if <math>0.5 < U\leq 1</math>, then X = 1.

This makes the probability of Heads 0.5, giving a good generator of a random coin flip.

With <math> U~ \sim~ Unif [0,1] </math>:
:<math>\begin{align}
P(X = 0) &{}= 0.5\\
P(X = 1) &{}= 0.5
\end{align}</math>

The answer is:
:<math>
x = \begin{cases}
0, & \text{if } U\leq 0.5 \\
1, & \text{if } 0.5 < U \leq 1
\end{cases}</math>
  
  
* '''Code'''<br />
<pre style="font-size:16px">
>>for ii=1:1000
    u=rand;
      if u<0.5
        x(ii)=0;
      else
        x(ii)=1;
      end
  end
>>hist(x)
</pre>
[[File:Coin_example.jpg|300px]]

Note: The role of the semi-colon in Matlab: Matlab will not print the result of a line that ends in a semi-colon, and it will print the result when the semi-colon is omitted.
  
'''Example 3.2 (From class):'''

Suppose we have the following discrete distribution:

:<math>\begin{align}
P(X = 0) &{}= 0.3 \\
P(X = 1) &{}= 0.2 \\
P(X = 2) &{}= 0.5
\end{align}</math>
[[File:33.jpg]]
  
The cumulative distribution function (cdf) for this distribution is then:

:<math>
F(x) = \begin{cases}
0, & \text{if } x < 0 \\
0.3, & \text{if } 0 \le x < 1 \\
0.5, & \text{if } 1 \le x < 2 \\
1, & \text{if } x \ge 2
\end{cases}</math>

Then we can generate numbers from this distribution like this, given <math>U \sim~ Unif[0, 1]</math>:

:<math>
x = \begin{cases}
0, & \text{if } U\leq 0.3 \\
1, & \text{if } 0.3 < U \leq 0.5 \\
2, & \text{if } 0.5 <U\leq 1
\end{cases}</math>

'''Procedure'''<br />
1. Draw U~U(0,1)<br />
2. If U<=0.3, deliver x=0<br />
3. Else if 0.3<U<=0.5, deliver x=1<br />
4. Else (0.5<U<=1), deliver x=2
  
Can you find a faster way to run this algorithm? Consider:

:<math>
x = \begin{cases}
2, & \text{if } U\leq 0.5 \\
1, & \text{if } 0.5 < U \leq 0.7 \\
0, & \text{if } 0.7 <U\leq 1
\end{cases}</math>

The logic for this is that U is most likely to fall into the largest interval. Thus, by checking the most probable outcome first (here <math>U\leq 0.5</math>, which delivers x=2), we reduce the expected number of comparisons and improve the run time of this algorithm. Could this algorithm be improved further using the same logic?
* '''Code''' (as shown in class)<br />
Use the Editor window to edit the code. <br />
<pre style="font-size:16px">
>>close all
>>clear all
>>for ii=1:1000
    u=rand;
      if u<=0.3
          x(ii)=0;
      elseif u<=0.5
          x(ii)=1;
      else
          x(ii)=2;
      end
    end
>>size(x)
>>hist(x)
</pre>
[[File:Discrete_example.jpg|300px]]
The algorithm above generates a 1-by-1000 vector containing 0's, 1's and 2's in differing proportions. Because of the acceptance criteria for 0, 1 and 2, the proportions correspond to their respective probabilities, so the histogram (frequency of 0, 1 and 2) is not the pmf itself but a frequency plot whose shape looks identical to the pmf.

'''Example 3.3''': Generating a random variable from the pdf <br>
:<math>
f_{x}(x) = \begin{cases}
2x, & \text{if } 0\leq x \leq 1 \\
0, & \text{otherwise}
\end{cases}</math>

:<math>
F_{x}(x) = \begin{cases}
0, & \text{if } x < 0 \\
\int_{0}^{x}2s\,ds = x^{2}, & \text{if } 0\leq x \leq 1 \\
1, & \text{if }  x > 1
\end{cases}</math>

:<math>\begin{align} U = x^{2},\quad X = F_{x}^{-1}(U)= U^{\frac{1}{2}}\end{align}</math>
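A two-line Matlab sketch of this inverse (the sample size is an illustrative choice):

<pre style="font-size:16px">
>>u=rand(1,1000);
>>x=u.^0.5;      % F^{-1}(u)=sqrt(u)
>>hist(x)
</pre>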
  
'''Example 3.4''': Generating a Bernoulli random variable <br>
:<math>\begin{align} P(X = 1) = p, \quad P(X = 0) = 1 - p\end{align}</math>
:<math>
F(x) = \begin{cases}
1-p, & \text{if } x < 1 \\
1, & \text{if }  x \ge 1
\end{cases}</math>
1. Draw <math> U~ \sim~ Unif [0,1] </math><br>
2. <math>
X = \begin{cases}
0, & \text{if } 0 < U < 1-p \\
1, & \text{if } 1-p \le U < 1
\end{cases}</math>
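A minimal Matlab sketch of this Bernoulli generator (p=0.3 is an illustrative choice):

<pre style="font-size:16px">
>>p=0.3;
>>for ii=1:1000
    u=rand;
      if u<1-p
        x(ii)=0;     % U in (0,1-p) gives X=0
      else
        x(ii)=1;     % U in [1-p,1) gives X=1
      end
  end
>>mean(x)            % should be close to p
>>hist(x)
</pre>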
  
'''Example 3.5''': Generating a Binomial(n,p) Random Variable<br>
Use <math> p\left( x=i+1\right) =\dfrac {n-i} {i+1}\dfrac {p} {1-p}p\left( x=i\right) </math>

Step 1: Generate a random number <math>U</math>.<br>
Step 2: <math>c = \frac {p}{(1-p)}</math>, <math>i = 0</math>, <math>pr = (1-p)^n</math>, <math>F = pr</math><br>
Step 3: If U<F, set X = i and stop.<br>
Step 4: <math> pr = \, {\frac {c(n-i)}{(i+1)}} {pr}, \quad F = F +pr, \quad i = i+1</math><br>
Step 5: Go to step 3.<br>
*Note: These steps can be found in Simulation, 5th Ed., by Sheldon Ross.
*Note: Another method is to see the Binomial as a sum of n independent Bernoulli random variables built from uniforms U1, ..., Un: set X equal to the number of Ui that are less than or equal to p. To use this method, n random numbers are needed and n comparisons must be done. By contrast, the inverse transform method above generates only one uniform and makes 1 + np comparisons on average.<br>
Step 1: Generate n uniform numbers U1 ... Un.<br>
Step 2: <math>X = \sum_{i=1}^{n} \mathbf{1}(U_i \le p)</math>, where p is the probability of success.
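A Matlab sketch of the inverse-transform steps above (n=10 and p=0.4 are illustrative choices):

<pre style="font-size:16px">
>>n=10; p=0.4;
>>for ii=1:1000
    u=rand;
    c=p/(1-p); i=0; pr=(1-p)^n; F=pr;
    while u>=F                    % step 3: stop when U<F
        pr=(c*(n-i)/(i+1))*pr;    % step 4: recursive pmf update
        F=F+pr;
        i=i+1;
    end
    x(ii)=i;
  end
>>hist(x)
</pre>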
  
'''Example 3.6''': Generating a Poisson random variable <br>

Let X ~ Poi(u). Write an algorithm to generate X.
The pmf of a Poisson is:
:<math>\begin{align} f(x) = \frac {\, e^{-u} u^x}{x!} \end{align}</math>
We know that
:<math>\begin{align} P_{x+1} = \frac {\, e^{-u} u^{x+1}}{(x+1)!} \end{align}</math>
The ratio is <math>\begin{align} \frac {P_{x+1}}{P_x} = \frac {u}{{x+1}} \end{align}</math>
Therefore, <math>\begin{align} P_{x+1} = \, {\frac {u}{x+1}} P_x\end{align}</math>

Algorithm: <br>
1) Generate U ~ U(0,1) <br>
2) <math>\begin{align} X = 0 \end{align}</math>
  <math>\begin{align} F = P(X = 0) = e^{-u} u^0/{0!} = e^{-u} = p \end{align}</math>
3) If U<F, output X <br>
  Else, <math>\begin{align} p = {\frac {u}{X+1}}\, p \end{align}</math> <br>
        <math>\begin{align} F = F + p \end{align}</math> <br>
        <math>\begin{align} X = X + 1 \end{align}</math> <br>
4) Go to 3 <br>

Acknowledgements: This is an example from Stat 340, Winter 2013.
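A Matlab sketch of this Poisson algorithm (the mean u0=4 is an illustrative choice):

<pre style="font-size:16px">
>>u0=4;                        % Poisson mean
>>for ii=1:1000
    U=rand;
    X=0; p=exp(-u0); F=p;
    while U>=F                 % step 3: stop once U<F
        p=(u0/(X+1))*p;        % recursive pmf update
        F=F+p;
        X=X+1;
    end
    x(ii)=X;
  end
>>hist(x)
</pre>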
  
'''Example 3.7''': Generating a Geometric random variable:

Consider Geo(p), where p is the probability of success, and define the random variable X as the total number of trials required to achieve the first success, x=1,2,3,.... We have pmf:
<math>P(X=x_i) = \, p (1-p)^{x_{i}-1}</math>
We have CDF:
<math>F(x)=P(X \leq x)=1-P(X>x) = 1-(1-p)^x</math>, where P(X>x) is the probability of getting at least x failures before we observe the first success.
Now consider the inverse transform:
:<math>
x = \begin{cases}
1, & \text{if } U\leq p \\
2, & \text{if } p < U \leq 1-(1-p)^2 \\
3, & \text{if } 1-(1-p)^2 <U\leq 1-(1-p)^3 \\
\vdots \\
k, & \text{if } 1-(1-p)^{k-1} <U\leq 1-(1-p)^k \\
\vdots
\end{cases}</math>

'''Note''': Unlike the continuous case, the discrete inverse-transform method can always be used for any discrete distribution (but it may not be the most efficient approach). <br>
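Since the cdf above inverts in closed form, a one-line Matlab sketch suffices (p=0.25 is an illustrative choice):

<pre style="font-size:16px">
>>p=0.25;
>>u=rand(1,1000);
>>x=ceil(log(1-u)./log(1-p));   % smallest k with u <= 1-(1-p)^k
>>hist(x)
</pre>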
  
'''General Procedure'''<br />
1. Draw U ~ U(0,1)<br />
2. If <math>U \leq P_{0}</math> deliver <math>x = x_{0}</math><br />
3. Else if <math>U \leq P_{0} + P_{1}</math> deliver <math>x = x_{1}</math><br />
4. Else if <math>U \leq P_{0} + P_{1} + P_{2} </math> deliver <math>x = x_{2}</math><br />
...
  Else if <math>U \leq P_{0} + ... + P_{k} </math> deliver <math>x = x_{k}</math><br />

===Inverse Transform Algorithm for Generating a Binomial(n,p) Random Variable (from textbook)===
step 1: Generate a random number U.
<br />step 2: c=p/(1-p), i=0, pr=(1-p)<sup>n</sup>, F=pr.
<br />step 3: If U<F, set X=i and stop.
<br />step 4: pr=[c(n-i)/(i+1)]pr, F=F+pr, i=i+1.
<br />step 5: Go to step 3.
  
  
'''Problems'''<br />
Though this method is very easy to use and apply, it does have a major disadvantage/limitation:
we need to find the inverse cdf <math> F^{-1}(\cdot) </math>. In some cases the inverse function does not exist, or is difficult to find because it requires a closed-form expression for F(x).
For example, it is too difficult to find the inverse cdf of the Gaussian distribution, so we must find another method to sample from the Gaussian distribution.
In conclusion, we need to find another way of sampling from more complicated distributions.

Flipping a coin is a discrete case of the uniform distribution, and the code in Example 3.1 flips a coin 1000 times; the resulting proportion is close to the expected value 0.5.<br>
Example 3.2, another discrete distribution, shows that we can split the uniform range among three outcomes, 0, 1 and 2, according to their probabilities.<br>
Example 3.3 uses the inverse method to figure out the probability range of each random variable.
 +
<div style="border:1px solid #cccccc;border-radius:10px;box-shadow: 0 5px 15px 1px rgba(0, 0, 0, 0.6), 0 0 200px 1px rgba(255, 255, 255, 0.5);padding:20px;margin:20px;background:#FFFFAD;">
 +
<h2 style="text-align:center;">Summary of Inverse Transform Method</h2>
 +
<p><b>Problem:</b>generate types of distribution.</p>
 +
<p><b>Plan:</b></p>
 +
<b style='color:lightblue;'>Continuous case:</b>
 +
<ol>
 +
<li>find CDF F</li>
 +
<li>find the inverse F<sup>-1</sup></li>
 +
<li>Generate a list of uniformly distributed number {x}</li>
 +
<li>{F<sup>-1</sup>(x)} is what we want</li>
 +
</ol>
 +
<b>Matlab Instruction</b>
 +
<pre style="font-size:16px">&gt;&gt;u=rand(1,1000);
 +
&gt;&gt;hist(u)
 +
&gt;&gt;x=(-log(1-u))/2;
 +
&gt;&gt;size(x)  
 +
&gt;&gt;figure
 +
&gt;&gt;hist(x)
 +
</pre>
 +
<br>
 +
<b style='color:lightblue'>Discrete case:</b>
 +
<ol>
 +
<li>generate a list of uniformly distributed number {u}</li>
 +
<li>d<sub>i</sub>=x<sub>i</sub> if<math> X=x_i, </math> if <math> F(x_{i-1})<U\leq F(x_i) </math></li>
 +
<li>{d<sub>i</sub>=x<sub>i</sub>} is what we want</li>
 +
</ol>
 +
<b>Matlab Instruction</b>
 +
<pre style="font-size:16px">&gt;&gt;for ii=1:1000
 +
    u=rand;
 +
      if u&lt;0.5
 +
        x(ii)=0;
 +
      else
 +
        x(ii)=1;
 +
      end
 +
  end
 +
&gt;&gt;hist(x)
 +
</pre>
 +
</div>
  
=== Generalized Inverse-Transform Method ===

Valid for any CDF F(x): return <math>X=\min\{x : F(x)\geq U\}</math>, where U~U(0,1). This works when F is:

1. Continuous, possibly with flat spots (i.e. not strictly increasing)

2. Discrete

3. Mixed continuous and discrete


'''Advantages of the Inverse-Transform Method'''

The inverse transform method preserves monotonicity and correlation,

which helps in

1. Variance reduction methods ...

2. Generating truncated distributions ...

3. Order statistics ...
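A Matlab sketch of the generalized inverse <math>X=\min\{x : F(x)\geq U\}</math> on a discrete cdf (the support and probabilities reuse Example 3.0 for illustration):

<pre style="font-size:16px">
>>xs=[-2 -1 0 1 2];
>>ps=[0.1 0.5 0.07 0.03 0.3];
>>F=cumsum(ps);                  % cdf evaluated on the support
>>for ii=1:1000
    u=rand;
    x(ii)=xs(find(F>=u,1));      % smallest x with F(x)>=u
  end
>>hist(x)
</pre>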
 
  
===Acceptance-Rejection Method===

Although the inverse transform method does allow us to turn a uniform sample into a sample from the target distribution, it has two limits:
# Not all cdfs have inverse functions in closed form
# For some distributions, such as the Gaussian, it is too difficult to find the inverse

To generate random samples for these functions, we will use different methods, such as the '''Acceptance-Rejection Method'''. This method is often more practical than the inverse transform method when the inverse cdf is unavailable. The basic idea is to use an alternative probability distribution, with density function g(x), from which we already know how to sample.

Suppose we want to draw a random sample from a target density function ''f(x)'', ''x∈S<sub>x</sub>'', where ''S<sub>x</sub>'' is the support of ''f(x)''. If we can find some constant ''c''(≥1) (in practice, we prefer c as close to 1 as possible) and a density function ''g(x)'' having the same support ''S<sub>x</sub>'' so that ''f(x)≤cg(x), ∀x∈S<sub>x</sub>'', then we can apply the procedure for the Acceptance-Rejection Method. Typically we choose a density function that we already know how to sample from for ''g(x)''.

[[File:AR_Method.png]]
The main logic behind the Acceptance-Rejection Method is that:<br>
1. We want to generate sample points from an unknown distribution, say f(x).<br>
2. We use <math>\,c g(x)</math> to generate points, so that we have more points than f(x) could ever generate for all x (where c is a constant and g(x) is a known distribution).<br>
3. For each value of x, we accept or reject points based on a probability, which is discussed below.<br>

Note: If the red line in the figure were only g(x), as opposed to <math>\,c g(x)</math> (i.e. c=1), then <math>g(x) \geq f(x)</math> for all values of x could hold only if g and f were the same function: both pdfs integrate to 1, so neither can strictly dominate the other everywhere. <br>

Also remember that <math>\,c g(x)</math> always generates more candidate points than we need. Thus we need an approach for keeping only the proper proportion of them.<br><br>

c must be chosen so that <math>f(x)\leqslant c g(x)</math> for all values of x; c can only equal 1 when f and g have the same distribution. Otherwise:<br>
Either use a software package to test whether <math>f(x)\leqslant c g(x)</math> for an arbitrarily chosen c > 0, or:<br>
1. Find the first and second derivatives of f(x) and g(x).<br>
2. Identify and classify all local and absolute maximums and minimums, using the First and Second Derivative Tests, as well as all inflection points.<br>
3. Verify that <math>f(x)\leqslant c g(x)</math> at all the local maximums as well as the absolute maximums.<br>
4. Verify that <math>f(x)\leqslant c g(x)</math> at the tail ends by calculating <math>\lim_{x \to +\infty} \frac{f(x)}{\, c g(x)}</math> and <math>\lim_{x \to -\infty} \frac{f(x)}{\, c g(x)}</math> and seeing that they are both < 1. Use of L'Hopital's Rule should make this easy, since both f and g are p.d.f.'s, resulting in both of them approaching 0.<br>
5. Efficiency: the number of iterations N needed to successfully generate X is a random variable with a geometric distribution with success probability <math>p=P\left(U \leq f(Y)/(cg(Y))\right)</math>, i.e. <math>P(N=n)=(1-p)^{n-1}p,\; n \geq 1</math>. Thus, on average, the number of iterations required is <math> E(N)=\frac{1}{p}</math>.

c should be close to the maximum of f(x)/g(x), not just some arbitrarily picked large number. Otherwise, the Acceptance-Rejection method will have more rejections (since the acceptance probability <math>\frac{f(x)}{c g(x)}</math> will be close to zero). This would render our algorithm inefficient.

The expected number of iterations of the algorithm required to produce one X is c.
<br>
'''Note:''' <br>
1. Values around x<sub>1</sub> will be sampled more often under cg(x) than under f(x), so there will be more samples than we actually need. If <math>\frac{f(y)}{\, c g(y)}</math> is small, the acceptance-rejection step thins these points down to the accurate amount: in the region around x<sub>1</sub>, we should accept less and reject more. <br>
2. Around x<sub>2</sub>, the number of samples drawn and the number we need are much closer, so in that region we accept more. As a result, g(x) and f(x) are comparable there.<br>
3. The constant c is needed to adjust the height of g(x) to ensure that it is above f(x). That said, it is best to keep the number of rejected points small for maximum efficiency. <br>

Another way to understand why the acceptance probability is <math>\frac{f(y)}{\, c g(y)}</math> is by thinking of areas. From the graph above, the target function lies under the proposed function c g(y). Therefore, <math>\frac{f(y)}{\, c g(y)}</math> is the proportion of the area under c g(y) that also lies under f(y). We accept sample points for which u is less than <math>\frac{f(y)}{\, c g(y)}</math>, because then the sample points are guaranteed to fall in the part of the area under c g(y) that belongs to f(y). <br>
<br>
'''There are 2 cases that are possible:''' <br>
-The sample of points is more than enough: <math>c g(x) \geq f(x) </math> with room to spare <br>
-A similar or the same amount of points: <math>c g(x)</math> close to <math> f(x) </math> <br>
'''There is 1 case that is not possible:''' <br>
-Fewer than enough points, i.e. <math> c g(x) < f(x)</math> somewhere <br>
<br>
'''Procedure'''

#Draw Y~g(.)
#Draw U~u(0,1) (Note: U and Y are independent)
#If <math>u\leq \frac{f(y)}{cg(y)}</math> (which is <math>P(accepted|y)</math>) then x=y, else return to Step 1<br>
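A generic Matlab sketch of this procedure (the target f(x)=6x(1-x) on (0,1), i.e. Beta(2,2), the proposal g=U(0,1), and c = max f/g = 1.5 are all illustrative choices):

<pre style="font-size:16px">
>>c=1.5;
>>ii=1;
>>while ii<=1000
    y=rand;                    % step 1: Y~g
    u=rand;                    % step 2: U~U(0,1)
    if u<=6*y*(1-y)/c          % step 3: accept w.p. f(y)/(c*g(y))
        x(ii)=y;
        ii=ii+1;
    end
  end
>>hist(x)
</pre>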
Note: Recall <math>P(U\leq a)=a</math>. Thus by comparing u with <math>\frac{f(y)}{\, c g(y)}</math>, we accept y with exactly that probability. For instance, at points where cg(x) is much larger than f(x), the probability of accepting x=y is quite small.<br>
i.e. At X<sub>1</sub>, there is a low probability of accepting the point, since f(x) is much smaller than cg(x).<br>
At X<sub>2</sub>, there is a high probability of accepting the point. <math>P(U\leq a)=a</math> for the Uniform distribution.

Note: Since U takes values between 0 and 1, if <math>c\leq \frac{f(y)}{g(y)}</math> at some point y (i.e. <math>c g(y) \leq f(y)</math> there), then <math>\frac{f(y)}{c g(y)}\geq 1</math> and the point is always accepted at y.
 
  
This introduces the relationship between cg(x) and f(x), shows why that relationship holds, and shows where we can use it to reject candidates; reading the graph tells us the acceptance chance at each point. In the example figure, x<sub>1</sub> is a bad point (mostly rejected) and x<sub>2</sub> is a good point (mostly accepted).

'''Some notes on the constant C'''<br>
1. C is chosen such that <math> c g(y)\geq f(y)</math>; that is, <math> c g(y)</math> will always dominate <math>f(y)</math>. Because of this, C will always be greater than or equal to one, and will equal one if and only if the proposal distribution and the target distribution are the same. It is normally best to choose C such that the absolute maxima of <math> c g(y)</math> and <math> f(y)</math> coincide.<br>

2. <math> \frac {1}{C} </math> is the ratio of the area under <math> f(y)</math> to the area under <math> c g(y)</math>, and is the acceptance rate of the points generated. For example, if <math> \frac {1}{C} = 0.7</math>, then on average 70 percent of all points generated are accepted.<br>

3. C is the average number of times Y must be generated from g to output one X.
=== Theorem ===

Let <math>f: \R \rightarrow  [0,+\infty]</math> be a well-defined pdf, and <math>\displaystyle Y</math> be a random variable with pdf <math>g: \R \rightarrow [0,+\infty]</math> such that <math>\exists c \in \R^+</math> with <math>f \leq c \cdot g</math>. If <math>\displaystyle U \sim~ U(0,1)</math> is independent of <math>\displaystyle Y</math>, then the random variable defined as <math>X := Y \vert U \leq \frac{f(Y)}{c \cdot g(Y)}</math> has pdf <math>\displaystyle f</math>, and the condition <math>U \leq \frac{f(Y)}{c \cdot g(Y)}</math> is denoted by "Accepted".

=== Proof ===
Recall the conditional probability formulas:<br />
<math>\begin{align}
P(A|B)=\frac{P(A \cap B)}{P(B)}, \text{ or }P(A|B)=\frac{P(B|A)P(A)}{P(B)} \text{ for pmf}
\end{align}</math><br />

We want to show <math>P(y|accepted)=f(y)</math>, where <math>P(y|accepted)=\frac{P(accepted|y)P(y)}{P(accepted)}</math>.<br />
Based on step 1 of the procedure:<br />
<math>P(y)=g(y)</math><br />

<math>P(accepted|y)=\frac{f(y)}{cg(y)}</math> <br />
(the larger this value is, the larger the chance the point will be selected) <br /><br />

<math>
\begin{align}
P(accepted)&=\int_y\ P(accepted|y)P(y)dy\\
          &=\int_y\ \frac{f(s)}{cg(s)}g(s)ds\\
          &=\frac{1}{c} \int_y\  f(s) ds\\
          &=\frac{1}{c}
\end{align}</math><br />

Therefore:<br />
<math>\begin{align}
P(x)&=P(y|accepted)\\
&=\frac{\frac{f(y)}{cg(y)}g(y)}{1/c}\\
&=\frac{\frac{f(y)}{c}}{1/c}\\
&=f(y)\end{align}</math><br /><br /><br />
'''''Here is an alternative introduction to the Acceptance-Rejection Method'''''

'''Comments:'''

-The Acceptance-Rejection Method is not good for all cases. One limitation is that sometimes many points will be rejected. Another obvious disadvantage is that it could be very hard to pick <math>g(y)</math> and the constant <math>c</math> in some cases. We have to pick the SMALLEST c such that <math>cg(x) \geq f(x)</math>, or else the algorithm will not be efficient: as c grows, <math>f(x)/(cg(x))</math> becomes smaller, the probability of <math>u \leq f(x)/(cg(x))</math> goes down, and many points are rejected.

-'''Note:''' When <math>f(y)</math> is very different from <math>g(y)</math>, it is less likely that a point will be accepted, as the ratio above will be very small and it will be difficult for <math>U</math> to be less than this small value. <br/>An example would be when the target function (<math>f</math>) has a spike or several spikes in its domain - this would force the known distribution (<math>g</math>) to have density at least as large as the spikes, making the value of <math>c</math> larger than desired. As a result, the algorithm would be highly inefficient.

'''Acceptance-Rejection Method'''<br/>
'''Example 1''' (discrete case)<br/>
We wish to generate X~Bi(2,0.5), assuming that we cannot generate this directly.<br/>
We use a discrete distribution DU[0,2] to approximate this.<br/>
<math>f(x)=Pr(X=x)=\binom{2}{x}(0.5)^2\,</math><br/>

{| class=wikitable  align=left
|<math>x</math>||0||1||2
|-
|<math>f(x)</math>||1/4||1/2||1/4
|-
|<math>g(x)</math>||1/3||1/3||1/3
|-
|<math>f(x)/g(x)</math>||3/4||3/2||3/4
|-
|<math>f(x)/(cg(x))</math>||1/2||1||1/2
|}


Since we need <math>c \geq f(x)/g(x)</math> for all x,<br/>
we take <math>c=3/2</math>.<br/>

Therefore, the algorithm is:<br/>
1. Generate <math>u,v \sim U(0,1)</math><br/>
2. Set <math>y= \lfloor 3u \rfloor</math> (this uses the uniform distribution to generate a draw from DU[0,2])<br/>
3. If <math>(y=0)</math> and <math>(v<\tfrac{1}{2})</math>, output 0 <br/>
If <math>(y=2) </math> and <math>(v<\tfrac{1}{2})</math>, output 2 <br/>
Else if <math>y=1</math>, output 1;<br/>
otherwise, return to step 1.<br/>
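A Matlab sketch of Example 1 (the acceptance test uses the f(x)/(cg(x)) row of the table; y+1 converts the outcome to a Matlab index):

<pre style="font-size:16px">
>>f=[0.25 0.5 0.25];            % pmf of Bi(2,0.5)
>>c=1.5;                        % c = max f/g with g=1/3
>>ii=1;
>>while ii<=1000
    y=floor(3*rand);            % Y ~ DU[0,2]
    v=rand;
    if v<=f(y+1)/(c*(1/3))      % accept w.p. f(y)/(c*g(y))
        x(ii)=y;
        ii=ii+1;
    end
  end
>>hist(x)
</pre>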
  
An elaboration of “c”:<br/>
c is the expected number of times the code runs to output one random variable. Remember that whenever <math>u < \tfrac{f(x)}{cg(x)}</math> is not satisfied, we need to run the code again.<br/>

Proof:<br/>

Let <math>f(x)</math> be the function we wish to generate from, but suppose we cannot use the inverse transform method to generate from it directly.<br/>
Let <math>g(x)</math> be the helper function, with <math>cg(x) \geq f(x)</math>.<br/>
Since we need to generate y from <math>g(x)</math>,<br/>
<math>Pr(\text{select } y)=g(y)</math><br/>
<math>Pr(\text{output } y \mid \text{selected } y)=Pr(u<f(y)/(cg(y)))= f(y)/(cg(y))</math> (since u~Unif(0,1))<br/>
<math>Pr(\text{output } y)=\sum_i Pr(\text{output } y_i \mid \text{selected } y_i)\,Pr(\text{select } y_i)=1/c</math> <br/>
Since we are asking for the expected time until the first success, this is a geometric distribution with probability of success 1/c.<br/>
Therefore, <math>E(X)=\frac{1}{1/c}=c</math> <br/>

Acknowledgements: Some materials have been borrowed from notes from Stat340 in Winter 2013.

The proof uses conditional probability to show that, conditional on acceptance, the output has exactly the pdf of the original target; the example above shows how to choose the constant c for the two functions <math>g(x)</math> and <math>f(x)</math>.
  
=== Example of Acceptance-Rejection Method===

Generating a random variable having p.d.f. <br />
<math>\displaystyle f(x) = 20x(1 - x)^3, \quad 0< x <1  </math><br />
Since this random variable (which is Beta with parameters (2,4)) is concentrated in the interval (0, 1), let us consider the acceptance-rejection method with<br />
<math>\displaystyle g(x) = 1, \quad 0<x<1</math><br />
To determine the constant c such that <math>f(x)/g(x) \le c</math>, we use calculus to determine the maximum value of<br />
<math>\displaystyle f(x)/g(x) = 20x(1 - x)^3 </math><br />
Differentiation of this quantity yields<br />
<math>\displaystyle \frac{d}{dx}[f(x)/g(x)]=20[(1-x)^3-3x(1-x)^2]</math><br />
Setting this equal to 0 shows that the maximal value is attained when x = 1/4, and thus,<br />
<math>\displaystyle f(x)/g(x)\le 20(1/4)(3/4)^3=135/64=c </math><br />
Hence,<br />
<math>\displaystyle f(x)/(cg(x))=(256/27)\,x(1-x)^3</math><br />
and thus the simulation procedure is as follows:

1)      Generate two random numbers U<sub>1</sub> and U<sub>2</sub>.

2)      If U<sub>2</sub><(256/27)*U<sub>1</sub>*(1-U<sub>1</sub>)<sup>3</sup>, set X=U<sub>1</sub> and stop;
otherwise return to step 1).

The average number of times that step 1) will be performed is c = 135/64.

(The above example is from http://www.cs.bgu.ac.il/~mps042/acceptance.htm, example 2.)
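A Matlab sketch of this simulation procedure (note that 20/(135/64) = 256/27):

<pre style="font-size:16px">
>>ii=1;
>>while ii<=1000
    u1=rand; u2=rand;
    if u2<(256/27)*u1*(1-u1)^3   % accept w.p. f(u1)/(c*g(u1))
        x(ii)=u1;
        ii=ii+1;
    end
  end
>>hist(x)
</pre>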
  
This example used the derivative to apply the acceptance-rejection method: find the local maximum of f(x)/g(x), and from it calculate the best constant c.

===Another Example of Acceptance-Rejection Method===
Generate a random variable from:<br />
<math>\displaystyle f(x)=3x^2, \quad 0<x<1 </math><br />
Assume g(x) to be uniform over the interval (0,1), where 0< x <1.<br />
Therefore:<br />
<math>\displaystyle c = \max(f(x)/g(x))= 3</math><br />

The best constant c is <math>\max(f(x)/g(x))</math>; this c makes the area above f(x) and below cg(x) as small as possible.
Because g(.) is uniform, g(x) = 1.<br />
<math>\displaystyle f(x)/(cg(x))= x^2</math><br />
Acknowledgement: this is example 1 from http://www.cs.bgu.ac.il/~mps042/acceptance.htm
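A Matlab sketch of this example (the acceptance test u <= y^2 comes from f(y)/(cg(y)) = y^2):

<pre style="font-size:16px">
>>ii=1;
>>while ii<=1000
    y=rand;            % Y ~ g = U(0,1)
    u=rand;
    if u<=y^2          % f(y)/(c*g(y)) = y^2 with c=3
        x(ii)=y;
        ii=ii+1;
    end
  end
>>hist(x)
</pre>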
  
== Class 4 - Thursday, May 16 ==

'''Goals'''<br>
*When we want to sample from a target distribution <math>f(x)</math>, we need to first find a proposal distribution <math>g(x)</math> that is easy to sample from. <br>
*The relationship between the proposal distribution and the target distribution is <math> c \cdot g(x) \geq f(x) </math>, where c is a constant. This means that the area of f(x) lies under the area of <math> c \cdot g(x)</math>. <br>
*The chance of acceptance is less where the distance between <math>f(x)</math> and <math> c \cdot g(x)</math> is big, and vice versa; we use <math> c </math> to keep <math> \frac {f(x)}{c \cdot g(x)} </math> below 1 (so <math>f(x) \leq c \cdot g(x)</math>). Therefore, we must find a constant <math> c </math> to achieve this.<br />
*In other words, <math>c</math> is chosen to make sure <math> c \cdot g(x) \geq f(x) </math>. However, it will not make sense if <math>c</math> is simply chosen to be arbitrarily large; we need to choose <math>c</math> such that <math>c \cdot g(x)</math> fits <math>f(x)</math> as tightly as possible, i.e. the minimum c such that the area of f(x) lies under the area of c·g(x). <br />
*The constant c cannot be a negative number.<br />


'''How to find C''':<br />

<math>\begin{align}
&c \cdot g(x) \geq f(x)\\
&c\geq \frac{f(x)}{g(x)}  \\
&c= \max \left(\frac{f(x)}{g(x)}\right)
\end{align}</math><br>
  
'''
+
If <math>f</math> and <math> g </math> are continuous, we can find the extremum by taking the derivative and solve for <math>x_0</math> such that:<br/>
'''Example for A-R method:''''''
+
<math> 0=\frac{d}{dx}\frac{f(x)}{g(x)}|_{x=x_0}</math> <br/>
  
Given <math> f(x)= \frac{3}{4} (1-x^2),  -1 \leq x \leq 1 </math>,  use A-R method to generate random number
+
Thus <math> c = \frac{f(x_0)}{g(x_0)} </math><br/>
  
 +
Note: This procedure is called the Acceptance-Rejection Method.<br>
  
[[Solution:]]
+
'''The Acceptance-Rejection method''' involves finding a distribution that we know how to sample from, g(x), and multiplying g(x) by a constant c so that <math>c \cdot g(x)</math> is always greater than or equal to f(x). Mathematically, we want <math> c \cdot g(x) \geq f(x) </math>.
 +
And it means, c has to be greater or equal to <math>\frac{f(x)}{g(x)}</math>. So the smallest possible c that satisfies the condition is the maximum value of <math>\frac{f(x)}{g(x)}</math><br/>.
 +
But in case of c being too large, the chance of acceptance of generated values will be small, thereby losing efficiency of the algorithm. Therefore, it is best to get the smallest possible c such that <math> c g(x) \geq f(x)</math>. <br>
  
Let g=U(-1,1) and g(x)=1/2
+
'''Important points:'''<br>
  
let y ~ f,  
+
*For this method to be efficient, the constant c must be selected so that the rejection rate is low. (The efficiency for this method is <math>\left ( \frac{1}{c} \right )</math>)<br>
<math> cg(x)\geq f(x),
+
*It is easy to show that the expected number of trials for an acceptance is  <math> \frac{Total Number of Trials} {C} </math>. <br>
c\frac{1}{2} \geq \frac{3}{4} (1-x^2) /1,
+
*recall the '''acceptance rate is 1/c'''. (Not rejection rate)
c=max 2*\frac{3}{4} (1-x^2) = 3/2 </math>
+
:Let <math>X</math> be the number of trials for an acceptance, <math> X \sim~ Geo(\frac{1}{c})</math><br>
 +
:<math>\mathbb{E}[X] = \frac{1}{\frac{1}{c}} = c </math>
 +
*The number of trials needed to generate a sample size of <math>N</math> follows a negative binomial distribution. The expected number of trials needed is then <math>cN</math>.<br>
 +
*So far, the only distribution we know how to sample from is the '''UNIFORM''' distribution. <br>
  
The process:
 
  
:1: Draw U1 ~ U(0,1) <br>
+
'''Procedure''': <br>
:2: Draw U2~U(0,1)  <br>
 
:3: let <math> y = U1*2 - 1 </math>
 
:4: if <math>U2 \leq \frac { \frac{3}{4} (1-y^2)} { \frac{3}{4}} = 1-y^2</math>, then x=y,  '''note that''' <math>\frac{3}{4}(1-y^2) / \frac{3}{4}</math> comes from <math>f(y) / (c\,g(y))</math>, with <math>c\,g(y)=\frac{3}{2}\cdot\frac{1}{2}=\frac{3}{4}</math>
 
:5: else: return to '''step 1'''
 
  
----
+
1. Choose <math>g(x)</math> (simple density function that we know how to sample, i.e. Uniform so far) <br>
'''Use Inverse Method for this Example'''<br>
+
The easiest case is <math>U~ \sim~ Unif [0,1] </math>. However, in other cases we need to generate UNIF(a,b). We may need to perform a linear transformation on the <math>U~ \sim~ Unif [0,1] </math> variable. <br>
:<math>F(x)=\int_0^x \! 2s\,ds={x^2} -0={x^2}</math><br>
+
2. Find a constant c such that <math> c \cdot g(x) \geq f(x) </math> for all x (if no such c exists, return to step 1 and choose a different g).
:<math>y=x^2</math><br>
 
:<math>x=\sqrt y</math>
 
:<math> F^{-1}\left (\, x \, \right) =\sqrt x</math>
 
  
:*Procedure
+
'''Recall the general procedure of Acceptance-Rejection Method'''
:1: Draw <math> U~ \sim~ Unif [0,1] </math><br>
+
#Let <math>Y \sim~ g(y)</math>
:2: <math> x=F^{-1}\left (\, u\, \right) =\sqrt u</math>
+
#Let <math>U \sim~ Unif [0,1] </math>
 +
#If <math>U \leq \frac{f(Y)}{c \cdot g(Y)}</math> then X=Y; else return to step 1 (This is not the way to find C. This is the general procedure.)
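The same general procedure written as a MATLAB loop with function handles, so any f, g and c can be plugged in (a sketch; here f(x)=2x on (0,1), g = U(0,1) and c = 2, matching the example used in this lecture):
<pre style="font-size:16px">
% Generic acceptance-rejection loop: run until n samples are accepted
n = 1000;  c = 2;
f = @(y) 2*y;               % target density on (0,1)
g = @(y) 1;                 % proposal density: U(0,1)
x = zeros(1, n);  ii = 1;
while ii <= n
    y = rand;                       % step 1: draw Y ~ g
    u = rand;                       % step 2: draw U ~ U(0,1)
    if u <= f(y) / (c * g(y))       % step 3: accept with prob f(Y)/(c g(Y))
        x(ii) = y;  ii = ii + 1;
    end
end
hist(x)
</pre>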
  
<span style="font-weight:bold;color:green;">Matlab Code</span>
+
<hr><b>Example: <br>  
<pre style="font-size:16px">
 
>>u=rand(1,1000);
 
>>x=u.^0.5;
 
>>hist(x)
 
</pre>
 
[[File:ARM(IFM)_Example.jpg|300px]]
 
  
<span style="font-weight:bold;colour:green;">Matlab Tip:</span>
+
Generate a random variable from the pdf</b><br>
Periods, ".",meaning "element-wise", are used to describe the operation you want performed on each element of a vector. In the above example, to take the square root of every element in U, the notation U.^0.5 is used. However if you want to take the Square root of the entire matrix U the period, "*.*" would be excluded. i.e. Let matrix B=U^0.5, then <math>B^T*B=U</math>. For example if we have a two 1 X 3 matrices and we want to find out their product; using "." in the code will give us their product; however, if we don't use "." it will just give us an error. For example, a =[1 2 3] b=[2 3 4] are vectors, a.*b=[2 6 12], but a*b does not work since matrix dimensions must agree.
+
<math> f(x) =
 +
\begin{cases}
 +
2x, & \mbox{if }0 \leqslant x \leqslant 1 \\
 +
0, & \mbox{otherwise}
 +
\end{cases} </math>
  
=====Example of Acceptance-Rejection Method=====
+
We can note that this is a special case of Beta(2,1), where,
 +
<math>beta(a,b)=\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}x^{(a-1)}(1-x)^{(b-1)}</math><br>
  
<math>f(x)=3x^2,  0<x<1;  </math>
+
Where &Gamma; (n) = (n - 1)! if n is positive integer
<math>g(x)=1,  0<x<1</math>
 
  
<math>c = \max \frac{f(x)}{g(x)} = \max \frac{3x^2}{1} = 3 </math><br>
+
<math>\Gamma(z)=\int_{0}^{\infty }t^{z-1}e^{-t}\,dt</math>
<math>\frac{f(x)}{c \cdot g(x)} = x^2</math>
 
  
1. Generate two uniform numbers in the unit interval <math>U_1, U_2 \sim~ U(0,1)</math><br>
+
Aside: Beta function
2. If <math>U_2 \leqslant {U_1}^2</math>, accept <math>U_1</math> as the random variable with pdf <math>f</math>, if not return to Step 1
 
  
We can also use <math>g(x)=2x</math> for a more efficient algorithm
+
In mathematics, the beta function, also called the Euler integral of the first kind, is a special function defined by
 +
<math>B(x,y)=\int_0^1 \! {t^{(x-1)}}{(1-t)^{(y-1)}}\,dt</math><br>
  
<math>c = \max_{0<x<1} \frac{f(x)}{g(x)} = \max_{0<x<1} \frac {3x^2}{2x} = \max_{0<x<1} \frac {3x}{2} = \frac{3}{2} </math> (the maximum is attained at x=1).
 
Use the inverse method to sample from <math>g(x)</math>
 
<math>G(x)=x^2</math>.
 
Generate <math>U</math> from <math>U(0,1)</math> and set <math>x=sqrt(u)</math>
 
  
1. Generate two uniform numbers in the unit interval <math>U_1, U_2 \sim~ U(0,1)</math><br>
+
<math>beta(2,1)= \frac{\Gamma(3)}{(\Gamma(2)\Gamma(1))}x^1 (1-x)^0 = 2x</math><br>
2. If <math>U_2 \leq \sqrt{U_1}</math>, accept <math>Y=\sqrt{U_1}</math> as the random variable with pdf <math>f</math> (here <math>\frac{f(Y)}{c\,g(Y)} = \frac{3Y^2}{\frac{3}{2}\cdot 2Y} = Y = \sqrt{U_1}</math>); if not return to Step 1
 
  
 +
<hr>
 +
<math>g=u(0,1)</math><br>
 +
<math>y=g</math><br>
 +
<math>f(x)\leq c\cdot g(x)</math><br>
 +
<math>c\geq \frac{f(x)}{g(x)}</math><br>
 +
<math>c = \max \frac{f(x)}{g(x)} </math><br>
 +
<br><math>c = \max \frac{2x}{1}, 0 \leq x \leq 1</math><br>
 +
Taking x = 1 gives the highest possible c, which is c=2
 +
<br />Note that c is a scalar greater than 1.
 +
<br />cg(x) is proposal dist, and f(x) is target dist.
  
 +
[[File:Beta(2,1)_example.jpg|750x750px]]
  
'''Possible Limitations'''
+
'''Note:''' g follows uniform distribution, it only covers half of the graph which runs from 0 to 1 on y-axis. Thus we need to multiply by c to ensure that <math>c\cdot g</math> can cover entire f(x) area. In this case, c=2, so that makes g run from 0 to 2 on y-axis which covers f(x).
 
 
This method could be computationally inefficient depending on the rejection rate. We may have to sample many points before<br>  
 
we get the 1000 accepted points. In the example we did in class relating the <math>f(x)=2x</math>, <br>
 
we had to sample around 2070 points before we finally accepted 1000 sample points.<br>
 
  
'''Acceptance - Rejection Method Application on Normal Distribution''' <br>
+
'''Comment:'''<br>  
 
+
From the picture above, we could observe that the area under f(x)=2x is a half of the area under the pdf of UNIF(0,1). This is why in order to sample 1000 points of f(x), we need to sample approximately 2000 points in UNIF(0,1).
<math>X \sim~ N(\mu,\sigma^2), \text{ or } X = \sigma Z + \mu, Z \sim~ N(0,1) </math><br>
+
And in general, if we want to sample n points from a distritubion with pdf f(x), we need to scan approximately <math>n\cdot c</math> points from the proposal distribution (g(x)) in total. <br>
<math>\vert Z \vert</math> has probability density function of <br>
+
<b>Step</b>
 +
<ol>
 +
<li>Draw y~U(0,1)</li>
 +
<li>Draw u~U(0,1)</li>
 +
<li>if <math>u \leq \frac{(2\cdot y)}{(2\cdot 1)}, u \leq y,</math> then <math> x=y</math><br>
 +
<li>Else go to Step 1</li>
 +
</ol>
  
f(x) = (2/<math>\sqrt{2\pi}</math>) e<sup>-x<sup>2</sup>/2</sup>
+
'''Note:''' In the above example, we sample 2 numbers. If second number (u) is less than or equal to first number (y), then accept x=y, if not then start all over.
  
g(x) = e<sup>-x</sup>
+
<span style="font-weight:bold;color:green;">Matlab Code</span>
 +
<pre style="font-size:16px">
 +
>>close all
 +
>>clear all
 +
>>ii=1;            % ii: counts numbers that are accepted
+
>>jj=1;            % jj: counts numbers that are generated
 +
>>while ii<1000
 +
    y=rand;
 +
    u=rand;
 +
    jj=jj+1;
 +
    if u<=y
 +
      x(ii)=y;
 +
      ii=ii+1;
 +
    end
 +
  end
 +
>>hist(x)          % plot a histogram of the accepted samples
+
>>jj
+
  jj = 2024        % should be around 2000, since c = 2
 +
</pre>
 +
[[File:ARM_Example.jpg|300px]]
  
Take h(x) = f(x)/g(x) and solve for h'(x) = 0 to find x so that h(x) is maximum.  
+
:'''*Note:''' The reason that a for loop is not used is that we need to continue the looping until we get 1000 successful samples. We will reject some samples during the process and therefore do not know the number of y we are going to generate.  
  
Hence x=1 maximizes h(x) => c = <math>\sqrt{2e/\pi}</math>
+
:'''*Note2:''' In this example, we used c=2, which means we accept half of the points we generate on average. Generally speaking, 1/c would be the probability of acceptance, and an indicator of the efficiency of your chosen proposal distribution and algorithm.
  
Thus f(y)/cg(y) = e<sup>-(y-1)<sup>2</sup>/2</sup>
+
:'''*Note3:''' We use '''while''' instead of '''for''' when looping because we do not know how many iterations are required to generate 1000 successful samples. We can view this as a negative binomial distribution so while the expected number of iterations required is n * c, it will likely deviate from this amount. We expect 2000 in this case.
  
 +
:'''*Note4:''' If c=1, we will accept all points, which is the ideal situation. However, this is essentially impossible because if c = 1 then our distributions f(x) and g(x) must be identical, so we will have to be satisfied with as close to 1 as possible.
  
Below, we learn how to use code to calculate the constant c between f(x) and g(x).
+
'''Use Inverse Method for this Example'''<br>
 +
:<math>F(x)=\int_0^x \! 2s\,ds={x^2}-0={x^2}</math><br>
 +
:<math>y=x^2</math><br>
 +
:<math>x=\sqrt y</math>
 +
:<math> F^{-1}\left (\, x \, \right) =\sqrt x</math>
  
<p style="font-weight:bold;text-size:20px;">How to transform <math>U(0,1)</math> to <math>U(a, b)</math></p>
+
:*'''Procedure'''
 +
:1: Draw <math> U~ \sim~ Unif [0,1] </math><br>
 +
:2: <math> x=F^{-1}\left (\, u\, \right) =\sqrt u</math>
  
1. Draw U from <math>U(0,1)</math>
+
<span style="font-weight:bold;color:green;">Matlab Code</span>
 +
<pre style="font-size:16px">
 +
>>u=rand(1,1000);
 +
>>x=u.^0.5;
 +
>>hist(x)
 +
</pre>
 +
[[File:ARM(IFM)_Example.jpg|300px]]
  
2. Take <math>Y=(b-a)U+a</math>
+
<span style="font-weight:bold;colour:green;">Matlab Tip:</span>
 +
Periods, ".",meaning "element-wise", are used to describe the operation you want performed on each element of a vector. In the above example, to take the square root of every element in U, the notation U.^0.5 is used. However if you want to take the square root of the entire matrix U the period, "." would be excluded. i.e. Let matrix B=U^0.5, then <math>B^T*B=U</math>. For example if we have a two 1 X 3 matrices and we want to find out their product; using "." in the code will give us their product. However, if we don't use ".", it will just give us an error. For example, a =[1 2 3] b=[2 3 4] are vectors, a.*b=[2 6 12], but a*b does not work since the matrix dimensions must agree.
  
3. Now Y follows <math>U(a,b)</math>
+
'''
 +
'''Example for A-R method:'''
 +
 
 +
Given <math> f(x)= \frac{3}{4} (1-x^2),  -1 \leq x \leq 1 </math>, use the A-R method to generate a random number.
  
'''Example''': Generate a random variable z from the Semicircular density <math>f(x)= \frac{2}{\pi R^2} \sqrt{R^2-x^2}, -R\leq x\leq R</math>.
 
  
-> Proposal distribution: UNIF(-R, R)
+
[[Solution:]]
  
-> We know how to generate using <math> U \sim UNIF (0,1) </math>. Let <math> Y= 2RU-R=R(2U-1)</math>; then Y follows <math>U(-R,R)</math>
+
Let g=U(-1,1) and g(x)=1/2
  
Now, we need to find c:
+
let y ~ f,  
Since c=max[f(x)/g(x)], where <br />
+
<math> cg(x)\geq f(x),
<math>f(x)= \frac{2}{\pi R^2} \sqrt{R^2-x^2}</math>, <math>g(x)=\frac{1}{2R}</math>, <math>-R\leq x\leq R</math><br />
+
c\frac{1}{2} \geq \frac{3}{4} (1-x^2) /1,  
Thus, we have to maximize R^2-x^2.
+
c=max 2\cdot\frac{3}{4} (1-x^2) = 3/2 </math>
=> When x=0, it will be maximized.
 
Therefore, c=4/pi. * Note: This also means that the probability of accepting a point is pi/4.
 
  
We will accept the points with limit f(x)/[cg(x)].
+
The process:
Since <math>\frac{f(y)}{cg(y)}=\frac{\frac{2}{\pi R^{2}} \sqrt{R^{2}-y^{2}}}{\frac{4}{\pi} \frac{1}{2R}}=\frac{\frac{2}{\pi R^{2}} \sqrt{R^{2}-R^{2}(2U-1)^{2}}}{\frac{2}{\pi R}}</math>
 
  
* Note: Y= R(2U-1)
+
:1: Draw U1 ~ U(0,1) <br>
We can also get Y= R(2U-1) by using the formula y = a+(b-a)*u, to transform U~(0,1) to U~(a,b). Letting a=-R and b=R, and substituting it in the formula y = a+(b-a)*u, we get Y= R(2U-1).
+
:2: Draw U2 ~ U(0,1) <br>
 +
:3: let <math> y = U1*2 - 1 </math>
 +
:4: if <math>U2 \leq \frac { \frac{3}{4} (1-y^2)} { \frac{3}{4}} = {1-y^2}</math>, then x=y, '''note that''' <math>\frac{3}{4}(1-y^2) / \frac{3}{4}</math> comes from <math>f(y) / (c\,g(y))</math>, with <math>c\,g(y)=\frac{3}{2}\cdot\frac{1}{2}=\frac{3}{4}</math>
 +
:5: else: return to '''step 1'''
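Putting the five steps together in MATLAB (a sketch following the process above; recall <math>f(y)/(c\,g(y)) = 1-y^2</math> here):
<pre style="font-size:16px">
% A-R sampling from f(x) = (3/4)(1-x^2) on (-1,1) with g = U(-1,1), c = 3/2
n = 1000;  x = zeros(1, n);  ii = 1;
while ii <= n
    u1 = rand;  u2 = rand;
    y = 2*u1 - 1;              % y ~ U(-1,1)
    if u2 <= 1 - y^2           % acceptance ratio f(y)/(c*g(y)) = 1 - y^2
        x(ii) = y;  ii = ii + 1;
    end
end
hist(x, 20)
</pre>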
  
Thus, <math>\frac{f(y)}{cg(y)}=\sqrt{1-(2U-1)^{2}}</math> * this also means the probability we can accept points
+
----
  
  
1. Draw <Math>\ U</math> from <math>\ U(0,1)</math>
+
=====Example of Acceptance-Rejection Method=====
  
2. Draw <Math>\ U_{1}</math> from <math>\ U(0,1)</math>
+
<math>\begin{align}
 +
& f(x) = 3x^2,  0<x<1 \\
 +
\end{align}</math><br\>
  
3. If  <math>U_{1} \leq \sqrt{1-(2U-1)^2}</math>, set <math>x = y = R(2U-1)</math>;
+
<math>\begin{align}
  else return to step 1.
+
& g(x)=1,  0<x<1 \\
 +
\end{align}</math><br\>
  
 +
<math>c = \max \frac{f(x)}{g(x)} = \max \frac{3x^2}{1} = 3 </math><br>
 +
<math>\frac{f(x)}{c \cdot g(x)} = x^2</math>
  
 +
1. Generate two uniform numbers in the unit interval <math>U_1, U_2 \sim~ U(0,1)</math><br>
 +
2. If <math>U_2 \leqslant {U_1}^2</math>, accept <math>\begin{align}U_1\end{align}</math> as the random variable with pdf <math>\begin{align}f\end{align}</math>, if not return to Step 1
  
The condition is <br />
+
We can also use <math>\begin{align}g(x)=2x\end{align}</math> for a more efficient algorithm
<Math> U_{1} \leq \sqrt{(1-(2U-1)^2)}</Math><br>
 
<Math>\ U_{1}^2 \leq 1 - (2U -1)^2</Math><br>
 
<Math>\ U_{1}^2 - 1 \leq -(2U - 1)^2</Math><br>
 
<Math>\ 1 - U_{1}^2 \geq (2U - 1)^2</Math>
 
  
 +
<math>c = \max_{0<x<1} \frac{f(x)}{g(x)} = \max_{0<x<1} \frac {3x^2}{2x} = \max_{0<x<1} \frac {3x}{2} = \frac{3}{2} </math> (the maximum is attained at x=1).
 +
Use the inverse method to sample from <math>\begin{align}g(x)\end{align}</math>
 +
<math>\begin{align}G(x)=x^2\end{align}</math>.
 +
Generate <math>\begin{align}U\end{align}</math> from <math>\begin{align}U(0,1)\end{align}</math> and set <math>\begin{align}x=sqrt(u)\end{align}</math>
  
 +
1. Generate two uniform numbers in the unit interval <math>U_1, U_2 \sim~ U(0,1)</math><br>
 +
2. If <math>U_2 \leq \sqrt{U_1}</math>, accept <math>Y=\sqrt{U_1}</math> as the random variable with pdf <math>f</math> (here <math>\frac{f(Y)}{c\,g(Y)} = \frac{3Y^2}{\frac{3}{2}\cdot 2Y} = Y = \sqrt{U_1}</math>); if not return to Step 1
  
 +
*Note: the function <math>\begin{align}q(x) = c \cdot g(x)\end{align}</math> is called an envelope or majorizing function.<br>
+
To obtain a better proposal function <math>\begin{align}g(x)\end{align}</math>, we can first assume a new <math>\begin{align}q(x)\end{align}</math> and then solve for the normalizing constant by integrating.<br>
+
In the previous example, we first assume <math>\begin{align}q(x) = 3x\end{align}</math>. To find the normalizing constant, we solve <math>k \int_0^1 3x \,dx = 1</math>, which gives <math>k = \tfrac{2}{3}</math>. So, <math>\begin{align}g(x) = k \cdot q(x) = 2x\end{align}</math>.
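As a quick numeric check that the envelope really covers the target (a sketch using f(x)=3x², g(x)=2x and c=3/2 from this example):
<pre style="font-size:16px">
% Verify the envelope condition c*g(x) >= f(x) on a grid
x = linspace(0.001, 1, 1000);
f = 3 * x.^2;   g = 2 * x;   c = 3/2;
all(c * g >= f)        % returns 1 (true): c*g majorizes f on (0,1]
</pre>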
  
'''One more example about AR method''' <br/>
+
*Source: http://www.cs.bgu.ac.il/~mps042/acceptance.htm*      
(In this example, we will see how to determine the value of c when c is a function with unknown parameters instead of a value)
+
 
Let <math>f(x)=x*e^{-x}, x>0 </math> <br/>
+
'''Possible Limitations'''
Use <math>g(x)=a*e^{-a*x}</math>to generate random variable <br/>
 
<br/>
 
Solution: First of all, we need to find c<br/>
 
<math>cg(x)>=f(x)</math> <br/>
 
<math>c>=\frac{f(x)}{g(x)}</math> <br/>
 
<math>\frac{f(x)}{g(x)}=\frac{x}{a} * e^{-(1-a)x}</math> <br/>
 
take derivative with respect to x, and set it to 0 to get the maximum, <br/>
 
<math>\frac{1}{a} * e^{-(1-a)x} - \frac{x}{a} * e^{-(1-a)x} * (1-a) = 0 </math><br/>
 
<math>x=\frac {1}{1-a}</math> <br/>
 
  
<math>\frac {f(x_0)}{g(x_0)} = \frac {e^{-1}}{a(1-a)} </math><br/>
+
-This method could be computationally inefficient depending on the rejection rate. We may have to sample many points before<br>  
<math>\frac {f(0)}{g(0)} = 0</math><br/>
+
we get the 1000 accepted points. In the example we did in class relating the <math>f(x)=2x</math>, <br>
<math>\frac {f(\infty)}{g(\infty)} = 0</math><br/>
+
we had to sample around 2070 points before we finally accepted 1000 sample points.<br>
<br/>
+
-If the form of the proposal distribution, g, is very different from target distribution, f, then c is very large and the algorithm is not computationally efficient.<br>
therefore, <b><math>c= \frac {e^{-1}}{a*(1-a)}</math></b><br/>
 
<br/>
 
<b>In order to minimize c, we need to find the appropriate a</b> <br/>
 
Take derivative with respect to a and set it to be zero, <br/>
 
We could get <math>a= \frac {1}{2}</math> <br/>
 
<b><math>c=\frac{4}{e}</math></b>
 
<br/>
 
Procedure: <br/>
 
1. Generate u v ~unif(0,1) <br/>
 
2. Generate y from g: since g is exponential with rate <math>a=\tfrac{1}{2}</math> (mean 2), let y=-2ln(u) <br/>
 
3. If <math>v<\frac{f(y)}{c\cdot g(y)}</math>, output y<br/>
 
Else, go to 1<br/>
 
  
Acknowledgements: The example above is from Stat 340 Winter 2013 notes.
+
'''Acceptance - Rejection Method Application on Normal Distribution''' <br>
  
'''Summary of how to find the value of c''' <br/>
+
<math>X \sim~ N(\mu,\sigma^2), \text{ or } X = \sigma Z + \mu, Z \sim~ N(0,1) </math><br>
Let <math>h(x) = \frac {f(x)}{g(x)}</math>, and then we have the following:<br />
+
<math>\vert Z \vert</math> has probability density function of <br>
1. First, take derivative of h(x) with respect to x, get x<sub>1</sub>;<br />
+
 
2. Plug x<sub>1</sub> into h(x) and get the value(or a function) of c, denote as c<sub>1</sub>;<br />
+
f(x) = (2/<math>\sqrt{2\pi}</math>) e<sup>-x<sup>2</sup>/2</sup>
3. Check the endpoints of x and sub the endpoints into h(x);<br />
+
 
4. (If c<sub>1</sub> is a value, we can ignore this step.) Since we want the smallest value of c such that <math>f(x) \leq c\cdot g(x)</math> for all x, we want the unknown parameter that minimizes c. <br />So we take the derivative of c<sub>1</sub> with respect to the unknown parameter (i.e. k = unknown parameter) to get the value of k, <br />then substitute k back in to get the value of c<sub>1</sub>. (Double check that <math>c_1 \geq 1</math>.)<br />
+
g(x) = e<sup>-x</sup>
5. Pick the maximum value of h(x) to be the value of c.<br />
+
 
 +
Take h(x) = f(x)/g(x) and solve for h'(x) = 0 to find x so that h(x) is maximum.  
  
For the two examples above, we generate y from the proposal distribution g,
+
Hence x=1 maximizes h(x) => c = <math>\sqrt{2e/\pi}</math>
and figure out  <math>c=max\frac {f(y)}{g(y)} </math>.
 
If <math>v<\frac {f(y)}{c\cdot g(y)}</math>, output y.
 
  
 +
Thus f(y)/cg(y) = e<sup>-(y-1)<sup>2</sup>/2</sup>
  
'''Summary of when to use the Accept Rejection Method''' <br/>
 
1) When the inverse CDF cannot be computed or is too difficult to compute. <br/>

2) When f(x) can be evaluated at least up to a normalizing constant. <br/>

3) When a constant c with <math>f(x)\leq c\cdot g(x)</math> is available.<br/>

4) When uniform draws are available.<br/>
 
  
----
+
Below, we learn how to use code to calculate the constant c between f(x) and g(x).
  
== Interpretation of 'C' ==
+
<p style="font-weight:bold;text-size:20px;">How to transform <math>U(0,1)</math> to <math>U(a, b)</math></p>
We can use the value of c to calculate the acceptance rate by '1/c'.
 
  
For instance, assume c=1.5, then we can tell that 66.7% of the points will be accepted (1/1.5=0.667).
+
1. Draw U from <math>U(0,1)</math>
  
== Class 5 - Tuesday, May 21 ==
+
2. Take <math>Y=(b-a)U+a</math>
Recall the example in the last lecture. The following code will generate a random variable required by the question in that question.
 
  
* '''Code'''<br />
+
3. Now Y follows <math>U(a,b)</math>
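In MATLAB this transformation is one line (a sketch with a = -1, b = 1 chosen for illustration):
<pre style="font-size:16px">
% Transform U(0,1) draws into U(a,b) draws via Y = (b-a)*U + a
a = -1;  b = 1;
u = rand(1, 1000);     % u ~ U(0,1)
y = (b - a) * u + a;   % y ~ U(a,b) = U(-1,1)
hist(y)
</pre>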
<pre style="font-size:16px">
 
>>close all
 
>>clear all
 
>>ii=1;
 
>>R=1;        %Note: R is a constant which we can change,

                        i.e. if we changed R=4 then we would have a density between -4 and 4
 
>>while ii<1000
 
        u1 = rand;
 
        u2 = rand;
 
        y = R*(2*u2-1);
 
        if (1-u1^2)>=(2*u2-1)^2
 
          x(ii) = y;
 
          ii = ii + 1;      %Note: for beginner programmers, this step increases

                                the ii value for the next pass through the while loop
 
        end
 
  end
 
>>hist(x,20)
 
</pre>
 
  
 +
'''Example''': Generate a random variable z from the Semicircular density <math>f(x)= \frac{2}{\pi R^2} \sqrt{R^2-x^2}, -R\leq x\leq R</math>.
  
 +
-> Proposal distribution: UNIF(-R, R)
  
MATLAB tips: hist(x,y) where y is the number of bars in the graph.
+
-> We know how to generate using <math> U \sim UNIF (0,1) </math> Let <math> Y= 2RU-R=R(2U-1)</math>, therefore Y follows <math>U(-R,R)</math>
  
[[File:ARM_cont_example.jpg|300px]]
+
-> In order to maximize the function we must maximize the top and minimize the bottom.
  
a histogram to show variable x, and the bars number is y.
+
Now, we need to find c:
=== Discrete Examples ===
+
Since c=max[f(x)/g(x)], where <br />
* '''Example 1''' <br>
+
<math>f(x)= \frac{2}{\pi R^2} \sqrt{R^2-x^2}</math>, <math>g(x)=\frac{1}{2R}</math>, <math>-R\leq x\leq R</math><br />
Generate random variable <math>X</math> according to p.m.f<br/>
+
Thus, we have to maximize R^2-x^2.
<math>\begin{align}
+
=> When x=0, it will be maximized.
P(x &=1) &&=0.15 \\
+
Therefore, c=4/pi. * Note: This also means that the probability of accepting a point is <math>\pi/4</math>.
P(x &=2) &&=0.25 \\
 
P(x &=3) &&=0.3 \\
 
P(x &=4) &&=0.1 \\
 
P(x &=5) &&=0.2 \\
 
\end{align}</math><br/>
 
  
The discrete case is analogous to the continuous case. Suppose we want to generate an X that is a discrete random variable with pmf f(x)=P(X=x). Suppose we can already easily generate a discrete random variable Y with pmf g(x)=P(Y=x)such that sup<sub>x</sub> {f(x)/g(x)}<= c < ∞.
+
We will accept the points with limit f(x)/[cg(x)].
The following algorithm yields our X:
+
Since <math>\frac{f(y)}{cg(y)}=\frac{\frac{2}{\pi R^{2}} \sqrt{R^{2}-y^{2}}}{\frac{4}{\pi} \frac{1}{2R}}=\frac{\frac{2}{\pi R^{2}} \sqrt{R^{2}-R^{2}(2U-1)^{2}}}{\frac{2}{\pi R}}</math>
  
Step 1. Draw discrete uniform distribution of 1, 2, 3, 4 and 5, <math>Y \sim~ g</math>.<br/>
+
* Note: Y= R(2U-1)
Step 2. Draw <math>U \sim~ U(0,1)</math>.<br/>
+
We can also get Y= R(2U-1) by using the formula y = a+(b-a)*u, to transform U~(0,1) to U~(a,b). Letting a=-R and b=R, and substituting it in the formula y = a+(b-a)*u, we get Y= R(2U-1).
Step 3. If <math>U \leq \frac{f(Y)}{c \cdot g(Y)}</math>, then <b> X = Y </b>;<br/>
 
Else return to Step 1.<br/>
 
  
How do we compute c? Recall that c can be found by maximizing the ratio :<math> \frac{f(x)}{g(x)} </math>. Note that this is different from maximizing <math> f(x) </math> and <math> g(x) </math> independently of each other and then taking the ratio to find c.
+
Thus, <math>\frac{f(y)}{cg(y)}=\sqrt{1-(2U-1)^{2}}</math> * this also means the probability we can accept points
:<math>c = max \frac{f(x)}{g(x)} = \frac {0.3}{0.2} = 1.5  </math>
+
 
:<math>\frac{p(x)}{cg(x)} =  \frac{p(x)}{1.5*0.2} = \frac{p(x)}{0.3} </math><br>
+
The algorithm to generate random variable x is then:
Note: The U is independent from y in Step 2 and 3 above.
+
 
~The constant c is an indicator of the rejection rate
+
1. Draw <Math>\ U</math> from <math>\ U(0,1)</math>
  
In the acceptance-rejection method for a pmf, the uniform proposal assigns the same probability to each of the 5 values (1,2,3,4,5), so g(x) is 0.2
+
2. Draw <Math>\ U_{1}</math> from <math>\ U(0,1)</math>
  
* '''Code for example 1'''<br />
+
3. If  <math>U_{1} \leq \sqrt{1-(2U-1)^2}</math>, set <math> x = R(2U-1)</math>
<pre style="font-size:16px">
+
  else return to step 1.
>>close all
 
>>clear all
 
>>p=[.15 .25 .3 .1 .2];    #This a vector holding the values
 
>>ii=1;
 
>>while ii < 1000
 
    y=unidrnd(5);
 
    u=rand;
 
    if u<= p(y)/0.3
 
      x(ii)=y;
 
      ii=ii+1;
 
    end
 
  end
 
>>hist(x)
 
</pre>
 
[[File:ARM_disc_example.jpg|300px]]
 
  
unidrnd(k) draws from the discrete uniform distribution of integers <math>1,2,3,...,k</math> If this function is not built in to your MATLAB then we can do simple transformation on the rand(k) function to make it work like the unidrnd(k) function.
 
  
The acceptance rate is <math>\frac {1}{c}</math>, so the lower the c, the more efficient the algorithm. Theoretically, c equals 1 is the best case because all samples would be accepted; however it would only be true when the proposal and target distributions are exactly the same, which would never happen in practice.
 
  
For example, if c = 1.5, the acceptance rate would be <math>\frac {1}{1.5}=\frac {2}{3}</math>. Thus, in order to generate 1000 random values, a total of 1500 iterations would be required.
+
The condition is <br />
 +
<Math> U_{1} \leq \sqrt{(1-(2U-1)^2)}</Math><br>
 +
<Math>\ U_{1}^2 \leq 1 - (2U -1)^2</Math><br>
 +
<Math>\ U_{1}^2 - 1 \leq -(2U - 1)^2</Math><br>
 +
<Math>\ 1 - U_{1}^2 \geq (2U - 1)^2</Math>
  
A histogram to show 1000 random values of f(x), more random value make the probability close to the express probability value.
 
  
  
* '''Example 2'''<br>
 
p(x=1)=0.1<br />p(x=2)=0.3<br />p(x=3)=0.6<br />
 
Let g be the uniform distribution of 1, 2, or 3<br />
 
g(x)= 1/3<br />
 
<math>c=max(p_{x}/g(x))=0.6/(1/3)=1.8</math><br />
 
1,y~g<br />
 
2,u~U(0,1)<br />
 
3, If <math>U \leq \frac{f(y)}{cg(y)}</math>, set x = y. Else go to 1.
 
  
* '''Code for example 2'''<br />
+
'''One more example about AR method''' <br/>
<pre style="font-size:16px">
+
(In this example, we will see how to determine the value of c when c is a function with unknown parameters instead of a value)
>>close all
+
Let <math>f(x)=x×e^{-x}, x > 0 </math> <br/>
>>clear all
+
Use <math>g(x)=a×e^{-a×x}</math> to generate random variable <br/>
>>p=[.1 .3 .6];   
+
<br/>
>>ii=1;
+
Solution: First of all, we need to find c<br/>
>>while ii < 1000
+
<math>cg(x)>=f(x)</math> <br/>
    y=unidrnd(3);
+
<math>c>=\frac{f(x)}{g(x)}</math> <br/>
    u=rand;
+
<math>\frac{f(x)}{g(x)}=\frac{x}{a} * e^{-(1-a)x}</math> <br/>
    if u<= p(y)/1.8
+
take derivative with respect to x, and set it to 0 to get the maximum, <br/>
      x(ii)=y;
+
<math>\frac{1}{a} * e^{-(1-a)x} - \frac{x}{a} * e^{-(1-a)x} * (1-a) = 0 </math><br/>
      ii=ii+1;
+
<math>x=\frac {1}{1-a}</math> <br/>
    end
 
  end
 
>>hist(x)
 
</pre>
 
  
 +
<math>\frac {f(x_0)}{g(x_0)} = \frac {e^{-1}}{a(1-a)} </math><br/>
 +
<math>\frac {f(0)}{g(0)} = 0</math><br/>
 +
<math>\frac {f(\infty)}{g(\infty)} = 0</math><br/>
 +
<br/>
 +
therefore, <b><math>c= \frac {e^{-1}}{a*(1-a)}</math></b><br/>
 +
<br/>
 +
<b>In order to minimize c, we need to find the appropriate a</b> <br/>
 +
Take derivative with respect to a and set it to be zero, <br/>
 +
We could get <math>a= \frac {1}{2}</math> <br/>
 +
<b><math>c=\frac{4}{e}</math></b>
 +
<br/>
 +
Procedure: <br/>
 +
1. Generate u v ~unif(0,1) <br/>
 +
2. Generate y from g: since g is exponential with rate <math>a=\tfrac{1}{2}</math> (mean 2), let y=-2*ln(u) <br/>
 +
3. If <math>v<\frac{f(y)}{c\cdot g(y)}</math>, output y<br/>
 +
Else, go to 1<br/>
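A MATLAB sketch of this procedure (assuming a = 1/2, so g is exponential with rate 1/2 and c = 4/e, as derived above):
<pre style="font-size:16px">
% A-R for f(x) = x*exp(-x), x > 0, with g(x) = 0.5*exp(-0.5*x), c = 4/e
n = 1000;  c = 4/exp(1);
f = @(y) y .* exp(-y);
g = @(y) 0.5 * exp(-0.5 * y);
x = zeros(1, n);  ii = 1;
while ii <= n
    u = rand;  v = rand;
    y = -2 * log(u);                % inverse-CDF draw from Exp(rate 1/2)
    if v < f(y) / (c * g(y))        % acceptance check
        x(ii) = y;  ii = ii + 1;
    end
end
hist(x, 30)
</pre>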
  
* '''Example 3'''<br>
+
Acknowledgements: The example above is from Stat 340 Winter 2013 notes.
<math>p_{x}=e^{-3}3^{x}/x! , x>=0</math><br>(poisson distribution)
 
Try the first few p_{x}'s: .0498 .149 .224 .224 .168 .101 .0504 .0216 .0081 .0027<br>
 
  
Use the geometric distribution for <math>g(x)</math>;<br>
+
'''Summary of how to find the value of c''' <br/>
<math>g(x)=p(1-p)^{x}</math>, choose p=0.25<br>
+
Let <math>h(x) = \frac {f(x)}{g(x)}</math>, and then we have the following:<br />
Look at <math>p_{x}/g(x)</math> for the first few numbers: .199 .797 1.59 2.12 2.12 1.70 1.13 .647 .324 .144<br>
+
1. First, take derivative of h(x) with respect to x, get x<sub>1</sub>;<br />
We want <math>c=max(p_{x}/g(x))</math> which is approximately 2.12<br>
+
2. Plug x<sub>1</sub> into h(x) and get the value(or a function) of c, denote as c<sub>1</sub>;<br />
 +
3. Check the endpoints of x and sub the endpoints into h(x);<br />
 +
4. (If c<sub>1</sub> is a value, we can ignore this step.) Since we want the smallest value of c such that <math>f(x) \leq c\cdot g(x)</math> for all x, we want the unknown parameter that minimizes c. <br />So we take the derivative of c<sub>1</sub> with respect to the unknown parameter (i.e. k = unknown parameter) to get the value of k, <br />then substitute k back in to get the value of c<sub>1</sub>. (Double check that <math>c_1 \geq 1</math>.)<br />
 +
5. Pick the maximum value of h(x) to be the value of c.<br />
  
1. Generate <math>U_{1} \sim~ U(0,1); U_{2} \sim~ U(0,1)</math><br>
+
For the two examples above, we generate y from the proposal distribution g,
2. <math>j = \lfloor \frac{ln(U_{1})}{ln(.75)} \rfloor;</math><br>
+
and figure out  <math>c=max\frac {f(y)}{g(y)} </math>.
3. if <math>U_{2} < \frac{p_{j}}{cg(j)}</math>, set X = x<sub>j</sub>, else go to step 1.
+
If <math>v<\frac {f(y)}{c\cdot g(y)}</math>, output y.
  
  
*'''Example 4''' (Hypergeometric & Binomial)<br>  
+
'''Summary of when to use the Accept Rejection Method''' <br/>
 
+
1) When the inverse CDF cannot be computed or is too difficult to compute. <br/>
Suppose we are given f(x) such that it is hypergeometrically distributed: given 10 white balls and 5 red balls, select 3 balls without replacement, and let X be the number of red balls selected.
+
2) When f(x) can be evaluated at least up to a normalizing constant. <br/>
+
3) When a constant c with <math>f(x)\leq c\cdot g(x)</math> is available.<br/>
+
4) When uniform draws are available.<br/>
  
Choose g(x) such that it is binomial distribution, Bin(3, 1/3). Find the rejection constant, c
+
==== Interpretation of 'C' ====
 +
We can use the value of c to calculate the acceptance rate by <math>\tfrac{1}{c}</math>.
  
Solution:
+
For instance, assume c=1.5, then we can tell that 66.7% of the points will be accepted (<math>\tfrac{1}{1.5} = 0.667</math>). We can also call the efficiency of the method is 66.7%.
For hypergeometric: <math>P(X=0) =\binom{10}{3}/\binom{15}{3} =0.2637, P(x=1)=\binom{10}{2} * \binom{5}{1} /\binom{15}{3}=0.4945, P(X=2)=\binom{10}{1} * \binom{5}{2} /\binom{15}{3}=0.2198,</math><br><br>
 
<math>P(X=3)=\binom{5}{3}/\binom{15}{3}= 0.02198</math>
 
  
 +
Likewise, if the minimum value of possible values for C is <math>\tfrac{4}{3}</math>, <math>1/ \tfrac{4}{3}</math> of the generated random variables will be accepted. Thus the efficient of the algorithm is 75%.
  
For Binomial g(x): P(X=0) = (2/3)^3=0.2963; P(X=1)= 3*(1/3)*(2/3)^2 = 0.4444, P(X=2)=3*(1/3)^2*(2/3)=0.2222, P(X=3)=(1/3)^3=0.03704
+
In order to ensure the algorithm is as efficient as possible, the 'C' value should be as close to one as possible, such that <math>\tfrac{1}{c}</math> approaches 1 => 100% acceptance rate.
  
Find the value of f/g for each X
 
  
X=0: 0.8898;  
+
>> close all
X=1: 1.1127;
+
>> clear all
X=2: 0.9891;
+
>> ii=1;            % ii counts accepted samples
X=3: 0.5934
+
>> while ii<1000
+
y=rand;
+
u=rand;
+
if u<=y
+
x(ii)=y;
+
ii=ii+1;
+
end
+
end
  
Choose the maximum which is [[c=1.1127]]
+
== Class 5 - Tuesday, May 21 ==
 +
Recall the example in the last lecture. The following code will generate a random variable required by the question.
  
The max of f(x) is 0.4945 and the max of g(x) is 0.4444; both happen to occur at X=1, so the max of the ratio is c = 0.4945/0.4444 = 1.1127.
+
* '''Code'''<br />
Note that c must be the maximum of the pointwise ratio f(x)/g(x), not the ratio of the maxima; if c were taken any smaller, c*g(x) would not cover every point of f(x), and we would have to enlarge c, increasing the rejection ratio.
+
<pre style="font-size:16px">
 +
>>close all
 +
>>clear all
 +
>>ii=1;
 +
>>R=1;        %Note: R is a constant which we can change,
+
                        i.e. if we changed R=4 then we would have a density between -4 and 4
 +
>>while ii<1000
 +
        u1 = rand;
 +
        u2 = rand;
 +
        y = R*(2*u2-1);
 +
        if (1-u1^2)>=(2*u2-1)^2
 +
          x(ii) = y;
 +
          ii = ii + 1;      %Note: for beginner programmers, this step increases
+
                                the ii value for the next pass through the while loop
 +
        end
 +
  end
 +
>>hist(x,20)                  % 20 is the number of bars
+
 
+
>>hist(x,30)                  % 30 is the number of bars
 +
</pre>
 +
 
 +
Calculation process (deriving the acceptance condition):
 +
<math>u_{1} <= \sqrt (1-(2u-1)^2) </math> <br>
 +
<math>(u_{1})^2 <=(1-(2u-1)^2) </math> <br>
 +
<math>(u_{1})^2 -1 <=(-(2u-1)^2) </math> <br>
 +
<math>1-(u_{1})^2 \geq (2u-1)^2 </math> <br>
 +
 
 +
 
 +
MATLAB tips: hist(x,y) plots a histogram of variable x, where y is the number of bars in the graph.
  
Limitation: If the shape of the proposed distribution g is very different from the target distribution f, then the rejection rate will be high (High c value). Computationally, the algorithm is always right; however it is inefficient and requires many iterations. <br>
+
[[File:ARM_cont_example.jpg|300px]]
Here is an example:
 
[[File:ARM_Fail.jpg]]
 
  
In the above example, we need to move c*g(x) to the peak of f to cover the whole f. Thus c will be very large and 1/c will be small.
+
=== Discrete Examples ===
The higher the rejection rate, more points will be rejected.<br>
+
* '''Example 1''' <br>
More on rejection/acceptance rate: 1/c is the acceptance rate. As c decreases (note: the minimum value of c is 1), the acceptance rate increases. In our last example, 1/c=1/1.5≈66.67%. Around 67% of points generated will be accepted.<br>
+
Generate random variable <math>X</math> according to p.m.f<br/>
<div style="margin-bottom:10px;border:10px solid red;background: yellow">one good example to understand pros and cons about the AR method. AR method is useless when dealing with sampling distribution with a peak which is high, because c will be huge<br>
+
<math>\begin{align}
which brings the acceptance rate low which leads to very time take sampling </div>
+
P(x &=1) &&=0.15 \\
<div style="border:1px solid #cccccc;border-radius:10px;box-shadow: 0 5px 15px 1px rgba(0, 0, 0, 0.6), 0 0 200px 1px rgba(255, 255, 255, 0.5);padding:20px;margin:20px;background:#FFFFAD;">
+
P(x &=2) &&=0.25 \\
<h2 style="text-align:center;">Acceptance-Rejection Method</h2>
+
P(x &=3) &&=0.3 \\
<p><b>Problem:</b> The CDF is not invertible or it is difficult to find the inverse.</p>
+
P(x &=4) &&=0.1 \\
<p><b>Plan:</b></p>
+
P(x &=5) &&=0.2 \\
<ol>
+
\end{align}</math><br/>
<li>Draw y~g(.)</li>
 
<li>Draw u~Unif(0,1)</li>
 
<li>If <math>u\leq \frac{f(y)}{cg(y)}</math>then set x=y. Else return to Step 1</li>
 
</ol>
 
<p>x will have the desired distribution.</p>
 
<b>Matlab Example</b>
 
<pre style="font-size:16px">close all
 
clear all
 
ii=1;
 
R=1;
 
while ii&lt;1000
 
  u1 = rand;
 
  u2 = rand;
 
  y = R*(2*u2-1);
 
  if (1-u1^2)&gt;=(2*u2-1)^2
 
    x(ii) = y;
 
    ii = ii + 1;
 
  end
 
end
 
hist(x,20)
 
</pre>
 
</div>
 
  
 +
The discrete case is analogous to the continuous case. Suppose we want to generate an X that is a discrete random variable with pmf f(x)=P(X=x). Suppose also that we use the discrete uniform distribution as our target distribution, then <math> g(x)= P(X=x) =0.2 </math> for all X.
  
Recall that,
+
The following algorithm then yields our X:
Suppose we have an efficient method for simulating a random variable having probability mass function {q(j),j>=0}. We can use this as the basis for simulating from the distribution having mass function {p(j),j>=0} by first simulating a random variable Y having mass function {q(j)} and then accepting this simulated value with a probability proportional to p(Y)/q(Y).
 
Specifically, let c be a constant such that
 
                p(j)/q(j)<=c for all j such that p(j)>0
 
We now have the following technique, called the acceptance-rejection method, for simulating a random variable X having mass function p(j)=P{X=j}.
 
  
=== Sampling from commonly used distributions ===
+
Step 1 Draw discrete uniform distribution of 1, 2, 3, 4 and 5, <math>Y \sim~ g</math>.<br/>
 +
Step 2 Draw <math>U \sim~ U(0,1)</math>.<br/>
 +
Step 3 If <math>U \leq \frac{f(Y)}{c \cdot g(Y)}</math>, then <b> X = Y </b>;<br/>
 +
Else return to Step 1.<br/>
  
Please note that this is not a general technique as is that of acceptance-rejection sampling. Later, we will generalize the distributions for multidimensional purposes.
+
C can be found by maximizing the ratio <math> \frac{f(x)}{g(x)} </math> over x. Note that this is different from maximizing <math> f(x) </math> and minimizing <math> g(x) </math> separately and then taking the ratio. <br>
 +
:<math>c = max \frac{f(x)}{g(x)} = \frac {0.3}{0.2} = 1.5  </math> <br/>
 +
Note: In this case <math>f(x)=P(X=x)=0.3</math> (highest probability from the discrete probabilities in the question)
 +
:<math>\frac{p(x)}{cg(x)} =  \frac{p(x)}{1.5*0.2} = \frac{p(x)}{0.3} </math><br>
 +
Note: The U is independent from y in Step 2 and 3 above.
 +
~The constant c is an indicator of the rejection rate, i.e. of the efficiency of the algorithm: it is the average number of trials per accepted sample. Thus, a higher c means the algorithm is comparatively inefficient.
  
* '''Gamma'''<br />
+
In the acceptance-rejection method for a pmf, the uniform proposal assigns the same probability to each of the 5 values (1,2,3,4,5), so g(x) is 0.2
  
The CDF of the Gamma distribution <math>Gamma(t,\lambda)</math> is:  <br>
+
Remember that we always want to choose <math> cg </math> to be equal to or greater than <math> f </math>, but as close as possible.
<math> F(x) = \int_0^{\lambda x} \frac{e^{-y}y^{t-1}}{(t-1)!} \mathrm{d}y, \; \forall x \in (0,+\infty)</math>, where <math>t \in \N^+ \text{ and } \lambda \in (0,+\infty)</math>.<br>
+
<br />Limitations: If the form of the proposal distribution g is very different from the target distribution f, then c is very large and the algorithm is not computationally efficient.
  
 +
* '''Code for example 1'''<br />
 +
<pre style="font-size:16px">
 +
>>close all
 +
>>clear all
 +
>>p=[.15 .25 .3 .1 .2];    %This a vector holding the values
 +
>>ii=1;
 +
>>while ii < 1000
 +
    y=unidrnd(5);          %generates random numbers for the discrete uniform
+
    u=rand;                %distribution with maximum 5
 +
    if u<= p(y)/0.3
 +
      x(ii)=y;
 +
      ii=ii+1;
 +
    end
 +
  end
 +
>>hist(x)
 +
</pre>
 +
[[File:ARM_disc_example.jpg|300px]]
  
Neither Inverse Transformation nor Acceptance/Rejection Method can be easily applied to Gamma distribution.
+
unidrnd(k) draws from the discrete uniform distribution of integers <math>1,2,3,...,k</math> If this function is not built in to your MATLAB then we can do simple transformation on the rand(k) function to make it work like the unidrnd(k) function.  
However, we can use additive property of Gamma distribution to generate random variables.
 
  
* '''Additive Property'''<br />
+
The acceptance rate is <math>\frac {1}{c}</math>, so the lower the c, the more efficient the algorithm. Theoretically, c equals 1 is the best case because all samples would be accepted; however it would only be true when the proposal and target distributions are exactly the same, which would never happen in practice.
If <math>X_1, \dots, X_t</math> are independent exponential random variables with rate <math> \lambda </math> (in other words, <math> X_i\sim~ Exp (\lambda)</math>, and <math> Exp (\lambda)= Gamma (1, \lambda)</math>), then <math>\Sigma_{i=1}^t X_i \sim~ Gamma (t, \lambda) </math>
 
  
 +
For example, if c = 1.5, the acceptance rate would be <math>\frac {1}{1.5}=\frac {2}{3}</math>. Thus, in order to generate 1000 random values, on average, a total of 1500 iterations would be required.
  
Side note: if <math> X\sim~ Gamma(a,\lambda)</math> and <math> Y\sim~ Gamma(b,\lambda)</math> are independent gamma random variables, then <math>\frac{X}{X+Y} \sim~ Beta(a,b)</math>.
+
A histogram to show 1000 random values of f(x), more random value make the probability close to the express probability value.
  
  
If we want to sample from the Gamma distribution, we can consider sampling from <math>t</math> independent exponential distributions using the Inverse Method for each <math> X_i</math> and add them up.
+
* '''Example 2'''<br>
 +
p(x=1)=0.1<br />p(x=2)=0.3<br />p(x=3)=0.6<br />
 +
Let g be the uniform distribution of 1, 2, or 3<br />
 +
g(x)= 1/3<br />
 +
<math>c=max(\tfrac{p_{x}}{g(x)})=0.6/(\tfrac{1}{3})=1.8</math><br />
 +
Hence <math>\tfrac{p(x)}{cg(x)} = p(x)/(1.8 (\tfrac{1}{3}))= \tfrac{p(x)}{0.6}</math>
  
According to this property, a random variable that follows Gamma distribution is the sum of i.i.d (independent and identically distributed) exponential random variables. Now we want to generate 1000 values of <math>Gamma(20,10)</math> random variables, so we need to obtain the value of each one by adding 20 values of <math>X_i \sim~ Exp(10)</math>. To achieve this, we generate a 20-by-1000 matrix whose entries follow <math>Exp(10)</math> and add the rows together.
+
1,y~g<br />
<math> x_1 </math>~Exp(<math>\lambda </math>)
+
2,u~U(0,1)<br />
<math>x_2 </math>~Exp(<math> \lambda </math>)
+
3, If <math>U \leq \frac{f(y)}{cg(y)}</math>, set x = y. Else go to 1.
...
 
<math>x_t </math>~Exp(<math> \lambda </math>)
 
<math>x_1+x_2+...+x_t</math>
 
  
 +
* '''Code for example 2'''<br />
 
<pre style="font-size:16px">
 
<pre style="font-size:16px">
>>l=1;
+
>>close all
>>u=rand(1,1000);
+
>>clear all
>>x=-(1/l)*log(u);   
+
>>p=[.1 .3 .6];    %This a vector holding the values 
 +
>>ii=1;
 +
>>while ii < 1000
 +
    y=unidrnd(3);   %generates random numbers for the discrete uniform distribution with maximum 3
 +
    u=rand;           
 +
    if u<= p(y)/0.6
 +
      x(ii)=y;   
 +
      ii=ii+1;     %else ii=ii+1
 +
    end
 +
   end
 
>>hist(x)
 
>>hist(x)
>>rand
 
 
</pre>
 
</pre>
  
  
* '''Procedure '''
+
* '''Example 3'''<br>
 +
 
 +
Suppose <math>\begin{align}p_{x} = e^{-3}3^{x}/x! , x\geq 0\end{align}</math> (Poisson distribution)
 +
 
 +
'''First:''' Try the first few <math>\begin{align}p_{x}'s\end{align}</math>:  0.0498, 0.149, 0.224, 0.224, 0.168, 0.101, 0.0504, 0.0216, 0.0081, 0.0027 for <math>\begin{align} x = 0,1,2,3,4,5,6,7,8,9 \end{align}</math><br>
  
:#Sample  independently from a uniform distribution <math>t</math> times, giving <math> U_1,\dots,U_t \sim~ U(0,1)</math>
+
'''Proposed distribution:''' Use the geometric distribution for <math>\begin{align}g(x)\end{align}</math>;<br>
:#Use the Inverse Transform Method, <math> X_i = -\frac {1}{\lambda}\log(1-U_i)</math>, giving <math> X_1,\dots,X_t \sim~Exp(\lambda)</math>
 
:#Use the additive property,<math> X = \Sigma_{i=1}^t X_i \sim~ Gamma (t, \lambda)  </math><br>
 
  
 +
<math>\begin{align}g(x)=p(1-p)^{x}\end{align}</math>, choose <math>\begin{align}p=0.25\end{align}</math><br>
  
* '''Note for Procedure '''
+
Look at <math>\begin{align}p_{x}/g(x)\end{align}</math> for the first few numbers: 0.199 0.797 1.59 2.12 2.12 1.70 1.13 0.647 0.324 0.144 for <math>\begin{align} x = 0,1,2,3,4,5,6,7,8,9 \end{align}</math><br>
:#If <math>U\sim~U(0,1)</math>, then <math>U</math> and <math>1-U</math> will have the same distribution (both follows <math>U(0,1)</math>)
 
:#This is because the range for <math>1-U</math> is still <math>(0,1)</math>, and their densities are identical over this range.
 
:#Let <math>Y=1-U</math>, <math>Pr(Y<=y)=Pr(1-U<=y)=Pr(U>=1-y)=1-Pr(U<=1-y)=1-(1-y)=y</math>, thus <math>1-U\sim~U(0,1)</math>
 
  
 +
We want <math>\begin{align}c=max(p_{x}/g(x))\end{align}</math> which is approximately 2.12<br>
  
 +
'''The general procedures to generate <math>\begin{align}p(x)\end{align}</math> is as follows:'''
  
* '''Code'''<br />
+
1. Generate <math>\begin{align}U_{1} \sim~ U(0,1); U_{2} \sim~ U(0,1)\end{align}</math><br>
<pre style="font-size:16px">
 
>>close all
 
>>clear all
 
>>lambda = 1;
 
>>u = rand(20, 1000);            Note: this command generate a 20x1000 matrix
 
                            (which means we generate 1000 number for each X_i with t=20);
 
                            all the elements are generated by rand
 
>>x = (-1/lambda)*log(1-u);     Note: log(1-u) is essentially the same as log(u) only if u~U(0,1)  
 
>>xx = sum(x)                    Note: sum(x) will sum all elements in the same column.
 
                                                size(xx) can help you to verify
 
>>size(sum(x))                  Note: see the size of x if we forget it
 
                                      (the answer is 20 1000)
 
>>hist(x(1:))                    Note: the graph of the first exponential distribution
 
>>hist(xx)
 
</pre>
 
[[File:Gamma_example.jpg|300px]]
 
  
 +
2. <math>\begin{align}j = \lfloor \frac{ln(U_{1})}{ln(.75)} \rfloor;\end{align}</math> (this makes <math>j</math> geometric with p=0.25 on support <math>0,1,2,\dots</math>, matching g)<br>
  
 +
3. if <math>U_{2} < \frac{p_{j}}{cg(j)}</math>, set <math>\begin{align}X = x_{j}\end{align}</math>, else go to step 1.
  
size(x) and size(u) are both 20*1000 matrix.
+
Note: In this case, <math>\begin{align}f(x)/g(x)\end{align}</math> is extremely difficult to differentiate so we were required to test points. If the function is very easy to differentiate, we can calculate the max as if it were a continuous function then check the two surrounding points for which is the highest discrete value.
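A MATLAB sketch of this Poisson-via-geometric sampler (using p = 0.25 and c ≈ 2.12 from above):
<pre style="font-size:16px">
% A-R for the Poisson(3) pmf with a Geometric(0.25) proposal
n = 1000;  c = 2.12;  p = 0.25;
x = zeros(1, n);  ii = 1;
while ii <= n
    u1 = rand;  u2 = rand;
    j  = floor(log(u1) / log(1 - p));     % j ~ Geometric(p) on 0,1,2,...
    fj = exp(-3) * 3^j / factorial(j);    % target pmf p_j
    gj = p * (1 - p)^j;                   % proposal pmf g(j)
    if u2 < fj / (c * gj)
        x(ii) = j;  ii = ii + 1;
    end
end
hist(x, 0:max(x))
</pre>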
Since if u~unif(0, 1), u and 1 - u have the same distribution, we can substitue 1-u with u to simply the equation.
 
Alternatively, the following command will do the same thing with the previous commands.
 
  
* '''Code'''<br />
+
* Source: http://www.math.wsu.edu/faculty/genz/416/lect/l04-46.pdf*
<pre style="font-size:16px">
 
>>close all
 
>>clear all
 
>>lambda = 1;
 
>>xx = sum((-1/lambda)*log(rand(20, 1000))); ''This is simple way to put the code in one line.  
 
                                              Here we can use either log(u) or log(1-u) since U~U(0,1);
 
>>hist(xx)
 
</pre>
 
  
in the matrix rand(20,1000) means 20 row with 1000 numbers for each.
+
*'''Example 4''' (Hypergeometric & Binomial)<br>
We use the code to generalize these distributions for multidimensional purposes, e.g. summing independent draws <math>x_i</math> or working with a whole matrix of draws; the resulting distribution is shown by the histogram.
 
  
=== Other Sampling Method: Coordinate System ===
+
Suppose we are given f(x) such that it is hypergeometically distributed, given 10 white balls, 5 red balls, and select 3 balls, let X be the number of red ball selected, without replacement.
[[File:Unnamed_QQ_Screenshot20130521203625.png‎]]
 
* From cartesian to polar coordinates <br />
 
<math> R=\sqrt{x_{1}^2+x_{2}^2}= x_{2}/sin(\theta)= x_{1}/cos(\theta)</math> <br />
 
<math> tan(\theta)=x_{2}/x_{1} \rightarrow \theta=tan^{-1}(x_{2}/x_{1})</math> <br />
 
  
 +
Choose g(x) such that it is binomial distribution, Bin(3, 1/3). Find the rejection constant, c
  
If we draw a line segment of length R from the origin, its endpoint has coordinates <math>x=R\cos(\theta)</math> and <math>y=R\sin(\theta)</math>.
+
Solution:
 +
For hypergeometric: <math>P(X=0) =\binom{10}{3}/\binom{15}{3} =0.2637, P(x=1)=\binom{10}{2} * \binom{5}{1} /\binom{15}{3}=0.4945, P(X=2)=\binom{10}{1} * \binom{5}{2} /\binom{15}{3}=0.2198,</math><br><br>
 +
<math>P(X=3)=\binom{5}{3}/\binom{15}{3}= 0.02198</math>
  
=== '''Matlab''' ===
 
  
If X is a matrix; <br />
+
For Binomial g(x): P(X=0) = (2/3)^3=0.2963; P(X=1)= 3*(1/3)*(2/3)^2 = 0.4444, P(X=2)=3*(1/3)^2*(2/3)=0.2222, P(X=3)=(1/3)^3=0.03704
:*: ''X(1,:)'' returns the first row <br/ >
 
:*: ''X(:,1)'' returns the first column <br/ >
 
:*: ''X(i,i)'' returns the (i,i)th entry <br/ >
 
:*: ''sum(X,1)'' (or simply ''sum(X)'') sums along the first dimension; the output is a row vector containing the sum of each column. <br />
 
:*: ''sum(X,2)'' sums along the second dimension, returning a column vector containing the sum of each row. <br/ >
 
:*: ''rand(r,c)'' will generate a matrix of random numbers with r rows and c columns <br />
 
:*: The dot operator (.), when placed before an operator such as *, /, or ^, applies that operation element-wise to a vector or matrix (e.g. A.*B multiplies entry by entry). Adding a scalar, as in A+c, needs no dot, and element-wise functions such as log already act on every element.<br>
 
:*: Matlab processes loops very slowly, but it is fast with matrices and vectors, so it is preferable to use the dot operator and vectorized operations on matrices of random numbers rather than loops whenever possible.<br>
 
  
== Class 6 - Thursday, May 23 ==
+
Find the value of f/g for each X
  
=== Announcement ===
+
X=0: 0.8898;
1. On the day of each lecture, students from the morning section can only contribute the first half of the lecture (i.e. 8:30 - 9:10 am), so that the second half can be saved for the ones from the afternoon section. After the day of lecture, students are free to contribute anything.
+
X=1: 1.1127;
 +
X=2: 0.9891;
 +
X=3: 0.5934
  
=== Standard Normal distribution ===
+
Choose the maximum which is [[c=1.1127]]
If X ~ N(0,1) i.e. Standard Normal Distribution - then its p.d.f. is of the form
 
:<math>f(x) = \frac{1}{\sqrt{2\pi}}\, e^{- \frac{\scriptscriptstyle 1}{\scriptscriptstyle 2} x^2}</math>
 
  
*Warning : the General Normal distribution is  
+
The max of f(x) is 0.4945 and the max of g(x) is 0.4444; both happen to occur at X=1, so the max of the ratio is c = 0.4945/0.4444 = 1.1127.
:
+
Note that c must be the maximum of the pointwise ratio f(x)/g(x), not the ratio of the maxima; if c were taken any smaller, c*g(x) would not cover every point of f(x), and we would have to enlarge c, increasing the rejection ratio.
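The ratio computation can be checked in one line of MATLAB (a sketch using the four pmf values computed above):
<pre style="font-size:16px">
% c = max pointwise ratio of target (hypergeometric) to proposal (binomial)
f = [0.2637 0.4945 0.2198 0.0220];   % hypergeometric pmf for X = 0,1,2,3
g = [0.2963 0.4444 0.2222 0.0370];   % Bin(3,1/3) pmf for X = 0,1,2,3
c = max(f ./ g)                      % prints approximately 1.1127
</pre>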
<math>
f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{ -\frac{(x-\mu)^2}{2\sigma^2} }
</math>
(this general form is almost never needed in this course)
+
Limitation: If the shape of the proposed distribution g is very different from the target distribution f, then the rejection rate will be high (high c value). Computationally, the algorithm is always correct; however it is inefficient and requires many iterations. <br>
+
Here is an example:
+
[[File:ARM_Fail.jpg]]
+
In the above example, we need to move c*g(x) to the peak of f to cover the whole f. Thus c will be very large and 1/c will be small.
+
The higher the rejection rate, the more points will be rejected.<br>
+
More on rejection/acceptance rate: 1/c is the acceptance rate. As c decreases (note: the minimum value of c is 1), the acceptance rate increases. In our last example, 1/c=1/1.5≈66.67%. Around 67% of points generated will be accepted.<br>
+
<div style="margin-bottom:10px;border:10px solid red;background: yellow"> the example below provides a better understanding about the pros and cons of the AR method. The AR method is useless when dealing with sampling distribution with a higher peak since c will be large, hence making our algorithm inefficient<br>
 +
which brings the acceptance rate low which leads to very time consuming sampling </div>
 +
<div style="border:1px solid #cccccc;border-radius:10px;box-shadow: 0 5px 15px 1px rgba(0, 0, 0, 0.6), 0 0 200px 1px rgba(255, 255, 255, 0.5);padding:20px;margin:20px;background:#FFFFAD;">
 +
<h2 style="text-align:center;">Acceptance-Rejection Method</h2>
 +
<p><b>Problem:</b> The CDF is not invertible or it is difficult to find the inverse.</p>
 +
<p><b>Plan:</b></p>
 +
<ol>
 +
<li>Draw y~g(.)</li>
 +
<li>Draw u~Unif(0,1)</li>
 +
<li>If <math>u\leq \frac{f(y)}{cg(y)}</math>then set x=y. Else return to Step 1</li>
 +
</ol>
 +
<p>x will have the desired distribution.</p>
 +
<b>Matlab Example</b>
 +
<pre style="font-size:16px">close all
 +
clear all
 +
ii=1;
 +
R=1;
 +
while ii&lt;1000
 +
  u1 = rand;
 +
  u2 = rand;
 +
  y = R*(2*u2-1);
 +
  if (1-u1^2)&gt;=(2*u2-1)^2
 +
    x(ii) = y;
 +
    ii = ii + 1;
 +
  end
 +
end
 +
hist(x,20)
 +
</pre>
 +
</div>
  
  
+
Recall that,
+
Suppose we have an efficient method for simulating a random variable having probability mass function {q(j),j>=0}. We can use this as the basis for simulating from the distribution having mass function {p(j),j>=0} by first simulating a random variable Y having mass function {q(j)} and then accepting this simulated value with a probability proportional to p(Y)/q(Y).
+
Specifically, let c be a constant such that
+
                p(j)/q(j)<=c for all j such that p(j)>0
+
We now have the following technique, called the acceptance-rejection method, for simulating a random variable X having mass function p(j)=P{X=j}.
+
=== Sampling from commonly used distributions ===
 
where <math> \mu </math> is the mean or expectation of the distribution and <math> \sigma </math> is standard deviation <br />
 
<br />
 
*N(0,1) is standard normal. <math> \mu </math> =0 and <math> \sigma </math>=1 <br />
 
<br />
 
  
Let X and Y be independent standard normal.
+
Please note that this is not a general technique as is that of acceptance-rejection sampling. Later, we will generalize the distributions for multidimensional purposes.
  
Let <math> \theta </math> and R denote the Polar coordinate of the vector (X, Y)
+
* '''Gamma'''<br />
  
Note: R must satisfy two properties:
+
The CDF of the Gamma distribution <math>Gamma(t,\lambda)</math> (t denotes the shape, <math>\lambda</math> denotes the rate) is: <br>
+
<math> F(x) = \int_0^{\lambda x} \frac{e^{-y}y^{t-1}}{(t-1)!} \mathrm{d}y, \; \forall x \in (0,+\infty)</math>, where <math>t \in \N^+ \text{ and } \lambda \in (0,+\infty)</math>.<br>
  
:1. Be a positive number (as it is a length)
+
Note that the CDF of the Gamma distribution does not have a closed form.
  
:2. It must be from a distribution that has more data points closer to the origin so that as we go further from the origin, less points are generated (the two options are Chi-squared and Exponential distribution)
+
The gamma distribution is often used to model waiting times between a certain number of events. It can also be expressed as the sum of infinitely many independent and identically distributed exponential distributions. This distribution has two parameters: the number of exponential terms n, and the rate parameter <math>\lambda</math>. In this distribution there is the Gamma function, <math>\Gamma </math> which has some very useful properties. "Source: STAT 340 Spring 2010 Course Notes" <br/>
  
The form of the joint distribution of R and <math>\theta</math> will show that the best choice for distribution of R<sup>2</sup> is exponential.
+
Neither Inverse Transformation nor Acceptance-Rejection Method can be easily applied to Gamma distribution.
 +
However, we can use additive property of Gamma distribution to generate random variables.
  
 +
* '''Additive Property'''<br />
 +
If <math>X_1, \dots, X_t</math> are independent exponential random variables with rate <math> \lambda </math> (in other words, <math> X_i\sim~ Exp (\lambda)</math>, and <math>Exp (\lambda)= Gamma (1, \lambda)</math>), then <math>\Sigma_{i=1}^t X_i \sim~ Gamma (t, \lambda) </math>
  
We cannot use the Inverse Transformation Method since F(x) does not have a closed form solution. So we will use joint probability function of two independent standard normal random variables and polar coordinates to simulate the distribution:
 
  
We know that
+
Side note: if <math> X\sim~ Gamma(a,\lambda)</math> and <math> Y\sim~ Gamma(b,\lambda)</math> are independent, then <math>\frac{X}{X+Y} \sim~ Beta(a,b) </math>.
  
:R<sup>2</sup>= X<sup>2</sup>+Y<sup>2</sup> and <math> \tan(\theta) = \frac{y}{x} </math>
 
:<math>f(x) = \frac{1}{\sqrt{2\pi}}\, e^{- \frac{\scriptscriptstyle 1}{\scriptscriptstyle 2} x^2}</math>
 
:<math>f(y) = \frac{1}{\sqrt{2\pi}}\, e^{- \frac{\scriptscriptstyle 1}{\scriptscriptstyle 2} y^2}</math>
 
:<math>f(x,y) = \frac{1}{\sqrt{2\pi}}\, e^{- \frac{\scriptscriptstyle 1}{\scriptscriptstyle 2} x^2} * \frac{1}{\sqrt{2\pi}}\, e^{- \frac{\scriptscriptstyle 1}{\scriptscriptstyle 2} y^2}=\frac{1}{2\pi}\, e^{- \frac{\scriptscriptstyle 1}{\scriptscriptstyle 2} (x^2+y^2)} </math><br /> - Since both the distributions are independent
 
It can also be shown using 1-1 transformation that the joint distribution of R and θ is given by,
 
1-1 transformation:<br />
 
Let <math>d=R^2</math><br />
 
<math>x= \sqrt {d}\cos \theta </math>
 
<math>y= \sqrt {d}\sin \theta </math>
 
then
 
<math>\left| J\right| = \left| \dfrac {1} {2}d^{-\frac {1} {2}}\cos \theta d^{\frac{1}{2}}\cos \theta +\sqrt {d}\sin \theta \dfrac {1} {2}d^{-\frac{1}{2}}\sin \theta \right| = \dfrac {1} {2}</math>
 
It can be shown that the pdf of <math> d </math> and <math> \theta </math> is:
 
:<math>\begin{matrix}  f(d,\theta) = \frac{1}{2}e^{-\frac{d}{2}}*\frac{1}{2\pi},\quad  d = R^2 \end{matrix},\quad for\quad 0\leq d<\infty\ and\quad 0\leq \theta\leq 2\pi </math>
 
  
 +
If we want to sample from the Gamma distribution, we can consider sampling from <math>t</math> independent exponential distributions using the Inverse Method for each <math> X_i</math> and add them up. Note that this only works the specific set of gamma distributions where t is a positive integer.
  
 +
According to this property, a random variable that follows Gamma distribution is the sum of i.i.d (independent and identically distributed) exponential random variables. Now we want to generate 1000 values of <math>Gamma(20,10)</math> random variables, so we need to obtain the value of each one by adding 20 values of <math>X_i \sim~ Exp(10)</math>. To achieve this, we generate a 20-by-1000 matrix whose entries follow <math>Exp(10)</math> and add the rows together.<br />
 +
<math> x_1 \sim~Exp(\lambda)</math><br />
 +
<math>x_2 \sim~Exp(\lambda)</math><br />
 +
...<br />
 +
<math>x_t \sim~Exp(\lambda)</math><br />
 +
<math>x_1+x_2+...+x_t~</math>
  
Note that <math> \begin{matrix}f(r,\theta)\end{matrix}</math> consists of two density functions, Exponential and Uniform, so assuming that r and <math>\theta</math> are independent
<math> \begin{matrix} \Rightarrow d \sim~ Exp(1/2),  \theta \sim~ Unif[0,2\pi] \end{matrix} </math>
::* <math> \begin{align} R^2 = x^2 + y^2 \end{align} </math>
::* <math> \tan(\theta) = \frac{y}{x} </math>
<math>\begin{align} f(d) = Exp(1/2)=\frac{1}{2}e^{-\frac{d}{2}}\ \end{align}</math>
<br>
<math>\begin{align} f(\theta) =\frac{1}{2\pi}\ \end{align}</math>

Matlab code to sample from an exponential distribution with rate l = 1 by the inverse transform:
<pre style="font-size:16px">
>>l=1;
>>u=rand(1,1000);
>>x=-(1/l)*log(u);
>>hist(x)
</pre>
<br>
 
To sample from the normal distribution, we can generate a pair of independent standard normal X and Y by:<br />
 
1) Generating their polar coordinates<br />
 
2) Transforming back to rectangular (Cartesian) coordinates.<br />
 
==== Expectation of a Standard Normal distribution ====
 
The expectation of a standard normal distribution is 0
 
:Below is the proof:
 
 
 
:<math>\operatorname{E}[X]= \;\int_{-\infty}^{\infty} x \frac{1}{\sqrt{2\pi}}  e^{-x^2/2} \, dx.</math>
 
:<math>\phi(x) = \frac{1}{\sqrt{2\pi}}\, e^{- \frac{\scriptscriptstyle 1}{\scriptscriptstyle 2} x^2}.</math>
 
:<math>=\;\int_{-\infty}^{\infty} x \phi(x)\, dx</math>
 
:Since the first derivative ''ϕ''′(''x'') is −''xϕ''(''x'')
 
:<math>=\;- \int_{-\infty}^{\infty} \phi'(x)\, dx</math>
 
:<math>= - \left[\phi(x)\right]_{-\infty}^{\infty}</math>
 
:<math>= 0</math><br />
 
More intuitively, the integrand <math>x\phi(x)</math> is an odd function (g(-x) = -g(x)), and the integral of an odd function over a support that is symmetric about 0, here <math>(-\infty,\infty)</math>, is 0.<br />
 
  
* '''Procedure (Box-Muller Transformation Method):''' <br />
 
Pseudorandom approaches to generating normal random variables used to be limited. Inefficient methods such as inverting the Gaussian CDF, summing uniform random variables, and acceptance-rejection were used. In 1958, a new method was proposed by George Box and Mervin Muller of Princeton University. This new technique was easy to use and accurate, and it grew more valuable as computers became more computationally powerful.
 
The Box-Muller method takes a sample from a bivariate independent standard normal distribution, each component of which is thus a univariate standard normal. The algorithm is based on the following two properties of the bivariate independent standard normal distribution:
 
if Z = (Z<sub>1</sub>, Z<sub>2</sub>) has this distribution, then
 
1. R<sup>2</sup>=Z<sub>1</sub><sup>2</sup>+Z<sub>2</sub><sup>2</sup> is exponentially distributed with mean 2, i.e.

    P(R<sup>2</sup> <= x) = 1-e<sup>-x/2</sup>.

2. Given R<sup>2</sup>, the point (Z<sub>1</sub>,Z<sub>2</sub>) is uniformly distributed on the circle of radius R centered at the origin.
 
We can use these properties to build the algorithm:
 
  
1) Generate random number <math> \begin{align} U_1,U_2 \sim~ \mathrm{Unif}(0, 1) \end{align} </math> <br />
+
* '''Procedure '''
2) Generate polar coordinates using the exponential distribution of d and uniform distribution of θ,
 
  
 +
:#Sample  independently from a uniform distribution <math>t</math> times, giving <math> U_1,\dots,U_t \sim~ U(0,1)</math>
 +
:#Use the Inverse Transform Method, <math> X_i = -\frac {1}{\lambda}\log(1-U_i)</math>, giving <math> X_1,\dots,X_t \sim~Exp(\lambda)</math>
 +
:#Use the additive property,<math> X = \Sigma_{i=1}^t X_i \sim~ Gamma (t, \lambda)  </math><br>
  
  
<math> \begin{align} R^2 = d = -2\log(U_1), & \quad r = \sqrt{d} \\ & \quad \theta = 2\pi    U_2  \end{align} </math>
+
* '''Note for Procedure '''
 +
:#If <math>U\sim~U(0,1)</math>, then <math>U</math> and <math>1-U</math> will have the same distribution (both follows <math>U(0,1)</math>)
 +
:#This is because the range for <math>1-U</math> is still <math>(0,1)</math>, and their densities are identical over this range.
 +
:#Let <math>Y=1-U</math>, <math>Pr(Y<=y)=Pr(1-U<=y)=Pr(U>=1-y)=1-Pr(U<=1-y)=1-(1-y)=y</math>, thus <math>1-U\sim~U(0,1)</math>
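As a quick sanity check (our own illustration, not from the lecture), the following Matlab commands compare the two histograms:
<pre style="font-size:16px">
u = rand(1,10000);
hist(u)          % histogram of U ~ Unif(0,1)
figure
hist(1-u)        % histogram of 1-U: same flat shape, confirming 1-U ~ Unif(0,1)
</pre>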
  
  
<math> \begin{matrix} \ R^2 \sim~ Exp(1/2),  \theta \sim~ Unif[0,2\pi] \end{matrix} </math> (an exponential with mean 2, consistent with the density above) <br />
 
  
 +
* '''Code'''<br />
 +
<pre style="font-size:16px">
 +
>>close all
 +
>>clear all
 +
>>lambda = 1;
 +
>>u = rand(20, 1000);            Note: this command generates a 20x1000 matrix
 +
                            (which means we generate 1000 numbers for each X_i with t=20);
 +
                            all the elements are generated by rand
 +
>>x = (-1/lambda)*log(1-u);      Note: log(1-u) has the same distribution as log(u) when u~U(0,1)
 +
>>xx = sum(x)                    Note: sum(x) will sum all elements in the same column.
 +
                                                size(xx) can help you to verify
 +
>>size(sum(x))                  Note: gives the size of sum(x) if we forget it
 +
                                      (the answer is 1 1000)
 +
>>hist(x(1,:))                   Note: the graph of the first exponential distribution
 +
>>hist(xx)
 +
</pre>
 +
[[File:Gamma_example.jpg|300px]]
  
3) Transform polar coordinates (i.e. R and θ) back to Cartesian coordinates (i.e. X and Y), <br> <math> \begin{align} x = R\cos(\theta) \\ y = R\sin(\theta) \end{align} </math> <br />.
 
 
Alternatively,<br> <math> x =\cos(2\pi U_2)\sqrt{-2\ln U_1}\, </math> and<br> <math> y =\sin(2\pi U_2)\sqrt{-2\ln U_1}\, </math><br />
 
  
  
Note: In steps 2 and 3, we are using a similar technique as that used in the inverse transform method. <br />
+
size(x) and size(u) are both 20x1000 matrices.
The Box-Muller Transformation Method generates a pair of independent Standard Normal distributions, X and Y (Using the transformation of polar coordinates). <br />
+
Since if u~unif(0, 1), u and 1 - u have the same distribution, we can substitute 1-u with u to simplify the equation.
 
+
Alternatively, the following commands will do the same thing as the previous commands.
  
 
Line 1,815: Line 1,880:

* '''Code''' (Box-Muller)<br />
<pre style="font-size:16px">
>>close all
>>clear all
>>u1=rand(1,1000);
>>u2=rand(1,1000);
>>d=-2*log(u1);
>>tet=2*pi*u2;
>>x=d.^0.5.*cos(tet);
>>y=d.^0.5.*sin(tet);
>>hist(tet)
>>hist(d)
>>hist(x)
>>hist(y)
</pre>

* '''Code''' (the Gamma example in one line)<br />
<pre style="font-size:16px">
>>close all
>>clear all
>>lambda = 1;
>>xx = sum((-1/lambda)*log(rand(20, 1000))); This is a simple way to put the code in one line.
                                             Here we can use either log(u) or log(1-u) since U~U(0,1);
>>hist(xx)
</pre>
  
"''Remember'': For the above code to work the "." needs to be after the d to ensure that each element of d is raised to the power of 0.5.<br /> Otherwise matlab will raise the entire matrix to the power of 0.5."
+
In the matrix rand(20,1000) means 20 row with 1000 numbers for each.
 +
use the code to show the generalize the distributions for multidimensional purposes in different cases, such as sum xi (each xi not equal xj), and they are independent, or matrix. Finally, we can see the conclusion is shown by the histogram.
  
Note:<br>the first graph is hist(tet) and it is a uniform distribution.<br>The second one is hist(d) and it is a uniform distribution.<br>The third one is hist(x) and it is a normal distribution.<br>The last one is hist(y) and it is also a normal distribution.
+
=== Other Sampling Method: Box Muller ===
 
+
[[File:Unnamed_QQ_Screenshot20130521203625.png‎]]
Attention:There is a "dot" between sqrt(d) and "*". It is because d and tet are vectors. <br>
+
* From cartesian to polar coordinates <br />
 +
<math> R=\sqrt{x_{1}^2+x_{2}^2}= x_{2}/\sin(\theta)= x_{1}/\cos(\theta)</math> <br />
 +
<math> \tan(\theta)=x_{2}/x_{1} \rightarrow \theta=\tan^{-1}(x_{2}/x_{1})</math> <br />
 
   
 
   
 +
*Box-Muller Transformation:<br>
 +
It is a transformation that consumes two continuous uniform random variables <math> X \sim U(0,1), Y \sim U(0,1) </math> and outputs a bivariate normal random variable with <math> Z_1\sim N(0,1), Z_2\sim N(0,1). </math>
  
[[File:Normal_theta.jpg|300px]][[File:Normal_d.jpg|300px]]
+
=== '''Matlab''' ===
[[File:normal_x.jpg|300x300px]][[File:normal_y.jpg|300x300px]]
 
  
As seen in the histograms above, X and Y generated from this procedure have a standard normal distribution.
+
If X is a matrix,
 +
* ''X(1,:)'' returns the first row
 +
* ''X(:,1)'' returns the first column
 +
* ''X(i,j)'' returns the (i,j)th entry
 +
* ''sum(X,'''1''')'' or ''sum(X)'' adds the '''rows''' of X together. The output is a row vector containing the sum of each column.
 +
* ''sum(X,'''2''')'' adds the '''columns''' of X together, returning a column vector containing the sum of each row.
 +
* ''rand(r,c)'' will generate uniformly distributed random numbers in r rows and c columns.
 +
* The dot operator (.), when placed before an operator such as *, /, or ^, applies that operation elementwise to a vector or matrix. For example, to multiply two matrices A and B element by element, use A.*B as opposed to A*B (which is matrix multiplication). Adding a scalar c to a matrix needs no dot: A+c already adds c to every element. The dot operator is also not required for functions that act elementwise anyway (such as log).
 +
* Matlab processes loops very slowly, while it is fast with matrices and vectors, so it is preferable to use the dot operator on matrices of random numbers rather than loops whenever possible.
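A small Matlab session (our own illustration, not from the lecture) demonstrating the indexing and summation rules above:
<pre style="font-size:16px">
X = rand(3, 4);   % 3x4 matrix of Unif(0,1) numbers
X(1,:)            % first row
X(:,1)            % first column
X(2,3)            % (2,3)th entry
sum(X)            % row vector: the sum of each column (same as sum(X,1))
sum(X,2)          % column vector: the sum of each row
X.^2              % elementwise square; X^2 would be a matrix power and fails for non-square X
</pre>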
  
* '''Code'''<br />
+
== Class 6 - Thursday, May 23 ==
<pre style="font-size:16px">
 
>>close all
 
>>clear all
 
>>x=randn(1,1000);
 
>>hist(x)
 
>>hist(x+2)
 
>>hist(x*2+2)
 
</pre>
 
  
Note: randn is random sample from a standard normal distribution.<br />
+
=== Announcement ===
Note: hist(x+2) will be centered at 2 instead of at 0. <br />
+
1. On the day of each lecture, students from the morning section can only contribute the first half of the lecture (i.e. 8:30 - 9:10 am), so that the second half can be saved for the ones from the afternoon section. After the day of lecture, students are free to contribute anything.
      hist(x*2+2) is also centered at 2. The mean doesn't change, but the variance of x*2+2 becomes four times (2^2) the variance of x.<br />
 
[[File:Normal_x.jpg|300x300px]][[File:Normal_x+2.jpg|300x300px]][[File:Normal(2x+2).jpg|300px]]
 
<br />
 
  
<b>Comment</b>: Box-Muller transformations are not computationally efficient. The reason for this is the need to compute sine and cosine functions. A way to get around this time-consuming difficulty is by an indirect computation of the sine and cosine of a random angle (as opposed to a direct computation which generates U and then computes the sine and cosine of 2πU). <br />
+
=== Standard Normal distribution ===
 +
If X ~ N(0,1) i.e. Standard Normal Distribution - then its p.d.f. is of the form
 +
:<math>f(x) = \frac{1}{\sqrt{2\pi}}\, e^{- \frac{\scriptscriptstyle 1}{\scriptscriptstyle 2} x^2}</math>
  
'''Alternative Methods of generating normal distribution'''<br />
+
*Warning : the General Normal distribution is:
1. Even though we cannot use the inverse transform method, we can approximate this inverse using different functions. One method would be '''rational approximation'''.<br />
+
<table>
2. '''Central limit theorem''': If we sum 12 independent U(0,1) random variables and subtract 6 (which is E(u<sub>i</sub>)*12), we will approximately get a standard normal distribution.<br />
+
<tr>
3. '''Ziggurat algorithm''' which is known to be faster than Box-Muller transformation and a version of this algorithm is used for the randn function in matlab.<br />
+
<td><div onmouseover="document.getElementById('woyun').style.visibility='visible'"
 
+
onmouseout="document.getElementById('woyun').style.visibility='hidden'">
If Z~N(0,1) and X= μ +Zσ then X~<math> N(\mu, \sigma^2)</math>
+
<math>
 
+
f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{ -\frac{(x-\mu)^2}{2\sigma^2} }
If Z<sub>1</sub>, Z<sub>2</sub>... Z<sub>d</sub> are independent identically distributed N(0,1),
+
</math>
then Z=(Z<sub>1</sub>,Z<sub>2</sub>...Z<sub>d</sub>)<sup>T</sup> ~N(0, I<sub>d</sub>), where 0 is the zero vector and I<sub>d</sub> is the identity matrix.
+
</div>
 +
</td>
 +
<td>
 +
<div id="woyun" style="
  
For the histogram, the constant is the parameter that affects the center of the graph.
+
visibility:hidden;
 +
width:100px;
 +
height:100px;
 +
background:#FFFFAD;
 +
position:relative;
 +
animation:movement infinite;
 +
animation-duration:2s;
 +
animation-direction:alternate;
  
=== Proof of Box Muller Transformation ===
 
  
Definition:
+
/* Safari and Chrome */
A transformation which transforms from a '''two-dimensional continuous uniform''' distribution to a '''two-dimensional bivariate normal''' distribution (or complex normal distribution).
+
-webkit-animation:movement infinite;
 +
-webkit-animation-duration:2s;
 +
-webkit-animation-direction:alternate;
  
Let U<sub>1</sub> and U<sub>2</sub> be independent Uniform(0,1) random variables. Then
 
<math>X_{1} = \sqrt{-2\ln U_{1}}\,\cos(2\pi U_{2})</math>
 
  
<math>X_{2} = \sqrt{-2\ln U_{1}}\,\sin(2\pi U_{2})</math>
+
@keyframes movement
are '''independent''' N(0,1) random variables.
+
{
 +
from {left:0px;}
 +
to {left:200px;}
 +
}
  
This is a standard transformation problem. The joint distribution is given by
+
@-webkit-keyframes movement /* Safari and Chrome */
:<math>f(x_1, x_2) = f_{U_1, U_2}\left(g_1^{-1}(x_1,x_2),\, g_2^{-1}(x_1,x_2)\right) \left| J \right|</math>
+
{
 
+
from {left:0px;}
where J is the Jacobian of the transformation,
+
to {left:200px;}
                   
+
}"
:<math>J = \left|\begin{matrix} \frac{\partial u_1}{\partial x_1} & \frac{\partial u_1}{\partial x_2} \\ \frac{\partial u_2}{\partial x_1} & \frac{\partial u_2}{\partial x_2} \end{matrix}\right|</math>
+
>which is almost useless in this course</div>
+
</td>
where
+
</tr>
      u<sub>1</sub> = g<sub>1</sub> ^-1(x1,x2)
+
</table>
      u<sub>2</sub> = g<sub>2</sub> ^-1(x1,x2)
+
where <math> \mu </math> is the mean or expectation of the distribution and <math> \sigma </math> is standard deviation <br />
 +
<br />
 +
*N(0,1) is standard normal. <math> \mu </math> =0 and <math> \sigma </math>=1 <br />
 +
<br />
 +
 
 +
Let X and Y be independent standard normal.
 +
 
 +
Let <math> \theta </math> and R denote the Polar coordinate of the vector (X, Y)  
 +
where <math> X = R \cdot \cos\theta </math> and <math> Y = R \cdot \sin \theta </math>
 +
 
 +
[[File:rtheta.jpg]]
  
Inverting the above transformations, we have
+
Note: R must satisfy two properties:
:<math>u_1 = e^{-(x_1^2+x_2^2)/2}</math>
:<math>u_2 = \frac{1}{2\pi}\tan^{-1}\!\left(\frac{x_2}{x_1}\right)</math>
 
  
Finally we get
+
:1. Be a positive number (as it is a length)
:<math>f(x_1,x_2) = \frac{1}{2\pi}\,e^{-(x_1^2+x_2^2)/2}</math>
 
which factors into two standard normal pdfs.
 
  
=== General Normal distributions ===
+
:2. It must be from a distribution that has more data points closer to the origin so that as we go further from the origin, less points are generated (the two options are Chi-squared and Exponential distribution)  
The general normal distribution is a generalization of the standard normal distribution. The domain of the general normal distribution is scaled by the standard deviation and translated by the mean value. The pdf of the general normal distribution is
 
<math>f(x) = \frac{1}{\sigma}\,\phi\!\left(\frac{x-\mu}{\sigma}\right)</math>, where <math>\phi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}x^2}</math>
 
  
The special case of the normal distribution is the standard normal distribution, for which the variance is 1 and the mean is zero. If X is a general normal deviate, then Z = (X − μ)/σ will have a standard normal distribution.
+
The form of the joint distribution of R and <math>\theta</math> will show that the best choice for distribution of R<sup>2</sup> is exponential.
  
If Z ~ N(0,1), and we want <math>X </math>~<math> N(\mu, \sigma^2)</math>, then <math>X = \mu + \sigma * Z</math> Since <math>E(x) = \mu +\sigma*0 = \mu </math> and <math>Var(x) = 0 +\sigma^2*1</math>
 
  
If <math>Z_1,...Z_d</math> ~ N(0,1) and are independent then <math>Z = (Z_1,..Z_d)^{T} </math>~ <math>N(0,I_d)</math>
+
We cannot use the Inverse Transformation Method since F(x) does not have a closed form solution. So we will use joint probability function of two independent standard normal random variables and polar coordinates to simulate the distribution:
ie.
 
* '''Code'''<br />
 
<pre style="font-size:16px">
 
>>close all
 
>>clear all
 
>>z1=randn(1,1000);    <-generate variable from standard normal distribution
 
>>z2=randn(1,1000);
 
>>z=[z1;z2];          <-produce a vector
 
>>plot(z(1,:),z(2,:),'.')
 
</pre>
 
[[File:Nonstdnormal_example.jpg|300px]]
 
  
If Z~N(0,Id) and X= <math>\underline{\mu} +  \Sigma^{\frac{1}{2}} \,Z </math> then <math>\underline{X}</math> ~<math>N(\underline{\mu},\Sigma)</math>
+
We know that
  
=== Bernoulli Distribution ===
+
<math>R^{2}= X^{2}+Y^{2}</math> and <math> \tan(\theta) = \frac{y}{x} </math> where X and Y are two independent standard normal
The Bernoulli distribution is a discrete probability distribution, which usually describes an event that only has two possible results, i.e. success or failure. If the event succeeds, we usually take value 1 with success probability p, and take value 0 with failure probability q = 1 - p.
+
:<math>f(x) = \frac{1}{\sqrt{2\pi}}\, e^{- \frac{\scriptscriptstyle 1}{\scriptscriptstyle 2} x^2}</math>
 +
:<math>f(y) = \frac{1}{\sqrt{2\pi}}\, e^{- \frac{\scriptscriptstyle 1}{\scriptscriptstyle 2} y^2}</math>
 +
:<math>f(x,y) = \frac{1}{\sqrt{2\pi}}\, e^{- \frac{\scriptscriptstyle 1}{\scriptscriptstyle 2} x^2} * \frac{1}{\sqrt{2\pi}}\, e^{- \frac{\scriptscriptstyle 1}{\scriptscriptstyle 2} y^2}=\frac{1}{2\pi}\, e^{- \frac{\scriptscriptstyle 1}{\scriptscriptstyle 2} (x^2+y^2)} </math><br /> - since X and Y are independent, the joint density is the product of the two marginal densities. It can also be shown using a 1-1 transformation that the joint distribution of R and θ is given as follows. 1-1 transformation:<br />
  
P ( x = 0) = q = 1 - p
 
P ( x = 1) = p
 
P ( x = 0) + P (x = 1) = p + q = 1
 
  
If X~Ber(p), its pdf is of the form <math>f(x)= p^{x}(1-p)^{(1-x)}</math>, x=0,1
+
'''Let <math>d=R^2</math>'''<br />
<br> P is the success probability.
 
 
The Bernoulli distribution is a special case of the binomial distribution, in which the variate x only has two outcomes; so the Bernoulli distribution can also use the probability density function of the binomial distribution, with the variate x taking only the values 0 and 1.
 
  
Let x1, x2 denote the lifetimes of 2 independent particles, x1~exp(<math>\lambda</math>), x2~exp(<math>\lambda</math>)
+
<math>x= \sqrt {d}\cos \theta </math>
we are interested in y=min(x1,x2)
+
<math>y= \sqrt {d}\sin \theta </math>
 +
then
 +
<math>\left| J\right| = \left| \dfrac {1} {2}d^{-\frac {1} {2}}\cos \theta d^{\frac{1}{2}}\cos \theta +\sqrt {d}\sin \theta \dfrac {1} {2}d^{-\frac{1}{2}}\sin \theta \right| = \dfrac {1} {2}</math>
 +
It can be shown that the joint density of <math> d = R^2</math> and <math> \theta </math> is:
 +
:<math>\begin{matrix}  f(d,\theta) = \frac{1}{2}e^{-\frac{d}{2}}*\frac{1}{2\pi},\quad  d = R^2 \end{matrix},\quad for\quad 0\leq d<\infty\ and\quad 0\leq \theta\leq 2\pi </math>
  
<pre style="font-size:16px">
 
  
Procedure:
 
  
To simulate the event of flipping a coin, let P be the probability of flipping head and X = 1 and 0 represent
+
Note that <math> \begin{matrix}f(r,\theta)\end{matrix}</math> consists of two density functions, Exponential and Uniform, so assuming that r and <math>\theta</math> are independent
flipping head and tail respectively:
+
<math> \begin{matrix} \Rightarrow d \sim~ Exp(1/2),  \theta \sim~ Unif[0,2\pi] \end{matrix} </math>
 +
::* <math> \begin{align} R^2 = d = x^2 + y^2 \end{align} </math>
 +
::* <math> \tan(\theta) = \frac{y}{x} </math>
 +
<math>\begin{align} f(d) = Exp(1/2)=\frac{1}{2}e^{-\frac{d}{2}}\ \end{align}</math>
 +
<br>
 +
<math>\begin{align} f(\theta) =\frac{1}{2\pi}\ \end{align}</math>
 +
<br>
  
1. Draw U ~ Uniform(0,1)
+
To sample from the normal distribution, we can generate a pair of independent standard normal X and Y by:<br />
 +
 
 +
1) Generating their polar coordinates<br />
 +
2) Transforming back to rectangular (Cartesian) coordinates.<br />
  
2. If U <= P
 
  
  X = 1
+
'''Alternative Method of Generating Standard Normal Random Variables'''<br /> 
  
  Else
+
Step 1: Generate <math>u_{1}</math> ~<math>Unif(0,1)</math><br />
 +
Step 2: Generate <math>Y_{1}</math> ~<math>Exp(1)</math>, <math>Y_{2}</math>~<math>Exp(1)</math><br />
 +
Step 3: If <math>Y_{2} \geq(Y_{1}-1)^2/2</math>, set <math>V=Y_{1}</math>; otherwise, go to step 1<br />
 +
Step 4: If <math>u_{1} \leq 1/2</math>, then <math>X=-V</math>; otherwise <math>X=V</math><br />
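Below is a minimal Matlab sketch of this rejection algorithm (our own illustration; the loop structure and sample size are assumptions, not from the notes):
<pre style="font-size:16px">
n = 10000; x = zeros(1,n);
for k = 1:n
    accepted = false;
    while ~accepted
        y1 = -log(rand);                      % Y1 ~ Exp(1)
        y2 = -log(rand);                      % Y2 ~ Exp(1)
        accepted = (y2 >= (y1-1)^2/2);        % Step 3 acceptance condition
    end
    if rand <= 1/2                            % Step 4: attach a random sign
        x(k) = -y1;
    else
        x(k) = y1;
    end
end
hist(x, 50)                                   % should look standard normal
</pre>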
  
  X = 0
+
===Expectation of a Standard Normal distribution===<br />
  
3. Repeat as necessary
+
The expectation of a standard normal distribution is 0<br />
  
</pre>
+
'''Proof:''' <br />
  
An intuitive way to think of this is in the coin flip example we discussed in a previous lecture. In this example we set p = 1/2 and this allows for 50% of points to be heads or tails.
+
:<math>\operatorname{E}[X]= \;\int_{-\infty}^{\infty} x \frac{1}{\sqrt{2\pi}}  e^{-x^2/2} \, dx.</math>
 +
:<math>\phi(x) = \frac{1}{\sqrt{2\pi}}\, e^{- \frac{\scriptscriptstyle 1}{\scriptscriptstyle 2} x^2}.</math>
 +
:<math>=\;\int_{-\infty}^{\infty} x \phi(x)\, dx</math>
 +
:Since the first derivative ''ϕ''′(''x'') is −''xϕ''(''x'')
 +
:<math>=\;- \int_{-\infty}^{\infty} \phi'(x)\, dx</math>
 +
:<math>= - \left[\phi(x)\right]_{-\infty}^{\infty}</math>
 +
:<math>= 0</math><br />
  
* '''Code to Generate Bernoulli(p = 0.3)'''<br />
+
'''Note,''' more intuitively: the integrand <math>x\phi(x)</math> is an odd function (g(-x) = -g(x)), so its integral over a support that is symmetric about 0, here <math>(-\infty,\infty)</math>, is 0. This is in relation to the symmetrical properties of the standard normal distribution.<br />
<pre style="font-size:16px">
 
i = 1;
 
  
while (i <=1000)
 
    u =rand();
 
    p = 0.3;
 
    if (u <= p)
 
        x(i) = 1;
 
    else
 
        x(i) = 0;
 
    end
 
    i = i + 1;
 
end
 
  
hist(x)
+
'''Procedure (Box-Muller Transformation Method):''' <br />
</pre>
 
  
However, we know that if <math>\begin{align} X_i \sim Bernoulli(p) \end{align}</math> where each <math>\begin{align} X_i \end{align}</math> is independent,<br />
+
Pseudorandom approaches to generating normal random variables used to be limited. Inefficient methods such as inverting the Gaussian CDF, summing uniform random variables, and acceptance-rejection were used. In 1958, a new method was proposed by George Box and Mervin Muller of Princeton University. This new technique was easy to use and as accurate as the inverse transform sampling method, and it grew more valuable as computers became more computationally powerful. <br>
<math>U = \sum_{i=1}^{n} X_i \sim Binomial(n,p)</math><br />
+
The Box-Muller method takes a sample from a bivariate independent standard normal distribution, each component of which is thus a univariate standard normal. The algorithm is based on the following two properties of the bivariate independent standard normal distribution: <br>
So we can sample from binomial distribution using this property.
+
if <math>Z = (Z_{1}, Z_{2}</math>) has this distribution, then <br>
Note: For Binomial distribution, we can consider it as a set of n Bernoulli add together.
 
  
 +
1.<math>R^2=Z_{1}^2+Z_{2}^2</math> is exponentially distributed with mean 2, i.e. <br>
 +
<math>P(R^2 \leq x) = 1-e^{-x/2}</math>. <br>
 +
2.Given <math>R^2</math>, the point <math>(Z_{1},Z_{2}</math>) is uniformly distributed on the circle of radius R centered at the origin. <br>
 +
We can use these properties to build the algorithm: <br>
  
* '''Code to Generate Binomial(n = 10,p = 0.3)'''<br />
 
<pre style="font-size:16px">
 
p = 0.3;
 
n = 10;
 
  
for k=1:5000
+
1) Generate random number <math> \begin{align} U_1,U_2 \sim~ \mathrm{Unif}(0, 1) \end{align} </math> <br />
    i = 1;
+
2) Generate polar coordinates using the exponential distribution of d and uniform distribution of θ,
    while (i <= n)
 
        u=rand();
 
        if (u <= p)
 
            y(i) = 1;
 
        else
 
            y(i) = 0;
 
        end
 
        i = i + 1;
 
    end
 
  
    x(k) = sum(y==1);
 
end
 
  
hist(x)
 
  
</pre>
+
<math> \begin{align} R^2 = d = -2\log(U_1), & \quad r = \sqrt{d} \\ & \quad \theta = 2\pi    U_2  \end{align} </math>
Note: We can also regard the Bernoulli Distribution as either a conditional distribution or <math>f(x)= p^{x}(1-p)^{(1-x)}</math>, x=0,1.
 
  
Comments on Matlab:
 
When doing operations on vectors, always put a dot before the operator if you want the operation to be done to every element in the vector.
 
example: Let V be a vector with dimension 2*4 and you want each element multiply by 3.
 
        The  Matlab code is 3.*V
 
  
some examples for using code to generate distribution.
+
<math> \begin{matrix} \ R^2 \sim~ Exp(1/2),  \theta \sim~ Unif[0,2\pi] \end{matrix} </math> <br />
  
== Class 7 - Tuesday, May 28 ==
+
Note: If U~unif(0,1), then 1-U~unif(0,1), so ln(1-U) and ln(U) have the same distribution (they are not equal pointwise).
  
Note that the material in this lecture will not be on the exam; it was only to supplement what we have learned.
+
3) Transform polar coordinates (i.e. R and θ) back to Cartesian coordinates (i.e. X and Y), <br> <math> \begin{align} x = R\cos(\theta) \\ y = R\sin(\theta) \end{align} </math> <br />.
  
===Universality of the Uniform Distribution/Inverse Method===
+
Alternatively,<br> <math> x =\cos(2\pi U_2)\sqrt{-2\ln U_1}\, </math> and<br> <math> y =\sin(2\pi U_2)\sqrt{-2\ln U_1}\, </math><br />
  
The inverse method is universal in the sense that we can potentially sample from any distribution where we can find the inverse of the cumulative distribution function.
 
  
Procedure:
+
'''Note:''' In steps 2 and 3, we are using a similar technique as that used in the inverse transform method. <br />
 +
The Box-Muller Transformation Method generates a pair of independent Standard Normal distributions, X and Y (Using the transformation of polar coordinates). <br />
  
1.Generate U~Unif [0, 1)<br>
+
If you want to generate a number of independent standard normal distributed numbers (more than two), you can run the Box-Muller method several times.<br/>
2.set <math>x=F^{-1}(u)</math><br>
+
For example: <br />
3.X~f(x)<br>
+
If you want 8 independent standard normal distributed numbers, then run the Box-Muller methods 4 times (8/2 times). <br />  
 +
If you want 9 independent standard normal distributed numbers, then run the Box-Muller methods 5 times (10/2 times), and then delete one. <br />
  
'''Example 1'''<br>
 
  
Let <math>X</math><sub>1</sub>,<math>X</math><sub>2</sub> denote the lifetime of two independent particles:<br>
+
'''Matlab Code'''<br />
<math>X</math><sub>1</sub>~exp(<math>\lambda</math><sub>1</sub>)<br>
 
<math>X</math><sub>2</sub>~exp(<math>\lambda</math><sub>2</sub>)<br>
 
  
We are interested in <math>y=min(X</math><sub>1</sub><math>,X</math><sub>2</sub><math>)</math><br>
+
<pre style="font-size:16px">
Design an algorithm based on the Inverse-Transform Method to generate samples according to <math>f</math><sub>y</sub><math>(y)</math><br>
+
>>close all
 +
>>clear all
 +
>>u1=rand(1,1000);
 +
>>u2=rand(1,1000);
 +
>>d=-2*log(u1);
 +
>>tet=2*pi*u2;
 +
>>x=d.^0.5.*cos(tet);
 +
>>y=d.^0.5.*sin(tet);
 +
>>hist(tet)        
 +
>>hist(d)
 +
>>hist(x)
 +
>>hist(y)
 +
</pre>
 +
<br>
 +
'''Remember''': For the above code to work, the "." needs to be after the d to ensure that each element of d is raised to the power of 0.5.<br /> Otherwise Matlab will raise the entire matrix to the power of 0.5.<br>
  
'''Solution:'''<br>
+
'''Note:'''<br>the first graph is hist(tet) and it is a uniform distribution.<br>The second one is hist(d) and it is a exponential distribution.<br>The third one is hist(x) and it is a normal distribution.<br>The last one is hist(y) and it is also a normal distribution.
  
x~exp(<math>\lambda</math>)<br>
+
Attention:There is a "dot" between sqrt(d) and "*". It is because d and tet are vectors. <br>
 +
  
<math>f_{x}(x)=\lambda e^{-\lambda x},x\geq0 </math> <br>
+
[[File:Normal_theta.jpg|300px]][[File:Normal_d.jpg|300px]]
 +
[[File:normal_x.jpg|300x300px]][[File:normal_y.jpg|300x300px]]
  
<math>1-F_Y(y) = P(Y>y)</math> = P(min(X<sub>1</sub>,X<sub>2</sub>) > y) = <math>\, P((X_1)>y) P((X_2)>y) = e^{\, -(\lambda_1 + \lambda_2) y}</math><br>
+
As seen in the histograms above, X and Y generated from this procedure have a standard normal distribution.
  
<math>F_Y(y)=1-e^{\, -(\lambda_1 + \lambda_2) y}, y\geq 0</math><br>
+
* '''Code'''<br />
 +
<pre style="font-size:16px">
 +
>>close all
 +
>>clear all
 +
>>x=randn(1,1000);
 +
>>hist(x)
 +
>>hist(x+2)
 +
>>hist(x*2+2)
 +
</pre>
 +
<br>
 +
'''Note:'''<br>
 +
1. randn is random sample from a standard normal distribution.<br />
 +
2. hist(x+2) will be centered at 2 instead of at 0. <br />
 +
3. hist(x*2+2) is also centered at 2. The mean doesn't change, but the variance of x*2+2 becomes four times (2^2) the variance of x.<br />
 +
[[File:Normal_x.jpg|300x300px]][[File:Normal_x+2.jpg|300x300px]][[File:Normal(2x+2).jpg|300px]]
 +
<br />
  
<math>U=1-e^{\, -(\lambda_1 + \lambda_2) y}</math> => <math>y=\, {-\frac {1}{{\lambda_1 +\lambda_2}}} ln(1-u)</math><br>
+
<b>Comment</b>:<br />
 +
Box-Muller transformations are not computationally efficient. The reason for this is the need to compute sine and cosine functions. A way to get around this time-consuming difficulty is by an indirect computation of the sine and cosine of a random angle (as opposed to a direct computation which generates U and then computes the sine and cosine of 2πU). <br />
  
'''Procedure:'''
 
  
Step1: Generate U~ U(0, 1)<br>
 
Step2: set <math>x=\, {-\frac {1}{{\lambda_1 +\lambda_2}}} ln(u)</math><br>
 
  
If we generalize this example from two independent particles to n independent particles we will have:<br>
+
'''Alternative Methods of generating normal distribution'''<br />
  
<math>X</math><sub>1</sub>~exp(<math>\lambda</math><sub>1</sub>)<br><math>X</math><sub>2</sub>~exp(<math>\lambda</math><sub>2</sub>)<br> ...<br> <math>X</math><sub>n</sub>~exp(<math>\lambda</math><sub>n</sub>)<br>.
+
1. Even though we cannot use the inverse transform method, we can approximate this inverse using different functions. One method would be '''rational approximation'''.<br />
 +
2. '''Central limit theorem''': If we sum 12 independent U(0,1) random variables and subtract 6 (which is E(u<sub>i</sub>)*12), we will approximately get a standard normal distribution.<br />
 +
3. '''Ziggurat algorithm''' which is known to be faster than Box-Muller transformation and a version of this algorithm is used for the randn function in matlab.<br />
  
And the algorithm using the inverse-transform method as follows:
+
If Z~N(0,1) and X= μ +Zσ then X~<math> N(\mu, \sigma^2)</math>
  
step1: Generate U~U(0,1)
+
If Z<sub>1</sub>, Z<sub>2</sub>... Z<sub>d</sub> are independent identically distributed N(0,1),
 +
then Z=(Z<sub>1</sub>,Z<sub>2</sub>...Z<sub>d</sub>)<sup>T</sup> ~N(0, I<sub>d</sub>), where 0 is the zero vector and I<sub>d</sub> is the identity matrix.
  
Step2: <math>y=\, {-\frac {1}{{ \sum\lambda_i}}} ln(1-u)</math><br>
+
For the histogram, the constant is the parameter that affects the center of the graph.
  
 +
=== Proof of Box Muller Transformation ===
  
'''Example 2'''<br>
+
'''Definition:'''<br />
Consider U~Unif[0,1)<br>
+
A transformation which transforms from a '''two-dimensional continuous uniform''' distribution to a '''two-dimensional bivariate normal''' distribution (or complex normal distribution).
<math>X=\, a (1-\sqrt{1-u})</math>,
 
<br>where a>0 and a is a real number
 
What is the distribution of X?<br>
 
  
'''Solution:'''<br>
+
Let U<sub>1</sub> and U<sub>2</sub> be independent uniform (0,1) random variables. Then
 +
<math>X_{1} = \sqrt{-2\ln U_{1}}\,\cos(2\pi U_{2})</math>
  
We can find a form for the cumulative distribution function of X by isolating U as U~Unif[0,1) will take values from the range of F(X)uniformly. It then remains to differentiate the resulting form by X to obtain the probability density function.
+
<math>X_{2} = \sqrt{-2\ln U_{1}}\,\sin(2\pi U_{2})</math>
 +
are '''independent''' N(0,1) random variables.
  
<math>X=\, a (1-\sqrt{1-u})</math><br>
+
This is a standard transformation problem. The joint distribution is given by
=><math>1-\frac {x}{a}=\sqrt{1-u}</math><br>
+
:<math>f(x_1, x_2) = f_{U_1, U_2}\left(g_1^{-1}(x_1,x_2),\, g_2^{-1}(x_1,x_2)\right) \left| J \right|</math>
=><math>u=1-(1-\frac {x}{a})^2</math><br>
 
=><math>u=\, {\frac {x}{a}} (2-\frac {x}{a})</math><br>
 
<math>f(x)=\frac {dF(x)}{dx}=\frac {2}{a}-\frac {2x}{a^2}=\, \frac {2}{a} (1-\frac {x}{a})</math><br>
 
[[File:Example_2_diagram.jpg]]
 
  
'''Example 3'''<br>
+
where J is the Jacobian of the transformation,
 
+
                   
Suppose F<sub>X</sub>(x) = x<sup>n</sup>, 0 ≤ x ≤ 1, n ∈ N > 0. Generate values from X.<br>
+
:<math>J = \left|\begin{matrix} \frac{\partial u_1}{\partial x_1} & \frac{\partial u_1}{\partial x_2} \\ \frac{\partial u_2}{\partial x_1} & \frac{\partial u_2}{\partial x_2} \end{matrix}\right|</math>
where
:<math>u_1 = g_1^{-1}(x_1,x_2), \quad u_2 = g_2^{-1}(x_1,x_2)</math>
  
'''Solution:'''<br>
+
Inverting the above transformation, we have
<br>
+
:<math>u_1 = e^{-(x_1^2+x_2^2)/2}</math>
1. generate u ~ Unif[0, 1)<br>
+
:<math>u_2 = \frac{1}{2\pi}\tan^{-1}\!\left(\frac{x_2}{x_1}\right)</math>
2. Set x <- U<sup>1/n</sup><br>
 
<br>
 
For example, when n = 20,<br>
 
u = 0.6 => x = u<sup>1/20</sup> = 0.974<br>

u = 0.5 => x = u<sup>1/20</sup> = 0.966<br>

u = 0.2 => x = u<sup>1/20</sup> = 0.923<br>
 
<br>
 
Observe from above that the values of X for n = 20 are close to 1. This is because a random variable with CDF x<sup>n</sup> can be viewed as the maximum of n independent Unif(0,1) random variables, which is increasingly likely to be close to 1 as n increases. This observation is the motivation for method 2 below.<br>
 
  
Recall that
+
Finally we get
If Y = max (X<sub>1</sub>, X<sub>2</sub>, ... , X<sub>n</sub>), where X<sub>1</sub>, X<sub>2</sub>, ... , X<sub>n</sub> are independent, <br>
+
:<math>f(x_1,x_2) = \frac{1}{2\pi}\,e^{-(x_1^2+x_2^2)/2}</math>
F<sub>Y</sub>(y) = P(Y ≤ y) = P(max (X<sub>1</sub>, X<sub>2</sub>, ... , X<sub>n</sub>) ≤ y) = P(X<sub>1</sub> ≤ y, X<sub>2</sub> ≤ y, ... , X<sub>n</sub> ≤ y) = F<sub>x<sub>1</sub></sub>(y) F<sub>x<sub>2</sub></sub>(y) ... F<sub>x<sub>n</sub></sub>(y)<br>
+
which factors into two standard normal pdfs.
Similarly if <math> Y = min(X_1,\ldots,X_n)</math> then the cdf of <math>Y</math> is <math>F_Y = 1- </math><math>\prod</math><math>(1- F_{X_i})</math><br>
 
<br>
 
Method 1: Following the above result we can see that in this example, F<sub>X</sub> = x<sup>n</sup> is the cumulative distribution function of the max of n uniform random variables between 0 and 1 (since for U~Unif(0, 1), F<sub>U</sub>(x) = x on [0,1], so the cdf of the maximum is x<sup>n</sup>).
 
Method 2:  generate X by having a sample of n independent U~Unif(0, 1) and take the max of the n samples to be x. However, the solution given above using inverse-transform method only requires generating one uniform random number instead of n of them, so it is a more efficient method.
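A short Matlab comparison of the two methods (our own illustration), with n = 20:
<pre style="font-size:16px">
n = 20; N = 10000;
x1 = rand(1,N).^(1/n);       % inverse-transform method: one uniform per sample
x2 = max(rand(n,N));         % Method 2: max of n uniforms per sample
hist(x1)                     % the two histograms should agree
figure
hist(x2)
</pre>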
 
<br>
 
  
In general, one can generate Y = max(X<sub>1</sub>, ..., X<sub>n</sub>) or Y = min(X<sub>1</sub>, ..., X<sub>n</sub>) through their pdf and cdf in this way, provided the X<sub>i</sub> are independent.
 
  
'''Example 4 (New)'''<br>
+
(The quote is from http://mathworld.wolfram.com/Box-MullerTransformation.html)
Here is an example similar to Example 1, but using the maximum instead of the minimum.
+
(The proof is from http://www.math.nyu.edu/faculty/goodman/teaching/MonteCarlo2005/notes/GaussianSampling.pdf)
  
Let X<sub>1</sub>,X<sub>2</sub> denote the lifetime of two independent particles:<br>
+
=== General Normal distributions ===
<math>\, X_1, X_2 \sim exp(\lambda)</math><br>
+
The general normal distribution is a generalization of the standard normal distribution. Its domain is scaled by the standard deviation and translated by the mean value.
 
+
*The pdf of the general normal distribution is
We are interested in Z=max(X<sub>1</sub>,X<sub>2</sub>)<br>
+
:
Design an algorithm based on the Inverse-Transform Method to generate samples according to f<sub>Z</sub>(z)<br>
+
<table>
 
+
<tr>
<math>\, F_Z(z)=P[Z<=z] = F_{X_1}(z) \cdot F_{X_2}(z) = (1-e^{-\lambda z})^2</math><br>
+
<td><div onmouseover="document.getElementById('woyun').style.visibility='visible'"
<math> \text{thus } F^{-1}(z) = -\frac{1}{\lambda}\log(1-\sqrt z)</math><br>
+
onmouseout="document.getElementById('woyun').style.visibility='hidden'">
 +
<math>
 +
f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{ -\frac{(x-\mu)^2}{2\sigma^2} }
 +
</math>
 +
</div>
 +
</td>
 +
<td>
 +
<div id="woyun" style="visibility:hidden">which is almost useless in this course</div>
 +
</td>
 +
</tr>
 +
</table>
 +
where <math> \mu </math> is the mean or expectation of the distribution and <math> \sigma </math> is standard deviation <br />
  
To sample Z: <br>
+
The probability density must be scaled by <math>1/\sigma</math> so that the integral is still 1. (Source: https://en.wikipedia.org/wiki/Normal_distribution)
<math>\, \text{Step 1: Generate } U \sim U[0,1)</math><br>
+
The special case of the normal distribution is the standard normal distribution, for which the variance is 1 and the mean is zero. If X is a general normal deviate, then <math> Z=\dfrac{X - \mu}{\sigma} </math> will have a standard normal distribution.
<math>\, \text{Step 2: Let } Z = -\frac{1}{\lambda}\log(1-\sqrt U)</math>, therefore we can generate random variable of Z.<br><br>
 
  
'''Discrete Case:'''
+
If Z ~ N(0,1), and we want <math>X </math>~<math> N(\mu, \sigma^2)</math>, then <math>X = \mu + \sigma * Z</math> Since <math>E(x) = \mu +\sigma*0 = \mu </math> and <math>Var(x) = 0 +\sigma^2*1</math>
<font size="3">
 
  u~unif(0,1)<br>
 
  x <- 0, S <- P<sub>0</sub><br>
 
  while u > S<br>
 
        x <- x + 1<br>
 
        S <- S + P<sub>x</sub><br>
 
  Return x<br></font>
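A Matlab sketch of this discrete inverse-transform loop (our own illustration; the pmf vector is a made-up example):
<pre style="font-size:16px">
p = [0.3 0.2 0.5];      % assumed pmf: P(X=0), P(X=1), P(X=2)
u = rand;
x = 0; S = p(1);
while u > S
    x = x + 1;
    S = S + p(x+1);     % Matlab indexing starts at 1, so P(X=x) is p(x+1)
end
x                       % the sampled value
</pre>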
 
  
===Decomposition Method===
+
If <math>Z_1,...Z_d</math> ~ N(0,1) and are independent then <math>Z = (Z_1,..Z_d)^{T} </math>~ <math>N(0,I_d)</math>
The CDF, F, is a composition if <math>F_{X}(x)</math> can be written as:
+
ie.
 +
* '''Code'''<br />
 +
<pre style="font-size:16px">
 +
>>close all
 +
>>clear all
 +
>>z1=randn(1,1000);    <-generate variable from standard normal distribution
 +
>>z2=randn(1,1000);
 +
>>z=[z1;z2];          <-produce a vector
 +
>>plot(z(1,:),z(2,:),'.')
 +
</pre>
 +
[[File:Nonstdnormal_example.jpg|300px]]
  
<math>F_{X}(x) = \sum_{i=1}^{n}p_{i}F_{X_{i}}(x)</math> where
+
If Z~N(0,Id) and X= <math>\underline{\mu} +  \Sigma^{\frac{1}{2}} \,Z </math> then <math>\underline{X}</math> ~<math>N(\underline{\mu},\Sigma)</math>
  
1)  p<sub>i</sub> > 0
+
=====Non-Standard Normal Distributions=====
  
2)  <math>\sum_{i=1}^{n}</math>p<sub>i</sub> = 1.
+
'''Example 1: Single-variate Normal'''
  
3)   <math>F_{X_{i}}(x)</math> is a CDF
+
If X ~ Norm(0, 1) then (a + bX) has a normal distribution with a mean of <math>\displaystyle a</math> and a standard deviation of <math>\displaystyle b</math> (which is equivalent to a variance of <math>\displaystyle b^2</math>).  Using this information with the Box-Muller transform, we can generate values sampled from some random variable <math>\displaystyle Y\sim N(a,b^2) </math> for arbitrary values of <math>\displaystyle a,b</math>.
  
The general algorithm to generate random variables from a composition CDF is:
+
# Generate a sample u from Norm(0, 1) using the Box-Muller transform.
 +
# Set v = a + bu.
  
1)  Generate U, V ~ <math>U(0,1)</math>
+
The values for v generated in this way will be equivalent to sample from a <math>\displaystyle N(a, b^2)</math>distribution.  We can modify the MatLab code used in the last section to demonstrate this.  We just need to add one line before we generate the histogram:
  
2)  If u < p<sub>1</sub>, x = <math>F_{X_{1}}^{-1}(v)</math>
+
<pre style='font-size:16px'>
 +
v = a + b * x;
 +
</pre>
  
3)  Else if u < p<sub>1</sub>+p<sub>2</sub>, x = <math>F_{X_{2}}^{-1}(v)</math>
+
For instance, this is the histogram generated when b = 15, a = 125:
  
4)  ....
+
[[File:Hist normal.jpg|center|500]]
  
<b>Explanation</b><br>
+
'''Example 2: Multi-variate Normal'''
Each random variable that is a part of X contributes <math>p_{i}*F_{X_{i}}(x)</math> to <math>F_{X}(x)</math> every time.
 
From a sampling point of view, that is equivalent to contributing <math>F_{X_{i}}(x)</math> <math>p_{i}</math> of the time. The logic of this is similar to that of the Accept-Reject Method, but instead of rejecting a value depending on the value u takes, we instead decide which distribution to sample it from.
 
  
=== Examples of Decomposition Method ===
+
The Box-Muller method can be extended to higher dimensions to generate multivariate normals. The objects generated will be nx1 vectors, and their variance will be described by nxn covariance matrices.
<b>Example 1</b> <br>
 
f(x) = 5/12(1+(x-1)<sup>4</sup>)  0<=x<=2 <br>
 
f(x) = 5/12 + 5/12(x-1)<sup>4</sup> = 5/6*(1/2) + 1/6*(5/2)(x-1)<sup>4</sup> <br>
 
Let f<sub>x1</sub> = 1/2 and f<sub>x2</sub> = 5/2(x-1)<sup>4</sup> <br>
 
  
Algorithm:
+
<math>\mathbf{z} = N(\mathbf{u}, \Sigma)</math> defines the n by 1 vector <math>\mathbf{z}</math> such that:
Generate U~Unif(0,1) <br>
 
If 0<u<5/6, then we sample from f<sub>x1</sub> <br>
 
Else if 5/6<u<1, we sample from f<sub>x2</sub> <br>
 
We can find the inverse CDF of f<sub>x2</sub> and utilize the Inverse Transform Method in order to sample from f<sub>x2</sub> <br>
 
Sampling from f<sub>x1</sub> is more straightforward since it is uniform over the interval (0,2) <br>
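A hedged Matlab sketch of this decomposition (our own illustration; the inverse CDF of f<sub>x2</sub>, namely x = 1+(2v-1)<sup>1/5</sup>, is worked out here and is not from the notes):
<pre style="font-size:16px">
N = 10000; x = zeros(1,N);
for k = 1:N
    u = rand; v = rand;
    if u < 5/6
        x(k) = 2*v;                                   % f_x1: Unif(0,2)
    else
        x(k) = 1 + sign(2*v-1)*abs(2*v-1)^(1/5);      % inverse CDF of f_x2 = (5/2)(x-1)^4
    end
end
hist(x, 50)
</pre>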
 
  
divided f(x) to two pdf of x1 and x2, with uniform distribution, of two range of uniform.
+
* <math>\displaystyle u_i</math> is the average of <math>\displaystyle z_i</math>
 +
* <math>\!\Sigma_{ii}</math> is the variance of <math>\displaystyle z_i</math>
 +
* <math>\!\Sigma_{ij}</math> is the co-variance of <math>\displaystyle z_i</math> and <math>\displaystyle z_j</math>
  
<b>Example 2</b> <br>
+
If <math>\displaystyle z_1, z_2, ..., z_d</math> are normal variables with mean 0 and variance 1, then the vector <math>\displaystyle (z_1, z_2,..., z_d) </math> has mean 0 and variance <math>\!I</math>, where 0 is the zero vector and <math>\!I</math> is the identity matrix. This fact suggests that the method for generating a multivariate normal is to generate each component individually as single normal variables.
<math>f(x)=\frac{1}{4}e^{-x}+2x+\frac{1}{12} \quad for \quad 0\leq x \leq 3 </math> <br>
 
We can rewrite f(x) as <math>f(x)=(\frac{1}{4})*e^{-x}+(\frac{2}{4})*4x+(\frac{1}{4})*\frac{1}{3}</math> <br>
 
Let f<sub>x1</sub> = <math>e^{-x}</math>, f<sub>x2</sub> = 4x, and f<sub>x3</sub> = <math>\frac{1}{3}</math> <br>
 
Generate U~Unif(0,1)<br>
 
If <math>0<u<\frac{1}{4}</math>, we sample from f<sub>x1</sub> <br><br>
 
If <math>\frac{1}{4}\leq u < \frac{3}{4}</math>, we sample from f<sub>x2</sub> <br><br>
 
Else if <math>\frac{3}{4} \leq u < 1</math>, we sample from f<sub>x3</sub> <br>
 
We can find the inverse CDFs of f<sub>x1</sub> and f<sub>x2</sub> and utilize the Inverse Transform Method in order to sample from f<sub>x1</sub> and f<sub>x2</sub> <br><br>
 
We find F<sub>x1</sub> = <math> 1-e^{-x}</math> and F<sub>x2</sub> = <math>2x^{2}</math> <br>
 
We find the inverses are <math> X = -ln(1-u)</math> for F<sub>x1</sub> and <math> X = \sqrt{\frac{U}{2}}</math> for F<sub>x2</sub> <br>
 
Sampling from f<sub>x3</sub> is more straightforward since it is uniform over the interval (0,3) <br>
 
  
In general, to write an <b>efficient </b> algorithm for: <br>
+
The mean and the covariance matrix of a multivariate normal distribution can be adjusted in ways analogous to the single variable case. If <math>\mathbf{z} \sim N(0,I)</math>, then <math>\Sigma^{1/2}\mathbf{z}+\mu \sim N(\mu,\Sigma)</math>. Note here that the covariance matrix is symmetric and nonnegative, so its square root should always exist.
<math>F_{X}(x) = p_{1}F_{X_{1}}(x) + p_{2}F_{X_{2}}(x) + ... + p_{n}F_{X_{n}}(x)</math> <br>
 
We would first rearrange <math> {p_i} </math> such that <math> p_i > p_j </math> for <math> i < j </math> <br> <br>
 
Then Generate <math> U</math>~<math>Unif(0,1) </math> <br>
 
If <math> u < p_1 </math> sample from <math> f_1 </math> <br>
 
else if <math> u<p_i </math> sample from <math> f_i </math> for <math> 1<i < n </math><br>
 
else sample from <math> f_n </math> <br>
 
  
To summarize: we divide the pdf into the components f(x<sub>1</sub>), f(x<sub>2</sub>) and f(x<sub>3</sub>) over their ranges, select one component using U~U(0,1), and invert the corresponding CDF.
+
We can compute <math>\mathbf{z}</math> in the following way:
  
=== Example of Decomposition Method ===
+
# Generate an n by 1 vector <math>\mathbf{x} = \begin{bmatrix}x_{1} & x_{2} & ... & x_{n}\end{bmatrix}</math> where <math>x_{i}</math> ~ Norm(0, 1) using the Box-Muller transform.
 +
# Calculate <math>\!\Sigma^{1/2}</math> using singular value decomposition.
 +
# Set <math>\mathbf{z} = \Sigma^{1/2} \mathbf{x} + \mathbf{u}</math>.
  
F<sub>x</sub>(x) = 1/3*x+1/3*x<sup>2</sup>+1/3*x<sup>3</sup>, 0<= x<=1
+
The following MatLab code provides an example, where a scatter plot of 10000 random points is generated.  In this case x and y have a co-variance of 0.9 - a very strong positive correlation.
  
let U =F<sub>x</sub>(x) = 1/3*x+1/3*x<sup>2</sup>+1/3*x<sup>3</sup>, solve for x.
+
<pre style='font-size:16px'>
 +
x = zeros(10000, 1);
 +
y = zeros(10000, 1);
 +
for ii = 1:10000
 +
    u1 = rand;
 +
    u2 = rand;
 +
    R2 = -2 * log(u1);
 +
    theta = 2 * pi * u2;
 +
    x(ii) = sqrt(R2) * cos(theta);
 +
    y(ii) = sqrt(R2) * sin(theta);
 +
end
  
P<sub>1</sub>=1/3, F<sub>x1</sub>(x)= x, P<sub>2</sub>=1/3,F<sub>x2</sub>(x)= x<sup>2</sup>,
+
E = [1, 0.9; 0.9, 1];
P<sub>3</sub>=1/3,F<sub>x3</sub>(x)= x<sup>3</sup>
+
[u s v] = svd(E);
 +
root_E = u * (s ^ (1 / 2)) * u';
  
'''Algorithm:'''
+
z = (root_E * [x y]');
 +
z(1,:) = z(1,:) + 0;
 +
z(2,:) = z(2,:) + -3;
  
Generate U ~ Unif [0,1)
+
scatter(z(1,:), z(2,:))
 +
</pre>
  
Generate V~ Unif [0,1)
+
Note: The svd command computes the matrix singular value decomposition.
  
if 0<u<1/3, x = v
+
[u,s,v] = svd(E) produces a diagonal matrix s of the same dimension as E, with nonnegative diagonal elements in decreasing order, and unitary matrices u and v so that E = u*s*v'.
  
else if u<2/3, x = v<sup>1/2</sup>
+
This code generated the following scatter plot:
  
else x = v<sup>1/3</sup><br>
+
[[File:scatter covar.jpg|center|500px]]
  
 +
In Matlab, we can also use the function "sqrtm()" or "chol()" (Cholesky Decomposition) to calculate the square root of a matrix directly. Note that the resulting root matrices may be different, but this does not materially affect the simulation: any matrix R with R*R' = E works, and since chol() returns an upper-triangular R with R'*R = E, its transpose plays the same role.
 +
Here is an example:
  
'''Matlab Code:'''
+
<pre style='font-size:16px'>
<pre style="font-size:16px">
+
E = [1, 0.9; 0.9, 1];
u=rand
+
r1 = sqrtm(E);
v=rand
+
r2 = chol(E);
if u<1/3
+
</pre>
x=v
 
elseif u<2/3
 
x=sqrt(v)
 
else
 
x=v^(1/3)
 
end
 
</pre>
 
===Fundamental Theorem of Simulation===
 
Consider two shapes, A and B, where B is a sub-shape (subset) of A.
 
We want to sample uniformly from inside the shape B.
 
Then we can sample uniformly inside of A, and throw away all samples outside of B, and this will leave us with a uniform sample from within B.
 
(Basis of the Accept-Reject algorithm)
 
  
The advantage of this method is that we can sample from an unknown distribution using an easy distribution. The disadvantage of this method is that it may need to reject many points, which is inefficient.
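A tiny Matlab illustration of this idea (our own addition): sample uniformly from the unit disk B by rejecting points of the enclosing square A:
<pre style="font-size:16px">
N = 10000;
xy = 2*rand(2, N) - 1;               % uniform on the square [-1,1]^2 (shape A)
keep = sum(xy.^2) <= 1;              % keep only points inside the unit circle (shape B)
plot(xy(1,keep), xy(2,keep), '.')    % a uniform sample from the disk
axis square
</pre>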
+
R code for a multivariate normal distribution:
  
In other words, we invert each component CDF separately; which component to invert is chosen with its probability p<sub>i</sub>, and each component covers part of the range.
+
<pre style='font-size:16px'>
 +
n=10000;
 +
r2<--2*log(runif(n));
 +
theta<-2*pi*(runif(n));
 +
x<-sqrt(r2)*cos(theta);
  
=== Practice Example from Lecture 7 ===
+
y<-sqrt(r2)*sin(theta);
 +
a<-matrix(c(x,y),nrow=n,byrow=F);
 +
e<-matrix(c(1,.9,.9,1),nrow=2,byrow=T);
 +
svde<-svd(e);
 +
root_e<-svde$u %*% diag(sqrt(svde$d)) %*% t(svde$u);
 +
z<-t(root_e %*%t(a));
 +
z[,1]=z[,1]+5;
 +
z[,2]=z[,2]+ -8;
 +
par(pch=19);
 +
plot(z,col=rgb(1,0,0,alpha=0.06))
 +
</pre>
  
Let X1, X2 denote the lifetime of 2 independent particles, X1~exp(<math>\lambda_{1}</math>), X2~exp(<math>\lambda_{2}</math>)
+
[[File:m_normal.png|center|500px]]
  
We are interested in Y = min(X1, X2)
+
=== Bernoulli Distribution ===
 +
The Bernoulli distribution is a discrete probability distribution, which usually describes an event that only has two possible results, i.e. success or failure (x=0 or 1). If the event succeed, we usually take value 1 with success probability p, and take value 0 with failure probability q = 1 - p.
  
Design an algorithm based on the Inverse Method to generate Y
+
P ( x = 0) = q = 1 - p <br />
 +
P ( x = 1) = p  <br />
 +
P ( x = 0) + P (x = 1) = p + q = 1 <br />
  
<math>f_{x_{1}}(x)=\lambda_{1} e^{(-\lambda_{1}x)},x\geq0 \Rightarrow F(x1)=1-e^{(-\lambda_{1}x)}</math><br />
+
If X~Ber(p), its pdf is of the form <math>f(x)= p^{x}(1-p)^{(1-x)}</math>, x=0,1
<math>f_{x_{2}}(x)=\lambda_{2} e^{(-\lambda_{2}x)},x\geq0 \Rightarrow F(x2)=1-e^{(-\lambda_{2}x)}</math><br />
+
<br> P is the success probability.
Then <math>1-F_Y(y)=P(\min(x_{1},x_{2}) \geq y)=e^{-(\lambda_{1}+\lambda_{2})y}, \quad F_Y(y)=1-e^{-(\lambda_{1}+\lambda_{2})y}</math><br />
+
<math>u \sim Unif[0,1),\; u = F(y) \Rightarrow y = -\frac{1}{\lambda_{1}+\lambda_{2}}\log(1-u)</math>
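A hedged Matlab sketch of this algorithm (our own illustration; the rates are made-up example values):
<pre style="font-size:16px">
l1 = 1; l2 = 2;                      % assumed rates lambda_1, lambda_2
u = rand(1,10000);
y = -log(1-u)/(l1+l2);               % Y = min(X1,X2) ~ Exp(lambda_1 + lambda_2)
hist(y, 50)
</pre>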
+
The Bernoulli distribution is a special case of binomial distribution, where the variate x only has two outcomes; so that the Bernoulli also can use the probability density function of the binomial distribution with the variate x taking values 0 and 1.
  
===Question 2===
+
The most famous example for the Bernoulli Distribution would be the "Flip Coin" question, which has only two possible outcomes(Success or Failure) with the same probabilities of 0.5
  
Use Acceptance and Rejection Method to sample from <math>f_X(x)=b*x^n*(1-x)^n</math> , <math>n>0</math>, <math>0<x<1</math>
+
Let x1,x2 denote the lifetime of 2 independent particles, x1~exp(<math>\lambda</math>), x2~exp(<math>\lambda</math>)
 +
we are interested in y=min(x1,x2)
  
Solution:
+
<pre style="font-size:16px">
This is a Beta distribution; b is the normalizing constant, chosen so that <math>\int _{0}^{1}b\,x^{n}(1-x)^{n}\,dx=1</math>
 
  
U<sub>1</sub>~Unif[0,1)
+
Procedure:
  
 +
To simulate the event of flipping a coin, let P be the probability of flipping head and X = 1 and 0 represent
 +
flipping head and tail respectively:
  
U<sub>2</sub>~Unif[0,1)
+
1. Draw U ~ Uniform(0,1)
  
fx=<math> bx^{1/2}(1-x)^{1/2} <= bx^{-1/2}\sqrt2  ,0<=x<=1/2 </math>
+
2. If U <= P
  
 +
  X = 1
  
 +
  Else
  
The integrand <math>x^n(1-x)^n</math> is maximized at x = 0.5, where it takes the value <math>(1/4)^n</math>.
+
  X = 0
So, <math>c=b*(1/4)^n</math>
 
Algorithm:
 
1. Draw <math>U_1</math> from <math>U(0, 1)</math> and <math>U_2</math> from <math>U(0, 1)</math>

2. If <math>U_2 \leq \frac{b\,U_1^n(1-U_1)^n}{b(1/4)^n}=\left(4U_1(1-U_1)\right)^n</math>

  then X=U<sub>1</sub>

  Else return to step 1.
 
  
Discrete Case:
+
3. Repeat as necessary
Most discrete random variables do not have a closed form inverse CDF. Also, its CDF <math>F:X \rightarrow [0,1]</math> is not necessarily onto. This means that not every point in the interval <math> [0,1] </math> has a preimage in the support set of X through the CDF function.<br />
 
  
Let <math>X</math> be a discrete random variable where <math>a \leq X \leq b</math> and <math>a,b \in \mathbb{Z}</math> . <br>
+
</pre>
To sample from <math>X</math>, we use the partition method below: <br>
 
  
<math>\, \text{Step 1: Generate u from } U \sim Unif[0,1]</math><br>
+
An intuitive way to think of this is in the coin flip example we discussed in a previous lecture. In this example we set p = 1/2 and this allows for 50% of points to be heads or tails.
<math>\, \text{Step 2: Set } x=a, s=P(X=a)</math><br />
 
<math>\, \text{Step 3: While } u>s, x=x+1, s=s+P(X=x)</math> <br />
 
<math>\, \text{Step 4: Return } x</math><br />
 
  
==Class 8 - Thursday, May 30, 2013==
+
* '''Code to Generate Bernoulli(p = 0.3)'''<br />
 +
<pre style="font-size:16px">
 +
i = 1;
  
In this lecture, we will discuss algorithms to generate 3 well-known distributions: Binomial, Geometric and Poisson. For each of these distributions, we will first state its general understanding, probability mass function, expectation and variance. Then, we will derive one or more algorithms to sample from each of these distributions, and implement the algorithms on Matlab. <br \>
+
while (i <=1000)
 +
    u =rand();
 +
    p = 0.3;
 +
    if (u <= p)
 +
        x(i) = 1;
 +
    else
 +
        x(i) = 0;
 +
    end
 +
    i = i + 1;
 +
end
  
'''Bernoulli distribution'''
+
hist(x)
 +
</pre>
  
The Bernoulli distribution is a special case of the binomial distribution, where n = 1. X ~ B(1, p) has the same meaning as X ~ Bern(p). B(n, p), is the distribution of the sum of n independent Bernoulli trials, Bern(p), each with the same probability p.  
+
However, we know that if <math>\begin{align} X_i \sim Bernoulli(p) \end{align}</math> where each <math>\begin{align} X_i \end{align}</math> is independent,<br />
 
+
<math>U = \sum_{i=1}^{n} X_i \sim Binomial(n,p)</math><br />
Algorithm:
+
So we can sample from binomial distribution using this property.
 +
Note: We can consider Binomial distribution as the sum of n, ''independent'', Bernoulli distributions
 +
<div style="background:#CCFF33;border-radius:5px;box-shadow: 10px 10px 5px #888888;padding:30px;">
 +
* '''Code to Generate Binomial(n = 20,p = 0.7)'''<br />
 +
<pre style="font-size:16px">
 +
p = 0.7;
 +
n = 20;
  
1. Generate u~Unif(0,1) <br>
+
for k=1:5000
2. If u <= p, then x = 1 <br>
+
    i = 1;
Else x = 0  
+
    for i=1:n
 +
        u=rand();
 +
        if (u <= p)
 +
            y(i) = 1;
 +
        else
 +
            y(i) = 0;
 +
        end
 +
    end
  
===The Binomial Distribution===
+
    x(k) = sum(y==1);
 +
end
  
If X~Bin(n,p), then its pmf is of form:
+
hist(x)
f(x)=(nCx) p<sup>x</sup>(1-p)<sup>(n-x)</sup>, x=0,1,...n <br />
 
Or f(x) = <math>(n!/x!(n-x)!)</math> p<sup>x</sup>(1-p)<sup>(n-x)</sup>, x=0,1,...n <br />
 
  
mean (x) = E(x) = np; variance = np(1-p)
+
</pre>
  
Generate n uniform random numbers <math>U_1,...,U_n</math> and let X be the number of <math>U_i</math> that are less than or equal to p.
 
The logic behind this algorithm is that the Binomial Distribution is simply a Bernoulli Trial, with a probability of success of p, repeated n times. Thus, we can sample from the distribution by sampling from n Bernoulli trials; the sum of these n Bernoulli trials represents one binomial sample. In the below example, we are sampling 1000 realizations from 20 Bernoulli random variables. By summing up the rows of the 20 by 1000 matrix that is produced, we are summing up the 20 Bernoulli outcomes to produce one binomial sample. We have 1000 columns, which means we have realizations from 1000 binomial random variables when this sum is done (the output of the sum is a 1 by 1000 sized vector).<br />
 
MATLAB tips: to generate Binomial random numbers, we can use binornd(N,P), where N is the number of trials and P is the probability of success. Also, for a=[2 3 4], the test a<3 will produce [1 0 0], and "a == 3" will produce [0 1 0]. So we can use such comparisons to count the entries that are less than or equal to p.<br />
 
  
Procedure for Bernoulli
 
U~Unif(0,1)
 
if U <= p
 
x = 1
 
else
 
x = 0
 
  
'''Code'''<br>
 
<pre style="font-size:16px">
 
>>a=[3 5 8];
 
>>a<5
 
ans= 1 0 0
 
  
>>rand(20,1000)
+
</div>
>>rand(20,1000)<0.4
+
Note: We can also regard the Bernoulli Distribution as either a conditional distribution or <math>f(x)= p^{x}(1-p)^{(1-x)}</math>, x=0,1.
>>A = sum(rand(20,1000)<0.4)
 
>>hist(A)
 
>>mean(A)
 
Note: sum(...) sums the matrix by column by default; writing sum(...,`1`) makes this explicit
 
  
>>sum(sum(rand(20,1000)<0.4)>8)/1000
+
Comments on Matlab:
This is an estimate of Pr[X>8].
+
When doing operations on vectors, always put a dot before the operator if you want the operation to be done to every element in the vector.  
 +
example: Let V be a vector with dimension 2*4 and you want each element multiply by 3.
 +
        The  Matlab code is 3.*V
  
</pre>
+
some examples for using code to generate distribution.
  
[[File:Binomial_example.jpg|300px]]
===The Geometric Distribution===

The random variable X is the number of trials required until the first success in a series of independent '''Bernoulli trials'''. The first few values of the pmf are:

x=1, f(x)=p <br />
x=2, f(x)=p(1-p) <br />
x=3, f(x)=p(1-p)<sup>2</sup> <br />

Generally speaking, if X~Geo(p) then its pmf is of the form f(x)=(1-p)<sup>(x-1)</sup>p, x=1,2,...<br />

Other properties:

Probability mass function: P(X=k) = p(1-p)<sup>(k-1)</sup>

Tail probability: P(X>n) = (1-p)<sup>n</sup>

<span style="background:#F5F5DC">

Mean: E(X) = 1/p

Variance: Var(X) = (1-p)/p<sup>2</sup>

There are two ways to look at a geometric distribution.

<b>1st Method</b>

We count the number of trials up to and including the first success. This includes the trial on which we succeed, and it is the convention used in our course.

The pmf is of the form f(x)=(1-p)<sup>(x-1)</sup>p, x = 1, 2, 3, ...

<b>2nd Method</b>

We count the number of failures before the first success. This does not include the trial on which we succeed.

The pmf is of the form f(x)=(1-p)<sup>x</sup>p, x = 0, 1, 2, ...

</span>

If Y~Exp(<math>\lambda</math>), then X = floor(Y)+1 is geometric: choose e<sup>-<math>\lambda</math></sup> = 1-p, i.e. <math>\lambda = -log(1-p)</math>; then X ~ Geo(p). <br />

P(X > x) = (1-p)<sup>x</sup> (because the first x trials must all be failures) <br/>

'''Proof''' <br/>

Note that floor(Y) + 1 > x <=> floor(Y) > x-1 <=> Y >= x. Therefore <br/>

P(X>x) = P(floor(Y) + 1 > x) = P(floor(Y) > x-1) = P(Y >= x) = e<sup>-<math>\lambda</math>x</sup> <br>

Since p = 1 - e<sup>-<math>\lambda</math></sup>, i.e. <math>\lambda</math> = <math>-log(1-p)</math>, we get <br>

P(X>x) = e<sup>-<math>\lambda</math>x</sup> = e<sup>log(1-p)x</sup> = (1-p)<sup>x</sup> <br/>

This shows how to use the exponential distribution to obtain P(X>x) = (1-p)<sup>x</sup>. More generally, if X has the exponential distribution with rate parameter <math> \lambda > 0 </math>, then both <math>\left \lfloor X \right \rfloor </math> and <math>\left \lceil X \right \rceil </math> have geometric distributions on <math> \mathcal{N} </math> (differing only in whether the support starts at 0 or at 1).
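The relationship above gives a simple way to sample from Geo(p). A minimal sketch in Matlab (p = 0.3 and the sample size are arbitrary choices for illustration):

* '''Matlab Code'''<br />
<pre style="font-size:16px">
p = 0.3;
lambda = -log(1 - p);       % choose lambda so that e^(-lambda) = 1 - p
u = rand(1, 5000);          % Unif(0,1) draws
y = -log(u) / lambda;       % y ~ Exp(lambda), by the inverse method
x = floor(y) + 1;           % x = floor(y) + 1 ~ Geo(p)
mean(x)                     % should be close to 1/p = 3.3333
hist(x, 1:max(x))
</pre>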
== Class 7 - Tuesday, May 28 ==

''Note that the material in this lecture will not be on the exam; it was only to supplement what we have learned.''

===Universality of the Uniform Distribution/Inverse Method===

The inverse method is universal in the sense that we can potentially sample from any distribution for which we can find the inverse of the cumulative distribution function.

'''Procedure:'''

1) Generate U~Unif(0, 1)<br>
2) Set <math>x=F^{-1}(u)</math><br>
3) Then X~f(x)<br>

'''Remark'''<br>
1) The preceding can be written algorithmically for discrete random variables as <br>
Generate a random number U ~ U(0,1] <br>
If U < p<sub>0</sub> set X = x<sub>0</sub> and stop <br>
If U < p<sub>0</sub> + p<sub>1</sub> set X = x<sub>1</sub> and stop <br>
... <br>
2) If the x<sub>i</sub>, i >= 0, are ordered so that x<sub>0</sub> < x<sub>1</sub> < x<sub>2</sub> < ... and if we let F denote the distribution function of X, then X will equal x<sub>j</sub> if F(x<sub>j-1</sub>) <= U < F(x<sub>j</sub>)
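A minimal sketch of the discrete algorithm in the remark above, assuming a hypothetical example pmf (p<sub>0</sub>, p<sub>1</sub>, p<sub>2</sub>) = (0.2, 0.5, 0.3) on the support points (0, 1, 2):

<pre style="font-size:16px">
pmf  = [0.2 0.5 0.3];       % assumed example pmf p0, p1, p2
xval = [0 1 2];             % corresponding support points x0 < x1 < x2
u = rand();                 % U ~ Unif(0,1)
F = cumsum(pmf);            % partial sums p0, p0+p1, p0+p1+p2
j = find(u < F, 1);         % first index j with u < F(x_j)
X = xval(j)                 % set X = x_j
</pre>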
  
'''Example 1'''<br>

Let <math>X_1, X_2</math> denote the lifetimes of two independent particles:<br>
<math>X_1 \sim exp(\lambda_1)</math><br>
<math>X_2 \sim exp(\lambda_2)</math><br>

We are interested in <math>Y=min(X_1,X_2)</math>.<br>
Design an algorithm based on the Inverse-Transform Method to generate samples according to <math>f_Y(y)</math>.<br>

'''Solution:'''<br>

For an exponential random variable with rate <math>\lambda</math>,<br>
<math>f_X(x)=\lambda e^{-\lambda x},\; x\geq0 </math> <br>
<math>F_X(x)=1-e^{-\lambda x},\; x\geq 0</math><br>

Since the particles are independent,<br>
<math>1-F_Y(y) = P(Y>y) = P(min(X_1,X_2)>y) = \, P(X_1>y)\, P(X_2>y) = e^{\, -(\lambda_1 + \lambda_2) y}</math><br>

<math>F_Y(y)=1-e^{\, -(\lambda_1 + \lambda_2) y},\; y\geq 0</math><br>

Setting <math>U=1-e^{\, -(\lambda_1 + \lambda_2) y}</math> and solving for y gives <math>y=\, {-\frac {1}{{\lambda_1 +\lambda_2}}} \ln(1-u)</math><br>

'''Procedure:'''

Step 1: Generate U~U(0, 1)<br>

Step 2: Set <math>y=\, {-\frac {1}{{\lambda_1 +\lambda_2}}} \ln(1-u)</math>, or equivalently <math>y=\, {-\frac {1} {{\lambda_1 +\lambda_2}}} \ln(u)</math><br>

(Since U~Unif(0,1), 1-U is also Unif(0,1), so using ln(u) in place of ln(1-u) produces samples with the same distribution.)

* '''Matlab Code'''<br />
<pre style="font-size:16px">
>> lambda1 = 1;
>> lambda2 = 2;
>> u = rand;
>> y = -log(u)/(lambda1 + lambda2)
</pre>

If we generalize this example from two independent particles to n independent particles, we will have:<br>

<math>X_1 \sim exp(\lambda_1)</math><br><math>X_2 \sim exp(\lambda_2)</math><br>...<br><math>X_n \sim exp(\lambda_n)</math><br>

and the algorithm using the inverse-transform method is as follows:

Step 1: Generate U~U(0,1)

Step 2: Set <math>y=\, {-\frac {1}{{\sum_{i=1}^{n} \lambda_i}}} \ln(1-u)</math><br>
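A minimal sketch of the n-particle case (the rates below are arbitrary example values):

<pre style="font-size:16px">
lambda = [1 2 0.5 3];             % assumed rates lambda_1, ..., lambda_n
u = rand;
y = -log(1 - u) / sum(lambda)     % one sample of Y = min(X_1, ..., X_n)
</pre>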
'''Example 2'''<br>

Consider U~Unif[0,1) and<br>
<math>X=\, a (1-\sqrt{1-u})</math>,
<br>where a > 0 is a real number.<br>
What is the distribution of X?<br>

'''Solution:'''<br>

We can find the cumulative distribution function of X by isolating u, since U~Unif[0,1) takes values in the range of F(X) uniformly. It then remains to differentiate the resulting expression with respect to x to obtain the probability density function.

<math>x=\, a (1-\sqrt{1-u})</math><br>
=> <math>1-\frac {x}{a}=\sqrt{1-u}</math><br>
=> <math>u=1-(1-\frac {x}{a})^2</math><br>
=> <math>u=\, {\frac {x}{a}} (2-\frac {x}{a})</math><br>

So <math>F_X(x)=\, {\frac {x}{a}} (2-\frac {x}{a})</math> for 0 <= x <= a, and<br>
<math>f(x)=\frac {dF(x)}{dx}=\frac {2}{a}-\frac {2x}{a^2}=\, \frac {2}{a} (1-\frac {x}{a})</math>, 0 <= x <= a<br>

[[File:Example_2_diagram.jpg]]
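A minimal sketch checking Example 2 by simulation, with a = 2 as an arbitrary example value:

<pre style="font-size:16px">
a = 2;
u = rand(1, 10000);
x = a * (1 - sqrt(1 - u));    % samples of X via the given transformation
hist(x, 30)                   % the histogram should fall off linearly and vanish at x = a
</pre>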
'''Example 3'''<br>

Suppose F<sub>X</sub>(x) = x<sup>n</sup>, 0 ≤ x ≤ 1, for an integer n > 0. Generate values from X.<br>

'''Solution:'''<br>

1. Generate <math>U \sim Unif[0, 1)</math><br>