# stat340s13


## Introduction, Class 1 - Tuesday, May 7

### Course Instructor: Ali Ghodsi

Lecture:
- 001: T/Th 8:30-9:50am, MC 1085
- 002: T/Th 1:00-2:20pm, DC 1351

Tutorial:
- Monday 2:30-3:20pm, M3 1006

Office Hours:
- Friday 10:00am, M3 4208

### Midterm

Monday, June 17, 2013, 2:30-3:20pm

### Final

Saturday, August 10, 2013, 7:30-10:00pm

### TA(s):

| TA | Day | Time | Location |
|---|---|---|---|
| Lu Cheng | Monday | 3:30-5:30 pm | M3 3108, space 2 |
| Han ShengSun | Tuesday | 4:00-6:00 pm | M3 3108, space 2 |
| Yizhou Fang | Wednesday | 1:00-3:00 pm | M3 3108, space 1 |
| Huan Cheng | Thursday | 3:00-5:00 pm | M3 3111, space 1 |
| Wu Lin | Friday | 11:00-1:00 pm | M3 3108, space 1 |

### Four Fundamental Problems

1. Classification: Given an input object X, we have a function that takes X and identifies which class Y it belongs to (the discrete case); i.e., from a value of x we predict y. (For example, given 40 images of oranges and 60 images of apples, represented by x, we can estimate a function that takes an image and states which type of fruit it is; note that Y is discrete in this case.)
2. Regression: Same as classification, except Y is continuous rather than discrete (e.g., stock prices). (A simple exercise might be investigating the hypothesis that higher levels of education lead to higher levels of income.)
3. Clustering: Use common features of objects to group them into clusters (here x is given, but y is unknown).
4. Dimensionality Reduction (a.k.a. feature extraction, manifold learning): Used when we have variables in a high-dimensional space and we want to reduce the dimension.

### Applications

Most useful when the structure of the task is not well understood, but can be characterized by a dataset with strong statistical regularity.
Examples:

• Computer Vision, Computer Graphics, Finance (fraud detection), Machine Learning
• Search and recommendation (eg. Google, Amazon)
• Automatic speech recognition, speaker verification
• Text parsing
• Face identification
• Tracking objects in video
• Financial prediction (e.g. credit cards)
• Fraud detection
• Medical diagnosis

### Course Information

Prerequisite: (One of CS 116, 126/124, 134, 136, 138, 145, SYDE 221/322) and (STAT 230 with a grade of at least 60% or STAT 240) and (STAT 231 or 241)

Antirequisite: CM 361/STAT 341, CS 437, 457

General Information

• No required textbook
• Recommended: "Simulation" by Sheldon M. Ross
• Computing parts of the course will be done in Matlab, but prior knowledge of Matlab is not essential (will have a tutorial on it)
• First midterm will be held on Monday, June 17 from 2:30 to 3:20pm
• Announcements and assignments will be posted on Learn.
• Other course material on: http://wikicoursenote.com/wiki/
• Log on to both Learn and wikicoursenote frequently.
• Email all questions and concerns to UWStat340@gmail.com. Do not use your personal email address! Do not email instructor or TAs about the class directly to their personal accounts!

Wikicourse note (10% of final mark): When applying for an account in the wikicourse note, please use the quest account as your login name while the uwaterloo email as the registered email. This is important as the quest id will be used to identify the students who make the contributions. Example:
User: questid
Email: questid@uwaterloo.ca
After requesting an account, wait several hours before logging in with the password given in the email. During the first login, students will be asked to create a new password for their account.

As a technical/editorial contributor: Make contributions within 1 week and do not copy the notes on the blackboard.

All contributions are now considered general contributions; you must contribute to 50% of lectures for full marks.

• A general contribution can be correctional (fixing mistakes) or technical (expanding content, adding examples, etc.) but at least half of your contributions should be technical for full marks.

Do not submit copyrighted work without permission; cite original sources. Each time you make a contribution, check-mark the table. Marks are calculated on an honour system, although there will be random verifications. If you are caught claiming a contribution you did not make, you will not be credited.

Wikicoursenote contribution form : https://docs.google.com/forms/d/1Sgq0uDztDvtcS5JoBMtWziwH96DrBz2JiURvHPNd-xs/viewform

- you can submit your contributions multiple times.
- you will be able to edit the response right after submitting
- send an email to uwstat340@gmail.com to make changes to an old response

### Tentative Topics

- Random variable and stochastic process generation
- Discrete-Event Systems
- Variance reduction
- Markov Chain Monte Carlo

### Tentative Marking Scheme

| Item | Value |
|---|---|
| Assignments (~6) | 30% |
| WikiCourseNote | 10% |
| Midterm | 20% |
| Final | 40% |

The final exam will be closed book, and only non-programmable calculators are allowed.
A passing mark on the final must be achieved to pass the course.

## Class 2 - Thursday, May 9

### Generating Random Numbers

#### Introduction

Simulation is the imitation of a process or system over time. Computational power has made it possible to use simulation studies to analyze models that describe a situation.

In order to perform a simulation study, we must:
1. Use a computer to generate (pseudo) random numbers.
2. Use these numbers to generate values of random variables from distributions.
3. Using the concept of discrete events, show how the random variables can be used to generate the behavior of a stochastic model over time. (Note: a stochastic model, as opposed to a deterministic model, is one where the process can evolve in several directions.)
4. After repeatedly generating the behavior of the system, obtain estimators and other quantities of interest.

The building block of a simulation study is the ability to generate a random number. This random number is a value from a random variable distributed uniformly on (0,1). There are many different methods of generating a random number:

Physical methods: roulette wheel, lottery balls, dice rolling, card shuffling, etc.
Numerical/arithmetic methods: use a computer to successively generate pseudorandom numbers. The sequence of numbers can appear random; however, it is deterministically calculated by an equation, which is what "pseudorandom" means.


(Source: Ross, Sheldon M. Simulation. San Diego: Academic, 1997. Print.)

In general, a deterministic model produces specific results given certain inputs by the model user, contrasting with a stochastic model which encapsulates randomness and probabilistic events.
A computer cannot generate truly random numbers because computers can only run algorithms, which are deterministic in nature. They can, however, generate Pseudo Random Numbers

Pseudorandom numbers are numbers that appear random but are actually deterministic. Although deterministic, the sequence of values has the appearance of independent uniform random variables. Being deterministic, pseudorandom numbers are valuable because they are easy to generate and to manipulate.

If the generator is run again from the same seed, it produces exactly the same values, which is what makes it deterministic; within a single run, however, each value looks random. This is the sense in which the numbers are pseudorandom.

#### Mod

Let $n \in \N$ and $m \in \N^+$. By the division algorithm, $\exists q, \, r \in \N \;\text{with}\; 0\leq r \lt m, \; \text{s.t.}\; n = mq+r$, where $q$ is called the quotient and $r$ the remainder. Hence we can define a binary function $\mod : \N \times \N^+ \rightarrow \N$ given by $r:=n \mod m$, which returns the remainder after division by $m$.
Generally, mod means taking the remainder after division by $m$.
We say that $n$ is congruent to $r$ mod $m$ if $n = mq + r$ for some integer $q$.
If $y = ax + b$ for integers with $0 \leq b \lt a$, then $b = y \mod a$.

For example:
30 = 4 * 7 + 2, so 30 mod 7 = 2

25 = 8 * 3 + 1, so 25 mod 3 = 1

Note: $\mod$ here is different from the modulo congruence relation in $\Z_m$, which is an equivalence relation instead of a function.

The modulo operation is useful for determining whether one integer divides another: writing $n = mq + r$ with $0 \leq r \lt m$, the remainder $r$ is zero exactly when $m$ divides $n$.
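The relationship $n = mq + r$ can be checked directly in code. The course uses Matlab, but here is a quick sketch in Python, whose `%` and `divmod` operations make the idea compact:

```python
# For non-negative n and positive m, Python's % returns the remainder r
# in n = m*q + r with 0 <= r < m, matching the definition above.
n, m = 30, 7
q, r = divmod(n, m)   # quotient and remainder at once
assert n == m * q + r and 0 <= r < m
print(q, r)        # 4 2
print(30 % 7)      # 2
print(25 % 3)      # 1
```

(Matlab's equivalent is `mod(30,7)`.)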

#### Mixed Congruential Algorithm

We define the Linear Congruential Method by $x_{k+1}=(ax_k + b) \mod m$, where $x_k, a, b, m \in \N, \;\text{with}\; a, m \neq 0$. Given a seed (i.e. an initial value $x_0 \in \N$), we can obtain values for $x_1, \, x_2, \, \cdots, x_n$ inductively. The Multiplicative Congruential Method, invented by Berkeley professor D. H. Lehmer, refers to the special case where $b=0$, and the Mixed Congruential Method is the case where $b \neq 0$.

An interesting fact about the Linear Congruential Method is that it is one of the oldest and best-known pseudorandom number generator algorithms. It is very fast and requires minimal memory to retain state. However, it should not be used for applications that require high-quality randomness, such as Monte Carlo simulation and cryptography. (Monte Carlo simulation explores the possible outcomes over many random draws, including extreme ones; a low-quality generator is not precise enough for this.)

First consider the following algorithm
$x_{k+1}=x_{k} \mod m$

Example
$\text{Let }x_{0}=10,\,m=3$

\begin{align} x_{1} &{}= 10 &{}\mod{3} = 1 \\ x_{2} &{}= 1 &{}\mod{3} = 1 \\ x_{3} &{}= 1 &{}\mod{3} =1 \\ \end{align}

$\ldots$

Excluding $x_{0}$, this example generates a series of ones. In general, excluding $x_{0}$, the algorithm above will always generate a series of the same number less than $m$; hence it has a period of 1. The period is the length of a sequence before it repeats. We want a large period with a sequence that looks random. We can modify this algorithm to form the mixed congruential algorithm.

$x_{k+1}=(a \cdot x_{k} + b) \mod m$ (a useful identity: $(a \cdot b)\mod c = \big((a\mod c)\cdot(b\mod c)\big)\mod c$)

Example
$\text{Let }a=2,\, b=1, \, m=3, \, x_{0} = 10$
\begin{align} x_{1} &{}= (2\cdot 10 + 1) \mod 3 = 0 \\ x_{2} &{}= (2\cdot 0 + 1) \mod 3 = 1 \\ x_{3} &{}= (2\cdot 1 + 1) \mod 3 = 0 \\ \end{align}
$\ldots$

This example generates a sequence with a repeating cycle of two integers.
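The recursion is easy to experiment with in code. A minimal Python sketch (the helper name `lcg` is ours) reproduces the period-2 cycle of this example:

```python
def lcg(a, b, m, x0, n):
    """Return the first n values of x_{k+1} = (a*x_k + b) mod m from seed x0."""
    xs, x = [], x0
    for _ in range(n):
        x = (a * x + b) % m
        xs.append(x)
    return xs

# a=2, b=1, m=3, x0=10 as in the example: the sequence cycles 0, 1, 0, 1, ...
print(lcg(2, 1, 3, 10, 6))  # [0, 1, 0, 1, 0, 1]
```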

If we choose the parameters properly, we get a sequence of "random"-looking numbers. How do we choose $a,b,$ and $m$? At the very least, $m$ should be large and preferably prime; the larger $m$ is, the longer the period can be. In Matlab, the command rand() generates random numbers which are uniformly distributed on the interval (0,1). Matlab uses $a=7^5, b=0, m=2^{31}-1$, the values recommended in a 1988 paper, "Random Number Generators: Good Ones Are Hard To Find" by Stephen K. Park and Keith W. Miller (the important part is that $m$ should be large and prime).

Note: $\frac {x_{n+1}}{m-1}$ is an approximation to the value of a U(0,1) random variable.

MatLab Instruction for Multiplicative Congruential Algorithm:
Before you start, you need to clear all existing defined variables and operations:

>>clear all
>>close all

>>a=17
>>b=3
>>m=31
>>x=5
>>mod(a*x+b,m)
ans=26
>>x=mod(a*x+b,m)


Notes:
1. Keep repeating this command over and over and you will get random numbers; this is essentially how the command rand works in a computer.
2. There is a built-in MATLAB function rand to generate a random number between 0 and 1. For example, rand(1,1000) generates 1000 numbers between 0 and 1, i.e. a vector with 1 row and 1000 columns, each entry a random number between 0 and 1.
3. To generate 1000 or more numbers, we can use a for loop.

Notes on MATLAB commands:
1. clear all: clears all variables.
2. close all: closes all figures.
3. who: displays all defined variables.
4. clc: clears the screen.
5. ; : suppresses printing of results.

>>a=13
>>b=0
>>m=31
>>x(1)=1
>>for ii=2:1000
x(ii)=mod(a*x(ii-1)+b,m);
end
>>size(x)
ans=1    1000
>>hist(x)


(Note: The semicolon after the x(ii)=mod(a*x(ii-1)+b,m) ensures that Matlab will not print the entire vector of x. It will instead calculate it internally and you will be able to work with it. Adding the semicolon to the end of this line reduces the run time significantly.)

This algorithm involves three integer parameters $a, b,$ and $m$, and an initial value $x_0$ called the seed. A sequence of numbers is defined by $x_{k+1} = (ax_k+ b) \mod m$.

Note: For some bad $a$ and $b$, the histogram may not look uniformly distributed.

Note: In MATLAB, hist(x) will generate a graph representing the distribution. Use this function after you run the code to check the real sample distribution.

Example: $a=13, b=0, m=31$
The first 30 numbers in the sequence are a permutation of the integers from 1 to 30, after which the sequence repeats itself, so it is important to choose $m$ large to prevent numbers from repeating too early. Values are between $0$ and $m-1$. If the values are normalized by dividing by $m-1$, the results are approximately numbers uniformly distributed in the interval [0,1]. There is only a finite number of values (30 possible values in this case). In MATLAB, you can use the function hist(x) to check whether the output looks uniformly distributed; the values from 1 to 30 all had the same frequency in the histogram, so they appear uniformly distributed.
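The permutation claim can be verified directly in code; here is a small Python check (the helper name `lcg` is ours):

```python
def lcg(a, b, m, x0, n):
    """Return the first n values of x_{k+1} = (a*x_k + b) mod m from seed x0."""
    xs, x = [], x0
    for _ in range(n):
        x = (a * x + b) % m
        xs.append(x)
    return xs

# a=13, b=0, m=31, x0=1: the first 30 values are a permutation of 1..30,
# i.e. the generator attains the maximal period m-1 on the nonzero residues.
seq = lcg(13, 0, 31, 1, 30)
assert sorted(seq) == list(range(1, 31))
print(seq[:3])  # [13, 14, 27]
```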

If $x_0=1$, then

$x_{k+1} = 13x_{k}\mod{31}$

So,

\begin{align} x_{0} &{}= 1 \\ x_{1} &{}= 13 \times 1 + 0 &{}\mod{31} = 13 \\ x_{2} &{}= 13 \times 13 + 0 &{}\mod{31} = 14 \\ x_{3} &{}= 13 \times 14 + 0 &{}\mod{31} =27 \\ \end{align}

etc.

For example, with $a = 3, b = 2, m = 4, x_0 = 1$, we have:

$x_{k+1} = (3x_{k} + 2)\mod{4}$

So,

\begin{align} x_{0} &{}= 1 \\ x_{1} &{}= 3 \times 1 + 2 \mod{4} = 1 \\ x_{2} &{}= 3 \times 1 + 2 \mod{4} = 1 \\ \end{align}

etc.

FAQ:

1. Why is it 1 to 30 instead of 0 to 30 in the example above?
$b = 0$, so in order to have $x_k$ equal to 0, $x_{k-1}$ must be 0 (since $a=13$ is relatively prime to 31). However, the seed is 1, so we will never observe 0 in the sequence.
Alternatively, $\{0\}$ and $\{1,2,\ldots,30\}$ are the two orbits of left multiplication by 13 in the group $\Z_{31}$.
2. Will the number 31 ever appear? Is there a probability that a number never appears?
The number 31 will never appear: when you perform the operation $\mod m$, the largest possible result is $m-1$. Whether a particular number in the range 0 to $m-1$ appears depends on the values chosen for $a, b$, and $m$.

Example (from textbook):
If $x_0=3$ and $x_n=(5x_{n-1}+7)\mod 200$, find $x_1,\cdots,x_{10}$.
Solution:
\begin{align} x_1 &{}= (5 \times 3+7) &{}\mod{200} &{}= 22 \\ x_2 &{}= 117 &{}\mod{200} &{}= 117 \\ x_3 &{}= 592 &{}\mod{200} &{}= 192 \\ x_4 &{}= 2967 &{}\mod{200} &{}= 167 \\ x_5 &{}= 14842 &{}\mod{200} &{}= 42 \\ x_6 &{}= 74217 &{}\mod{200} &{}= 17 \\ x_7 &{}= 371092 &{}\mod{200} &{}= 92 \\ x_8 &{}= 1855467 &{}\mod{200} &{}= 67 \\ x_9 &{}= 9277342 &{}\mod{200} &{}= 142 \\ x_{10} &{}= 46386717 &{}\mod{200} &{}= 117 \\ \end{align}
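The same table can be reproduced with a short loop; here is a Python sketch of the textbook example (reducing mod 200 at every step gives the same residues as the growing raw values shown above):

```python
# x_n = (5*x_{n-1} + 7) mod 200, starting from x_0 = 3
x, seq = 3, []
for _ in range(10):
    x = (5 * x + 7) % 200
    seq.append(x)

print(seq)  # [22, 117, 192, 167, 42, 17, 92, 67, 142, 117]
assert seq[9] == seq[1] == 117  # x10 = x2, so the period is 8
```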

Comments:
Typically, it is good to choose $m$ such that $m$ is large, and $m$ is prime. Careful selection of parameters '$a$' and '$b$' also helps generate relatively "random" output values, where it is harder to identify patterns. For example, when we used a composite (non prime) number such as 40 for $m$, our results were not satisfactory in producing an output resembling a uniform distribution.

The computed values are between 0 and $m-1$. If the values are normalized by dividing by $m-1$, their result is numbers uniformly distributed on the interval $\left[0,1\right]$ (similar to computing from uniform distribution).

From the example shown above, if we want to create a large group of random numbers, it is better to have large, prime $m$ so that the generated random values will not repeat after several iterations. Note: the period for this example is 8: from '$x_2$' to '$x_9$'.

There has been research on how to choose parameters that give a uniform-looking sequence. Many programs give you the option to choose the seed; sometimes the seed is chosen by the CPU.

Theorem (extra knowledge)
Let $c$ be a non-zero constant. Then for any seed $x_0$, an LCG has the largest possible (full) period $m$ if and only if
(i) $m$ and $c$ are coprime;
(ii) $(a-1)$ is divisible by every prime factor of $m$;
(iii) if $m$ is divisible by 4, then $a-1$ is also divisible by 4.

We want our LCG to have a large cycle. We call a cycle with $m$ elements the maximal period. We can make it bigger by making $m$ big and prime. Recall: any integer can be factored into primes. Definition of coprime: two numbers $x$ and $y$ are coprime if they do not share any prime factors.

Example:

$x_n = (15 x_{n-1} + 4) \mod 7$

(i) $m=7$, $c=4$: coprime;
(ii) $a-1=14$ is divisible by 7, the only prime factor of $m$;
(iii) $m$ is not divisible by 4, so the condition does not apply.
(The extra knowledge stops here)
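These conditions (often called the Hull-Dobell theorem) can be checked mechanically. A Python sketch (the function names are ours):

```python
from math import gcd

def prime_factors(n):
    """Return the set of prime factors of n."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def has_full_period(a, c, m):
    """Check the three full-period conditions for an LCG x -> (a*x + c) mod m."""
    if gcd(c, m) != 1:                              # (i) c and m coprime
        return False
    if any((a - 1) % p for p in prime_factors(m)):  # (ii) every prime factor of m divides a-1
        return False
    if m % 4 == 0 and (a - 1) % 4 != 0:             # (iii) 4 | m implies 4 | a-1
        return False
    return True

# The example from the text: x_n = (15*x_{n-1} + 4) mod 7
print(has_full_period(15, 4, 7))  # True
```

A quick empirical confirmation: iterating $(15x+4) \mod 7$ from any seed visits all 7 residues before repeating.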

In this part, we learned how to use code to work out the relationship between two integers under division and their remainder, and saw that when we generate values over a range such as (1:1000), the histogram of the output resembles a uniform distribution.

#### Summary of Multiplicative Congruential Algorithm

Problem: generate Pseudo Random Numbers.

Plan:

1. Find integers: $a$, $b$, $m$ (large and prime), and $x_0$ (the seed).
2. $x_{k+1}=(ax_{k}+b) \mod m$

Matlab Instruction:

>>clear all
>>close all
>>a=17
>>b=3
>>m=31
>>x=5
>>mod(a*x+b,m)
ans=26
>>x=mod(a*x+b,m)


### Inverse Transform Method

Now that we know how to generate random numbers, we can use them to sample from distributions such as the exponential. However, to easily use this method, the distribution in question must have a cumulative distribution function (CDF) $F$ with a tractable (that is, easily found) inverse $F^{-1}$.

Theorem:
To generate the value of a random variable X, we first generate a random number U, uniformly distributed over (0,1). Let $F:\R \rightarrow \left[0,1\right]$ be a cdf. If $U \sim U\left[0,1\right]$, then the random variable given by $X:=F^{-1}\left(U\right)$ follows the distribution function $F\left(\cdot\right)$, where $F^{-1}\left(u\right):=\inf\{x\in\R \,|\, F\left(x\right) \geq u\}$ is the generalized inverse.
Note: $F$ need not be invertible everywhere on the real line, but if it is, the generalized inverse agrees with the usual inverse. We only need it to be invertible on the range of F(x), namely [0,1].

Proof of the theorem:
The generalized inverse satisfies the following:
\begin{align} \forall u \in \left[0,1\right], \, x \in \R, \\ &{} F^{-1}\left(u\right) \leq x &{} \\ \Rightarrow &{} F\Big(F^{-1}\left(u\right)\Big) \leq F\left(x\right) &&{} F \text{ is non-decreasing} \\ \Rightarrow &{} F\Big(\inf \{y \in \R | F(y)\geq u \}\Big) \leq F\left(x\right) &&{} \text{by definition of } F^{-1} \\ \Rightarrow &{} \inf \{F(y) \in [0,1] | F(y)\geq u \} \leq F\left(x\right) &&{} F \text{ is right continuous and non-decreasing} \\ \Rightarrow &{} u \leq F\left(x\right) &&{} \text{by definition of } \inf \\ \Rightarrow &{} x \in \{y \in \R | F(y) \geq u\} &&{} \\ \Rightarrow &{} x \geq \inf \{y \in \R | F(y)\geq u \} &&{} \text{by definition of } \inf \\ \Rightarrow &{} x \geq F^{-1}(u) &&{} \text{by definition of } F^{-1} \\ \end{align}

That is $F^{-1}\left(u\right) \leq x \Leftrightarrow u \leq F\left(x\right)$

Finally, $P(X \leq x) = P(F^{-1}(U) \leq x) = P(U \leq F(x)) = F(x)$, since $U$ is uniform on the unit interval.

This completes the proof.

Therefore, in order to generate a random variable $X \sim F$, we can generate $U \sim U(0,1)$ and then apply the transformation $X=F^{-1}(U)$.

Note that in the proof we can apply $F$ to both sides of $F^{-1}(u) \leq x$ only because the CDF is monotonic (non-decreasing); this holds for all CDFs, since they are monotonic by definition.

In short, the theorem tells us that we can use a random number $U \sim U(0,1)$ to randomly sample a point on the CDF of X, then apply the inverse of the CDF to map the given probability back to the domain, which gives us the random variable X.

Example 1 - Exponential: $f(x) = \lambda e^{-\lambda x}, \; x \geq 0$
Calculate the CDF:
$F(x)= \int_0^x f(t)\, dt = \int_0^x \lambda e^{-\lambda t}\, dt = -e^{-\lambda t}\,\Big|_0^x = 1 - e^{- \lambda x}$
Solve the inverse:
$u=1-e^{- \lambda x} \Rightarrow 1-u=e^{- \lambda x} \Rightarrow x=-\frac {\ln(1-u)}{\lambda}$, so $F^{-1}(u)=-\frac {\ln(1-u)}{\lambda}$
Note that $1-U$ is also uniform on (0,1), and thus $-\ln(1-U)$ has the same distribution as $-\ln U$.
Steps:
Step 1: Draw $U \sim U[0,1]$;
Step 2: $x=\frac{-\ln(U)}{\lambda}$

MatLab Code:

>>u=rand(1,1000);
>>hist(u)       % will generate a fairly uniform histogram


% let lambda=2 in this example; however, you can pick another value for lambda
>>x=(-log(1-u))/2;
>>size(x)       % 1000 in size
>>figure
>>hist(x)       % exponential
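The same construction in Python, with the inverse-CDF identity checked exactly rather than by histogram (λ=2 as in the Matlab code; the helper names and the seed are our choices):

```python
import math
import random

def exp_inverse_cdf(u, lam):
    """F^{-1}(u) = -ln(1-u)/lambda for the Exponential(lambda) CDF."""
    return -math.log(1.0 - u) / lam

def exp_cdf(x, lam):
    """F(x) = 1 - e^{-lambda*x}."""
    return 1.0 - math.exp(-lam * x)

# Round trip: F(F^{-1}(u)) == u
for u in [0.1, 0.5, 0.9]:
    assert abs(exp_cdf(exp_inverse_cdf(u, 2.0), 2.0) - u) < 1e-12

# Sample mean should be close to 1/lambda = 0.5
random.seed(0)
xs = [exp_inverse_cdf(random.random(), 2.0) for _ in range(100_000)]
print(sum(xs) / len(xs))  # approximately 0.5
```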


Example 2 - Continuous Distribution:

$f(x) = \dfrac {\lambda } {2}e^{-\lambda \left| x-\theta \right| } \;\text{for } -\infty \lt x \lt \infty, \; \lambda \gt 0$

Calculate the CDF:

$F(x)= \frac{1}{2} e^{-\lambda (\theta - x)} \;\text{for } x \le \theta$
$F(x) = 1 - \frac{1}{2} e^{-\lambda (x - \theta)} \;\text{for } x \gt \theta$

Solve for the inverse:

$F^{-1}(y)= \theta + \ln(2y)/\lambda \;\text{for } 0 \le y \le 0.5$
$F^{-1}(y)= \theta - \ln(2(1-y))/\lambda \;\text{for } 0.5 \lt y \le 1$

Algorithm:
Steps:
Step 1: Draw U ~ U[0, 1];
Step 2: Compute $X = F^{-1}(U)$, i.e. $X = \theta + \frac {1}{\lambda} \ln(2U)$ for $U \lt 0.5$; else $X = \theta -\frac {1}{\lambda} \ln(2(1-U))$
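A Python sketch of this two-branch sampler; the parameter values λ=1.5, θ=2.0 are arbitrary choices of ours, and correctness is checked via the round trip $F(F^{-1}(u)) = u$:

```python
import math

def laplace_inverse_cdf(u, lam, theta):
    """Inverse of the two-sided (Laplace) CDF from the example."""
    if u < 0.5:
        return theta + math.log(2.0 * u) / lam
    return theta - math.log(2.0 * (1.0 - u)) / lam

def laplace_cdf(x, lam, theta):
    """The two-branch CDF from the example."""
    if x <= theta:
        return 0.5 * math.exp(-lam * (theta - x))
    return 1.0 - 0.5 * math.exp(-lam * (x - theta))

# Round trip F(F^{-1}(u)) == u for several u
for u in [0.05, 0.3, 0.5, 0.7, 0.95]:
    assert abs(laplace_cdf(laplace_inverse_cdf(u, 1.5, 2.0), 1.5, 2.0) - u) < 1e-12
```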

Example 3 - $F(x) = x^5$:
Given the CDF of X: $F(x) = x^5$ for $0 \le x \le 1$, transform $U \sim U[0,1]$.
Sol: Let $u=x^5$ and solve for x: $x=u^\frac {1}{5}$. Therefore, $F^{-1} (u) = u^\frac {1}{5}$.
Hence, to obtain a value of x from F(x), we first draw u from a uniform distribution, then apply the inverse of F(x) to set $x= u^\frac{1}{5}$.

Algorithm:
Steps:
Step 1: Draw U ~ rand[0, 1];
Step 2: X=U^(1/5);
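A quick Python check that the two steps invert each other exactly (the seed is an arbitrary choice of ours):

```python
import random

random.seed(1)

# Inverse transform for F(x) = x^5 on [0,1]: X = U^(1/5)
us = [random.random() for _ in range(5)]
xs = [u ** 0.2 for u in us]

# Round trip: applying F, i.e. X^5, recovers U (up to floating point).
for u, x in zip(us, xs):
    assert abs(x ** 5 - u) < 1e-12
```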

Example 4 - BETA(1,β):
Given u~U[0,1], generate x from BETA(1,β)
Solution: $F(x)= 1-(1-x)^\beta$, $u= 1-(1-x)^\beta$
Solve for x: $(1-x)^\beta = 1-u$, $1-x = (1-u)^\frac {1}{\beta}$, $x = 1-(1-u)^\frac {1}{\beta}$
Let β=3 and use Matlab to construct N=1000 observations from Beta(1,3).
Matlab Code:
>> u = rand(1,1000);
>> x = 1-(1-u).^(1/3);    % elementwise power, hence .^ rather than ^
>> hist(x,50)
>> mean(x)
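The same construction in Python, with a sanity check against the known mean of Beta(1,3), namely 1/(1+β) = 0.25 (sample size and seed are our choices):

```python
import random

random.seed(0)
beta = 3
# Inverse transform for Beta(1, beta): X = 1 - (1 - U)^(1/beta)
xs = [1.0 - (1.0 - random.random()) ** (1.0 / beta) for _ in range(100_000)]

mean = sum(xs) / len(xs)
print(mean)  # should be close to E[X] = 1/(1+beta) = 0.25
```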

Example 5 - Estimating $\pi$:
Let's use rand() and the Monte Carlo method to estimate $\pi$.
N = total number of points
$N_c$ = total number of points inside the circle
$P[(x,y) \text{ lies in the circle}]=\frac {\text{Area(circle)}}{\text{Area(square)}}$
If we take a square of side 2, the inscribed circle has radius 1 and area $\pi (\frac {2}{2})^2 =\pi$, while the square has area 4.
Thus $\pi \approx 4(\frac {N_c}{N})$
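A Python sketch of this Monte Carlo estimate (sample size and seed are our choices):

```python
import math
import random

random.seed(0)
N = 200_000
# Draw points uniformly in the square [-1, 1] x [-1, 1]; the inscribed
# unit circle has area pi and the square has area 4, so pi ~ 4 * Nc / N.
Nc = 0
for _ in range(N):
    x = random.uniform(-1.0, 1.0)
    y = random.uniform(-1.0, 1.0)
    if x * x + y * y <= 1.0:
        Nc += 1

pi_hat = 4.0 * Nc / N
print(pi_hat)  # approximately 3.14
```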

For example, for UNIF(a,b):
$u = F(x) = (x - a)/(b - a)$
Solving for x: $x = (b - a)u + a$
So $X = a + (b - a)U$,
where U is UNIF(0,1).
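A Python sketch of this linear transform (the endpoints a=3, b=7 and the seed are our choices):

```python
import random

def sample_uniform(a, b, u):
    """Inverse transform for U(a, b): X = a + (b - a) * u, with u ~ U(0,1)."""
    return a + (b - a) * u

random.seed(0)
xs = [sample_uniform(3.0, 7.0, random.random()) for _ in range(10_000)]
assert all(3.0 <= x <= 7.0 for x in xs)
print(min(xs), max(xs))  # close to the endpoints 3 and 7
```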


Limitations:
1. This method requires the CDF to be invertible (or requires its generalized inverse, which can be hard to work with); not all functions are invertible or monotonic.
2. It may be impractical since some CDFs and/or integrals are not easy to compute, as with the Gaussian distribution.

We learned how to prove that transforming a uniform random variable through the inverse CDF yields a sample from F, and how to use the uniform distribution in the inverse method to obtain other distributions. For instance, when sampling points uniformly over a region such as a circle or a triangle, each point is (almost) equally likely. After generating samples, we can look at the histogram to determine what kind of distribution the output resembles.

#### Probability Distribution Function Tool in MATLAB

disttool         % shows different distributions


This command allows users to explore the effect of changes of parameters on the plot of either a CDF or PDF.

Changing the values of parameters such as mu and sigma changes the location and shape of the graph.

## Class 3 - Tuesday, May 14

### Recall the Inverse Transform Method

To sample X with CDF F(x):

1. Draw $u \sim U(0,1)$
2. $X = F^{-1}(u)$

Proof
First note that $P(U\leq a)=a, \forall a\in[0,1]$

$P(X\leq x)$

$= P(F^{-1}(U)\leq x)$ (since $X= F^{-1}(U)$ by the inverse method)
$= P(F(F^{-1}(U))\leq F(x))$ (since $F$ is monotonically increasing)
$= P(U\leq F(x))$ (since $P(U\leq a)= a$ for $U \sim U(0,1), a \in [0,1]$, this is explained further below)
$= F(x) , \text{ where } 0 \leq F(x) \leq 1$

This is the c.d.f. of X.

Note that the CDF of a U(a,b) random variable is:

$F(x)= \begin{cases} 0 & \text{for }x \lt a \\[8pt] \frac{x-a}{b-a} & \text{for }a \le x \lt b \\[8pt] 1 & \text{for }x \ge b \end{cases}$

Thus, for $U$ ~ $U(0,1)$, we have $P(U\leq 1) = 1$ and $P(U\leq 1/2) = 1/2$.
More generally, we see that $P(U\leq a) = a$.
For this reason, we had $P(U\leq F(x)) = F(x)$.

Reminder:
This holds only for the uniform distribution $U \sim \text{Unif}[0,1]$:
$P (U \le 1) = 1$
$P (U \le 0.5) = 0.5$
$P (U \le a) = a$

Note that a single point carries no probability mass (i.e. $u \le 0.5$ is the same event, in probability, as $u \lt 0.5$). More formally, $P(X = x) = F(x)- \lim_{s \to x^-}F(s)$, which equals zero for any continuous random variable.

#### Limitations of the Inverse Transform Method

Though this method is very easy to use and apply, it does have a major disadvantage/limitation:

• We need to find the inverse cdf $F^{-1}(\cdot)$. In some cases the inverse function does not exist, or is difficult to find because it requires a closed form expression for F(x).

For example, it is too difficult to find the inverse cdf of the Gaussian distribution, so we must find another method to sample from the Gaussian distribution.

### Discrete Case

The same technique can be used in the discrete case. We want to generate a discrete random variable X that has probability mass function:

\begin{align}P(X = x_i) &{}= p_i \end{align}
$x_0 \leq x_1 \leq x_2 \dots \leq x_n$
$\sum p_i = 1$

Algorithm for applying the Inverse Transform Method in the discrete case:
1. Define a probability mass function for $x_{i}$, $i = 0,\ldots,k$. (Note: k could be infinite.)
2. Generate a uniform random number $U \sim \text{Unif}[0,1]$.
3. If $U\leq p_{0}$, deliver $X = x_{0}$.
4. Else, if $U\leq p_{0} + p_{1}$, deliver $X = x_{1}$.
5. Continue in this way; in general, if $U\leq p_{0} + p_{1} + \cdots + p_{j}$, deliver $X = x_{j}$.

Note that after generating a random U, the value of X can be determined by finding the interval $(F(x_{j-1}),F(x_{j})]$ in which U lies.

Example 3.0:
Generate a random variable from the following probability function:

| x | -2 | -1 | 0 | 1 | 2 |
|---|----|----|---|---|---|
| f(x) | 0.1 | 0.5 | 0.07 | 0.03 | 0.3 |

Answer:
1. Generate $U \sim U(0,1)$.
2. If $U < 0.5$, output -1;
else if $U < 0.8$, output 2;
else if $U < 0.9$, output -2;
else if $U < 0.97$, output 0;
else output 1.
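The threshold logic above, ordering outcomes by decreasing probability, translates directly into code; a Python sketch (the function name is ours):

```python
def discrete_inverse_transform(u):
    """Inverse transform for the pmf in Example 3.0, with outcomes ordered
    by decreasing probability: -1 (0.5), 2 (0.3), -2 (0.1), 0 (0.07), 1 (0.03)."""
    if u < 0.5:
        return -1
    elif u < 0.8:
        return 2
    elif u < 0.9:
        return -2
    elif u < 0.97:
        return 0
    return 1

# Spot-check one u in each interval
assert discrete_inverse_transform(0.25) == -1
assert discrete_inverse_transform(0.65) == 2
assert discrete_inverse_transform(0.85) == -2
assert discrete_inverse_transform(0.95) == 0
assert discrete_inverse_transform(0.99) == 1
```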

Example 3.1 (from class): (Coin Flipping Example)
We want to simulate a coin flip. We have U~U(0,1) and X = 0 or X = 1.

We can define the U function so that:

If U <= 0.5, then X = 0

and if 0.5 < U <= 1, then X =1.

This allows the probability of Heads occurring to be 0.5 and is a good generator of a random coin flip.

$U \sim \text{Unif}[0,1]$

\begin{align} P(X = 0) &{}= 0.5\\ P(X = 1) &{}= 0.5\\ \end{align}

The answer is:

$x = \begin{cases} 0, & \text{if } U\leq 0.5 \\ 1, & \text{if } 0.5 \lt U \leq 1 \end{cases}$

• Code
>>for ii=1:1000
u=rand;
if u<0.5
x(ii)=0;
else
x(ii)=1;
end
end
>>hist(x)


Note: the role of the semicolon in Matlab: Matlab will not print the results of a line that ends in a semicolon, and will print them otherwise.

Example