# Techniques for Normal and Gamma Sampling

### Techniques for Normal and Gamma Sampling - May 19, 2009

We have examined two general techniques for sampling from distributions. For certain distributions, however, more practical methods exist. We will now look at two such cases, the Gamma distribution and the Normal distribution, where these practical methods apply.

#### Gamma Distribution

The Gamma distribution has the following additive property:

If $X_1, \dots, X_t$ are independent random variables with $X_i \sim Exp(\lambda)$, then

$\sum_{i=1}^t X_i \sim Gamma(t, \lambda)$

We can therefore use the Inverse Transform Method, applied to samples from independent uniform distributions as seen before, to generate a sample following a Gamma distribution.

Procedure
1. Sample independently from a uniform distribution $t$ times, giving $u_1,\dots,u_t$
2. Transform each $u_i$ into an exponential sample $x_i$, giving $x_1,\dots,x_t$ such that,
\begin{align} x_1 \sim Exp(\lambda)\\ \vdots \\ x_t \sim Exp(\lambda) \end{align}

Using the Inverse Transform Method,
\begin{align} x_i = -\frac {1}{\lambda}\log(u_i) \end{align}
\begin{align} X &{}= x_1 + x_2 + \dots + x_t \\ &{}= -\frac {1}{\lambda}\log(u_1) - \frac {1}{\lambda}\log(u_2) - \dots - \frac {1}{\lambda}\log(u_t) \\ &{}= -\frac {1}{\lambda}\log\left(\prod_{i=1}^{t}u_i\right) \sim Gamma(t, \lambda) \end{align}

This procedure can be illustrated in Matlab using the code below with $t = 5, \lambda = 1$:

```matlab
U = rand(10000,5);       % 10000 rows of t = 5 independent Unif(0,1) samples
X = sum(-log(U), 2);     % each row sums 5 Exp(1) samples, giving Gamma(5,1)
hist(X)
```
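As a quick sanity check, a $Gamma(t,\lambda)$ random variable has mean $t/\lambda$ and variance $t/\lambda^2$, so with $t = 5, \lambda = 1$ both should come out near 5:

```matlab
% Sample mean and variance should be close to t/lambda = 5 and t/lambda^2 = 5
U = rand(10000,5);
X = sum(-log(U), 2);
disp(mean(X))    % approximately 5
disp(var(X))     % approximately 5
```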


#### Normal Distribution

The cdf for the Standard Normal distribution is:

$F(Z) = \int_{-\infty}^{Z}\frac{1}{\sqrt{2\pi}}e^{-x^2/2}dx$

We can see that the Normal distribution is difficult to sample from using the general methods seen so far; for example, the inverse of $F(Z)$ has no closed form. We could use the Acceptance-Rejection method, but there are still better ways to sample from a Standard Normal distribution.

##### Box-Muller Method

[Figure: "Diagram of the Box-Muller transform, which transforms uniformly distributed value pairs to normally distributed value pairs." (Box-Muller Transform, Wikipedia)]

The Box-Muller method, based on a polar coordinate transformation, uses an approach in which we have one space that is easy to sample in, and another with the desired distribution under a transformation. If we know such a transformation for the Standard Normal, then all we have to do is transform our easy sample to obtain a sample from the Standard Normal distribution.

Properties of Polar and Cartesian Coordinates

If $x$ and $y$ are points on the Cartesian plane, $r$ is the length of the radius from a point in the polar plane to the pole, and $\theta$ is the angle formed with the polar axis, then
• $r^2 = x^2 + y^2$
• $\tan(\theta) = \frac{y}{x}$
• $x = r \cos(\theta)$
• $y = r \sin(\theta)$

Let X and Y be independent random variables with a standard normal distribution,

$X \sim~ N(0,1)$
$Y \sim~ N(0,1)$

Also, let $r$ and $\theta$ be the polar coordinates of $(x,y)$. Then, since $X$ and $Y$ are independent, their joint density is given by,

\begin{align} f(x,y) = f(x)f(y) &{}= \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}} \\ &{}=\frac{1}{2\pi}e^{-\frac{x^2+y^2}{2}} \end{align}

It can also be shown that, after changing variables from $(x,y)$ to $(r,\theta)$ and substituting $d = r^2$, the joint density of $d$ and $\theta$ is given by,

$\begin{matrix} f(d,\theta) = \frac{1}{2}e^{-\frac{d}{2}} \cdot \frac{1}{2\pi}, \quad d = r^2 \end{matrix}$

Note that $\begin{matrix}f(d,\theta)\end{matrix}$ factors into two density functions, an Exponential and a Uniform, so $d$ and $\theta$ are independent with $\begin{matrix} d \sim Exp(1/2), \quad \theta \sim Unif[0,2\pi] \end{matrix}$
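This factorization can be verified with a change of variables; the following derivation fills in the intermediate step. Changing from $(x,y)$ to $(r,\theta)$ introduces the Jacobian $r$,

\begin{align} f(r,\theta) &= f(x,y)\left|\frac{\partial(x,y)}{\partial(r,\theta)}\right| = \frac{1}{2\pi}e^{-\frac{r^2}{2}} \cdot r \end{align}

Substituting $d = r^2$, so that $r = \sqrt{d}$ and $\left|\frac{dr}{dd}\right| = \frac{1}{2\sqrt{d}}$,

\begin{align} f(d,\theta) &= \frac{1}{2\pi}e^{-\frac{d}{2}} \cdot \sqrt{d} \cdot \frac{1}{2\sqrt{d}} = \frac{1}{2}e^{-\frac{d}{2}} \cdot \frac{1}{2\pi} \end{align}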

Procedure for using Box-Muller Method
1. Sample independently from a uniform distribution twice, giving $u_1, u_2 \sim \mathrm{Unif}(0, 1)$
2. Generate polar coordinates by applying the inverse transform to the exponential distribution of $d$ and the uniform distribution of $\theta$,
\begin{align} d = -2\log(u_1), \quad r = \sqrt{d}, \quad \theta = 2\pi u_2 \end{align}
3. Transform $r$ and $\theta$ back to $x$ and $y$,
\begin{align} x = r\cos(\theta), \quad y = r\sin(\theta) \end{align}

Notice that the Box-Muller Method generates a pair of independent Standard Normal random variables, $x$ and $y$.

This procedure can be illustrated in Matlab using the code below:

```matlab
u1 = rand(5000,1);          % Unif(0,1) samples
u2 = rand(5000,1);

d = -2*log(u1);             % d ~ Exp(1/2)
theta = 2*pi*u2;            % theta ~ Unif[0, 2*pi]

x = sqrt(d).*cos(theta);    % x ~ N(0,1)
y = sqrt(d).*sin(theta);    % y ~ N(0,1)

figure(1);
subplot(2,1,1);
hist(x);
title('X');
subplot(2,1,2);
hist(y);
title('Y');
```


Also, we can confirm that d and theta are indeed exponential and uniform random variables, respectively, in Matlab by:

```matlab
subplot(2,1,1);
hist(d);
title('d follows an exponential distribution');
subplot(2,1,2);
hist(theta);
title('theta follows a uniform distribution over [0, 2*pi]');
```


##### Useful Properties (Single and Multivariate)

Box-Muller can be used to sample from the standard normal distribution. However, Normal distributions have many properties that allow us to use the samples from the Box-Muller method to sample from any Normal distribution in general.

Properties of Normal distributions
• \begin{align} \text{If } & X = \mu + \sigma Z, & Z \sim~ N(0,1) \\ &\text{then } X \sim~ N(\mu,\sigma ^2) \end{align}
• \begin{align} \text{If } & \vec{Z} = (Z_1,\dots,Z_d)^T, & Z_1,\dots,Z_d \sim~ N(0,1) \\ &\text{then } \vec{Z} \sim~ N(\vec{0},I) \end{align}
• \begin{align} \text{If } & \vec{X} = \vec{\mu} + \Sigma^{1/2} \vec{Z}, & \vec{Z} \sim~ N(\vec{0},I) \\ &\text{then } \vec{X} \sim~ N(\vec{\mu},\Sigma) \end{align}
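For instance, the first property can be sketched in Matlab; here `randn` stands in for the standard normal draws, which could equally come from the Box-Muller code above:

```matlab
mu = 5; sigma = 2;      % target distribution N(5, 4)
z = randn(10000,1);     % z ~ N(0,1)
x = mu + sigma*z;       % x ~ N(5, 4)
hist(x)
```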

These properties can be illustrated through the following example in Matlab using the code below:

Example: For a Multivariate Normal distribution with $\mu=\begin{bmatrix} -2 \\ 3 \end{bmatrix}$ and $\Sigma=\begin{bmatrix} 1&0.5\\ 0.5&1\end{bmatrix}$:

```matlab
u = [-2; 3];                    % mean vector
sigma = [1 1/2; 1/2 1];         % covariance matrix

r = randn(15000,2);             % rows of independent N(0,1) pairs
ss = chol(sigma);               % upper triangular ss with ss'*ss = sigma

X = ones(15000,1)*u' + r*ss;    % each row of X ~ N(u, sigma)
plot(X(:,1), X(:,2), '.');
```


Note: In the example above, we generated a square root of $\Sigma$ using the Cholesky decomposition,

```matlab
ss = chol(sigma);
```

which gives the upper triangular factor $ss=\begin{bmatrix} 1&0.5\\ 0&0.8660\end{bmatrix}$, satisfying $ss^T ss = \Sigma$. Matlab also has the sqrtm function:

```matlab
ss = sqrtm(sigma);
```


which gives a different matrix, the symmetric square root $ss=\begin{bmatrix} 0.9659&0.2588\\ 0.2588&0.9659\end{bmatrix}$ with $ss \cdot ss = \Sigma$, but the plot looks about the same: any matrix $A$ satisfying $A^T A = \Sigma$ gives $X$ the same $N(\mu,\Sigma)$ distribution.
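Both factorizations can be checked numerically; a quick Matlab sketch confirming that each one reproduces $\Sigma$:

```matlab
sigma = [1 1/2; 1/2 1];

ss1 = chol(sigma);              % upper triangular factor: ss1'*ss1 = sigma
ss2 = sqrtm(sigma);             % symmetric square root:   ss2*ss2  = sigma

disp(norm(ss1'*ss1 - sigma))    % near zero
disp(norm(ss2*ss2 - sigma))     % near zero
```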