A Deeper Look into Importance Sampling - June 2, 2009

From last class, we determined that an integral can be written in the form [math]\displaystyle{ I = \displaystyle\int h(x)f(x)\,dx = \displaystyle\int \frac{h(x)f(x)}{g(x)}g(x)\,dx }[/math] for any density [math]\displaystyle{ \displaystyle g }[/math] that is positive wherever [math]\displaystyle{ \displaystyle h(x)f(x) \neq 0 }[/math]. We continue our discussion of importance sampling here.

Importance Sampling

We can see that the integral [math]\displaystyle{ \displaystyle\int \frac{h(x)f(x)}{g(x)}g(x)\,dx = \int \frac{f(x)}{g(x)}h(x) g(x)\,dx }[/math] is just [math]\displaystyle{ \displaystyle E_g(\beta(x)h(x)) }[/math], the expectation of [math]\displaystyle{ \displaystyle \beta(x)h(x) }[/math] with respect to [math]\displaystyle{ \displaystyle g(x) }[/math], where [math]\displaystyle{ \displaystyle \beta(x) = \frac{f(x)}{g(x)} }[/math] is a weight. Wherever [math]\displaystyle{ \displaystyle f \gt g }[/math], the weight [math]\displaystyle{ \displaystyle\beta(x) }[/math] exceeds one, so those points contribute more to the estimate; they are deemed more important, hence "importance sampling". This can be seen as a variance reduction technique.

Problem

The method of importance sampling is simple, but it can lead to problems: the estimate [math]\displaystyle{ \displaystyle \hat I }[/math] can have infinite standard error.

Given [math]\displaystyle{ \displaystyle I= \int w(x) g(x) dx = E_g(w(x)) }[/math], where [math]\displaystyle{ \displaystyle w(x)=\frac{f(x)h(x)}{g(x)} }[/math], we estimate [math]\displaystyle{ \displaystyle I }[/math] by [math]\displaystyle{ \displaystyle \hat I = \frac{1}{N}\sum_{i=1}^{N} w(x_i) }[/math] with [math]\displaystyle{ \displaystyle x_i \sim g }[/math].
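A minimal Python sketch of this estimator (the particular [math]\displaystyle{ \displaystyle f }[/math], [math]\displaystyle{ \displaystyle h }[/math], and [math]\displaystyle{ \displaystyle g }[/math] below are illustrative choices, not from the lecture):

```python
import numpy as np
from scipy.stats import norm

# Illustrative choices (not from the lecture): target f = N(0,1),
# integrand h(x) = x^2, proposal g = N(0,2). The true value is
# I = E_f[X^2] = 1, so the estimate can be checked.
f = norm(0, 1).pdf
g_dist = norm(0, 2)
h = lambda x: x ** 2

N = 100_000
rng = np.random.default_rng(0)
x = g_dist.rvs(size=N, random_state=rng)   # x_i ~ g
w = f(x) * h(x) / g_dist.pdf(x)            # w(x_i) = f(x_i)h(x_i)/g(x_i)
I_hat = w.mean()                           # (1/N) * sum of w(x_i)
print(I_hat)                               # close to 1
```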

Obtaining the second moment,

[math]\displaystyle{ \displaystyle E[(w(x))^2] }[/math]
[math]\displaystyle{ \displaystyle = \int (\frac{h(x)f(x)}{g(x)})^2 g(x) dx }[/math]
[math]\displaystyle{ \displaystyle = \int \frac{h^2(x) f^2(x)}{g^2(x)} g(x) dx }[/math]
[math]\displaystyle{ \displaystyle = \int \frac{h^2(x)f^2(x)}{g(x)} dx }[/math]

We can see that if [math]\displaystyle{ \displaystyle g(x) \rightarrow 0 }[/math] where [math]\displaystyle{ \displaystyle h(x)f(x) }[/math] does not, the integrand [math]\displaystyle{ \frac{h^2(x)f^2(x)}{g(x)} }[/math] blows up, and [math]\displaystyle{ \displaystyle E[(w(x))^2] }[/math] can be infinite. This occurs when [math]\displaystyle{ \displaystyle g }[/math] has a thinner tail than [math]\displaystyle{ \displaystyle f }[/math]: the ratio [math]\displaystyle{ \frac{h^2(x)f^2(x)}{g(x)} }[/math] is then unbounded in the tails, and [math]\displaystyle{ \displaystyle \hat I }[/math] can have infinite standard error.
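A small numerical illustration of this failure mode (the densities below are my own choices, not from the lecture): take the target [math]\displaystyle{ \displaystyle f = N(0,1) }[/math], a thin-tailed proposal [math]\displaystyle{ \displaystyle g = N(0, 0.5) }[/math], and [math]\displaystyle{ \displaystyle h(x) = 1 }[/math], so the true value is [math]\displaystyle{ \displaystyle I = 1 }[/math].

```python
import numpy as np
from scipy.stats import norm

# Illustrative failure mode (densities are my own choices): target
# f = N(0,1), thin-tailed proposal g = N(0, 0.5), and h(x) = 1, so the
# true value is I = 1. Here f^2(x)/g(x) grows like e^{x^2}, so
# E_g[(w(x))^2] = ∫ f^2(x)/g(x) dx is infinite.
f = norm(0, 1).pdf
g_dist = norm(0, 0.5)

rng = np.random.default_rng(1)
for N in [10**3, 10**5, 10**6]:
    x = g_dist.rvs(size=N, random_state=rng)
    w = f(x) / g_dist.pdf(x)        # weights w(x) = f(x)h(x)/g(x) with h = 1
    # The sample mean stays near 1, but the sample second moment is
    # dominated by rare tail draws and keeps growing instead of converging.
    print(N, w.mean(), (w ** 2).mean())
```

The sample mean stays near 1, but the sample second moment never settles down as [math]\displaystyle{ \displaystyle N }[/math] grows, which is the signature of an infinite-variance weight.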

Remark 1

It is evident that [math]\displaystyle{ \displaystyle g(x) }[/math] should be chosen to have a thicker tail than [math]\displaystyle{ \displaystyle f(x) }[/math]. If [math]\displaystyle{ \displaystyle f }[/math] is large over a set [math]\displaystyle{ \displaystyle A }[/math] but [math]\displaystyle{ \displaystyle g }[/math] is small there, then [math]\displaystyle{ \displaystyle \frac{f}{g} }[/math] is large over [math]\displaystyle{ \displaystyle A }[/math], resulting in a large variance.

Remark 2

It is useful to choose [math]\displaystyle{ \displaystyle g }[/math] to be similar to [math]\displaystyle{ \displaystyle f }[/math] in shape. In fact, we can show analytically which choice of [math]\displaystyle{ \displaystyle g }[/math] minimizes the variance.

Theorem: The choice of [math]\displaystyle{ \displaystyle g }[/math] that minimizes the variance of [math]\displaystyle{ \hat I }[/math] is [math]\displaystyle{ \displaystyle g^*(x)=\frac{\left| h(x) \right| f(x)}{\int \left| h(s) \right| f(s)\, ds} }[/math]. (The absolute value is needed in the denominator as well, so that [math]\displaystyle{ \displaystyle g^* }[/math] integrates to one.)
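A standard observation (stated here in the notation above, not proved in the lecture) makes the theorem concrete: if [math]\displaystyle{ \displaystyle h(x) \geq 0 }[/math] everywhere, then sampling from [math]\displaystyle{ \displaystyle g^* }[/math] gives the constant weight

[math]\displaystyle{ \displaystyle w(x) = \frac{f(x)h(x)}{g^*(x)} = \int h(s)f(s)\,ds = I }[/math]

so every draw returns exactly [math]\displaystyle{ \displaystyle I }[/math] and the variance of [math]\displaystyle{ \displaystyle \hat I }[/math] is zero. Of course, normalizing [math]\displaystyle{ \displaystyle g^* }[/math] requires the very integral we are trying to compute, so [math]\displaystyle{ \displaystyle g^* }[/math] is a guide for choosing [math]\displaystyle{ \displaystyle g }[/math] rather than a practical sampler.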

Proof:
We know that [math]\displaystyle{ \displaystyle w(x)=\frac{f(x)h(x)}{g(x)} }[/math]

The variance of [math]\displaystyle{ \displaystyle w(x) }[/math] is

[math]\displaystyle{ \displaystyle Var[w(x)] }[/math]
[math]\displaystyle{ \displaystyle = E[(w(x))^2] - [E[w(x)]]^2 }[/math]
[math]\displaystyle{ \displaystyle = \int (\frac{f(x)h(x)}{g(x)})^2 g(x) dx - [\int \frac{f(x)h(x)}{g(x)}g(x)dx]^2 }[/math]
[math]\displaystyle{ \displaystyle = \int (\frac{f(x)h(x)}{g(x)})^2 g(x) dx - [\int f(x)h(x)dx]^2 }[/math]

As we can see, the second term does not depend on [math]\displaystyle{ \displaystyle g(x) }[/math]. Therefore, to minimize [math]\displaystyle{ \displaystyle Var[w(x)] }[/math], we need to minimize the first term, [math]\displaystyle{ \displaystyle E[(w(x))^2] }[/math]. We will use Jensen's Inequality.

Jensen's Inequality
If [math]\displaystyle{ \displaystyle \varphi }[/math] is a convex function, then [math]\displaystyle{ \displaystyle \varphi(\alpha x_1 + (1-\alpha)x_2) \leq \alpha \varphi(x_1) + (1-\alpha) \varphi(x_2) }[/math]; in particular, [math]\displaystyle{ \displaystyle \varphi(E[x]) \leq E[\varphi(x)] }[/math]. (We write [math]\displaystyle{ \displaystyle \varphi }[/math] rather than [math]\displaystyle{ \displaystyle g }[/math] to avoid confusion with the proposal density.)

Applying Jensen's Inequality with the convex function [math]\displaystyle{ \displaystyle \varphi(t) = t^2 }[/math],

[math]\displaystyle{ \displaystyle E[(w(x))^2] \geq (E[\left| w(x) \right|])^2 }[/math]
[math]\displaystyle{ \displaystyle (E[\left| w(x) \right|])^2 = \left( \int \left| \frac{f(x)h(x)}{g(x)} \right| g(x)\, dx \right)^2 }[/math]
[math]\displaystyle{ \displaystyle = \left( \int \frac{f(x)\left| h(x) \right|}{g(x)} g(x)\, dx \right)^2 }[/math] since [math]\displaystyle{ \displaystyle f }[/math] and [math]\displaystyle{ \displaystyle g }[/math] are density functions, they cannot be negative, so only [math]\displaystyle{ \displaystyle h }[/math] keeps its absolute value
[math]\displaystyle{ \displaystyle = \left( \int f(x)\left| h(x) \right| dx \right)^2 }[/math]

This lower bound does not depend on [math]\displaystyle{ \displaystyle g }[/math], and it is attained at [math]\displaystyle{ \displaystyle g = g^* }[/math]: substituting [math]\displaystyle{ \displaystyle g^* }[/math] into the first term gives

[math]\displaystyle{ \displaystyle E_{g^*}[(w(x))^2] = \int \frac{h^2(x)f^2(x)}{g^*(x)}\, dx = \left( \int \left| h(s) \right| f(s)\, ds \right) \int \left| h(x) \right| f(x)\, dx = \left( \int \left| h(x) \right| f(x)\, dx \right)^2 }[/math]

Hence [math]\displaystyle{ \displaystyle g^* }[/math] minimizes [math]\displaystyle{ \displaystyle E[(w(x))^2] }[/math], and therefore the variance of [math]\displaystyle{ \displaystyle \hat I }[/math].
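To see the theorem in action, here is a small Python check (my own worked example; the specific [math]\displaystyle{ \displaystyle f }[/math] and [math]\displaystyle{ \displaystyle h }[/math] are not from the lecture). With [math]\displaystyle{ \displaystyle f = N(0,1) }[/math] and [math]\displaystyle{ \displaystyle h(x) = x^2 }[/math], the optimal proposal [math]\displaystyle{ \displaystyle g^*(x) = x^2 \phi(x) }[/math] is already normalized, and under [math]\displaystyle{ \displaystyle g^* }[/math] the magnitude [math]\displaystyle{ \displaystyle |X| }[/math] follows a chi distribution with 3 degrees of freedom, so we can sample from [math]\displaystyle{ \displaystyle g^* }[/math] directly:

```python
import numpy as np
from scipy.stats import norm, chi

# My own worked example (not from the lecture): estimate I = ∫ x^2 f(x) dx
# with f = N(0,1); the true value is I = E[X^2] = 1.
f_dist = norm(0, 1)
h = lambda x: x ** 2

rng = np.random.default_rng(2)
N = 10_000

# Plain Monte Carlo: sample from f directly and average h(x).
x_f = f_dist.rvs(size=N, random_state=rng)
plain = h(x_f)
print("plain MC:   mean", plain.mean(), "var", plain.var())   # var ~ 2

# Optimal proposal g*(x) = |h(x)| f(x) / ∫|h(s)|f(s)ds = x^2 φ(x).
# Under g*, |X| is chi-distributed with 3 degrees of freedom
# (density ∝ x^2 e^{-x^2/2}), and the sign is a fair coin flip.
r = chi(df=3).rvs(size=N, random_state=rng)
x_g = r * rng.choice([-1.0, 1.0], size=N)
g_star = lambda x: x ** 2 * f_dist.pdf(x)   # ∫ x^2 φ(x) dx = 1, so g* is normalized
w = f_dist.pdf(x_g) * h(x_g) / g_star(x_g)  # w(x) = f(x)h(x)/g*(x) = 1 for every draw
print("optimal IS: mean", w.mean(), "var", w.var())           # var = 0
```

Every weight equals [math]\displaystyle{ \displaystyle I = 1 }[/math] exactly, so the importance sampling estimate has zero variance, while plain Monte Carlo has variance [math]\displaystyle{ \displaystyle E[X^4] - 1 = 2 }[/math].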

Continuing on - June 4, 2009