Markov Chain Definitions


Practical application for estimation: The general idea is to generate a sequence [math]x_i[/math] whose distribution approaches [math]f(x)[/math], so that a variant of importance sampling can be used to estimate an integral of the form
[math] I = \displaystyle\int h(x)f(x)\,dx [/math] by [math]\hat{I} = \frac{1}{N}\displaystyle\sum_{i=1}^N h(x_i)[/math]
All that is required is a Markov chain which eventually converges to [math]f(x)[/math].
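To make this concrete, here is a minimal Python sketch. The target [math]f(x)[/math] (a standard normal, up to its constant) and the integrand [math]h(x)=x^2[/math] are illustrative assumptions, and the chain is built with a random-walk Metropolis step, one standard construction for a chain converging to [math]f(x)[/math] that these notes have not yet introduced:

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative target density f(x), known up to a constant: standard normal.
    def f(x):
        return np.exp(-0.5 * x ** 2)

    # Illustrative integrand h(x); under f, I = E[h(X)] = E[X^2] = 1.
    def h(x):
        return x ** 2

    # Build a chain that converges to f: propose a local move, accept it
    # with probability min(1, f(proposal) / f(current)).
    N = 100_000
    x = 0.0
    samples = np.empty(N)
    for n in range(N):
        proposal = x + rng.normal()
        if rng.random() < f(proposal) / f(x):
            x = proposal
        samples[n] = x

    # I_hat = (1/N) * sum h(x_i), after discarding an initial burn-in period.
    I_hat = h(samples[1000:]).mean()
    print(I_hat)  # should be close to 1

Since [math]E[X^2]=1[/math] under a standard normal, the printed estimate should be close to 1.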

In the previous example, the entries [math]p_{ij}[/math] in the transition matrix [math]P[/math] represent the probability of reaching state [math]j[/math] from state [math]i[/math] after one step. For this reason, the entries over all [math]j[/math] in a particular row sum to 1: each row is itself a pmf, since a transition from [math]i[/math] must lead to some state still within the state set for [math]X_t[/math].
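A quick numerical check of this row-sum property, using an arbitrary illustrative 3-state matrix (not one from the notes):

    import numpy as np

    # Arbitrary 3-state transition matrix: entry P[i, j] = Pr(X_1 = j | X_0 = i).
    P = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.6, 0.3],
                  [0.4, 0.4, 0.2]])

    # Each row is the pmf of the next state given the current state i,
    # so every row sums to 1.
    print(P.sum(axis=1))  # [1. 1. 1.]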

Homogeneous Markov Chain
The transition matrix [math]P[/math] is the same for all indices [math]n\in T[/math]: [math]Pr(X_n=j|X_{n-1}=i)= Pr(X_1=j|X_0=i)[/math]

If we denote the pmf of [math]X_n[/math] by a probability vector [math]\mu_n = [P(X_n=x_1),P(X_n=x_2),\ldots,P(X_n=x_i)][/math],
where [math]i[/math] denotes an ordered index of all possible states of [math]X[/math],
then we have a definition for the
marginal probability [math]\mu_n(i) = P(X_n=i)[/math],
where we simplify [math]X_n[/math] to represent the ordered index of a state rather than the state itself.

From this definition it can be shown that [math]\mu_{n-1}P=\mu_n[/math]

Proof:

[math]\mu_{n-1}P=[\sum_{\forall i}\mu_{n-1}(i)P_{i1},\sum_{\forall i}\mu_{n-1}(i)P_{i2},\ldots,\sum_{\forall i}\mu_{n-1}(i)P_{ij}][/math] and since

[math]\sum_{\forall i}\mu_{n-1}(i)P_{ij}=\sum_{\forall i}Pr(X_{n-1}=i)Pr(X_n=j|X_{n-1}=i)=\sum_{\forall i}Pr(X_{n-1}=i)\frac{Pr(X_n=j,X_{n-1}=i)}{Pr(X_{n-1}=i)}[/math] [math]=\sum_{\forall i}Pr(X_n=j,X_{n-1}=i)=Pr(X_n=j)=\mu_{n}(j)[/math]

Therefore,
[math]\mu_{n-1}P=[\mu_{n}(1),\mu_{n}(2),\ldots,\mu_{n}(i)]=\mu_{n}[/math]
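The update [math]\mu_{n-1}P=\mu_n[/math] is just a row vector multiplied by a matrix. Reusing the illustrative [math]P[/math] from above, with an arbitrary starting pmf:

    import numpy as np

    # Same illustrative P as above.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.6, 0.3],
                  [0.4, 0.4, 0.2]])

    mu0 = np.array([1.0, 0.0, 0.0])  # arbitrary starting pmf: begin in state 1

    # One application of the update mu_n = mu_{n-1} P (row vector times matrix).
    mu1 = mu0 @ P
    print(mu1)        # [0.5 0.3 0.2] -- the first row of P, as expected
    print(mu1.sum())  # 1.0, so mu_1 is still a valid pmf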

With this, it is possible to define [math]P(n)[/math] as an n-step transition matrix, where [math]P_{ij}(n) = Pr(X_n=j|X_0=i)[/math]
Theorem: [math]\mu_n=\mu_0P^n[/math]
Proof: [math]\mu_n=\mu_{n-1}P[/math] from the previous conclusion,
[math]=\mu_{n-2}PP=\ldots=\mu_0\prod_{i = 1}^nP[/math] and since this is a homogeneous Markov chain, [math]P[/math] does not depend on [math]i[/math], so
[math]=\mu_0P^n[/math]

From this it becomes easy to define the n-step transition matrix as [math]P(n)=P^n[/math]
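Both results are easy to verify numerically: applying the one-step update [math]n[/math] times gives the same pmf as multiplying [math]\mu_0[/math] by the matrix power [math]P^n[/math] (same illustrative [math]P[/math] and [math]\mu_0[/math] as above):

    import numpy as np

    P = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.6, 0.3],
                  [0.4, 0.4, 0.2]])
    mu0 = np.array([1.0, 0.0, 0.0])

    n = 5
    # Apply the one-step update mu <- mu P a total of n times...
    mu = mu0.copy()
    for _ in range(n):
        mu = mu @ P

    # ...and compare with the closed form mu_0 P^n, using P(n) = P^n.
    Pn = np.linalg.matrix_power(P, n)
    print(np.allclose(mu, mu0 @ Pn))  # True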

Summary

  • transition matrix [math]( P )[/math]: an [math]N \times N[/math] matrix, where [math]N=|X|[/math], with [math]P_{ij}=Pr(X_1=j|X_0=i)[/math] for [math]i,j \in X[/math]
  • n-step transition matrix [math]( P(n) )[/math]: also [math]N \times N[/math], with [math]P_{ij}(n)=Pr(X_n=j|X_0=i)[/math]
  • marginal probability of [math]X_n[/math]: [math]\mu_n(i) = Pr(X_n=i)[/math]
  • Theorem: [math]P(n)=P^n[/math]
  • Theorem: [math]\mu_n=\mu_0P^n[/math]

---

Definitions of different types of state sets

Define [math]i,j \in[/math] State Space
If [math]P_{ij}(n) \gt 0[/math] for some [math]n[/math], then we say [math]i[/math] reaches [math]j[/math], denoted by [math]i\longrightarrow j[/math]
This also means [math]j[/math] is accessible from [math]i[/math]: [math]j\longleftarrow i[/math]
If [math]i\longrightarrow j[/math] and [math]j\longrightarrow i[/math] then we say [math]i[/math] and [math]j[/math] communicate, denoted by [math]i\longleftrightarrow j[/math]

Theorems
1) [math]i\longleftrightarrow i[/math]
2) [math]i\longleftrightarrow j,j\longleftrightarrow k\Rightarrow i\longleftrightarrow k[/math]
3) The set of states of [math]X[/math] can be written as a unique disjoint union of subsets [math]X=X_1 \bigcup X_2\bigcup\ldots\bigcup X_k,k\gt 0 [/math], where two states [math]i[/math] and [math]j[/math] communicate if and only if they belong to the same subset

More Definitions
A set is Irreducible if all states communicate with each other ([math]\Leftrightarrow k=1[/math])
A subset of states (Class) is Closed if once you enter it, you can never leave. Alternatively, a subset of states (Class) is Open if it is possible to leave the Class.
An Absorbing set is a closed set with only one element.
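All of these definitions can be checked mechanically from [math]P[/math]: [math]i\longrightarrow j[/math] exactly when [math]j[/math] is reachable from [math]i[/math] in the directed graph that has an edge wherever [math]P_{ij}\gt 0[/math], and the communicating classes are the strongly connected components of that graph. A minimal Python sketch, using an illustrative 4-state chain (not from the notes) with an absorbing state:

    import numpy as np

    # Illustrative 4-state chain (0-based indices): states 0 and 1 communicate,
    # state 2 leads onward, and state 3 is absorbing.
    P = np.array([[0.5, 0.5, 0.0, 0.0],
                  [0.5, 0.4, 0.1, 0.0],
                  [0.0, 0.0, 0.5, 0.5],
                  [0.0, 0.0, 0.0, 1.0]])

    N = len(P)
    # reach[i, j] = True iff i reaches j, i.e. P_ij(n) > 0 for some n >= 0.
    reach = np.eye(N, dtype=bool) | (P > 0)
    for _ in range(N):  # take the transitive closure
        reach = reach | ((reach.astype(int) @ reach.astype(int)) > 0)

    # i and j communicate iff each reaches the other; group states into classes.
    comm = reach & reach.T
    classes = {tuple(np.flatnonzero(comm[i])) for i in range(N)}

    for members in sorted(classes):
        outside = [s for s in range(N) if s not in members]
        # A class is closed if no positive-probability transition leaves it.
        closed = P[np.ix_(list(members), outside)].sum() == 0
        label = "closed" if closed else "open"
        if closed and len(members) == 1:
            label += " (absorbing)"
        print(list(members), label)

Here the chain decomposes into the classes {0, 1} (open), {2} (open), and {3} (closed, and absorbing since it has only one element), matching theorem 3 above.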

Again on Markov Chain - June 11, 2009