Again on Markov Chain

From statwiki
Revision as of 18:37, 17 June 2009 by Hclam (talk | contribs) (Periodic and aperiodic Markov chain)

Decomposition of Markov chain

In the previous lecture it was shown that a Markov Chain can be written as the disjoint union of its classes. This decomposition is always possible and it is reduced to one class only in the case of an irreducible chain.

Let [math]\displaystyle X = \{1, 2, 3, 4\}[/math] and let the transition matrix be:

[math]P=\left(\begin{matrix}1/3&2/3&0&0\\ 2/3&1/3&0&0\\ 1/4&1/4&1/4&1/4\\ 0&0&0&1 \end{matrix}\right)[/math]

The decomposition in classes is:

class 1: [math]\displaystyle \{1, 2\} \rightarrow [/math] From the matrix we see that states 1 and 2 can only move within [math]\displaystyle \{1, 2\}[/math]: all transition probabilities out of this set are zero
class 2: [math]\displaystyle \{3\} \rightarrow [/math] State 3 can go to every other state, but none of the other states can go to it
class 3: [math]\displaystyle \{4\} \rightarrow [/math] This is an absorbing state, since it is a closed class containing only one element
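The decomposition can be checked mechanically. Below is a small sketch (assuming NumPy is available; states are 0-indexed as 0–3 rather than 1–4) that computes the communicating classes of a transition matrix via Warshall's transitive closure:

```python
import numpy as np

# Transition matrix from the example above (0-indexed states 0..3).
P = np.array([
    [1/3, 2/3, 0,   0  ],
    [2/3, 1/3, 0,   0  ],
    [1/4, 1/4, 1/4, 1/4],
    [0,   0,   0,   1  ],
])

def communicating_classes(P):
    """Group states into classes where i and j are mutually reachable."""
    n = len(P)
    R = (P > 0) | np.eye(n, dtype=bool)      # reach in 0 or 1 steps
    for k in range(n):                       # Warshall transitive closure
        R |= np.outer(R[:, k], R[k, :])
    comm = R & R.T                           # i <-> j (communication)
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = frozenset(int(s) for s in np.flatnonzero(comm[i]))
            classes.append(cls)
            seen |= cls
    return classes
```

Running this on the matrix above recovers the three classes [math]\{1, 2\}[/math], [math]\{3\}[/math] and [math]\{4\}[/math] (as {0, 1}, {2}, {3} in 0-indexed form).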

Recurrent and Transient states

A state i is called [math]\emph{recurrent}[/math] or [math]\emph{persistent}[/math] if

[math]\displaystyle Pr(x_{n}=i \text{ for some } n\geq 1 \mid x_{0}=i)=1 [/math]

That means that the probability of returning to state i, starting from state i, is 1.

If this is not the case, then we call the state i [math]\emph{transient}[/math].

It is straightforward to prove that a finite irreducible chain is recurrent.

Given a Markov chain, a state [math]\displaystyle i[/math] is [math]\emph{recurrent}[/math] if and only if [math]\displaystyle \sum_{\forall n}P_{ii}(n)=\infty[/math].
A state [math]\displaystyle i[/math] is [math]\emph{transient}[/math] if and only if [math]\displaystyle \sum_{\forall n}P_{ii}(n)\lt\infty[/math].


  • If [math]\displaystyle i[/math] is [math]\emph{recurrent}[/math] and [math]i\longleftrightarrow j[/math] then [math]\displaystyle j[/math] is [math]\emph{recurrent}[/math]
  • If [math]\displaystyle i[/math] is [math]\emph{transient}[/math] and [math]i\longleftrightarrow j[/math] then [math]\displaystyle j[/math] is [math]\emph{transient}[/math]
  • A finite Markov chain must have at least one recurrent state
  • The states of a finite, irreducible Markov chain are all recurrent (proved using the previous proposition and the fact that there is only one class in this kind of chain)

In the example above, states one and two form a closed set, so they are both recurrent states. State four is an absorbing state, so it is also recurrent. State three is transient.
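This classification can be checked numerically against the series criterion above. The following sketch (assuming NumPy; states 0-indexed) accumulates the partial sums [math]\sum_{n=1}^{N}P_{ii}(n)[/math] for the example matrix:

```python
import numpy as np

P = np.array([
    [1/3, 2/3, 0,   0  ],
    [2/3, 1/3, 0,   0  ],
    [1/4, 1/4, 1/4, 1/4],
    [0,   0,   0,   1  ],
])

def diagonal_partial_sums(P, N):
    """S[i] = sum_{n=1}^{N} (P^n)_{ii}: the truncated series whose
    divergence characterises recurrence of state i."""
    S = np.zeros(len(P))
    Pn = np.eye(len(P))
    for _ in range(N):
        Pn = Pn @ P
        S += np.diag(Pn)
    return S

S = diagonal_partial_sums(P, 200)
# Recurrent states (indices 0, 1, 3) have sums that grow with N;
# for the transient state (index 2), (P^n)_{22} = (1/4)^n, so its
# partial sum stays bounded by sum_{n>=1} (1/4)^n = 1/3.
```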

Let [math]\displaystyle X=\{\cdots,-2,-1,0,1,2,\cdots\}[/math] and suppose that [math]\displaystyle P_{i,i+1}=p [/math], [math]\displaystyle P_{i,i-1}=q=1-p[/math] and [math]\displaystyle P_{i,j}=0[/math] otherwise. This is the Random Walk that we have already seen in a previous lecture, except it extends infinitely in both directions.

We now examine some other properties of this particular Markov chain:

  • Since all states communicate, if one of them is recurrent then all states are recurrent. On the other hand, if one of them is transient, then all the others are transient too.
  • Consider now the case in which we start at state [math]\displaystyle 0[/math]. If we take n steps to the right (or to the left), the only way to return to [math]\displaystyle 0[/math] is to take n steps in the opposite direction.

[math]\displaystyle Pr(x_{2n}=0 \mid x_{0}=0)=P_{00}(2n)={2n \choose n}p^{n}q^{n}[/math] We now want to know whether this state is recurrent or transient or, equivalently, whether [math]\displaystyle \sum_{\forall n}P_{00}(2n)=\infty[/math] or not.

To proceed with the analysis, we use [math]\emph{Stirling's formula}[/math]:

[math]\displaystyle n!\sim n^{n}e^{-n}\sqrt{2\pi n}[/math]

The probability can therefore be approximated by:

[math]\displaystyle P_{00}(2n)\sim\frac{(4pq)^{n}}{\sqrt{\pi n}}[/math]

And the series becomes:

[math]\displaystyle \sum_{\forall n}P_{00}(2n)=\sum_{\forall n}\frac{(4pq)^{n}}{\sqrt{\pi n}}[/math]

We can conclude that if [math]\displaystyle 4pq \lt 1[/math], i.e. [math]\displaystyle p\neq\frac{1}{2}[/math], the series is dominated by a convergent geometric series and the state is transient; otherwise [math]\displaystyle 4pq=1[/math], the terms behave like [math]\displaystyle 1/\sqrt{\pi n}[/math], the series diverges, and the state is recurrent.

[math]\displaystyle \sum_{\forall n}P_{00}(2n)=\sum_{\forall n}\frac{(4pq)^{n}}{\sqrt{\pi n}} = \begin{cases} \infty, & \text{if } p = \frac{1}{2} \\ \lt \infty, & \text{if } p\neq \frac{1}{2} \end{cases}[/math]
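This dichotomy is easy to check numerically. The sketch below (plain Python) accumulates the partial sums [math]\sum_{n=1}^{N}{2n \choose n}p^{n}q^{n}[/math], computing each term from the previous one so that the binomial coefficient never overflows:

```python
def return_sum(p, N):
    """Partial sum of P_00(2n) = C(2n, n) p^n q^n for n = 1..N."""
    q, s, t = 1 - p, 0.0, 1.0          # t tracks C(2n, n) p^n q^n
    for n in range(1, N + 1):
        # C(2n, n) = C(2n-2, n-1) * (2n)(2n-1) / n^2
        t *= (2 * n) * (2 * n - 1) / (n * n) * p * q
        s += t
    return s

# p = 1/2: terms behave like 1/sqrt(pi*n), so the partial sums diverge.
# p = 0.6: 4pq = 0.96 < 1, and the series converges
#          (to 1/sqrt(1 - 4pq) - 1 = 4, by the binomial identity below).
```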

An alternative to Stirling's approximation is to use the generalized binomial theorem to get the following formula:

[math] \frac{1}{\sqrt{1 - 4x}} = \sum_{n=0}^{\infty} \binom{2n}{n} x^n [/math]

Then substitute in [math]x = pq[/math].

[math] \frac{1}{\sqrt{1 - 4pq}} = \sum_{n=0}^{\infty} \binom{2n}{n} p^n q^n = \sum_{n=0}^{\infty} P_{00}(2n) [/math]

So we reach the same conclusion: all states are recurrent iff [math]p = q = \frac{1}{2}[/math].
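The identity itself can be sanity-checked numerically at any [math]|x| \lt 1/4[/math], for instance [math]x = 0.2[/math] (a quick sketch in plain Python):

```python
from math import comb, sqrt

x = 0.2                                  # any |x| < 1/4 works
lhs = 1 / sqrt(1 - 4 * x)
rhs = sum(comb(2 * n, n) * x ** n for n in range(200))
# The partial sum agrees with the closed form to high precision,
# since the terms decay like (4x)^n = 0.8^n.
```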

Convergence of Markov chain

We define the [math]\emph{recurrence time}[/math] [math]\displaystyle T_{ij}[/math] as the minimum time to go from state i to state j. It is also possible that state j is never reached from state i:

[math]\displaystyle T_{ij}=\begin{cases} \min\{n\geq 1: x_{n}=j\}, & \text{if such an } n \text{ exists} \\ \infty, & \text{otherwise } \end{cases}[/math]

The mean recurrence time [math]\displaystyle m_{i}[/math] is defined as:

[math]m_{i}=\displaystyle E(T_{ii})=\sum_{n}nf_{ii}(n) [/math]

where [math]\displaystyle f_{ij}(n)=Pr(x_{1}\neq j,x_{2}\neq j,\cdots,x_{n-1}\neq j,x_{n}=j \mid x_{0}=i)[/math] is the probability that the first visit to state j, starting from state i, occurs at time n.

Using the objects we just introduced, we say that:

[math]\displaystyle \text{state } i=\begin{cases} \text{null}, & \text{if } m_{i}=\infty \\ \text{non-null or positive} , & \text{if } m_{i}\lt\infty \end{cases}[/math]

In a finite state Markov chain, all recurrent states are positive.
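For a finite irreducible chain the mean recurrence times can be computed explicitly, using the standard fact (not proved here) that [math]\displaystyle m_{i}=1/\pi_{i}[/math], where [math]\pi[/math] is the stationary distribution. A sketch (assuming NumPy) on the closed class [math]\{1, 2\}[/math] of the first example:

```python
import numpy as np

# Restriction of the first example to its closed class {1, 2}.
P = np.array([[1/3, 2/3],
              [2/3, 1/3]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi = pi / pi.sum()

m = 1 / pi          # mean recurrence times, m_i = 1 / pi_i
# By symmetry pi = (1/2, 1/2), so each state is revisited
# every 2 steps on average: m = (2, 2).
```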

Periodic and aperiodic Markov chain

A state [math]\displaystyle i[/math] has period [math]\displaystyle d[/math] if every return to state [math]\displaystyle i[/math] occurs in a multiple of [math]\displaystyle d[/math] steps, and [math]\displaystyle d[/math] is the largest integer with this property. A Markov chain is called [math]\emph{periodic}[/math] if its period is greater than 1; if the period equals 1, the chain is [math]\emph{aperiodic}[/math].

Consider the three-state chain:

[math]P=\left(\begin{matrix} 0&1&0\\ 0&0&1\\ 1&0&0 \end{matrix}\right)[/math]

It is evident that, starting from state 1, we return to it every 3 steps, and the same holds for the other two states. The chain is therefore periodic with period 3.

An irreducible Markov chain is called [math]\emph{aperiodic}[/math] if:

[math]\displaystyle Pr(x_{n}=j | x_{0}=i) \gt 0 \text{ and } Pr(x_{n+1}=j | x_{0}=i) \gt 0 \text{ for some } n\ge 0 [/math]

Consider the chain [math]P=\left(\begin{matrix} 0&0.5&0&0.5\\ 0.5&0&0.5&0\\ 0&0.5&0&0.5\\ 0.5&0&0.5&0\\ \end{matrix}\right)[/math]

This chain is periodic, since every return to state i from state i must take a multiple of 2 steps.
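Both periods can be verified mechanically by computing, for a state i, the gcd of all return times n with [math]\displaystyle P_{ii}(n) \gt 0[/math]. A sketch (assuming NumPy; states 0-indexed):

```python
from math import gcd
import numpy as np

def period(P, i, max_n=50):
    """gcd of all n <= max_n with (P^n)_{ii} > 0: the period of state i
    (max_n just needs to be large enough to expose the gcd)."""
    d, Pn = 0, np.eye(len(P))
    for n in range(1, max_n + 1):
        Pn = Pn @ P
        if Pn[i, i] > 1e-12:
            d = gcd(d, n)
    return d

# The three-state cycle above has period 3 ...
P3 = np.array([[0, 1, 0],
               [0, 0, 1],
               [1, 0, 0]], dtype=float)

# ... and the four-state chain has period 2.
P4 = np.array([[0,   0.5, 0,   0.5],
               [0.5, 0,   0.5, 0  ],
               [0,   0.5, 0,   0.5],
               [0.5, 0,   0.5, 0  ]])
```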