Chapter 9 Stationary Distribution of Markov Chain (Lecture on 02/02/2021)
Previously we discussed irreducibility, aperiodicity, persistence, non-null persistence, and an application of stochastic processes. Now we turn to the stationary distribution and the limiting distribution of a stochastic process.
How does a Markov chain behave after a long time \(n\) has elapsed?
The sequence \(\{X_n\}_{n\geq 1}\) cannot generally converge to some particular state in the state space, since it is subject to the random fluctuations specified by the transition matrix. However, we might hold out some hope that the distribution of \(X_n\) settles down. Indeed, subject to certain conditions, this turns out to be the case.
That the distribution settles down means that, as \(n\to\infty\), the marginal distribution of \(X_n\) is the same as the marginal distribution of \(X_{n+1}\).
We shall see that the existence of a limiting distribution for \(X_n\) as \(n\to \infty\) is closely tied to the existence of a stationary distribution.

Definition 9.1 (Stationary distribution) The vector \(\boldsymbol{\pi}\) is called a stationary distribution of the chain if \(\boldsymbol{\pi}\) has entries \((\pi_j:j\in\mathcal{S})\) such that:
1. \(\pi_j\geq 0,\forall j\), and \(\sum_j\pi_j=1\);
2. \(\boldsymbol{\pi}=\boldsymbol{\pi}\mathbf{P}\), where \(\mathbf{P}\) is the transition matrix of the chain. Thus, \(\pi_j=\sum_i\pi_ip_{ij},\forall j\).
Such a distribution is called stationary because \(\boldsymbol{\pi}\mathbf{P}^2=(\boldsymbol{\pi}\mathbf{P})\mathbf{P}=\boldsymbol{\pi}\mathbf{P}=\boldsymbol{\pi}\) and, similarly, \(\boldsymbol{\pi}\mathbf{P}^n=\boldsymbol{\pi}\) for all \(n\geq 0\).
Note that for a time-homogeneous Markov chain, if \(X_0\) has marginal distribution \(\boldsymbol{\pi}\), then \(X_n\) has marginal distribution \(\boldsymbol{\pi}\mathbf{P}^n\). Now if \(\boldsymbol{\pi}\mathbf{P}^n=\boldsymbol{\pi}\), every \(X_n\) has the same marginal distribution. This suggests that once the chain hits the stationary distribution, meaning that the marginal distribution of the chain becomes the stationary distribution, the marginal distribution of the chain remains the stationary distribution thereafter.
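This invariance is easy to check numerically. Below is a minimal sketch, assuming a hypothetical two-state transition matrix chosen only for illustration: it recovers \(\boldsymbol{\pi}\) as a left eigenvector of \(\mathbf{P}\) with eigenvalue 1 and verifies that \(\boldsymbol{\pi}\mathbf{P}^n=\boldsymbol{\pi}\) for several values of \(n\).

```python
import numpy as np

# Hypothetical two-state chain, used only for illustration.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# pi = pi P means pi is a left eigenvector of P with eigenvalue 1,
# i.e., a (right) eigenvector of P transpose.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()                    # normalize so the entries sum to 1
print(pi)                             # [0.8, 0.2] for this P

for n in [1, 5, 50]:
    print(n, pi @ np.linalg.matrix_power(P, n))   # stays equal to pi
```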
Theorem 9.1 (Renewal Theorem) An irreducible chain has a stationary distribution \(\boldsymbol{\pi}\) if and only if all states are non-null persistent. In this case, \(\boldsymbol{\pi}\) is the unique stationary distribution and is given by \(\pi_i=\frac{1}{\mu_i}\) for each \(i\in\mathcal{S}\), where \(\mu_i\) is the mean recurrence time for state \(i\).
This is an intuitive result in the following sense. The quantity \(\mu_i\) is the average number of steps between successive visits of the Markov chain to state \(i\). It should be connected with the probability of visiting that state: empirically, if the chain visits state \(i\) once every \(\mu_i\) steps on average, then the long-run fraction of time it spends in state \(i\), which is \(\pi_i\), should be one over \(\mu_i\).
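As a sanity check of this intuition (a sketch, not part of the lecture, reusing the hypothetical two-state chain from above, whose stationary distribution is \((0.8, 0.2)\)), we can estimate the mean recurrence time \(\mu_0\) of state 0 by simulation and compare it with \(1/\pi_0=1.25\):

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])           # stationary distribution (0.8, 0.2)

def recurrence_time(start, P, rng):
    """Number of steps until the chain first returns to `start`."""
    state, steps = start, 0
    while True:
        state = rng.choice(len(P), p=P[state])
        steps += 1
        if state == start:
            return steps

times = [recurrence_time(0, P, rng) for _ in range(20000)]
print(np.mean(times))                # close to mu_0 = 1/pi_0 = 1.25
```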
Note that we have already proved the following two results:
(a) Decomposition theorem (Theorem 7.1): \(\mathcal{S}=T\cup C_1\cup C_2\cup\cdots\), where \(T\) is the set of transient states and \(C_1,C_2,\cdots\) are sets of intercommunicating persistent states. Every state within each \(C_i\) has the same properties.
(b) Lemma 7.1: For a Markov chain with finite state space, at least one state is non-null persistent.
If we consider a finite Markov chain, we can look at the intercommunication patterns to find a decomposition of \(\mathcal{S}\) (we have discussed one example, see Example 7.1). Each persistent group of states is non-null persistent. The reason is that, restricted to such a group, the chain has a finite state space, so by result (b) above at least one state in the group is non-null persistent, and since all states in the group intercommunicate, all of them are non-null persistent. Thus, within every group of persistent states, there exists a unique stationary distribution.
If the entire state space of a Markov chain is irreducible, we can find a unique stationary distribution. When the entire state space is not irreducible, we have to use the decomposition theorem and find a stationary distribution for every persistent group of states.
Example 9.1 Let \(\mathcal{S}=\{1,2,3,4,5,6\}\) and the transition probability matrix is given by \[\begin{equation} \begin{pmatrix} \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 & 0 \\ \frac{1}{4} & \frac{3}{4} & 0 & 0 & 0 & 0 \\ \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & 0 & 0 \\ \frac{1}{4} & 0 & \frac{1}{4} & \frac{1}{4} & 0 & \frac{1}{4}\\ 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \\ 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \end{pmatrix} \tag{9.1} \end{equation}\]
We have shown previously (Example 7.1) that the state space of this Markov chain can be decomposed into the set of transient states \(T=\{3,4\}\) and two sets of persistent states \(C_1=\{1,2\}\) and \(C_2=\{5,6\}\).
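This decomposition can also be found mechanically. The sketch below (assuming scipy is available; it is not part of the lecture) computes the strongly connected components of the transition graph, which are exactly the intercommunicating classes, and flags a class as persistent when no probability mass leaves it:

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

# Transition matrix (9.1) from Example 9.1.
P = np.array([
    [1/2, 1/2,   0,   0,   0,   0],
    [1/4, 3/4,   0,   0,   0,   0],
    [1/4, 1/4, 1/4, 1/4,   0,   0],
    [1/4,   0, 1/4, 1/4,   0, 1/4],
    [  0,   0,   0,   0, 1/2, 1/2],
    [  0,   0,   0,   0, 1/2, 1/2],
])

# Strongly connected components of the directed graph with an edge
# i -> j whenever p_ij > 0 are the intercommunicating classes.
n_comp, labels = connected_components(P > 0, connection='strong')
for c in range(n_comp):
    members = labels == c
    states = np.where(members)[0] + 1                  # 1-based labels
    closed = np.allclose(P[members][:, ~members], 0)   # no mass leaves
    print(states, 'persistent' if closed else 'transient')
```

This reproduces \(T=\{3,4\}\), \(C_1=\{1,2\}\), and \(C_2=\{5,6\}\).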
Now let us find the stationary distribution of \(C_1\). We need to find \(\boldsymbol{\pi}=(\pi_1,\pi_2)\) such that \[\begin{equation} \begin{pmatrix} \pi_1 & \pi_2 \end{pmatrix}=\begin{pmatrix} \pi_1 & \pi_2 \end{pmatrix}\begin{pmatrix} \frac{1}{2} & \frac{1}{2}\\ \frac{1}{4} & \frac{3}{4} \end{pmatrix} \tag{9.2} \end{equation}\] Therefore, we have \[\begin{equation} \left\{\begin{aligned} &\pi_1=\frac{1}{2}\pi_1+\frac{1}{4}\pi_2\\ &\pi_2=\frac{1}{2}\pi_1+\frac{3}{4}\pi_2\\ &\pi_1+\pi_2=1\end{aligned} \right.\Longleftrightarrow \left\{\begin{aligned} & \pi_1=\frac{1}{3} \\ &\pi_2=\frac{2}{3} \end{aligned}\right. \tag{9.3} \end{equation}\]
Note that in Example 7.1 we calculated that the mean recurrence time for state 1 is 3. Using Theorem 9.1, we could actually get the stationary distribution of this subchain directly from the result of Example 7.1: \(\pi_1=1/\mu_1=1/3\), and hence \(\pi_2=2/3\).
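The same computation can be done by solving the linear system directly. A minimal sketch (assuming numpy): replace one balance equation of \(\boldsymbol{\pi}(\mathbf{P}-\mathbf{I})=\mathbf{0}\) by the normalization constraint \(\pi_1+\pi_2=1\) and solve:

```python
import numpy as np

# Restriction of (9.1) to the persistent class C1 = {1, 2}.
P1 = np.array([[1/2, 1/2],
               [1/4, 3/4]])

# pi = pi P1 is equivalent to (P1^T - I) pi^T = 0; keep its first row
# and append the normalization constraint pi_1 + pi_2 = 1.
A = np.vstack([(P1.T - np.eye(2))[0], np.ones(2)])
b = np.array([0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)        # [1/3, 2/3], so mu_1 = 1/pi_1 = 3, as in Example 7.1
```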
Example 9.2 (Random walk with retaining barrier) A particle performs a random walk on the non-negative integers with a retaining barrier at 0. The transition probabilities are \(p_{00}=q\), \(p_{i,i+1}=p, \forall i\geq 0\), and \(p_{i,i-1}=q\) for \(i\geq 1\), where \(p\) and \(q\) satisfy \(p+q=1\).
This chain is obviously irreducible since all states intercommunicate. Now let us find the stationary distribution. By definition, the stationary distribution \(\boldsymbol{\pi}\) satisfies \(\boldsymbol{\pi}=\boldsymbol{\pi}\mathbf{P}\). Therefore, we have \[\begin{equation} \begin{pmatrix} \pi_0 & \pi_1 & \cdots \end{pmatrix} \begin{pmatrix} q & p & 0 & 0 & 0 & \cdots\\ q & 0 & p & 0 & 0 & \cdots\\ 0 & q & 0 & p & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix} =\begin{pmatrix} \pi_0 & \pi_1 & \cdots \end{pmatrix} \tag{9.4} \end{equation}\]
We get the following system of equations: \[\begin{equation} \begin{split} &\pi_0 q+\pi_1 q=\pi_0 \quad\quad\quad (1)\\ &\pi_0 p+\pi_2 q=\pi_1 \quad\quad\quad (2)\\ &\cdots \cdots \cdots\\ &\pi_{n-1} p+\pi_{n+1} q=\pi_n \quad\quad\quad (n+1)\\ &\cdots \cdots \cdots\\ \end{split} \end{equation}\]
From \((1)\) we have \(\pi_1q=\pi_0(1-q)\), or \(\pi_1=\frac{p}{q}\pi_0\). From \((2)\), we obtain \(\pi_0p+\pi_2q=\pi_1=\frac{p}{q}\pi_0\), and therefore \(\pi_2=(\frac{p}{q})^2\pi_0\). By induction, we can prove that \(\pi_n=(\frac{p}{q})^n\pi_0\): assume \(\pi_{n-1}=(\frac{p}{q})^{n-1}\pi_0\) and \(\pi_n=(\frac{p}{q})^n\pi_0\), and consider \(\pi_{n+1}\). From equation \((n+1)\), we have \((\frac{p}{q})^{n-1}\pi_0p+\pi_{n+1}q=(\frac{p}{q})^n\pi_0\), from which, using \(p+q=1\), we obtain \(\pi_{n+1}=(\frac{p}{q})^{n+1}\pi_0\).
In addition, since \(\sum_{n=0}^{\infty}\pi_n=1\), we have \(\pi_0[1+\sum_{n=1}^{\infty}(\frac{p}{q})^n]=1\). The sum \(1+\sum_{n=1}^{\infty}(\frac{p}{q})^n\) is finite if and only if \(\frac{p}{q}<1\). Hence, there is no stationary distribution when \(\frac{p}{q}\geq1\). This also tells us that the chain is not non-null persistent when \(\frac{p}{q}\geq1\). This is an intuitively correct result: when \(p>q\), there is less force pulling the chain back toward 0, and the chain has a propensity to drift off to infinity.
When \(\frac{p}{q}<1\), we have \(\pi_0=1-\frac{p}{q}\), which implies that \(\pi_n=(\frac{p}{q})^n(1-\frac{p}{q})\). The stationary distribution is obtained in a nice closed form.
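As a quick numerical check (a sketch with \(p=0.3\) chosen arbitrarily, not part of the lecture), we can verify that this geometric distribution satisfies the balance equations above:

```python
# Check that pi_n = (p/q)^n (1 - p/q) solves the balance equations.
p = 0.3
q = 1 - p
r = p / q                              # requires r < 1

pi = [(r ** n) * (1 - r) for n in range(200)]

# Equation (1): pi_0 q + pi_1 q = pi_0
assert abs(pi[0] * q + pi[1] * q - pi[0]) < 1e-12
# Equation (n+1): pi_{n-1} p + pi_{n+1} q = pi_n for n >= 1
for n in range(1, 198):
    assert abs(pi[n - 1] * p + pi[n + 1] * q - pi[n]) < 1e-12

print(sum(pi))                         # ~1 (truncated geometric series)
```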
This random walk is different from the simple random walk we discussed before: it does not allow the chain to visit negative integers. When the chain hits state 0, it can only remain there with probability \(q\), or go up with probability \(p\). All states in the simple random walk have period 2, while this is not true for this random walk. In fact, every state in this chain is aperiodic. The reason is that all states intercommunicate and state 0 is aperiodic, since the chain can remain at 0 with positive probability.
Now we will establish the link between the stationary distribution and the limiting distribution of a Markov chain. Thus, we will try to characterize \(\lim_{n\to\infty}p_{ij}(n)\). While doing this, periodicity of the chain can pose some issues. For example, suppose the transition matrix of a Markov chain is given by \[\begin{equation} \mathbf{P}=\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \tag{9.5} \end{equation}\] then we have \[\begin{equation} p_{11}(n)=p_{22}(n)=\left\{\begin{aligned} & 0 & n\,\text{is odd}\\ & 1 & n\,\text{is even}\end{aligned}\right. \tag{9.6} \end{equation}\] Therefore, \(\{p_{ii}(n)\}_{n\geq 1}\) is an alternating sequence and \(\lim_{n\to\infty}p_{ii}(n)\) does not exist.
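The alternation is immediate to see by computing powers of \(\mathbf{P}\) (a short sketch, assuming numpy):

```python
import numpy as np

# The period-2 chain from (9.5): the two states swap at every step.
P = np.array([[0, 1],
              [1, 0]])

for n in range(1, 7):
    print(n, np.linalg.matrix_power(P, n)[0, 0])   # 0, 1, 0, 1, 0, 1
```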
To avoid such complications, when we try to characterize \(\lim_{n\to\infty}p_{ij}(n)\), we will only deal with irreducible, aperiodic chains.