Chapter 4 Conditions for Recurrent and Transient States (Lecture on 01/14/2021)
We are interested in the first passage time of the chain to a given state. Define \(f_{ij}(n)=P(X_1\neq j,\cdots,X_{n-1}\neq j,X_n=j|X_0=i)\), the probability that the first visit to state \(j\), starting from state \(i\), takes place at time \(n\). Define \(f_{ij}=\sum_{n=1}^{\infty}f_{ij}(n)\), the probability that state \(j\) is eventually visited from state \(i\). We would like to know under which conditions \(f_{ij}=1\); in other words, we would like to express such conditions in terms of the \(n\)-step transition probabilities \(p_{ij}(n)\).
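To make these quantities concrete, the sketch below computes \(f_{ij}(n)\) numerically for a small hypothetical three-state chain (the matrix \(P\) is an illustration, not one from the lecture): visits to \(j\) before the final step are forbidden by zeroing out the \(j\)-th column of \(P\).

```python
import numpy as np

# A small hypothetical 3-state chain (illustrative only); rows sum to 1.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

def first_passage(P, i, j, N):
    """Return [f_ij(1), ..., f_ij(N)].

    f_ij(n) is obtained by propagating the chain while forbidding
    visits to j before the final step (a "taboo" computation).
    """
    Q = P.copy()
    Q[:, j] = 0.0                      # paths may not enter j early
    v = np.zeros(P.shape[0])
    v[i] = 1.0                         # start at state i
    f = []
    for n in range(1, N + 1):
        f.append(v @ P[:, j])          # final step jumps into j
        v = v @ Q                      # extend the j-avoiding path by one step
    return f

f = first_passage(P, 0, 1, 200)
print(f[0])        # f_01(1) = p_01 = 0.3
print(sum(f))      # approximates f_01; close to 1 for this irreducible chain
```

Summing the \(f_{ij}(n)\) approximates \(f_{ij}\); for this finite irreducible chain the sum approaches 1, i.e. state 1 is eventually visited from state 0 almost surely.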
Recall the generating functions \(P_{ij}(s)=\sum_{n=0}^{\infty}s^np_{ij}(n)\) and \(F_{ij}(s)=\sum_{n=0}^{\infty}s^nf_{ij}(n)\), with the conventions \(p_{ij}(0)=\delta_{ij}\) and \(f_{ij}(0)=0\). We will also use \(\lim_{s\uparrow 1}P_{ij}(s)\) to evaluate \(P_{ij}(1)\), which is valid by Abel’s theorem.
Theorem 4.1 For the generating functions defined above, we have
- \(P_{ii}(s)=1+F_{ii}(s)P_{ii}(s)\);
- \(P_{ij}(s)=F_{ij}(s)P_{jj}(s)\) if \(i\neq j\).
Proof. Fix \(i,j\in S\) and let \(A_m=\{X_m=j\}\). Let \(B_r\) be the event that the first visit to state \(j\) (after time 0) takes place at time \(r\). The events \(B_1,\cdots,B_m\) are disjoint, and \(A_m\subseteq B_1\cup\cdots\cup B_m\), since if \(X_m=j\) then the first visit to \(j\) must occur at some time \(r\leq m\). Using this decomposition and the Markov property, \[\begin{equation} \begin{split} P(A_m|X_0=i)&=\sum_{r=1}^{m}P(A_m\cap B_r|X_0=i) \quad [A_m\subseteq B_1\cup\cdots\cup B_m]\\ &=\sum_{r=1}^mP(A_m|B_r,X_0=i)P(B_r|X_0=i)\\ &=\sum_{r=1}^mP(A_m|X_r=j,X_0=i)P(B_r|X_0=i)\\ &=\sum_{r=1}^mP(A_m|X_r=j)P(X_1\neq j,\cdots,X_{r-1}\neq j,X_r=j|X_0=i)\\ &=\sum_{r=1}^mP(X_m=j|X_r=j)f_{ij}(r)\\ &=\sum_{r=1}^mp_{jj}(m-r)f_{ij}(r) \end{split} \tag{4.1} \end{equation}\]
On the other hand, by definition, \[\begin{equation} P(A_m|X_0=i)=P(X_m=j|X_0=i)=p_{ij}(m) \tag{4.2} \end{equation}\]
Therefore, from (4.1) and (4.2), we can obtain \[\begin{equation} p_{ij}(m)=\sum_{r=1}^mp_{jj}(m-r)f_{ij}(r) \tag{4.3} \end{equation}\]
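Identity (4.3) also gives a practical way to compute first-passage probabilities from the easily computed \(n\)-step transition probabilities, by solving for the newest term: \(f_{ij}(m)=p_{ij}(m)-\sum_{r=1}^{m-1}f_{ij}(r)p_{jj}(m-r)\). A minimal sketch, using a hypothetical two-state chain for which \(f_{01}(n)=(1-a)^{n-1}a\) can be found by hand (stay at 0 for \(n-1\) steps, then jump):

```python
import numpy as np

# Hypothetical two-state chain: 0 -> 1 with prob a, 1 -> 0 with prob b.
a, b = 0.3, 0.6
P = np.array([[1 - a, a],
              [b, 1 - b]])

def f_from_p(P, i, j, N):
    """Compute f_ij(1..N) by inverting the convolution (4.3):
    f_ij(m) = p_ij(m) - sum_{r=1}^{m-1} f_ij(r) p_jj(m-r)."""
    powers = [np.eye(len(P))]
    for _ in range(N):
        powers.append(powers[-1] @ P)   # powers[n] = P^n, so p_ij(n) = powers[n][i, j]
    f = [0.0] * (N + 1)
    for m in range(1, N + 1):
        f[m] = powers[m][i, j] - sum(f[r] * powers[m - r][j, j] for r in range(1, m))
    return f[1:]

f = f_from_p(P, 0, 1, 10)
# Cross-check against the hand-derived formula f_01(n) = (1-a)^(n-1) * a
direct = [(1 - a) ** (n - 1) * a for n in range(1, 11)]
print(max(abs(x - y) for x, y in zip(f, direct)))   # round-off only
```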
Thus, \[\begin{equation} \sum_{m=1}^{\infty}s^mp_{ij}(m)=\sum_{m=1}^{\infty}s^m\left[\sum_{r=1}^mp_{jj}(m-r)f_{ij}(r)\right] \tag{4.4} \end{equation}\] and, since \(p_{ij}(0)=\delta_{ij}\), this is just \[\begin{equation} P_{ij}(s)=\delta_{ij}+F_{ij}(s)P_{jj}(s) \tag{4.5} \end{equation}\] Finally, if \(i=j\), we have \(P_{ii}(s)=1+F_{ii}(s)P_{ii}(s)\), i.e. \(P_{ii}(s)=\frac{1}{1-F_{ii}(s)}\); when \(i\neq j\), \(P_{ij}(s)=F_{ij}(s)P_{jj}(s)\).

More detailed proof from (4.4) to (4.5):
By collecting terms of \(s^k\) we have \[\begin{equation} \begin{split} F_{ij}(s)P_{jj}(s) &= (\sum_{m=0}^\infty s^m f_{ij}(m))(\sum_{l=0}^\infty s^l p_{jj}(l)) \\ &= f_{ij}(0)p_{jj}(0)s^0+\sum_{m+l=1}f_{ij}(m)p_{jj}(l)s+\cdots+\sum_{m+l=k}f_{ij}(m)p_{jj}(l)s^k+\cdots \\ &= 0 + \sum_{k=1}^\infty \sum_{m=0}^k f_{ij}(m)p_{jj}(k-m)s^k \\ &= 0 + \sum_{k=1}^\infty \sum_{m=1}^k f_{ij}(m)p_{jj}(k-m)s^k \end{split} \end{equation}\] Notice that since \(f_{ij}(0)=0\), effectively \(m\) starts from 1. Then using (4.3) we have \[\begin{equation} \begin{split} P_{ij}(s)&=p_{ij}(0)+\sum_{n=1}^{\infty}s^np_{ij}(n)\\ &=\delta_{ij}+\sum_{n=1}^\infty s^n\sum_{m=1}^n f_{ij}(m)p_{jj}(n-m)\\ &=\delta_{ij}+F_{ij}(s)P_{jj}(s) \end{split} \end{equation}\] that is, equation (4.5).

Corollary 4.1
- (a) State \(j\) is persistent (recurrent) if \(\sum_{n=1}^{\infty}p_{jj}(n)=\infty\).
- (b) State \(j\) is transient if \(\sum_{n=1}^{\infty}p_{jj}(n)<\infty\).
Proof. (a) Since \(P_{jj}(s)=\frac{1}{1-F_{jj}(s)}\), taking \(\lim_{s\uparrow 1}\) on both sides and applying Abel’s theorem, we have \[\begin{equation} \lim_{s\uparrow 1}P_{jj}(s)=\frac{1}{1-\lim_{s\uparrow 1}F_{jj}(s)} \tag{4.6} \end{equation}\]
Thus, \(\sum_{n=0}^{\infty}p_{jj}(n)=\frac{1}{1-\sum_{n=1}^{\infty}f_{jj}(n)}=\frac{1}{1-f_{jj}}\). Therefore, if \(j\) is a recurrent state, i.e. \(f_{jj}=1\), the right-hand side is infinite, which implies \(\sum_{n=1}^{\infty}p_{jj}(n)=\infty\).
- (b) Since \(\sum_{n=0}^{\infty}p_{jj}(n)=\frac{1}{1-f_{jj}}\), if \(j\) is transient then \(f_{jj}<1\), and hence \(\sum_{n=1}^{\infty}p_{jj}(n)<\infty\).
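The identity \(\sum_{n=0}^{\infty}p_{jj}(n)=\frac{1}{1-f_{jj}}\) can be checked numerically on a chain with a transient state. In the hypothetical two-state chain below (an illustration, not from the lecture), state 0 either stays put with probability \(\frac{1}{2}\) or jumps to an absorbing state 1, so \(f_{00}=\frac{1}{2}\) and the sum should equal 2.

```python
import numpy as np

# Hypothetical chain: state 0 stays w.p. 1/2 or jumps to absorbing state 1.
P = np.array([[0.5, 0.5],
              [0.0, 1.0]])

f00 = 0.5                   # the only way to return to 0 is the 1-step loop
total, M = 0.0, np.eye(2)
for n in range(200):        # sum_{n=0}^{199} p_00(n); terms decay like 2^-n
    total += M[0, 0]        # p_00(n) = (P^n)[0, 0]
    M = M @ P

print(total)                # ≈ 2 = 1 / (1 - f00)
```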
Example 4.1 (One-dimensional Random Walk) Is state 0 persistent or transient for the one-dimensional random walk? Let us check whether \(\sum_{n=1}^{\infty}p_{00}(n)=\infty\).
From Lecture 3, we have \(p_{00}(2n)={{2n} \choose n}p^n(1-p)^n\) and \(p_{00}(2n+1)=0\). Therefore, to show that \(0\) is a persistent state, we need to show \(\sum_{n=1}^{\infty}p_{00}(2n)=\infty\). \[\begin{equation} \sum_{n=1}^{\infty}p_{00}(2n)=\sum_{n=1}^{\infty}{{2n} \choose n}(p(1-p))^n=\sum_{n=1}^{\infty}\frac{(2n)!}{n!n!}(p(1-p))^n \tag{4.7} \end{equation}\]
Using the Stirling approximation \(n!\approx n^{n+1/2}e^{-n}\sqrt{2\pi}\), we have \[\begin{equation} \frac{(2n)!}{n!n!}\approx\frac{(2n)^{2n+1/2}e^{-2n}\sqrt{2\pi}}{(n^{n+1/2}e^{-n}\sqrt{2\pi})^2}=\frac{4^n}{\sqrt{\pi}}\frac{1}{\sqrt{n}} \tag{4.8} \end{equation}\] Therefore, \[\begin{equation} p_{00}(2n)={{2n} \choose n}(p(1-p))^n\approx\frac{4^n}{\sqrt{\pi}\sqrt{n}}(p(1-p))^n=\frac{[4p(1-p)]^n}{\sqrt{\pi}\sqrt{n}} \end{equation}\] and now \(\sum_{n=1}^{\infty}p_{00}(2n)\approx\sum_{n=1}^{\infty}\frac{[4p(1-p)]^n}{\sqrt{\pi}\sqrt{n}}\), where \(4p(1-p)=(p+(1-p))^2-(p-(1-p))^2=1-(p-(1-p))^2\leq 1\). If \(p\neq 1-p\), we have \(4p(1-p)<1\); since the geometric-type series \(\sum_{n=1}^{\infty}\frac{[4p(1-p)]^n}{\sqrt{\pi}\sqrt{n}}<\infty\), we finally obtain \(\sum_{n=1}^{\infty}p_{00}(2n)<\infty\).
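As a quick numerical sanity check on (4.8) (an illustrative script, not part of the lecture), one can compare \(\binom{2n}{n}\) with \(4^n/\sqrt{\pi n}\) for a few values of \(n\):

```python
import math

# Accuracy of the Stirling-based estimate C(2n, n) ≈ 4^n / sqrt(pi * n).
for n in [5, 10, 50, 100]:
    exact = math.comb(2 * n, n)
    approx = 4 ** n / math.sqrt(math.pi * n)
    print(n, exact / approx)    # ratio approaches 1 as n grows
```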
Therefore, if \(p\neq 1-p\), state 0 is transient. This makes sense intuitively: when the probabilities of moving up and down differ, the chain drifts in the direction of the larger probability and is less likely to return to 0.
When \(p=1-p\), we have \(4p(1-p)=1\), which implies \(\sum_{n=1}^{\infty}p_{00}(2n)\approx\sum_{n=1}^{\infty}\frac{1}{\sqrt{\pi}\sqrt{n}}=\infty\), because \(\sum_{n=1}^{\infty}\frac{1}{n^{1+\alpha}}<\infty\) if and only if \(\alpha>0\), and here the exponent is only \(\frac{1}{2}\). Therefore, if \(p=1-p=\frac{1}{2}\), 0 is a recurrent state.
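The dichotomy can also be seen numerically. The sketch below evaluates the partial sums \(\sum_{n=1}^{N}p_{00}(2n)\) exactly, building each term iteratively (via \(\binom{2n}{n}/\binom{2n-2}{n-1}=\frac{2(2n-1)}{n}\)) to avoid overflow. For \(p\neq\frac{1}{2}\), the known generating function \(\sum_{n\geq 0}\binom{2n}{n}x^n=(1-4x)^{-1/2}\) gives the closed form \(\frac{1}{|2p-1|}-1\) for the full series, while for \(p=\frac{1}{2}\) the partial sums grow like \(2\sqrt{N/\pi}\).

```python
def sum_p00_even(p, N):
    """Partial sum S(N) = sum_{n=1}^N p_00(2n), with p_00(2n) = C(2n, n) (p(1-p))^n.

    Terms are built iteratively using C(2n, n) / C(2n-2, n-1) = 2(2n-1)/n,
    so no huge binomial coefficients are ever formed."""
    x = p * (1 - p)
    term, s = 1.0, 0.0              # term = C(2n, n) x^n, starting at n = 0
    for n in range(1, N + 1):
        term *= x * 2 * (2 * n - 1) / n
        s += term
    return s

print(sum_p00_even(0.3, 500))       # converges to 1/|2p-1| - 1 = 1.5 (transient)
print(sum_p00_even(0.5, 100))       # still growing ...
print(sum_p00_even(0.5, 10000))     # ... like 2*sqrt(N/pi): diverges (recurrent)
```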
Example 4.2 (Two-dimensional Random Walk) In a two-dimensional random walk, at each time step the chain can move up, down, left, or right. Suppose the four directions have the same probability \(\frac{1}{4}\). Is 0 a recurrent state?
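The question can be explored numerically before it is answered. The sketch below uses the known closed form \(p_{00}(2n)=\left[\binom{2n}{n}2^{-2n}\right]^2\) for the symmetric two-dimensional walk (the two diagonal coordinates behave as independent one-dimensional symmetric walks), together with (4.3) rearranged as \(f_{00}(m)=p_{00}(m)-\sum_{r=1}^{m-1}f_{00}(r)p_{00}(m-r)\), to track both the partial sums of \(p_{00}\) and the cumulative return probability. The partial sums keep growing, roughly like \(\frac{1}{\pi}\log N\), which by Corollary 4.1 points toward 0 being recurrent (this is Pólya's theorem).

```python
N = 1000                                 # number of even times 2n considered

# q[n] = p_00(2n) for the symmetric 2D walk: [C(2n, n) / 4^n]^2.
a = 1.0                                  # a = C(2n, n) / 4^n, starting at n = 0
q = [1.0]
for n in range(1, N + 1):
    a *= (2 * n - 1) / (2 * n)           # C(2n,n)/4^n = C(2n-2,n-1)/4^(n-1) * (2n-1)/(2n)
    q.append(a * a)

# First-return probabilities g[n] = f_00(2n) from the inverted convolution
# (odd-time probabilities vanish, so we can work in units of 2 steps):
# g[n] = q[n] - sum_{r=1}^{n-1} g[r] q[n-r].
g = [0.0] * (N + 1)
for n in range(1, N + 1):
    g[n] = q[n] - sum(g[r] * q[n - r] for r in range(1, n))

S = sum(q[1:])    # partial sum of p_00(2n): keeps growing (≈ log N / pi)
F = sum(g[1:])    # P(return to 0 within 2N steps): creeps upward with N
print(S, F)
```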