# Chapter 4 Conditions for Recurrent and Transient States (Lecture on 01/14/2021)

We are interested in the first passage time of the chain, denoted as $$f_{ij}(n)=P(X_1\neq j,\cdots,X_{n-1}\neq j,X_n=j|X_0=i)$$. This is the probability that the first visit to state $$j$$ starting from state $$i$$ takes place at time $$n$$. Define $$f_{ij}=\sum_{n=1}^{\infty}f_{ij}(n)$$, which is the probability that state $$j$$ is eventually visited from state $$i$$. We would like to know the condition under which $$f_{ij}=1$$. In other words, we would like to express such a condition in terms of the $$n$$-step transition probabilities $$p_{ij}(n)$$.
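To make the definition concrete, the probabilities $$f_{ij}(n)$$ can be computed by first-step analysis: $$f_{ij}(1)=p_{ij}$$ and $$f_{ij}(n)=\sum_{k\neq j}p_{ik}f_{kj}(n-1)$$ for $$n\geq 2$$. A small numerical sketch (the two-state matrix `P` below is a made-up illustration, not from the notes):

```python
import numpy as np

# Made-up two-state chain for illustration.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

def first_passage(P, i, j, N):
    """Return [f_ij(1), ..., f_ij(N)] via the first-step recursion
    f_ij(n) = sum_{k != j} p_ik * f_kj(n-1), with f_ij(1) = p_ij."""
    S = P.shape[0]
    f = np.zeros((N + 1, S))          # f[n, k] holds f_kj(n)
    f[1, :] = P[:, j]
    for n in range(2, N + 1):
        for k in range(S):
            f[n, k] = sum(P[k, m] * f[n - 1, m] for m in range(S) if m != j)
    return f[1:, i]

f01 = first_passage(P, 0, 1, 200)
print(f01.sum())   # f_01 = sum_n f_01(n); close to 1 for this chain
```

For this irreducible two-state chain every state is eventually visited, so the sum of the first-passage probabilities is 1.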

Definition 4.1 (Generating Functions) Define $$P_{ij}(s)=\sum_{n=0}^{\infty}s^np_{ij}(n)$$ and $$F_{ij}(s)=\sum_{n=0}^{\infty}s^nf_{ij}(n)$$, with the conventions $$P_{ij}(0)=\delta_{ij}$$ and $$F_{ij}(0)=0$$, where $$\delta_{ij}=\begin{cases}1 & i=j\\ 0 & \text{otherwise}\end{cases}$$. With this representation, we have $$f_{ij}=\sum_{n=0}^{\infty}f_{ij}(n)=F_{ij}(1)$$. We usually assume $$|s|<1$$ so that both series converge.

We will also use $$\lim_{s\uparrow 1}P_{ij}(s)$$ to describe $$P_{ij}(1)$$, which is valid by Abel’s theorem.

Theorem 4.1 For the generating functions defined above, we have

1. $$P_{ii}(s)=1+F_{ii}(s)P_{ii}(s)$$

2. $$P_{ij}(s)=F_{ij}(s)P_{jj}(s)$$ if $$i\neq j$$.

Proof. Fix $$i,j\in S$$ and let $$A_m=\{X_m=j\}$$. Let $$B_r=\{X_1\neq j,\cdots,X_{r-1}\neq j,X_r=j\}$$ be the event that the first visit to $$j$$ (after time 0) takes place at time $$r$$. On the event $$A_m$$, state $$j$$ is certainly visited within $$m$$ steps, so the disjoint events $$B_1,\dots,B_m$$ cover $$A_m$$. $$$\begin{split} P(A_m|X_0=i)&=\sum_{r=1}^{m}P(A_m\cap B_r|X_0=i) \quad [A_m\subseteq B_1\cup\cdots\cup B_m]\\ &=\sum_{r=1}^mP(A_m|B_r,X_0=i)P(B_r|X_0=i)\\ &=\sum_{r=1}^mP(A_m|X_r=j,X_0=i)P(B_r|X_0=i) \quad [\text{Markov property}]\\ &=\sum_{r=1}^mP(A_m|X_r=j)P(X_1\neq j,\cdots,X_{r-1}\neq j,X_r=j|X_0=i)\\ &=\sum_{r=1}^mP(X_m=j|X_r=j)f_{ij}(r)\\ &=\sum_{r=1}^mp_{jj}(m-r)f_{ij}(r) \quad [\text{homogeneity}] \end{split} \tag{4.1}$$$

On the other hand, by definition, $$$P(A_m|X_0=i)=P(X_m=j|X_0=i)=p_{ij}(m) \tag{4.2}$$$

Therefore, from (4.1) and (4.2), we can obtain $$$p_{ij}(m)=\sum_{r=1}^mp_{jj}(m-r)f_{ij}(r) \tag{4.3}$$$

Thus, $$$\sum_{m=0}^{\infty}s^mp_{ij}(m)=\sum_{m=0}^{\infty}s^m\Big[\sum_{r=1}^mp_{jj}(m-r)f_{ij}(r)\Big] \tag{4.4}$$$ and this is just $$$P_{ij}(s)=\delta_{ij}+F_{ij}(s)P_{jj}(s) \tag{4.5}$$$ Finally, if $$i=j$$, we have $$P_{ii}(s)=1+F_{ii}(s)P_{ii}(s)$$, i.e. $$P_{ii}(s)=\frac{1}{1-F_{ii}(s)}$$; when $$i\neq j$$, $$P_{ij}(s)=F_{ij}(s)P_{jj}(s)$$.

More detailed proof from (4.4) to (4.5):

By collecting terms of $$s^m$$ we have $$$\begin{split} F_{ij}(s)P_{jj}(s) &= \Big(\sum_{m=0}^\infty s^m f_{ij}(m)\Big)\Big(\sum_{l=0}^\infty s^l p_{jj}(l)\Big) \\ &= f_{ij}(0)p_{jj}(0)s^0+\sum_{m+l=1}f_{ij}(m)p_{jj}(l)s+\cdots+\sum_{m+l=k}f_{ij}(m)p_{jj}(l)s^k+\cdots \\ &= 0 + \sum_{k=1}^\infty \sum_{m=0}^k f_{ij}(m)p_{jj}(k-m)s^k \\ &= 0 + \sum_{k=1}^\infty \sum_{m=1}^k f_{ij}(m)p_{jj}(k-m)s^k \end{split}$$$ Notice that since $$f_{ij}(0)=0$$, the index $$m$$ effectively starts from 1. Then, using (4.3), we have $$$\begin{split} P_{ij}(s)&=p_{ij}(0)+\sum_{n=1}^{\infty}s^np_{ij}(n)\\ &=\delta_{ij}+\sum_{n=1}^\infty s^n\sum_{m=1}^n f_{ij}(m)p_{jj}(n-m)\\ &=\delta_{ij}+F_{ij}(s)P_{jj}(s) \end{split}$$$ that is, equation (4.5).
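The identity $$P_{ij}(s)=\delta_{ij}+F_{ij}(s)P_{jj}(s)$$ can also be checked numerically by truncating all three power series at a large $$N$$ for some $$|s|<1$$. A sketch on a made-up two-state chain (the matrix `P` below is an assumption for illustration):

```python
import numpy as np

# Truncated-series check of P_ij(s) = delta_ij + F_ij(s) P_jj(s)
# on a made-up two-state chain, with |s| < 1 so all series converge.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
s, N, i, j = 0.9, 300, 0, 1

powers = [np.linalg.matrix_power(P, n) for n in range(N)]
P_ij = sum(s**n * powers[n][i, j] for n in range(N))   # ~ P_ij(s)
P_jj = sum(s**n * powers[n][j, j] for n in range(N))   # ~ P_jj(s)

# First-passage probabilities via f_kj(n) = sum_{m != j} p_km f_mj(n-1)
f = np.zeros((N, 2))
f[1, :] = P[:, j]
for n in range(2, N):
    for k in range(2):
        f[n, k] = sum(P[k, m] * f[n - 1, m] for m in range(2) if m != j)
F_ij = sum(s**n * f[n, i] for n in range(N))           # ~ F_ij(s)

delta = 1.0 if i == j else 0.0
print(P_ij, delta + F_ij * P_jj)   # the two sides agree closely
```

The truncation error is of order $$s^N$$, which is negligible for $$s=0.9$$ and $$N=300$$.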

Corollary 4.1 (a) State $$j$$ is persistent (recurrent) if and only if $$\sum_{n=1}^{\infty}p_{jj}(n)=\infty$$.

(b) State $$j$$ is transient if and only if $$\sum_{n=1}^{\infty}p_{jj}(n)<\infty$$.

Proof. (a) Since $$P_{jj}(s)=\frac{1}{1-F_{jj}(s)}$$, taking $$\lim_{s\uparrow 1}$$ on both sides, we have $$$\lim_{s\uparrow 1}P_{jj}(s)=\frac{1}{1-\lim_{s\uparrow 1}F_{jj}(s)} \tag{4.6}$$$

Thus, by Abel's theorem, $$\sum_{n=0}^{\infty}p_{jj}(n)=\frac{1}{1-\sum_{n=0}^{\infty}f_{jj}(n)}=\frac{1}{1-f_{jj}}$$, interpreted as $$\infty$$ when $$f_{jj}=1$$. Therefore, $$j$$ is a recurrent state, i.e. $$f_{jj}=1$$, if and only if $$\sum_{n=1}^{\infty}p_{jj}(n)=\infty$$.

(b) Since $$\sum_{n=0}^{\infty}p_{jj}(n)=\frac{1}{1-f_{jj}}$$, if $$j$$ is transient then $$f_{jj}<1$$, and hence $$\sum_{n=1}^{\infty}p_{jj}(n)<\infty$$.

Corollary 4.2 If state $$j$$ is transient, then $$p_{ij}(n)\to 0$$ as $$n\to\infty$$ for all $$i$$.

Proof. If state $$j$$ is transient, then $$\sum_np_{ij}(n)<\infty$$. Therefore, since the terms of a convergent series must tend to zero, we have $$p_{ij}(n)\to 0$$ as $$n\to\infty$$.

In other words, if a state is transient, the probability that the chain visits that state at time $$n$$ shrinks to 0 as $$n$$ grows.
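As a quick sanity check of Corollary 4.1, the identity $$\sum_{n\geq 0}p_{jj}(n)=\frac{1}{1-f_{jj}}$$ can be verified numerically for a transient state. The chain below is a made-up illustration in which state 0 leaks into an absorbing state:

```python
import numpy as np

# Illustrative two-state chain: state 0 is transient because it leaks
# into the absorbing state 1.  Here p_00(n) = 0.5**n, so the return
# series sum_{n>=0} p_00(n) converges (and p_00(n) -> 0, Corollary 4.2).
P = np.array([[0.5, 0.5],
              [0.0, 1.0]])
N = 200

total = sum(np.linalg.matrix_power(P, n)[0, 0] for n in range(N))

# The chain can return to 0 only at step 1, so f_00 = p_00 = 0.5.
f00 = 0.5
print(total)            # ~ 2.0
print(1 / (1 - f00))    # 2.0, matching Corollary 4.1's identity
```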

Example 4.1 (One-dimensional Random Walk) Is state 0 persistent or transient for one-dimensional random walk? Let us see if $$\sum_{n=1}^{\infty}p_{00}(n)=\infty$$ or not.

From Lecture 3, we have $$p_{00}(2n)={{2n} \choose n}p^n(1-p)^n$$ and $$p_{00}(2n+1)=0$$. Therefore, we need to show $$\sum_{n=1}^{\infty}p_{00}(2n)=\infty$$ if we want to show $$0$$ is a persistent state. $$$\sum_{n=1}^{\infty}p_{00}(2n)=\sum_{n=1}^{\infty}{{2n} \choose n}(p(1-p))^n=\sum_{n=1}^{\infty}\frac{(2n)!}{n!n!}(p(1-p))^n \tag{4.7}$$$

Using the Stirling approximation, we have $$$\frac{(2n)!}{n!n!}\approx\frac{(2n)^{2n+1/2}e^{-2n}\sqrt{2\pi}}{(n^{n+1/2}e^{-n}\sqrt{2\pi})^2}=\frac{4^n}{\sqrt{\pi}}\frac{1}{\sqrt{n}} \tag{4.8}$$$ Therefore, $$$p_{00}(2n)={{2n} \choose n}(p(1-p))^n\approx\frac{4^n}{\sqrt{\pi}\sqrt{n}}(p(1-p))^n=\frac{[4p(1-p)]^n}{\sqrt{\pi}\sqrt{n}}$$$ and now $$\sum_{n=1}^{\infty}p_{00}(2n)\approx\sum_{n=1}^{\infty}\frac{[4p(1-p)]^n}{\sqrt{\pi}\sqrt{n}}$$, where $$4p(1-p)=(p+(1-p))^2-(p-(1-p))^2=1-(p-(1-p))^2\leq 1$$. If $$p\neq 1-p$$, then $$4p(1-p)<1$$, so $$\sum_{n=1}^{\infty}\frac{[4p(1-p)]^n}{\sqrt{\pi}\sqrt{n}}<\infty$$ and hence $$\sum_{n=1}^{\infty}p_{00}(2n)<\infty$$.

Therefore, if $$p\neq 1-p$$, state 0 is transient. This makes intuitive sense: when one of the two directions has larger probability, the chain drifts away in that direction and is less likely to return to 0.

When $$p=1-p$$ (i.e., $$p=\frac{1}{2}$$), we have $$4p(1-p)=1$$, which implies $$\sum_{n=1}^{\infty}p_{00}(2n)\approx\sum_{n=1}^{\infty}\frac{1}{\sqrt{\pi}\sqrt{n}}=\infty$$, because $$\sum_{n=1}^{\infty}\frac{1}{n^{\alpha}}<\infty$$ if and only if $$\alpha>1$$, and here $$\alpha=\frac{1}{2}$$. Therefore, if $$p=\frac{1}{2}$$, 0 is a recurrent state.
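A numerical illustration of this dichotomy: computing the partial sums of $$\sum_n p_{00}(2n)$$ term by term shows unbounded growth at $$p=\frac{1}{2}$$ and convergence at, say, $$p=0.6$$ (an arbitrary asymmetric choice for the sketch):

```python
# Partial sums of sum_n p_00(2n) for the 1-d walk: divergent at p = 1/2,
# convergent at p = 0.6, matching the recurrence/transience dichotomy.
def partial_sum(p, N):
    x = p * (1 - p)
    term, total = 1.0, 0.0
    for n in range(1, N + 1):
        # update term = C(2n, n) * x**n using the ratio of consecutive terms
        term *= (2 * n) * (2 * n - 1) / (n * n) * x
        total += term
    return total

print(partial_sum(0.5, 10**4))   # keeps growing, roughly 2*sqrt(N/pi)
print(partial_sum(0.6, 10**4))   # settles near 1/sqrt(1 - 4*0.6*0.4) - 1 = 4
```

The convergent value follows from the generating function $$\sum_{n\geq 0}{2n\choose n}x^n=\frac{1}{\sqrt{1-4x}}$$ for $$4x<1$$.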

Stirling Approximation: $$n!$$ can be approximated by $$n^{n+\frac{1}{2}}e^{-n}\sqrt{2\pi}$$.
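One can gauge the quality of the resulting estimate $${2n\choose n}\approx\frac{4^n}{\sqrt{\pi n}}$$ directly (a quick sketch, computed in log space so large $$n$$ does not overflow a float):

```python
from math import comb, log, pi, exp

# Ratio of the Stirling-based estimate 4**n / sqrt(pi*n) to the exact
# central binomial coefficient C(2n, n).
for n in (10, 100, 1000):
    log_approx = n * log(4) - 0.5 * log(pi * n)
    ratio = exp(log_approx - log(comb(2 * n, n)))
    print(n, ratio)   # ratio approaches 1 as n grows (roughly 1 + 1/(8n))
```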

Example 4.2 (Two-dimensional Random Walk) In a two-dimensional random walk, at each time step the chain can move up, down, left, or right. Suppose each of the four directions has the same probability $$\frac{1}{4}$$. Is 0 a recurrent state?
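For the symmetric two-dimensional walk, a standard computation (stated here without proof) gives $$p_{00}(2n)=\left[{2n\choose n}\frac{1}{2^{2n}}\right]^2\approx\frac{1}{\pi n}$$, so the partial sums grow like $$\frac{\log N}{\pi}$$ and the return series diverges. A small numerical sketch of this growth:

```python
# Partial sums of sum_n p_00(2n) for the symmetric 2-d walk, using
# p_00(2n) = (C(2n, n) / 4**n)**2, updated iteratively to avoid overflow.
def partial_sum_2d(N):
    term, total = 1.0, 0.0        # term tracks (C(2n, n) / 4**n)**2
    for n in range(1, N + 1):
        term *= ((2 * n - 1) / (2 * n)) ** 2
        total += term
    return total

for N in (10**2, 10**3, 10**4):
    print(N, partial_sum_2d(N))   # grows without bound, roughly log(N)/pi
```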

State 0 is recurrent for the symmetric one-dimensional and two-dimensional random walks, but in dimensions larger than 2, 0 is not recurrent. Intuitively, this is because the chain has too many directions to wander off in, so it is less likely to return.