Chapter 22 Homework 2: Markov Chain: Problems and Tentative Solutions

Exercise 22.1 (Subchain from a Markov chain) Assume \(X=\{X_n:n\geq 0\}\) is a Markov chain and let \(\{n_k:k\geq 0\}\) be an unbounded increasing sequence of positive integers. Define a new stochastic process \(Y=\{Y_k:k\geq 0\}\) such that \(Y_k=X_{n_k}\). Show that \(Y\) is a Markov chain. Is \(Y\) a time-homogeneous Markov chain without additional conditions?

Proof. By the definition of a Markov chain, for \(Y\) to be a Markov chain we need to show that \[\begin{equation} \begin{split} P(Y_{k}=s&|Y_0=x_0,\cdots,Y_{k-1}=x_{k-1})\\ &=P(Y_{k}=s|Y_{k-1}=x_{k-1}) \end{split} \tag{22.1} \end{equation}\] By the definition of \(Y\), showing (22.1) amounts to showing \[\begin{equation} \begin{split} P(X_{n_{k}}=s&|X_{n_0}=x_0,\cdots,X_{n_{k-1}}=x_{k-1})\\ &=P(X_{n_{k}}=s|X_{n_{k-1}}=x_{k-1}) \end{split} \tag{22.2} \end{equation}\] for an increasing sequence of integers \(n_0<n_1<\cdots<n_k\). Since \(X\) is a Markov chain, (22.2) follows immediately: it is exactly the alternative definition of the Markov property (conditioning on an arbitrary finite set of past times; cf. Exercise 22.2).

\(Y\) need not be time-homogeneous without further restrictions, because the gaps \(n_{k+1}-n_k\) need not be constant: the one-step transition probabilities of \(Y\) are \(P(Y_{k+1}=j|Y_k=i)=p_{ij}(n_{k+1}-n_k)\), which in general depend on \(k\). If \(X\) is homogeneous and the \(n_k\) are equally spaced, then \(Y\) is homogeneous.
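As a concrete illustration, here is a minimal numpy sketch (the two-state chain and the sampling times in it are made up): the one-step transition matrix of \(Y\) from \(Y_k\) to \(Y_{k+1}\) is \(P^{n_{k+1}-n_k}\), and printing these powers for uneven gaps shows that they differ with \(k\).

```python
import numpy as np

# A two-state homogeneous chain; its m-step transition matrix is P^m.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Unevenly spaced sampling times n_0 < n_1 < ... (chosen for illustration).
n = [0, 1, 3, 4, 8]

# From Y_k to Y_{k+1} the subsampled chain moves according to P^(n_{k+1}-n_k).
for k in range(len(n) - 1):
    gap = n[k + 1] - n[k]
    print(f"step {k} -> {k + 1} (gap {gap}):")
    print(np.linalg.matrix_power(P, gap))
# The printed matrices differ across k, so Y is Markov but not homogeneous.
```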

Exercise 22.2 (Equivalent conditions for Markov property) Show that the Markov property is equivalent to each of the following conditions:

  1. For all \(n,m\geq 1\) and all \(s,x_1,\cdots,x_n\in \mathcal{S}\) \[\begin{equation} P(X_{n+m}=s|X_1=x_1,\cdots,X_n=x_n)=P(X_{n+m}=s|X_n=x_n) \tag{22.3} \end{equation}\]

  2. For all \(0\leq n_1<\cdots<n_k\leq n\), all \(m\geq 1\), and all \(s,x_1,\cdots,x_k\in\mathcal{S}\) \[\begin{equation} P(X_{n+m}=s|X_{n_1}=x_1,\cdots,X_{n_k}=x_k)=P(X_{n+m}=s|X_{n_k}=x_k) \tag{22.4} \end{equation}\]

Proof. We prove the first statement by induction on \(m\). For \(m=1\) the statement is exactly the definition of a Markov chain, so it holds automatically. Now assume that for \(m=d\) we have \[\begin{equation} \begin{split} P(X_{n+d}=s&|X_1=x_1,\cdots,X_n=x_n)\\ &=P(X_{n+d}=s|X_n=x_n) \end{split} \tag{22.5} \end{equation}\] and consider the case \(m=d+1\). We have \[\begin{equation} \begin{split} &P(X_{n+d+1}=s|X_1=x_1,\cdots,X_n=x_n)\\ &=\sum_k P(X_{n+d+1}=s,X_{n+d}=k|X_1=x_1,\cdots,X_n=x_n)\\ &=\sum_k P(X_{n+d+1}=s|X_{n+d}=k,X_1=x_1,\cdots,X_n=x_n)P(X_{n+d}=k|X_1=x_1,\cdots,X_n=x_n)\\ &=\sum_k P(X_{n+d+1}=s|X_{n+d}=k)P(X_{n+d}=k|X_n=x_n)\\ &=\sum_k P(X_{n+d+1}=s,X_{n+d}=k|X_n=x_n)\\ &=P(X_{n+d+1}=s|X_n=x_n) \end{split} \tag{22.6} \end{equation}\] where the third equality uses the Markov property of \(X\) together with the induction hypothesis (22.5), and the fourth equality uses the Markov property once more. Therefore, by induction the statement holds for all \(m\in\mathbb{N}_+\).

For the second claim, the same induction argument applies verbatim after replacing the conditioning event \(\{X_1=x_1,\cdots,X_n=x_n\}\) by \(\{X_{n_1}=x_1,\cdots,X_{n_k}=x_k\}\), using the alternative definition of a Markov chain from the lecture notes (the Markov property conditioned on an arbitrary finite set of past times).
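For a numerical sanity check of (22.3), the following Monte Carlo sketch (the three-state chain, the choice \(n=m=2\), and the conditioning history are all arbitrary) estimates both sides of the identity and compares them with the \(m\)-step transition probability \((P^m)_{x_n,s}\):

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.2, 0.5, 0.3],
              [0.1, 0.6, 0.3],
              [0.5, 0.2, 0.3]])

def sample_path(T):
    """Simulate X_0, ..., X_T starting from X_0 = 0."""
    x = [0]
    for _ in range(T):
        x.append(rng.choice(3, p=P[x[-1]]))
    return x

n, m, s = 2, 2, 1
paths = [sample_path(n + m) for _ in range(100_000)]

# Condition on the full history (X_1, X_2) = (0, 2) vs. only on X_2 = 2.
full = [w for w in paths if w[1] == 0 and w[2] == 2]
last = [w for w in paths if w[2] == 2]
print(sum(w[n + m] == s for w in full) / len(full))
print(sum(w[n + m] == s for w in last) / len(last))
print(np.linalg.matrix_power(P, m)[2, s])  # both estimates approach this
```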

Exercise 22.3 (Transition matrix for some physical process) Write down the transition matrices of the following Markov chains.

  1. \(n\) black balls and \(n\) white balls are placed in two urns so that each urn contains \(n\) balls. At each stage one ball is selected at random from each urn and the two balls are interchanged. The state of the system is the number of white balls in the first urn.

  2. Consider two urns A and B containing a total of \(n\) balls. An experiment is performed in which a ball is selected at random at time \(t\) \((t = 1, 2, \cdots)\) from among the totality of \(n\) balls. Then an urn is selected at random (probability of selecting \(A\) is \(p\)) and the ball previously drawn is placed in this urn. The state of the system at each trial is the number of balls in \(A\).

Proof. For part (a), let \(A_t\) denote the number of white balls in the first urn after stage \(t\) (we use \(t\) for the time index since \(n\) already denotes the number of balls). We are interested in \(P(A_{t+1}=j|A_t=i)\) for \(i=0,1,\cdots,n\). Suppose at stage \(t\) there are \(i\) (\(i=1,2,\cdots,n-1\)) white balls in the first urn. Then at stage \(t+1\) there can be \(i\) white balls (if the two balls selected are both white or both black), \(i-1\) white balls (if a white ball is drawn from the first urn and a black ball from the second) or \(i+1\) white balls (if a black ball is drawn from the first urn and a white ball from the second). Therefore \[\begin{equation} P(A_{t+1}=j|A_t=i)=\left\{\begin{aligned} & (\frac{i}{n})^2 & j=i-1\\ & \frac{2i(n-i)}{n^2} & j=i \\ & (\frac{n-i}{n})^2 & j=i+1\\ & 0 & o.w.\end{aligned} \right. \tag{22.7} \end{equation}\] for \(i=1,\cdots,n-1\). Similarly, for the two boundary cases \(i=0\) and \(i=n\) we have \[\begin{equation} \begin{split} &P(A_{t+1}=j|A_t=0)=\left\{\begin{aligned} & 1 & j=1\\ & 0 & o.w.\end{aligned} \right.\\ &P(A_{t+1}=j|A_t=n)=\left\{\begin{aligned} & 1 & j=n-1\\ & 0 & o.w.\end{aligned} \right. \end{split} \tag{22.8} \end{equation}\]

For part (b), let \(A_t\) denote the number of balls in urn A after trial \(t\). We want \(P(A_{t+1}=j|A_t=i)\) for \(i=0,1,\cdots,n\). Suppose at trial \(t\) there are \(i\) (\(i=1,2,\cdots,n-1\)) balls in urn A. Then at trial \(t+1\) there can be \(i\) balls (a ball from urn A is returned to urn A, or a ball from urn B is returned to urn B), \(i-1\) balls (a ball from urn A is placed in urn B) or \(i+1\) balls (a ball from urn B is placed in urn A). Therefore, \[\begin{equation} P(A_{t+1}=j|A_t=i)=\left\{\begin{aligned} & \frac{i}{n}(1-p) & j=i-1\\ & \frac{i}{n}p+\frac{n-i}{n}(1-p) & j=i \\ & \frac{n-i}{n}p & j=i+1\\ & 0 & o.w.\end{aligned} \right. \tag{22.9} \end{equation}\] for \(i=1,\cdots,n-1\). Similarly, for the two boundary cases \(i=0\) and \(i=n\) we have \[\begin{equation} \begin{split} &P(A_{t+1}=j|A_t=0)=\left\{\begin{aligned} & 1-p & j=0\\ & p & j=1\\ & 0 & o.w.\end{aligned} \right.\\ &P(A_{t+1}=j|A_t=n)=\left\{\begin{aligned} & 1-p & j=n-1\\ & p & j=n\\ & 0 & o.w.\end{aligned} \right. \end{split} \tag{22.10} \end{equation}\]
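Both transition matrices are easy to assemble programmatically. The sketch below (the helper names are ours) builds them from (22.7) and (22.9) and checks that every row sums to one; the boundary rows (22.8) and (22.10) come out as special cases of the general formulas.

```python
import numpy as np

def urn_matrix_a(n):
    """Part (a): state = number of white balls in the first urn."""
    P = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        if i > 0:
            P[i, i - 1] = (i / n) ** 2
        P[i, i] = 2 * i * (n - i) / n ** 2
        if i < n:
            P[i, i + 1] = ((n - i) / n) ** 2
    return P

def urn_matrix_b(n, p):
    """Part (b): state = number of balls in urn A."""
    P = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        if i > 0:
            P[i, i - 1] = (i / n) * (1 - p)
        P[i, i] = (i / n) * p + ((n - i) / n) * (1 - p)
        if i < n:
            P[i, i + 1] = ((n - i) / n) * p
    return P

for P in (urn_matrix_a(4), urn_matrix_b(4, 0.3)):
    assert np.allclose(P.sum(axis=1), 1)  # each row is a distribution
```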
Exercise 22.4 (Classify states of Markov Chain) Consider the Markov chain with state space \(\mathcal{S}=\{1,2,3,4\}\) and transition matrix \[\begin{equation} \begin{pmatrix} \frac{1}{3} & \frac{2}{3} & 0 & 0 \\ \frac{1}{2} & \frac{1}{2} & 0 & 0 \\ \frac{1}{4} & 0 & \frac{1}{4} & \frac{1}{2}\\ 0 & 0 & 0 & 1 \end{pmatrix} \tag{22.11} \end{equation}\] Classify the states of the chain, and calculate \(f_{34}\), the probability of absorption in state 4 starting from state 3.

Proof. We can classify the states as \(\mathcal{S}=T\cup C_1\cup C_2\) where \(T=\{3\}\), \(C_1=\{1,2\}\) and \(C_2=\{4\}\). To show this partition is valid, we first show that state 3 is transient. Since \(f_{33}(1)=\frac{1}{4}\) and \(f_{33}(n)=0\) for \(n=2,3,\cdots\) (once the chain leaves state 3 it can never return), we have \(f_{33}=\frac{1}{4}<1\), so state 3 is transient. Clearly \(C_1\) and \(C_2\) are closed and irreducible; it remains to show that states 1 and 4 are recurrent. For state 4, we have \(f_{44}(1)=1\) and \(f_{44}(n)=0\) for \(n=2,3,\cdots\), so it is recurrent. As for state 1, we have \(f_{11}(1)=\frac{1}{3}\) and \(f_{11}(n)=\frac{2}{3}(\frac{1}{2})^{n-2}\frac{1}{2}\) for \(n=2,3,\cdots\). Thus, \[\begin{equation} f_{11}=\frac{1}{3}+\sum_{n=2}^{\infty}\frac{2}{3}(\frac{1}{2})^{n-2}\frac{1}{2}=1 \tag{22.12} \end{equation}\] so state 1 is recurrent, as desired. Restricted to \(\{1,2\}\), the chain is a Markov chain on a finite state space, so by Lemma 7.1 states 1 and 2 are non-null. For the same reason, state 4 is non-null as well.

Starting from state 3, the chain can first reach state 4 at time \(n\) only by holding at state 3 for \(n-1\) steps and then jumping to 4, so \(f_{34}(n)=(\frac{1}{4})^{n-1}\frac{1}{2}\) for \(n=1,2,\cdots\). Therefore, \[\begin{equation} f_{34}=\sum_{n=1}^{\infty}(\frac{1}{4})^{n-1}\frac{1}{2}=\frac{2}{3} \tag{22.13} \end{equation}\]
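A quick numerical cross-check of \(f_{34}\) (a minimal numpy sketch; state \(i\) is stored at index \(i-1\)):

```python
import numpy as np

# States 1, 2, 3, 4 are stored at indices 0, 1, 2, 3.
P = np.array([[1/3, 2/3, 0,   0  ],
              [1/2, 1/2, 0,   0  ],
              [1/4, 0,   1/4, 1/2],
              [0,   0,   0,   1  ]])

# First-step analysis: f_34 = 1/4 * f_34 + 1/2, i.e. f_34 = (1/2)/(1 - 1/4).
print((1/2) / (1 - 1/4))              # 0.6666...

# Cross-check: start at state 3 and propagate the distribution. The mass at
# state 4 converges to f_34 because C_1 = {1,2} is closed and never reaches 4.
mu = np.array([0., 0., 1., 0.])
for _ in range(200):
    mu = mu @ P
print(mu[3])                          # -> 0.6666...
```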

Exercise 22.5 (Stationary distribution when transition matrix is doubly stochastic) A transition matrix \(\mathbf{P}=(p_{ij})^K_{i,j=1}\) with finite state space \(\mathcal{S}=\{1,\cdots,K\}\) is said to be doubly stochastic if \(\sum_{i=1}^KP_{ij}=1\) for all \(j=1,\cdots,K\). Prove that the stationary distribution of a doubly stochastic matrix is the discrete uniform distribution.

Proof. By definition, a stationary distribution of a Markov chain is a row vector \(\boldsymbol{\pi}\) satisfying \[\begin{equation} \begin{split} &\boldsymbol{\pi} P=\boldsymbol{\pi}\\ &\sum_{i}\pi_i=1 \end{split} \tag{22.14} \end{equation}\]

Written out, the conditions (22.14) give the system of linear equations \[\begin{equation} \begin{split} &\sum_{i=1}^K\pi_iP_{ij}=\pi_j,\quad j=1,\cdots,K\\ &\sum_{i=1}^K\pi_i=1 \end{split} \tag{22.15} \end{equation}\] or, in matrix form, \[\begin{equation} \begin{pmatrix} P_{11}-1 & \cdots & P_{K1} \\ \vdots & & \vdots \\ P_{1K} & \cdots & P_{KK}-1 \\ 1 & \cdots & 1 \end{pmatrix} \begin{pmatrix} \pi_1 \\ \vdots \\ \pi_K \end{pmatrix}=\begin{pmatrix} 0\\ \vdots\\ 0 \\ 1 \end{pmatrix} \tag{22.16} \end{equation}\] Now set \(\pi_i=\frac{1}{K}\) for \(i=1,\cdots,K\). Since \(P\) is doubly stochastic, \[\sum_{i=1}^K\frac{1}{K}P_{ij}=\frac{1}{K}\sum_{i=1}^KP_{ij}=\frac{1}{K}=\pi_j\] for every \(j\), and clearly \(\sum_i\pi_i=1\), so the uniform vector solves (22.15). When the chain is irreducible the stationary distribution is unique, so it must be the discrete uniform distribution.
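As a sanity check, the sketch below generates a random doubly stochastic matrix as a convex combination of permutation matrices (by Birkhoff's theorem every doubly stochastic matrix has this form) and verifies that the uniform vector is stationary:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 5

# A random doubly stochastic matrix: a convex combination of permutations.
weights = rng.dirichlet(np.ones(3))
P = sum(w * np.eye(K)[rng.permutation(K)] for w in weights)
assert np.allclose(P.sum(axis=0), 1) and np.allclose(P.sum(axis=1), 1)

pi = np.full(K, 1 / K)
print(np.allclose(pi @ P, pi))  # True: the uniform distribution is stationary
```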

Exercise 22.6 (Stationary distribution of a Markov chain used in Sociology) Sociologists often assume that the social classes of successive generations in a family can be regarded as a Markov chain. Thus, the occupation of a son is assumed to depend only on his father's occupation and not on his grandfather's. Suppose that such a model is appropriate and that the transition probability matrix is given by \[\begin{equation} \begin{matrix} & Lower & Middle & Upper\\ Lower & 0.40 & 0.50 & 0.10\\ Middle & 0.05 & 0.70 & 0.25\\ Upper & 0.05 & 0.50 & 0.45 \end{matrix} \tag{22.17} \end{equation}\] where columns correspond to the son's class and rows correspond to the father's class. For such a model, what fraction of people are middle class in the long run?

Proof. We need to find the stationary distribution of this transition probability matrix. Writing the stationary distribution as \(\boldsymbol{\pi}=(\pi_1,\pi_2,\pi_3)\), the system of equations for \(\boldsymbol{\pi}\) is \[\begin{equation} \begin{pmatrix} -0.6 & 0.05 & 0.05 \\ 0.5 & -0.3 & 0.5 \\ 0.1 & 0.25 & -0.55 \\ 1 & 1 & 1 \end{pmatrix} \begin{pmatrix} \pi_1 \\ \pi_2 \\ \pi_3 \end{pmatrix}=\begin{pmatrix} 0\\ 0 \\ 0 \\ 1 \end{pmatrix} \tag{22.18} \end{equation}\] Using Gaussian elimination, the unique solution to (22.18) is \(\boldsymbol{\pi}=(\frac{1}{13},\frac{5}{8},\frac{31}{104})\). Thus, the fraction of people who are middle class in the long run is \(\frac{5}{8}\).
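The elimination can be checked numerically; since the system (22.18) is consistent, a least-squares solve returns the exact stationary distribution:

```python
import numpy as np
from fractions import Fraction

P = np.array([[0.40, 0.50, 0.10],
              [0.05, 0.70, 0.25],
              [0.05, 0.50, 0.45]])

# Stack the transposed balance equations pi(P - I) = 0 with sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0., 0., 0., 1.])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # [0.0769... 0.625 0.2980...]
print([Fraction(float(x)).limit_denominator(200) for x in pi])  # 1/13, 5/8, 31/104
```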

Exercise 22.7 (Gambler's ruin problem, expected stopping time) Consider the following random walk with state space \(\mathcal{S}=\{0,1,2,3,4\}\) and the transition matrix: \[\begin{equation} \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ q & 0 & p & 0 & 0\\ 0 & q & 0 & p & 0 \\ 0 & 0 & q & 0 & p \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} \tag{22.19} \end{equation}\] where \(q=1-p\), i.e., the walk moves up with probability \(p\) and down with probability \(q\). Find \(d(k)=E[\) time to absorption into state \(0\) or \(4\mid\) initial state \(k]\). Prove \[\begin{equation} d(k)=\left\{\begin{aligned} &\frac{k}{q-p}-\frac{4}{q-p}\frac{1-(\frac{q}{p})^k}{1-(\frac{q}{p})^4} & p\neq \frac{1}{2}\\ &k(4-k) & p=\frac{1}{2} \end{aligned}\right. \tag{22.20} \end{equation}\]

Proof. Let \(t\) be the number of stages until absorption and \(E_k=E(t|X_0=k)\) be the expected time of absorption when starting at state \(k\). Obviously \(E_0=E_4=0\). Now we consider the case of \(k=1,2,3\).

Conditioning on the first move \(\Delta_1\), we have \[\begin{equation} \begin{split} E_k&=E(t|X_0=k)\\ &=E(t|X_0=k,\Delta_1=1)P(\Delta_1=1)\\ &+E(t|X_0=k,\Delta_1=-1)P(\Delta_1=-1)\\ &=pE(t|X_1=k+1)+qE(t|X_1=k-1)\\ &=p(1+E(t|X_0=k+1))+q(1+E(t|X_0=k-1))\\ &=1+pE_{k+1}+qE_{k-1} \end{split} \tag{22.21} \end{equation}\] where the second-to-last equality uses time-homogeneity: starting the count one step later adds exactly one step to the absorption time.

Now we have the recurrence \(E_k=1+pE_{k+1}+qE_{k-1}\). If \(p\neq q\), its general solution has the form \(E_k=C_1r_1^k+C_2r_2^k+\gamma\), where \(r_1\) and \(r_2\) are the roots of the characteristic equation \(px^2-x+q=0\) and \(\gamma\) is a particular solution. The roots are \(r_1=1\) and \(r_2=\frac{q}{p}\). Guessing \(\gamma=ak+b\) and plugging into the recurrence gives \[\begin{equation} ak+b=1+p(a(k+1)+b)+q(a(k-1)+b) \tag{22.22} \end{equation}\] from which we obtain the particular solution \(\gamma=\frac{k}{q-p}\). Thus, the general solution of (22.21) is \[\begin{equation} E_k=C_1+C_2\cdot(\frac{q}{p})^k+\frac{k}{q-p} \tag{22.23} \end{equation}\] Plugging in the boundary conditions \(E_0=E_4=0\), we can solve for \(C_1\) and \(C_2\) and finally \[\begin{equation} E_k=\frac{k}{q-p}-\frac{4}{q-p}\frac{1-(\frac{q}{p})^k}{1-(\frac{q}{p})^4} \tag{22.24} \end{equation}\]

If \(p=q=\frac{1}{2}\), the characteristic equation \(x^2-2x+1=0\) has the repeated root \(r=1\), so the general solution of (22.21) has the form \(E_k=C_1r^k+C_2kr^k+\gamma\), and we guess a particular solution of the form \(\gamma=ak^2+bk+c\). Plugging in, we get \(\gamma=-k^2\). Imposing the boundary conditions \(E_0=E_4=0\), we obtain, for \(p=q=\frac{1}{2}\), \[\begin{equation} E_k=4k-k^2=k(4-k) \tag{22.25} \end{equation}\]

Combining (22.24) and (22.25) gives the desired result (22.20).
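As a cross-check, a short sketch can solve the linear system \(E_k=1+pE_{k+1}+qE_{k-1}\), \(E_0=E_4=0\), directly and compare it against the closed form (22.20) (the helper names and the test values of \(p\) are ours):

```python
import numpy as np

def d_closed_form(k, p, N=4):
    """Expected absorption time from (22.20)."""
    q = 1 - p
    if abs(p - 0.5) < 1e-12:
        return k * (N - k)
    r = q / p
    return k / (q - p) - (N / (q - p)) * (1 - r**k) / (1 - r**N)

def d_linear_solve(p, N=4):
    """Solve E_k = 1 + p E_{k+1} + q E_{k-1}, E_0 = E_N = 0, for k = 1..N-1."""
    q = 1 - p
    A = np.eye(N - 1)
    for row, k in enumerate(range(1, N)):
        if k + 1 < N:
            A[row, row + 1] = -p
        if k - 1 > 0:
            A[row, row - 1] = -q
    return np.linalg.solve(A, np.ones(N - 1))

for p in (0.3, 0.5):
    print([d_closed_form(k, p) for k in (1, 2, 3)], d_linear_solve(p))
```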

Exercise 22.8 (Gambler's ruin problem) Consider an insurance company that earns \(\$1\) per day (from interest), but on each day, independent of the past, might suffer a claim against it for the amount \(\$2\) with probability \(q=1-p\). Whenever such a claim is suffered, \(\$2\) is removed from the reserve of money. Thus on the \(n\)th day, the net income for that day is exactly \(\delta_n\) as in the gambler's ruin problem: \(1\) with probability \(p\), \(-1\) with probability \(q\). Find \(P(\) the fortune of the company hits \(\$10\) before becoming \(-\$10)\).

Proof. We first prove a general result, then compute the specific probability asked for in the problem.

Suppose the company starts with \(\$i\) in total, and on each day either nets \(+\$1\) with probability \(p\) (interest, no claim) or nets \(-\$1\) with probability \(q=1-p\) (interest minus a \(\$2\) claim). The company keeps operating until its fortune reaches either \(\$N\) or \(\$0\). Let \(X_n\) be the amount of money the company has after day \(n\); then \(X_n=i+\Delta_1+\cdots+\Delta_n\), where the \(\Delta_j\) are i.i.d. steps of a simple random walk, equal to \(+1\) with probability \(p\) and \(-1\) with probability \(q\). Let \(\tau_i=\min\{n\geq 0:X_n\in\{0,N\}\}\) (with \(X_0=i\)) be the time at which the company stops operating, and let \(P_i(N)=P(X_{\tau_i}=N)\) be the probability that the fortune reaches \(\$N\) before \(\$0\); for simplicity, call this event a “win”. We now calculate \(P_i(N)\).

Consider the first day of operation. If \(\Delta_1=1\), the company's fortune increases to \(i+1\), and by the Markov property of the chain it will eventually “win” with probability \(P_{i+1}(N)\). Similarly, if \(\Delta_1=-1\), the company's probability of a “win” is \(P_{i-1}(N)\). Thus, we obtain the equation \[\begin{equation} P_i(N)=pP_{i+1}(N)+qP_{i-1}(N) \tag{22.26} \end{equation}\] To solve for \(P_i(N)\), consider the characteristic equation of (22.26), \[\begin{equation} z=p\cdot z^2+q \tag{22.27} \end{equation}\] which has two roots \[\begin{equation} \left\{\begin{aligned} & z_1=1\\ & z_2=\frac{q}{p} \end{aligned}\right. \tag{22.28} \end{equation}\] Therefore, if \(p\neq q\), the general solution to (22.26) is given by \[\begin{equation} P_i(N)=C_1\cdot 1^i+C_2\cdot (\frac{q}{p})^i \tag{22.29} \end{equation}\] where \(C_1\) and \(C_2\) are found from the boundary conditions \(P_0(N)=0\) and \(P_N(N)=1\). Therefore, \(C_1=\frac{-1}{(\frac{q}{p})^N-1}\) and \(C_2=\frac{1}{(\frac{q}{p})^N-1}\), and \[\begin{equation} P_i(N)=\frac{(\frac{q}{p})^i-1}{(\frac{q}{p})^N-1} \tag{22.30} \end{equation}\] If \(p=q\), then the general solution to (22.26) is given by \[\begin{equation} P_i(N)=C_1\cdot 1^i+C_2\cdot i\cdot 1^i \tag{22.31} \end{equation}\] Again using the boundary conditions to find \(C_1\) and \(C_2\), we obtain \[\begin{equation} P_i(N)=\frac{i}{N} \tag{22.32} \end{equation}\]

Returning to the original problem: measuring the fortune relative to \(-\$10\) (so the company starts at \(\$10\) and stops at \(\$0\) or \(\$20\)), we need \(P_{10}(20)\). By the above argument, we have \[\begin{equation} P_{10}(20)=\left\{\begin{aligned} & \frac{(\frac{q}{p})^{10}-1}{(\frac{q}{p})^{20}-1} & p\neq q \\ & \frac{1}{2} & p=q \end{aligned}\right. \tag{22.33} \end{equation}\]
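A quick check of (22.33) against simulation (the choice \(p=0.6\) and the Monte Carlo setup are ours):

```python
import numpy as np

def win_prob(i, N, p):
    """P(fortune reaches N before 0 | start at i), from (22.30)/(22.32)."""
    q = 1 - p
    if abs(p - 0.5) < 1e-12:
        return i / N
    r = q / p
    return (r**i - 1) / (r**N - 1)

rng = np.random.default_rng(2)
p, wins, trials = 0.6, 0, 100_000
for _ in range(trials):
    x = 10                       # fortune measured relative to -$10
    while 0 < x < 20:
        x += 1 if rng.random() < p else -1
    wins += (x == 20)
print(win_prob(10, 20, p), wins / trials)  # the two numbers should agree
```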
Exercise 22.9 (Classify the states and calculate the mean recurrence time) Consider a Markov chain with three states \(\mathcal{S}=\{1,2,3\}\) and transition matrix \[\begin{equation} \begin{pmatrix} 1-2p & 2p & 0 \\ p & 1-2p & p \\ 0 & 2p & 1-2p \end{pmatrix} \tag{22.34} \end{equation}\] where \(0<p<0.5\). Classify the states of the chain. Calculate \(p_{ii}(n)\), \(i=1,2,3\). Calculate the mean recurrence times \(\mu_i\), \(i=1,2,3\), of the states.

Proof. Since all three states intercommunicate, they form a single closed, irreducible set, i.e., \(\mathcal{S}=C:=\{1,2,3\}\). By the decomposition theorem (Theorem 7.1) and the following lemma (Lemma 7.1), all three states are non-null recurrent.

We calculate \(p_{ii}(n)\), which is the \(i\)th diagonal element of the matrix \(P^n\), where \(P\) is the transition probability matrix. To compute its \(n\)th power, we first find the eigenvalues and eigenvectors of \(P\). After some calculation, the eigenvalues of \(P\) are \[\begin{equation} \begin{split} & r_1=1\\ & r_2=1-4p\\ & r_3=1-2p \end{split} \tag{22.35} \end{equation}\] with corresponding eigenvectors \[\begin{equation} \begin{split} &\mathbf{V}_1=(1,1,1)^T\\ &\mathbf{V}_2=(1,-1,1)^T\\ &\mathbf{V}_3=(-1,0,1)^T \end{split} \tag{22.36} \end{equation}\] Since we only need the \(i\)th diagonal element of \(P^n\), we can find it by the following method:

  1. Find \(a_i,b_i,c_i\) such that \(a_i\mathbf{V}_1+b_i\mathbf{V}_2+c_i\mathbf{V}_3=\mathbf{I}_i\), where \(\mathbf{I}_i\) denotes the vector whose \(i\)th element is 1 and all other elements are 0.

  2. Since \(P^n\mathbf{I}_i=a_ir_1^n\mathbf{V}_1+b_ir_2^n\mathbf{V}_2+c_ir_3^n\mathbf{V}_3\), we have \(p_{ii}(n)=a_ir_1^nV_{1i}+b_ir_2^nV_{2i}+c_ir_3^nV_{3i}\).

Using this method, we can find \[\begin{equation} \begin{split} &a_1=\frac{1}{4},\quad b_1=\frac{1}{4},\quad c_1=-\frac{1}{2}\\ &a_2=\frac{1}{2},\quad b_2=-\frac{1}{2},\quad c_2=0\\ &a_3=\frac{1}{4},\quad b_3=\frac{1}{4},\quad c_3=\frac{1}{2} \end{split} \tag{22.37} \end{equation}\] and finally we obtain that \[\begin{equation} \begin{split} &p_{11}(n)=\frac{1}{4}+\frac{1}{4}(1-4p)^n+\frac{1}{2}(1-2p)^n\\ &p_{22}(n)=\frac{1}{2}+\frac{1}{2}(1-4p)^n\\ &p_{33}(n)=p_{11}(n) \end{split} \tag{22.38} \end{equation}\]

Therefore, for each \(i\in\{1,2,3\}\), \(p_{ii}(n)\) tends to a positive limit as \(n\to\infty\), so \(\sum_{n=1}^{\infty}p_{ii}(n)=\infty\) and all three states are recurrent. By Lemma 7.1, they are non-null recurrent, confirming the classification above.
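The closed forms (22.38) can be checked against direct matrix powers (the value \(p=0.2\) is arbitrary):

```python
import numpy as np

p = 0.2                                   # any 0 < p < 0.5 will do
P = np.array([[1 - 2*p, 2*p,     0      ],
              [p,       1 - 2*p, p      ],
              [0,       2*p,     1 - 2*p]])

for n in (1, 5, 10):
    Pn = np.linalg.matrix_power(P, n)
    p11 = 1/4 + (1 - 4*p)**n / 4 + (1 - 2*p)**n / 2
    p22 = 1/2 + (1 - 4*p)**n / 2
    assert np.isclose(Pn[0, 0], p11) and np.isclose(Pn[2, 2], p11)
    assert np.isclose(Pn[1, 1], p22)
print("matrix powers agree with the closed forms in (22.38)")
```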

As for the mean recurrence times, we first do the computation for state 2, calculating the generating function \(P_{22}(s)\) as follows: \[\begin{equation} \begin{split} P_{22}(s)&=1+\frac{1}{2}\sum_{n=1}^{\infty}[s^n+((1-4p)s)^n]\\ &=1+\frac{1}{2}(\frac{s}{1-s}+\frac{(1-4p)s}{1-(1-4p)s})\\ &=\frac{1}{2}(\frac{1}{1-s}+\frac{1}{1-(1-4p)s})\\ &=\frac{1-(1-2p)s}{(1-s)(1-(1-4p)s)} \end{split} \tag{22.39} \end{equation}\] Now from the fact that \(F_{ii}(s)=1-\frac{1}{P_{ii}(s)}\), we have \[\begin{equation} F_{22}(s)=1-\frac{(1-s)(1-(1-4p)s)}{1-(1-2p)s} \tag{22.40} \end{equation}\] Finally, since \(\mu_i=\sum_{n}nf_{ii}(n)=\lim_{s\uparrow 1}\frac{d}{ds}F_{ii}(s)\), we have \[\begin{equation} \begin{split} \mu_2&=\lim_{s\uparrow 1}\frac{d}{ds}F_{22}(s)\\ &=\lim_{s\uparrow 1}\frac{(1-(1-4p)s)(1-(1-2p)s)+(1-4p)(1-s)(1-(1-2p)s)-(1-2p)(1-s)(1-(1-4p)s)}{(1-(1-2p)s)^2}\\ &=\frac{8p^2}{4p^2}=2 \end{split} \tag{22.41} \end{equation}\] Therefore, the mean recurrence time of state 2 is 2.

Using the same method, we find \[\begin{equation} P_{11}(s)=P_{33}(s)=\frac{1}{4}[\frac{1}{1-s}+\frac{1}{1-(1-4p)s}+\frac{2}{1-(1-2p)s}] \tag{22.42} \end{equation}\] and therefore \[\begin{equation} \begin{split} \mu_{1}&=\mu_{3}=\lim_{s\uparrow 1}\frac{d}{ds}(1-\frac{1}{P_{11}(s)})\\ &=\frac{256p^4}{64p^4}=4 \end{split} \tag{22.43} \end{equation}\] Therefore, the mean recurrence time of states 1 and 3 is 4.
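The mean recurrence times also follow from the identity \(\mu_i=1/\pi_i\) for an irreducible positive-recurrent chain, which gives an independent check:

```python
import numpy as np

p = 0.2
P = np.array([[1 - 2*p, 2*p,     0      ],
              [p,       1 - 2*p, p      ],
              [0,       2*p,     1 - 2*p]])

# Stationary distribution: solve pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
pi, *_ = np.linalg.lstsq(A, np.array([0., 0., 0., 1.]), rcond=None)
print(pi)       # [0.25 0.5 0.25] for every 0 < p < 0.5
print(1 / pi)   # [4. 2. 4.], matching mu_1 = mu_3 = 4 and mu_2 = 2
```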