Chapter 11 Midterm (Lecture on 02/09/2021)
Exercise 11.1 (Transition matrix for a physical process) Provide the transition matrix for each of the following Markov chains.
(a) Consider the sequence of tosses of a coin with probability of “heads” \(p\). At time \(n\) (after \(n\) tosses of the coin) the state of the process is the number of heads in the \(n\) tosses minus the number of tails.
(b) \(N\) black balls and \(N\) white balls are placed in two urns so that each urn contains \(N\) balls. At each stage one ball is selected at random from each urn and the two balls are interchanged. The state of the system is the number of white balls in the first urn.
(c) Consider two urns A and B containing a total of \(N\) balls. An experiment is performed in which a ball is selected at random at time \(n\) \((n = 1, 2, \cdots)\) from among the totality of \(N\) balls. Then an urn is selected at random (the probability of selecting A is \(p\)) and the ball previously drawn is placed in this urn. The state of the system at each trial is the number of balls in A.
(d) Now assume that at time \(n+1\) a ball and an urn are chosen with probabilities depending on the contents of the urns (i.e., a ball is chosen from A with probability \(\frac{k}{N}\) or from B with probability \(\frac{N-k}{N}\), and urn A is chosen with probability \(\frac{k}{N}\) or urn B with probability \(\frac{N-k}{N}\), where \(k\) is the number of balls in A). The state of the system at each trial is the number of balls in A.
(e) Recall that the state space can be partitioned into communicating classes. What are the communicating classes in part (d)?
Proof. For (a), let \(X_n\) be the stochastic process described in the problem. We want to find \(P(X_{n+1}=j|X_n=i)\). Suppose \(X_n=i\); then \(X_{n+1}\) can only take the values \(i+1\) or \(i-1\), with probability \(p\) and \(1-p\), respectively. Thus, we have \[\begin{equation} P(X_{n+1}=j|X_n=i)=p_{ij}=\left\{\begin{aligned} &p & j=i+1\\ &1-p & j=i-1 \\ &0 & o.w. \end{aligned}\right. \tag{11.1} \end{equation}\] for all \(i,j\) and \(n\).
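As an illustration, here is a minimal simulation sketch of this chain (the function name and parameters are our own, not from the problem):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_heads_minus_tails(p, n_steps):
    """Simulate X_n = (# heads) - (# tails): each toss moves the state
    up by 1 with probability p and down by 1 with probability 1 - p."""
    steps = rng.choice([1, -1], size=n_steps, p=[p, 1 - p])
    return np.cumsum(steps)

print(simulate_heads_minus_tails(p=0.6, n_steps=10))
```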
For (b), let \(X_n\) denote the number of white balls in the first urn. We are interested in \(P(X_{n+1}=j|X_n=i)\) for \(i=0,1,\cdots,N\). Suppose at stage \(n\) there are \(i\) (\(i=1,2,\cdots,N-1\)) white balls in the first urn. Then at stage \(n+1\) there can be \(i\) white balls (if the balls selected from the two urns are both white or both black), \(i-1\) white balls (if a white ball is selected from the first urn and a black ball from the second) or \(i+1\) white balls (if a black ball is selected from the first urn and a white ball from the second). Therefore \[\begin{equation} P(X_{n+1}=j|X_n=i)=\left\{\begin{aligned} & (\frac{i}{N})^2 & j=i-1\\ & \frac{2i(N-i)}{N^2} & j=i \\ & (\frac{N-i}{N})^2 & j=i+1\\ & 0 & o.w.\end{aligned} \right. \tag{11.2} \end{equation}\] for \(i=1,\cdots,N-1\). Similarly, for the two special cases \(i=0\) and \(i=N\) we have \[\begin{equation} \begin{split} &P(X_{n+1}=j|X_n=0)=\left\{\begin{aligned} & 1 & j=1\\ & 0 & o.w.\end{aligned} \right.\\ &P(X_{n+1}=j|X_n=N)=\left\{\begin{aligned} & 1 & j=N-1\\ & 0 & o.w.\end{aligned} \right. \end{split} \tag{11.3} \end{equation}\]
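A short numerical sketch (names are illustrative) that assembles this matrix from (11.2); note that the same formula reproduces the boundary rows (11.3) when \(i=0\) or \(i=N\):

```python
import numpy as np

def urn_exchange_matrix(N):
    """Rows i = 0..N index the number of white balls in the first urn;
    entries follow (11.2), which also covers the boundary rows (11.3)."""
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        if i > 0:
            P[i, i - 1] = (i / N) ** 2
        P[i, i] = 2 * i * (N - i) / N ** 2
        if i < N:
            P[i, i + 1] = ((N - i) / N) ** 2
    return P

P = urn_exchange_matrix(5)
assert np.allclose(P.sum(axis=1), 1.0)  # every row is a probability distribution
```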
For part (c), let \(X_n\) denote the number of balls in urn A. We want \(P(X_{n+1}=j|X_n=i)\) for \(i=0,1,\cdots,N\). Suppose at stage \(n\) there are \(i\) (\(i=1,2,\cdots,N-1\)) balls in urn A. Then at stage \(n+1\) there can be \(i\) balls (a ball is selected from urn A and placed back in urn A, or selected from urn B and placed back in urn B), \(i-1\) balls (selected from urn A, placed in urn B) or \(i+1\) balls (selected from urn B, placed in urn A). Therefore, \[\begin{equation} P(X_{n+1}=j|X_n=i)=\left\{\begin{aligned} & \frac{i}{N}(1-p) & j=i-1\\ & \frac{i}{N}p+\frac{N-i}{N}(1-p) & j=i \\ & \frac{N-i}{N}p & j=i+1\\ & 0 & o.w.\end{aligned} \right. \tag{11.4} \end{equation}\] for \(i=1,\cdots,N-1\). Similarly, for the two special cases \(i=0\) and \(i=N\) we have \[\begin{equation} \begin{split} &P(X_{n+1}=j|X_n=0)=\left\{\begin{aligned} & 1-p & j=0\\ & p & j=1\\ & 0 & o.w.\end{aligned} \right.\\ &P(X_{n+1}=j|X_n=N)=\left\{\begin{aligned} & 1-p & j=N-1\\ & p & j=N\\ & 0 & o.w.\end{aligned} \right. \end{split} \tag{11.5} \end{equation}\]
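The same kind of sketch for (11.4), with the urn-selection probability \(p\) as a parameter; the general row again reduces to the boundary rows (11.5) at \(k=0\) and \(k=N\):

```python
import numpy as np

def urn_choice_matrix(N, p):
    """Entries follow (11.4); state k = number of balls in urn A."""
    P = np.zeros((N + 1, N + 1))
    for k in range(N + 1):
        if k > 0:
            P[k, k - 1] = (k / N) * (1 - p)
        P[k, k] = (k / N) * p + ((N - k) / N) * (1 - p)
        if k < N:
            P[k, k + 1] = ((N - k) / N) * p
    return P

assert np.allclose(urn_choice_matrix(5, 0.3).sum(axis=1), 1.0)
```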
Finally, in part (d), let \(X_n\) be the process; we want \(P(X_{n+1}=j|X_n=i)\). Given \(X_n=i\), \(X_{n+1}\) can take the values \(i+1\), \(i\) or \(i-1\), with probabilities given by (11.6): \[\begin{equation} P(X_{n+1}=j|X_n=i)=\left\{\begin{aligned} & \frac{i(N-i)}{N^2} & j=i-1\\ & (\frac{i}{N})^2+(\frac{N-i}{N})^2 & j=i \\ & \frac{i(N-i)}{N^2} & j=i+1\\ & 0 & o.w.\end{aligned} \right. \tag{11.6} \end{equation}\] for \(i=1,\cdots,N-1\), and the special cases are \[\begin{equation} \begin{split} &P(X_{n+1}=j|X_n=0)=\left\{\begin{aligned} & 1 & j=0\\ & 0 & o.w.\end{aligned} \right.\\ &P(X_{n+1}=j|X_n=N)=\left\{\begin{aligned} & 1 & j=N\\ & 0 & o.w.\end{aligned} \right. \end{split} \tag{11.7} \end{equation}\]
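A sketch for (11.6)-(11.7), which also makes the absorbing boundary states (relevant to part (e)) visible:

```python
import numpy as np

def state_dependent_matrix(N):
    """Entries follow (11.6); the formula gives P[0,0] = P[N,N] = 1,
    i.e. the boundary rows (11.7)."""
    P = np.zeros((N + 1, N + 1))
    for k in range(N + 1):
        if k > 0:
            P[k, k - 1] = k * (N - k) / N ** 2
        P[k, k] = (k / N) ** 2 + ((N - k) / N) ** 2
        if k < N:
            P[k, k + 1] = k * (N - k) / N ** 2
    return P

P = state_dependent_matrix(5)
assert P[0, 0] == 1.0 and P[5, 5] == 1.0  # states 0 and N are absorbing
```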
For (e), based on the transition matrix of part (d), the communicating classes are \(C_1=\{0\}\), \(C_2=\{1,2,\cdots,N-1\}\) and \(C_3=\{N\}\).

Exercise 11.2 (Markov chain calculation) (a) Consider a Markov chain with transition probability matrix \[\begin{equation} \mathbf{P}=\begin{pmatrix} \frac{1}{2} & 1-\frac{1}{2} & 0 & 0 & 0 & \cdots \\ \frac{1}{3} & 0 & 1-\frac{1}{3} & 0 & 0 & \cdots\\ \frac{1}{4} & 0 & 0 & 1-\frac{1}{4} & 0 & \cdots\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \end{pmatrix} \tag{11.8} \end{equation}\] Compute \(f_{00}(n),n\geq 1\).
(b) Consider a Markov chain with state space \(S=\{1,2,3,4,5\}\) and corresponding transition matrix \[\begin{equation} \begin{pmatrix} \frac{2}{3} & \frac{1}{3} & 0 & 0 & 0 \\ \frac{1}{4} & \frac{3}{4} & 0 & 0 & 0 \\ \frac{1}{3} & 0 & 0 & \frac{1}{3} & \frac{1}{3} \\ 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2}\\ 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \end{pmatrix} \tag{11.9} \end{equation}\] Identify the communicating classes. Which states are transient and which states are persistent? Find the mean recurrence times for states 1 and 3.
Proof. For (a), we notice that \[\begin{equation} \begin{split} &f_{00}(1)=\frac{1}{2}\\ &f_{00}(2)=\frac{1}{2}\times \frac{1}{3}=(1-\frac{1}{2})\times\frac{1}{3}\\ &f_{00}(3)=\frac{1}{2}\times \frac{2}{3}\times\frac{1}{4}=(1-\frac{1}{2})\times(1-\frac{1}{3})\times\frac{1}{4} \end{split} \tag{11.10} \end{equation}\]
Therefore we recognize the pattern \[\begin{equation} f_{00}(n)=\prod_{i=2}^n(1-\frac{1}{i})\times\frac{1}{n+1}\quad n=2,3,\cdots \tag{11.11} \end{equation}\]
This can be verified from the definition of \(f_{00}(n)\): it is the probability that the chain first returns to state 0 at step \(n\). This can happen only by moving upward in the first \(n-1\) steps and then returning to state 0 at the \(n\)th step. The probability of this is given by (11.11).
Thus, \(f_{00}(1)=\frac{1}{2}\) and \(f_{00}(n)=\prod_{i=2}^n(1-\frac{1}{i})\cdot\frac{1}{n+1}\), for \(n=2,3,\cdots\).
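Since the product telescopes, (11.11) admits a closed form that also confirms state 0 is persistent:
\[
\prod_{i=2}^n\Bigl(1-\frac{1}{i}\Bigr)=\prod_{i=2}^n\frac{i-1}{i}=\frac{1}{n},
\qquad\text{so}\qquad
f_{00}(n)=\frac{1}{n(n+1)},\quad n\geq 1,
\]
and \(\sum_{n=1}^{\infty}f_{00}(n)=\sum_{n=1}^{\infty}\bigl(\frac{1}{n}-\frac{1}{n+1}\bigr)=1\).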
For part (b), states 1 and 2 intercommunicate, as do states 4 and 5, and \(C_1=\{1,2\}\), \(C_2=\{4,5\}\) are closed and irreducible sets of states. We can use the decomposition theorem (Theorem 7.1) to partition the state space as \(\mathcal{S}=C_1\cup C_2\cup T\), where \(T=\{3\}\) contains the transient state. From Theorem 7.1 and Lemma 7.1, we immediately have that states 1, 2, 4, 5 are (non-null) persistent and state 3 is transient.
Now for the mean recurrence times: for state 3, since it is transient, \(\mu_3=\infty\). We are left with state 1, for which \[\begin{equation} f_{11}(n)=\left\{\begin{aligned} &\frac{2}{3} & n=1 \\ &\frac{1}{3}(\frac{3}{4})^{n-2}\frac{1}{4} & n\geq 2 \end{aligned} \right. \tag{11.12} \end{equation}\]
By definition, we have \[\begin{equation} \begin{split} \mu_1&=\sum_{n=1}^{\infty}nf_{11}(n)\\ &=1\cdot\frac{2}{3}+\frac{1}{3}\cdot\frac{1}{4}\sum_{n=2}^{\infty}n\cdot(\frac{3}{4})^{n-2}\\ &=1\cdot\frac{2}{3}+\frac{1}{3}\cdot\frac{1}{4}\cdot 20=\frac{7}{3} \end{split} \tag{11.13} \end{equation}\]
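A quick numerical check of (11.13), truncating the series (variable names are ours):

```python
import numpy as np

n = np.arange(2, 200)
f11 = (1 / 3) * (3 / 4) ** (n - 2) * (1 / 4)   # f_11(n) for n >= 2, from (11.12)
mu1 = 1 * (2 / 3) + np.sum(n * f11)            # truncated version of (11.13)
print(mu1)  # approximately 2.3333 = 7/3
```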
Exercise 11.3 (Gambler's ruin problem, expected stopping time) Consider the following random walk with state space \(\mathcal{S}=\{0,1,2,3,4,5\}\) and the transition matrix: \[\begin{equation} \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ q & 0 & p & 0 & 0 & 0\\ 0 & q & 0 & p & 0 & 0 \\ 0 & 0 & q & 0 & p & 0 \\ 0 & 0 & 0 & q & 0 & p \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \tag{11.14} \end{equation}\] where \(q=1-p\), so that from each interior state the walk moves up with probability \(p\) and down with probability \(q\). Find \(d(k)=E[\) time to absorption into state \(0\) or \(5\), initial state is \(k]\). Prove \[\begin{equation} d(k)=\left\{\begin{aligned} &\frac{k}{q-p}-\frac{5}{q-p}\frac{1-(\frac{q}{p})^k}{1-(\frac{q}{p})^5} & p\neq \frac{1}{2}\\ &k(5-k) & p=\frac{1}{2} \end{aligned}\right. \tag{11.15} \end{equation}\]
Proof. Let \(t\) be the number of stages until absorption and \(E_k=E(t|X_0=k)\) be the expected time to absorption when starting at state \(k\). Obviously \(E_0=E_5=0\). Now we consider the cases \(k=1,2,3,4\).
Conditioning on the first move \(\Delta_1\), where \(\Delta_1=\left\{\begin{aligned} &+1 & p\\ & -1 & q \end{aligned}\right.\), we have \[\begin{equation} \begin{split} E_k&=E(t|X_0=k)\\ &=E(t|X_0=k\cap\Delta_1=1)Pr(\Delta_1=1)+E(t|X_0=k\cap\Delta_1=-1)Pr(\Delta_1=-1)\\ &=pE(t|X_1=k+1)+qE(t|X_1=k-1)\\ &=p(1+E(t|X_0=k+1))+q(1+E(t|X_0=k-1))\\ &=1+pE_{k+1}+qE_{k-1} \end{split} \tag{11.16} \end{equation}\]
Now we have the equation \(E_k=1+pE_{k+1}+qE_{k-1}\). If \(p\neq q\), its general solution has the form \(E_k=C_1r_1^k+C_2r_2^k+\gamma\), where \(r_1\) and \(r_2\) are the roots of the equation \(px^2-x+q=0\) and \(\gamma\) is a particular solution. Thus we obtain \(r_1=1\) and \(r_2=\frac{q}{p}\). Guessing \(\gamma=ak+b\) and plugging into the equation, we have \[\begin{equation} ak+b=1+p(a(k+1)+b)+q(a(k-1)+b) \tag{11.17} \end{equation}\] from which we get a particular solution \(\gamma=\frac{k}{q-p}\). Thus, the general solution of (11.16) is \[\begin{equation} E_k=C_1+C_2\cdot(\frac{q}{p})^k+\frac{k}{q-p} \tag{11.18} \end{equation}\] Plugging in the two boundary conditions \(E_0=E_5=0\), we can solve for \(C_1\) and \(C_2\), and finally \[\begin{equation} E_k=\frac{k}{q-p}-\frac{5}{q-p}\frac{1-(\frac{q}{p})^k}{1-(\frac{q}{p})^5} \tag{11.19} \end{equation}\]
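For completeness, the boundary conditions yield
\[
C_1+C_2=0,\qquad C_1+C_2\Bigl(\frac{q}{p}\Bigr)^5+\frac{5}{q-p}=0
\quad\Longrightarrow\quad
C_2=-C_1=\frac{5}{(q-p)\bigl(1-(\frac{q}{p})^5\bigr)},
\]
which gives (11.19) after substitution into (11.18).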
If \(p=q=\frac{1}{2}\), then the general solution of equation (11.16) has the form \(E_k=C_1r^k+C_2kr^k+\gamma\), where \(r\) is the (double) root of the equation \(x^2-2x+1=0\), and we guess a particular solution \(\gamma\) of the form \(ak^2+bk+c\). Plugging in, we get \(r=1\) and \(\gamma=-k^2\). Then, applying the boundary conditions, we finally get, for \(p=q=\frac{1}{2}\), \[\begin{equation} E_k=5k-k^2=k(5-k) \tag{11.20} \end{equation}\]
From (11.19) and (11.20) we obtain the desired result (11.15).
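A sketch that solves the linear system \(E_k=1+pE_{k+1}+qE_{k-1}\) directly and compares it with the closed form (11.15) (function and variable names are ours):

```python
import numpy as np

def absorption_times(p, N=5):
    """Solve E_k = 1 + p*E_{k+1} + q*E_{k-1} for k = 1..N-1,
    with boundary conditions E_0 = E_N = 0, as a linear system."""
    q = 1 - p
    n = N - 1
    A = np.eye(n)
    for row, k in enumerate(range(1, N)):
        if k + 1 < N:
            A[row, row + 1] = -p
        if k - 1 > 0:
            A[row, row - 1] = -q
    return np.linalg.solve(A, np.ones(n))

p, N = 0.4, 5
q = 1 - p
closed_form = [k / (q - p) - (N / (q - p)) * (1 - (q / p) ** k) / (1 - (q / p) ** N)
               for k in range(1, N)]
assert np.allclose(absorption_times(p, N), closed_form)  # matches (11.15)
```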
Proof. Notice that this matrix is doubly stochastic; using the conclusion of Exercise 22.5, the stationary distribution is the uniform distribution. Thus, \(\boldsymbol{\pi}=(\frac{1}{m+1},\frac{1}{m+1},\cdots,\frac{1}{m+1})\).
Because the chain is obviously irreducible and aperiodic, with all states non-null persistent, we have \[\begin{equation} \lim_{n\to\infty}P(X_n=j|X_0=i)=\lim_{n\to\infty}p_{ij}(n)=\pi_j=\frac{1}{m+1} \tag{11.22} \end{equation}\]
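Since the exercise's matrix is not reproduced above, the sketch below uses an arbitrary doubly stochastic matrix (an assumption of ours) to illustrate the convergence in (11.22):

```python
import numpy as np

# An arbitrary 3x3 doubly stochastic matrix (rows AND columns each sum to 1);
# here m = 2, so the uniform stationary distribution is (1/3, 1/3, 1/3).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5]])

Pn = np.linalg.matrix_power(P, 50)
print(Pn)  # every row is approximately (1/3, 1/3, 1/3), matching (11.22)
```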