Chapter 23 Homework 3: Poisson Process, Birth and Death Process: Problems and Tentative Solutions
Exercise 23.1 (Birth process, probability of an even or odd number of occurrences) Let \(X(t)\) be a pure birth continuous-time Markov chain. Assume that \[\begin{equation} \begin{split} &P(\text{an event happens in (t,t+h)}|X(t)=\text{odd})=\lambda_1h+o(h)\\ &P(\text{an event happens in (t,t+h)}|X(t)=\text{even})=\lambda_2h+o(h) \end{split} \tag{23.1} \end{equation}\] where \(\frac{o(h)}{h}\to 0\) as \(h\to 0\). Take \(X(0)=0\). Find the following probabilities: \(P_1(t)=P(X(t)=\text{odd})\) and \(P_2(t)=P(X(t)=\text{even})\).
Hint: Derive the differential equations: \[\begin{equation} \begin{split} &P_1^{\prime}(t)=-\lambda_1P_1(t)+\lambda_2P_2(t)\\ &P_2^{\prime}(t)=\lambda_1P_1(t)-\lambda_2P_2(t)\\ \end{split} \tag{23.2} \end{equation}\] Proof. We start by considering \(P(X(t+h)=\text{odd})\). Conditioning on \(X(t)\), we have
\[\begin{equation}
\begin{split}
P(X(t+h)=\text{odd})&=\sum_iP(X(t+h)=\text{odd}|X(t)=i)P(X(t)=i)\\
&=P(X(t+h)=\text{odd}|X(t)=\text{even})P(X(t)=\text{even})\\
&+P(X(t+h)=\text{odd}|X(t)=\text{odd})P(X(t)=\text{odd})
\end{split}
\tag{23.3}
\end{equation}\]
Using (23.1), this becomes
\[\begin{equation}
\begin{split}
P(X(t+h)=\text{odd})&=(\lambda_2 h+o(h))P(X(t)=\text{even})\\
&+(1-\lambda_1 h+o(h))P(X(t)=\text{odd})
\end{split}
\tag{23.4}
\end{equation}\]
With \(P_1(t)=P(X(t)=\text{odd})\) and \(P_2(t)=P(X(t)=\text{even})\) as defined in the statement, (23.4) becomes
\[\begin{equation}
\begin{split}
P_1(t+h)&=(\lambda_2 h+o(h))P_2(t)+(1-\lambda_1 h+o(h))P_1(t)\\
&=\lambda_2 hP_2(t)+(1-\lambda_1 h)P_1(t)+o(h)
\end{split}
\tag{23.5}
\end{equation}\]
which implies
\[\begin{equation}
\frac{P_1(t+h)-P_1(t)}{h}=-\lambda_1 P_1(t)+\lambda_2 P_2(t)+\frac{o(h)}{h}
\tag{23.6}
\end{equation}\]
Taking the limit as \(h\to 0\) on both sides, we get
\[\begin{equation}
P_1^{\prime}(t)=-\lambda_1P_1(t)+\lambda_2P_2(t)
\tag{23.7}
\end{equation}\]
Similarly, we obtain the other differential equation \[\begin{equation} P_2^{\prime}(t)=\lambda_1P_1(t)-\lambda_2P_2(t) \tag{23.8} \end{equation}\] Since \(X(0)=0\) is even, the initial conditions are \[\begin{equation} \left\{\begin{aligned} & P_1(0)=0\\ &P_2(0)=1 \end{aligned}\right. \tag{23.9} \end{equation}\]
Equations (23.7) and (23.8) can be combined and written in matrix form as \[\begin{equation} \mathbf{P}^{\prime}(t)=\begin{pmatrix} P^{\prime}_1(t)\\P^{\prime}_2(t)\end{pmatrix}= \begin{pmatrix} -\lambda_1 & \lambda_2\\ \lambda_1 & -\lambda_2 \end{pmatrix} \begin{pmatrix} P_1(t)\\P_2(t)\end{pmatrix}=\boldsymbol{\Lambda}\mathbf{P}(t) \tag{23.10} \end{equation}\] Suppose \(\mathbf{P}(t)=\boldsymbol{\eta}e^{rt}\); then \(\mathbf{P}^{\prime}(t)=r\boldsymbol{\eta}e^{rt}\), and substituting into (23.10) gives \[\begin{equation} (\boldsymbol{\Lambda}-r \mathbf{I})\boldsymbol{\eta}e^{rt}=\mathbf{0} \tag{23.11} \end{equation}\] Since \(e^{rt}\) is never zero, we may cancel it and finally get
\[\begin{equation} (\boldsymbol{\Lambda}-r \mathbf{I})\boldsymbol{\eta}=\mathbf{0} \tag{23.12} \end{equation}\]
The matrix \(\boldsymbol{\Lambda}\) has two distinct eigenvalues \(r_1=0\) and \(r_2=-(\lambda_1+\lambda_2)\), with corresponding eigenvectors \(\begin{pmatrix} \frac{\lambda_2}{\lambda_1}& 1\end{pmatrix}^T\) and \(\begin{pmatrix} -1& 1\end{pmatrix}^T\). Therefore, the general solution to (23.10) is \[\begin{equation} \mathbf{P}(t)=c_1\begin{pmatrix} \frac{\lambda_2}{\lambda_1}\\ 1\end{pmatrix}+c_2e^{-(\lambda_1+\lambda_2)t}\begin{pmatrix} -1\\ 1\end{pmatrix} \tag{23.13} \end{equation}\]
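As a quick numerical sanity check of these eigenpairs (a minimal sketch; the rates \(\lambda_1=2\), \(\lambda_2=1\) are arbitrary illustrative choices):

```python
import numpy as np

lam1, lam2 = 2.0, 1.0  # arbitrary illustrative rates
L = np.array([[-lam1, lam2],
              [lam1, -lam2]])
vals, vecs = np.linalg.eig(L)
print(vals)                      # expect 0 and -(lam1 + lam2) = -3
print(vecs[:, np.argmax(vals)])  # eigenvector for r = 0, proportional to (lam2/lam1, 1)
```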
Since the initial condition is \(\mathbf{P}(0)=\begin{pmatrix} 0& 1\end{pmatrix}^T\), we can find \(c_1=\frac{\lambda_1}{\lambda_1+\lambda_2}\) and \(c_2=\frac{\lambda_2}{\lambda_1+\lambda_2}\). Thus, we finally get \[\begin{equation} \begin{pmatrix} P_1(t)\\P_2(t)\end{pmatrix}=\begin{pmatrix} \frac{\lambda_2}{\lambda_1+\lambda_2}(1-e^{-(\lambda_1+\lambda_2)t})\\ \frac{\lambda_2}{\lambda_1+\lambda_2}(\frac{\lambda_1}{\lambda_2}+e^{-(\lambda_1+\lambda_2)t})\end{pmatrix} \tag{23.14} \end{equation}\]
One can check that \(P_1(t)+P_2(t)=1\) for all \(t\geq 0\), as it should be.
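As a sanity check of (23.14), one can simulate the chain directly and compare the empirical frequency of \(\{X(t)=\text{odd}\}\) with the closed form for \(P_1(t)\); a minimal sketch (the rates, time horizon, and replication count are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def parity_at(t, lam1, lam2):
    """Simulate the pure birth chain from X(0)=0 (even) up to time t
    and return 1 if X(t) is odd, 0 otherwise."""
    s, odd = 0.0, False
    while True:
        rate = lam1 if odd else lam2      # birth rate depends on current parity
        s += rng.exponential(1.0 / rate)  # holding time until the next birth
        if s > t:
            return int(odd)
        odd = not odd                     # each birth flips the parity

lam1, lam2, t, nrep = 2.0, 1.0, 1.5, 100_000
est = np.mean([parity_at(t, lam1, lam2) for _ in range(nrep)])
exact = lam2 / (lam1 + lam2) * (1 - np.exp(-(lam1 + lam2) * t))
print(f"simulated P1(t) = {est:.4f}, formula (23.14) gives {exact:.4f}")
```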
Exercise 23.5 (Linear birth and death process, expected number of individuals) Consider a birth and death process \(\{X(t):t\geq 0\}\) with \(\lambda_n=n\lambda+a\) and \(\mu_n=n\mu\), where \(\lambda,\mu,a>0\). If the population starts with \(i\) individuals, prove that \[\begin{equation} M(t)=E[X(t)]=\left\{\begin{aligned} &at+i & \lambda=\mu\\ & \frac{a}{\lambda-\mu}\{e^{(\lambda-\mu)t}-1\}+ie^{(\lambda-\mu)t} & \lambda\neq \mu\end{aligned}\right. \tag{23.18} \end{equation}\]
Proof. If we define \(p_n(t)=P(X(t)=n)\), then, as shown in class, \[\begin{equation} p_n^{\prime}(t)=-(\lambda_n+\mu_n)p_n(t)+\lambda_{n-1}p_{n-1}(t)+\mu_{n+1}p_{n+1}(t) \tag{23.19} \end{equation}\]
Substituting \(\lambda_n=n\lambda+a\) and \(\mu_n=n\mu\), we have \[\begin{equation} p_n^{\prime}(t)=-(n\lambda+a+n\mu)p_n(t)+((n-1)\lambda+a)p_{n-1}(t)+(n+1)\mu p_{n+1}(t) \tag{23.20} \end{equation}\]
Since \[\begin{equation} M(t)=E(X(t))=\sum_{n=1}^{\infty}np_n(t) \tag{23.21} \end{equation}\] differentiating with respect to \(t\) and substituting (23.20), we get \[\begin{equation} \begin{split} M^{\prime}(t)&=\sum_{n=1}^{\infty}np_n^{\prime}(t)\\ &=\sum_{n=1}^{\infty}n[-n(\lambda+\mu)p_n(t)-ap_n(t)+(n-1)\lambda p_{n-1}(t)\\ &+ap_{n-1}(t)+(n+1)\mu p_{n+1}(t)]\\ &=-(\lambda+\mu)\sum_{n=1}^{\infty}n^2p_n(t)+\lambda\sum_{n=1}^{\infty}n(n-1)p_{n-1}(t)+\mu\sum_{n=1}^{\infty}n(n+1)p_{n+1}(t)\\ &-a\sum_{n=1}^{\infty}np_n(t)+a\sum_{n=1}^{\infty}np_{n-1}(t) \end{split} \tag{23.22} \end{equation}\] Using the result from the class notes (Equation (13.18)), we have \[\begin{equation} \begin{split} M^{\prime}(t)&=\lambda M(t)-\mu M(t)+a\sum_{n=0}^{\infty}p_n(t)\\ &=\lambda M(t)-\mu M(t)+a \end{split} \tag{23.23} \end{equation}\]
First consider the special case \(\lambda=\mu\), for which the differential equation reduces to \[\begin{equation} M^{\prime}(t)=a \tag{23.24} \end{equation}\] with solution \[\begin{equation} M(t)=M(0)+at \tag{23.25} \end{equation}\] Note that \(M(0)=E(X(0))=i\), since the population starts with \(i\) individuals. Thus, \[\begin{equation} M(t)=at+i \tag{23.26} \end{equation}\]
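This case is easy to spot-check with a small Gillespie-style simulation; a minimal sketch (parameter values are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_X(t, lam, mu, a, i):
    """Simulate the birth-death process with birth rate n*lam + a and
    death rate n*mu, started from X(0) = i; return X(t)."""
    x, s = i, 0.0
    while True:
        birth, death = x * lam + a, x * mu
        s += rng.exponential(1.0 / (birth + death))  # time to next event
        if s > t:
            return x
        x += 1 if rng.random() < birth / (birth + death) else -1

lam = mu = 1.0
a, i, t, nrep = 0.5, 3, 2.0, 20_000
est = np.mean([sample_X(t, lam, mu, a, i) for _ in range(nrep)])
print(f"simulated E[X(t)] = {est:.3f}, formula a*t + i = {a * t + i:.3f}")
```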
Next, if \(\lambda\neq \mu\), the differential equation is \[\begin{equation} M^{\prime}(t)-(\lambda-\mu)M(t)=a \tag{23.27} \end{equation}\] Recall from ODE theory that a first-order linear equation of the form \[\begin{equation} \frac{dy}{dx}+P(x)y=Q(x),\quad Q(x)\not\equiv 0 \tag{23.28} \end{equation}\] has general solution \[\begin{equation} y=Ce^{-\int P(x)dx}+e^{-\int P(x)dx}\int Q(x)e^{\int P(x)dx}dx \tag{23.29} \end{equation}\]
For our specific problem, \(P\equiv-(\lambda-\mu)\) and \(Q\equiv a\), so \[\begin{equation} \begin{split} M(t)&=Ce^{(\lambda-\mu)t}+e^{(\lambda-\mu)t}\int_0^tae^{-(\lambda-\mu)s}ds\\ &=Ce^{(\lambda-\mu)t}+ae^{(\lambda-\mu)t}\left[\frac{1}{\lambda-\mu}-\frac{1}{\lambda-\mu}e^{-(\lambda-\mu)t}\right]\\ &=Ce^{(\lambda-\mu)t}+\frac{a}{\lambda-\mu}\{e^{(\lambda-\mu)t}-1\} \end{split} \tag{23.30} \end{equation}\]
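Alternatively, the ODE (23.27) with the initial condition \(M(0)=i\) (used below to fix \(C\)) can be solved symbolically; a minimal SymPy sketch, assuming \(\lambda\neq\mu\):

```python
import sympy as sp

t, lam, mu, a, i = sp.symbols("t lambda mu a i", positive=True)
M = sp.Function("M")

# M'(t) = (lambda - mu) M(t) + a with M(0) = i, cf. (23.27)
ode = sp.Eq(M(t).diff(t), (lam - mu) * M(t) + a)
sol = sp.dsolve(ode, M(t), ics={M(0): i})
print(sp.simplify(sol.rhs))  # agrees with (23.31) when lambda != mu
```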
The constant \(C\) is determined by the initial condition \(M(0)=i\): substituting \(t=0\) into (23.30) gives \(C=i\). Thus, we finally get \[\begin{equation} M(t)=ie^{(\lambda-\mu)t}+\frac{a}{\lambda-\mu}\{e^{(\lambda-\mu)t}-1\} \tag{23.31} \end{equation}\] as desired.

Exercise 23.6 (Poisson process, arrival time) A cable car starts off with \(n\) riders. The times between successive stops of the car are independent exponential random variables with rate \(\lambda\). At each stop one rider gets off. This takes no time, and no additional riders get on. After a rider gets off the car, he or she walks home. Independently of all else, the walk takes an exponential time with rate \(\lambda\).
- (a) What is the distribution of the time at which the last rider departs the car?
- (b) Suppose the last rider departs the car at time \(t\). What is the probability that the \(i\)th rider is home by that time, \(i=1,\cdots,n\)?
Proof. For part (a), let \(T\) be the time at which the last rider departs the car. Then \(T\) is a sum of \(n\) independent exponential random variables with rate \(\lambda\). Therefore, \(T\sim Gamma(n,\lambda)\).
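A quick distributional check of this claim (a minimal sketch, assuming SciPy is available; \(n\) and \(\lambda\) are arbitrary illustrative choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, lam = 5, 2.0
# time of the last departure = sum of n iid Exp(lam) inter-stop times
T = rng.exponential(1.0 / lam, size=(100_000, n)).sum(axis=1)
# Kolmogorov-Smirnov test against Gamma(n, scale = 1/lam);
# a large p-value is consistent with T ~ Gamma(n, lam)
print(stats.kstest(T, stats.gamma(a=n, scale=1.0 / lam).cdf))
```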
For part (b), suppose a rider gets off at time \(s\leq t\). The probability that this rider has not arrived home by time \(t\) is \[\begin{equation} \int_{t-s}^{\infty}\lambda e^{-\lambda x}dx=e^{-\lambda(t-s)} \tag{23.32} \end{equation}\] Given that the last rider departs the car at time \(t\), the departure times of the other \(n-1\) riders are distributed as i.i.d. uniform random variables on \([0,t]\) (the usual order-statistics property of the exponential arrival times given \(S_n=t\)). Hence, for any of the first \(n-1\) riders, the probability of not being home by time \(t\) is \[\begin{equation} \int_0^{t}e^{-\lambda(t-s)}\cdot\frac{1}{t}ds=\frac{1-e^{-\lambda t}}{\lambda t} \tag{23.33} \end{equation}\] Thus, the probability that the \(i\)th rider is home by that time, for \(i=1,\cdots,n\), is \[\begin{equation} P=\left\{\begin{aligned} & 1-\frac{1-e^{-\lambda t}}{\lambda t} & i=1,\cdots,n-1\\ & 0 & i=n\end{aligned}\right. \tag{23.34} \end{equation}\]

Exercise 23.7 (Compound Poisson process and its application) Let \(\{N(t):t\geq 0\}\) be a Poisson process with rate \(\lambda\). Define a sequence of nonnegative, independent and identically distributed random variables \(\{D_i:i\geq 1\}\), independent of \(N(t)\). A stochastic process \(\{Y(t):t\geq 0\}\) is a compound Poisson process if \(Y(t)=\sum_{i=1}^{N(t)}D_i\).
- (a) Find \(E[Y(t)]\) and \(Cov(Y(t),Y(s))\).
- (b) Customers arrive at an automatic teller machine (ATM) in accordance with a Poisson process with rate 12 per hour. The amount of money withdrawn on each transaction is a random variable with mean $30 and standard deviation $50. (A negative withdrawal means that money was deposited.) Suppose that the machine is in use 15 hours per day. Formulate this scenario as a compound Poisson process to find the expected amount of money withdrawn per day. Can you provide an approximate probability that the total daily withdrawal is greater than $5000?
Proof. For part (a), by Wald's identity, we have \[\begin{equation} E(Y(t))=E(D_1+\cdots+D_{N(t)})=E(N(t))E(D_1)=\lambda tE(D_1) \tag{23.35} \end{equation}\]
For the covariance, first compute the variance by conditioning on \(N(t)\): \[\begin{equation} \begin{split} Var(Y(t))&=E(Var(Y(t)|N(t)))+Var(E(Y(t)|N(t)))\\ &=E(N(t)Var(D))+Var(N(t)E(D))\\ &=Var(D)E(N(t))+E(D)^2Var(N(t))\\ &=Var(D)\lambda t+E(D)^2\lambda t\\ &=\lambda tE(D^2) \end{split} \tag{23.36} \end{equation}\]
Assume w.l.o.g. that \(t<s\). Since \(Y(s)-Y(t)\) is independent of \(Y(t)\) (independent increments), we have \[\begin{equation} \begin{split} Cov(Y(t),Y(s))&=Cov(Y(t),Y(s)-Y(t)+Y(t))\\ &=Cov(Y(t),Y(s)-Y(t))+Var(Y(t))=Var(Y(t))=\lambda tE(D^2) \end{split} \tag{23.37} \end{equation}\] Therefore, in general we have \[\begin{equation} Cov(Y(t),Y(s))=\lambda \min\{s,t\}E(D^2) \tag{23.38} \end{equation}\]
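A Monte Carlo spot-check of (23.38), using the standard fact that, given \(N(s)=n\), the \(n\) jump times are i.i.d. uniform on \([0,s]\) (a minimal sketch; the rate, times, and jump distribution \(D\sim Exp(1)\), for which \(E(D^2)=2\), are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, t, s, nrep = 3.0, 1.0, 2.0, 100_000   # requires t < s
yt = np.empty(nrep)
ys = np.empty(nrep)
for k in range(nrep):
    n = rng.poisson(lam * s)         # N(s), number of jumps on [0, s]
    times = rng.uniform(0, s, n)     # given N(s)=n, jump times are iid U(0, s)
    d = rng.exponential(1.0, n)      # jump sizes D_i ~ Exp(1)
    ys[k] = d.sum()                  # Y(s)
    yt[k] = d[times <= t].sum()      # Y(t): only the jumps that occurred by time t
print("simulated Cov(Y(t), Y(s)):", np.cov(yt, ys)[0, 1])
print("formula (23.38):          ", lam * min(s, t) * 2.0)
```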
For part (b), let \(Y(t)\) be the total amount of money withdrawn in the interval \([0,t]\), where the time \(t\) is measured in hours. Assuming that the successive amounts withdrawn are independent and identically distributed random variables, the stochastic process \(\{Y(t):t\geq 0\}\) is a compound Poisson process, and \(Y(15)\) is the daily withdrawal. Its mean and variance can be computed using (23.35) and (23.36): \[\begin{equation} \begin{split} &E(Y(15))=12\times 15\times30=5400\\ &Var(Y(15))=12\times 15\times(30^2+50^2)=612000 \end{split} \tag{23.39} \end{equation}\] Therefore, by the CLT we have \[\begin{equation} P(Y(15)\geq 5000)=P\left(Z\geq \frac{5000-5400}{\sqrt{612000}}\right)\approx 0.6954 \end{equation}\] Thus, the approximate probability that the total daily withdrawal is greater than $5000 is 0.6954.

Exercise 23.8 (Process that describes the decay of particles) At each (discrete) time point \(n=0,1,2,\cdots\), a number \(Y_n\) of particles enters a chamber, where the \(Y_n\), \(n\geq 0\), are independent and identically Poisson distributed with parameter \(\lambda\), that is, for all \(n\geq 0\), \(Pr(Y_n=y)=\lambda^y\exp(-\lambda)/y!\), for \(y\in\{0,1,2,\cdots\}\). Each particle in the chamber may decay between two successive time points with probability \(q\) (where \(0<q<1\)), which is constant over time and the same for all particles. Moreover, particles decay independently of each other, and \(Y_n\) is independent of the number of particles that decay between time points \(n-1\) and \(n\). Let \(X_n\) be the number of particles in the chamber at time \(n\), where \(X_n\) includes particles present just after the entry of the \(Y_n\) particles.
- (a) Show that \(X=\{X_n:n\geq 0\}\) is a time homogeneous Markov chain, and prove that its transition probabilities are given by \[\begin{equation} P(X_{n+1}=j|X_n=i)=\exp(-\lambda)\sum_{r=0}^{\min\{i,j\}}\frac{i!}{(i-r)!r!(j-r)!}(1-q)^rq^{i-r}\lambda^{j-r} \tag{23.40} \end{equation}\]
- (b) Show that the limiting probabilities for the number of particles in the chamber are given by \[\begin{equation} \lim_{n\to\infty}P(X_n=j)=\exp(-\lambda q^{-1})\frac{(\lambda q^{-1})^j}{j!},\quad j\in\{0,1,2,\cdots\} \tag{23.41} \end{equation}\]
Proof. For part (a), note that given \(X_n\), the next state \(X_{n+1}\) is the number of the \(X_n\) particles that survive to time \(n+1\) (a binomial thinning of \(X_n\) with survival probability \(1-q\)) plus the independent new arrival count \(Y_{n+1}\); neither depends on the past except through \(X_n\), so \(X\) is a Markov chain. Let \(Z_n\) denote the number of the \(X_n\) particles that survive to time \(n+1\), so that \(Z_n|X_n=i\sim Binomial(i,1-q)\). Then \[\begin{equation} \begin{split} P(X_{n+1}=j|X_n=i)&=\sum_{r=0}^{\min\{i,j\}}P(X_{n+1}=j|Z_n=r,X_n=i)P(Z_n=r|X_n=i)\\ &=\sum_{r=0}^{\min\{i,j\}}P(Y_{n+1}=j-r)\binom{i}{r}(1-q)^rq^{i-r}\\ &=\sum_{r=0}^{\min\{i,j\}}\lambda^{j-r}\exp(-\lambda)\frac{i!}{(j-r)!r!(i-r)!}(1-q)^rq^{i-r}\\ &=\exp(-\lambda)\sum_{r=0}^{\min\{i,j\}}\frac{i!}{(i-r)!r!(j-r)!}(1-q)^rq^{i-r}\lambda^{j-r} \end{split} \tag{23.42} \end{equation}\] as desired. Since \(P(X_{n+1}=j|X_n=i)\) does not depend on \(n\), the Markov chain is time homogeneous.
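The pmf (23.42) is easy to verify numerically, since given \(X_n=i\) the next state is a \(Binomial(i,1-q)\) number of survivors plus an independent \(Poisson(\lambda)\) count; a minimal sketch (all parameter values are arbitrary illustrative choices):

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(4)
lam, q, i, j = 2.0, 0.4, 5, 4
# one-step transition: Binomial(i, 1-q) survivors + Poisson(lam) new arrivals
sim = rng.binomial(i, 1 - q, 500_000) + rng.poisson(lam, 500_000)
p_mc = np.mean(sim == j)
p_formula = exp(-lam) * sum(
    factorial(i) / (factorial(i - r) * factorial(r) * factorial(j - r))
    * (1 - q) ** r * q ** (i - r) * lam ** (j - r)
    for r in range(min(i, j) + 1)
)
print(f"Monte Carlo: {p_mc:.4f}, formula (23.40): {p_formula:.4f}")
```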
For part (b), since \(X\) is an irreducible Markov chain, it suffices to check that (23.40) and (23.41) satisfy the detailed balance condition (Proposition 10.2). In fact, for any \(i,j\), \[\begin{equation} \begin{split} \pi_ip_{ij}&=\exp(-\lambda q^{-1})\frac{(\lambda q^{-1})^i}{i!}\exp(-\lambda)\sum_{r=0}^{\min\{i,j\}}\frac{i!}{(i-r)!r!(j-r)!}(1-q)^rq^{i-r}\lambda^{j-r}\\ &=\exp(-\lambda (q^{-1}+1))\sum_{r=0}^{\min\{i,j\}}\frac{(1-q)^rq^{-r}\lambda^{i+j-r}}{(i-r)!r!(j-r)!} \end{split} \tag{23.43} \end{equation}\] and similarly, \[\begin{equation} \begin{split} \pi_jp_{ji}&=\exp(-\lambda q^{-1})\frac{(\lambda q^{-1})^j}{j!}\exp(-\lambda)\sum_{r=0}^{\min\{i,j\}}\frac{j!}{(i-r)!r!(j-r)!}(1-q)^rq^{j-r}\lambda^{i-r}\\ &=\exp(-\lambda (q^{-1}+1))\sum_{r=0}^{\min\{i,j\}}\frac{(1-q)^rq^{-r}\lambda^{i+j-r}}{(i-r)!r!(j-r)!} \end{split} \tag{23.44} \end{equation}\] Thus \(\pi_ip_{ij}=\pi_jp_{ji}\), so (23.41) defines a stationary distribution. Since the chain is also aperiodic (for instance, \(p_{00}=e^{-\lambda}>0\)) and every state is non-null persistent, the stationary distribution is in fact the limiting distribution.
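The limiting distribution (23.41) can likewise be spot-checked by running the chain for a long time and comparing long-run frequencies with the \(Poisson(\lambda/q)\) pmf; a minimal sketch (parameter values are arbitrary illustrative choices):

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(5)
lam, q, steps, burn = 2.0, 0.4, 200_000, 1_000
x, counts = 0, []
for n in range(steps):
    x = rng.binomial(x, 1 - q) + rng.poisson(lam)  # survivors + new arrivals
    if n >= burn:                                  # discard burn-in
        counts.append(x)
counts = np.array(counts)
j = 5
print("long-run frequency of {X_n = 5}:", np.mean(counts == j))
print("Poisson(lam/q) pmf at 5:        ", exp(-lam / q) * (lam / q) ** j / factorial(j))
```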