Chapter 24 Quiz 1: Brownian Motion and Markov Processes: Problems and Tentative Solutions

Exercise 24.1 (View Brownian motion as a special Gaussian process) Let \(W=\{W_t:t\geq 0\}\) be a Brownian motion. Show that a Brownian motion can be viewed as a Gaussian process with mean 0 and \(Cov(W_s,W_t)=\min\{s,t\}\).

Proof. Consider any finite dimensional distribution of \(W\), denoted \(F_{W_{t_1},\cdots,W_{t_k}}(w_{t_1},\cdots,w_{t_k})\); w.l.o.g. we can assume \(t_1<\cdots<t_k\), and for simplicity of notation we suppress the \(t\)'s and write the f.d.d.s. as \(F_{W_1,\cdots,W_k}(w_1,\cdots,w_k)\). First, by the independent increments property and \(W_t-W_s\sim N(0,t-s)\) for \(0\leq s\leq t\), each of \(W_1,\cdots,W_k\) can be expressed as a linear combination of the independent normally distributed random variables \(W_1,W_2-W_1,\cdots,W_k-W_{k-1}\), so every f.d.d. of \(W\) is multivariate normal. The mean is \(E(W_t)=0\) and the variance is \(Var(W_t)=t\), while the covariance is, assuming w.l.o.g. \(s<t\), \[\begin{equation} \begin{split} Cov(W_s,W_t)&=E(W_sW_t)=E(W_s(W_t-W_s+W_s))\\ &=E(W_s^2)+E(W_s)E(W_t-W_s)=s, \end{split} \tag{24.1} \end{equation}\] where the last line uses the independence of \(W_s\) and \(W_t-W_s\). Thus, the Brownian motion is a Gaussian process with mean function 0 and covariance function \(Cov(W_s,W_t)=\min\{s,t\}\).
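As a numerical sanity check (an illustration, not part of the proof), the covariance formula can be verified by simulation. The following minimal Python/NumPy sketch estimates \(Cov(W_s,W_t)\) from simulated paths on a discrete grid; the number of paths, the grid size, and the pair \((s,t)\) are arbitrary choices.

```python
import numpy as np

# Monte Carlo check of Cov(W_s, W_t) = min(s, t); illustration only.
rng = np.random.default_rng(0)
n_paths, n_steps, T = 100_000, 100, 1.0
dt = T / n_steps
# Each row is one path: cumulative sums of independent N(0, dt) increments.
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

s, t = 0.3, 0.7
i_s, i_t = round(s / dt) - 1, round(t / dt) - 1  # grid indices of W_s, W_t
cov_hat = np.mean(W[:, i_s] * W[:, i_t])         # E(W_s W_t), since E(W_s) = 0
print(cov_hat, min(s, t))                        # both close to 0.3
```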

Exercise 24.2 (Brownian bridge has stationary increments) A stochastic process \(B=\{B_t:t\in\mathcal{R}\}\) is said to have stationary increments if the distribution of \(B_{t+h}-B_t\) does not depend on \(t\). Consider the Brownian bridge \(B_t=W_t-tW_1\), \(t\in [0,1]\), where \(\{W_t:t\geq 0\}\) is a Brownian motion as in Exercise 24.1. Show that it has stationary increments.

Proof. For \(h\) and \(t\) such that \(t\in[0,1]\) and \(t+h\in[0,1]\), we have \[\begin{equation} \begin{split} B_{t+h}-B_t&=W_{t+h}-(t+h)W_1-W_t+tW_1\\ &=(W_{t+h}-W_t)-hW_1\\ &=W_{t+h}-W_t-h(W_1-W_{t+h})-hW_{t+h}\\ &=(1-h)(W_{t+h}-W_t)-h(W_1-W_{t+h})-hW_t \end{split} \tag{24.2} \end{equation}\]

Since \(W\) is a Brownian motion, the independent increments property gives that \(W_{t+h}-W_t\), \(W_1-W_{t+h}\) and \(W_t\) are mutually independent, with \(W_{t+h}-W_t\sim N(0,h)\), \(W_1-W_{t+h}\sim N(0,1-h-t)\) and \(W_t\sim N(0,t)\). Thus, \(B_{t+h}-B_t\) also has a normal distribution with mean \(E(B_{t+h}-B_t)=0\) and variance \[\begin{equation} Var(B_{t+h}-B_t)=(1-h)^2\cdot h+h^2(1-h-t)+h^2t=h(1-h)\left[(1-h)+h\right]=h(1-h) \tag{24.3} \end{equation}\]

Therefore, \(B_{t+h}-B_t\sim N(0,h(1-h))\), which does not depend on \(t\). Thus, \(B\) has stationary increments.
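Again as an illustration (not part of the proof), one can check numerically that \(Var(B_{t+h}-B_t)\) stays at \(h(1-h)\) as \(t\) varies. The grid and sample sizes in this Python/NumPy sketch are arbitrary.

```python
import numpy as np

# Monte Carlo check that Var(B_{t+h} - B_t) = h(1 - h) regardless of t.
rng = np.random.default_rng(1)
n_paths, n_steps = 100_000, 100
dt = 1.0 / n_steps
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
times = np.arange(1, n_steps + 1) * dt
B = W - times * W[:, [-1]]          # Brownian bridge: B_t = W_t - t * W_1

h = 0.2
for t in (0.1, 0.4, 0.7):
    i_t, i_th = round(t / dt) - 1, round((t + h) / dt) - 1
    var_hat = np.var(B[:, i_th] - B[:, i_t])
    print(t, var_hat, h * (1 - h))  # var_hat stays near 0.16 for every t
```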

Exercise 24.3 (Condition for Markovian) Show that a stochastic process with independent increments is a Markov process.

Proof. Consider a stochastic process \(X=\{X(t,\omega):t\in\mathcal{T},\omega\in\Omega\}\) that has independent increments. For any \(t_1<t_2<\cdots<t_n\) and any \(s\), the conditioning event \(\{X_{t_1}=x_1,\cdots,X_{t_{n-1}}=x_{n-1}\}\) can be rewritten in terms of increments, so we have \[\begin{equation} \begin{split} &P(X_{t_n}=s|X_{t_1}=x_1,\cdots,X_{t_{n-1}}=x_{n-1})\\ &=P(X_{t_n}-X_{t_{n-1}}=s-x_{n-1}|X_{t_1}=x_1,X_{t_2}-X_{t_1}=x_2-x_1,\\ &\quad X_{t_3}-X_{t_2}=x_3-x_2,\cdots,X_{t_{n-1}}-X_{t_{n-2}}=x_{n-1}-x_{n-2})\\ &=P(X_{t_n}-X_{t_{n-1}}=s-x_{n-1})\quad (\text{by independence})\\ &=P(X_{t_n}-X_{t_{n-1}}=s-x_{n-1}|X_{t_{n-1}}=x_{n-1})\quad (\text{again by independence})\\ &=P(X_{t_n}=s|X_{t_{n-1}}=x_{n-1}) \end{split} \tag{24.4} \end{equation}\] Therefore, by definition of a Markov process, \(X\) is a Markov process.
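The conclusion can be illustrated numerically on the simplest example, a simple random walk, which has independent increments. The following Python/NumPy sketch is an empirical check, not a proof; the sample size and the conditioning values are arbitrary.

```python
import numpy as np

# For a simple random walk (independent increments), compare
# P(X_3 = 1 | X_1 = 1, X_2 = 2) with P(X_3 = 1 | X_2 = 2); both ~ 1/2.
rng = np.random.default_rng(2)
steps = rng.choice([-1, 1], size=(1_000_000, 3))
X = np.cumsum(steps, axis=1)                 # columns are X_1, X_2, X_3

full_past = (X[:, 0] == 1) & (X[:, 1] == 2)  # condition on the whole past
present = X[:, 1] == 2                       # condition on the present only
print(np.mean(X[full_past, 2] == 1), np.mean(X[present, 2] == 1))
```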

Exercise 24.4 (Equivalent definition for Markov property) For a stochastic process \(\{X_n:n\geq 1\}\), show that (a) and (b) are equivalent.

(a). It has the Markov property, i.e. \[\begin{equation} P(X_{n+1}=x_{n+1}|X_n=x_n,\cdots,X_1=x_1)=P(X_{n+1}=x_{n+1}|X_n=x_n) \tag{24.5} \end{equation}\]

(b). For all finite sets of time points \(L\) and \(K\) with \(1\leq l<n\) for all \(l\in L\) and \(k>n\) for all \(k\in K\), \[\begin{equation} \begin{split} P&(\{X_k=x_k,k\in K\},\{X_l=x_l,l\in L\}|X_n=x_n)\\ &=P(\{X_k=x_k,k\in K\}|X_n=x_n)P(\{X_l=x_l,l\in L\}|X_n=x_n) \end{split} \tag{24.6} \end{equation}\] i.e. the past given the present is independent of the future given the present.

Proof. We first show (a) \(\Longrightarrow\) (b). Consider finite sets \(L\) and \(K\) as in (b). We have \[\begin{equation} \begin{split} P&(\{X_k=x_k,k\in K\},\{X_l=x_l,l\in L\}|X_n=x_n)\\ &=P(\{X_k=x_k,k\in K\}|\{X_l=x_l,l\in L\},X_n=x_n)P(\{X_l=x_l,l\in L\}|X_n=x_n)\\ &=P(\{X_k=x_k,k\in K\}|X_n=x_n)P(\{X_l=x_l,l\in L\}|X_n=x_n) \end{split} \tag{24.7} \end{equation}\] where the first equality is the multiplication rule and the second follows from the Markov property: factoring \(P(\{X_k=x_k,k\in K\}|\cdot)\) over the ordered elements of \(K\) by the chain rule and applying (24.5) step by step (summing over the time points between consecutive elements of \(\{n\}\cup K\)) shows that the conditioning on \(\{X_l=x_l,l\in L\}\) can be dropped. Therefore, we have (a) \(\Longrightarrow\) (b).

Now, for (b) \(\Longrightarrow\) (a), take \(K=\{n+1\}\) and \(L=\{1,2,\cdots,n-1\}\); then from (b) we have \[\begin{equation} \begin{split} P&(X_{n+1}=x_{n+1},X_{n-1}=x_{n-1},\cdots,X_1=x_1|X_n=x_n)\\ &=P(X_{n+1}=x_{n+1}|X_n=x_n)P(X_{n-1}=x_{n-1},\cdots,X_1=x_1|X_n=x_n) \end{split} \tag{24.8} \end{equation}\] On the other hand, by the multiplication rule, \[\begin{equation} \begin{split} P&(X_{n+1}=x_{n+1},X_{n-1}=x_{n-1},\cdots,X_1=x_1|X_n=x_n)\\ &=P(X_{n+1}=x_{n+1}|X_n=x_n,X_{n-1}=x_{n-1},\cdots,X_1=x_1)\\ &\times P(X_{n-1}=x_{n-1},\cdots,X_1=x_1|X_n=x_n) \end{split} \tag{24.9} \end{equation}\] Comparing (24.8) and (24.9) (assuming the conditioning events have positive probability), we find \[\begin{equation} P(X_{n+1}=x_{n+1}|X_n=x_n,\cdots,X_1=x_1)=P(X_{n+1}=x_{n+1}|X_n=x_n) \tag{24.10} \end{equation}\] which is exactly (a), so we have (b) \(\Longrightarrow\) (a) as well.

In conclusion, (a) \(\Longleftrightarrow\) (b).
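To make the equivalence concrete, here is a minimal empirical check of the factorization (24.6), with \(n=2\), \(L=\{1\}\), \(K=\{3\}\), for a two-state Markov chain; the transition matrix and the sample size in this Python/NumPy sketch are arbitrary choices.

```python
import numpy as np

# Check (24.6) empirically for a two-state Markov chain: given X_2,
# the past X_1 and the future X_3 should be (approximately) independent.
rng = np.random.default_rng(3)
P = np.array([[0.7, 0.3],     # arbitrary transition matrix on states {0, 1}
              [0.4, 0.6]])
n = 1_000_000
X = np.empty((n, 3), dtype=int)
X[:, 0] = rng.integers(0, 2, size=n)          # X_1 uniform on {0, 1}
for j in (1, 2):                              # draw X_2, X_3 from P
    X[:, j] = rng.random(n) < P[X[:, j - 1], 1]

given = X[:, 1] == 0                          # condition on X_2 = 0
joint = np.mean((X[given, 0] == 1) & (X[given, 2] == 1))
prod = np.mean(X[given, 0] == 1) * np.mean(X[given, 2] == 1)
print(joint, prod)                            # nearly equal, as (b) predicts
```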