Chapter 21 Homework 1: Properties of Stochastic Processes: Problems and Tentative Solutions

Exercise 21.1 (Weakly stationary process) Consider real-valued random variables \(A_i,B_i\), \(i=1,\cdots,k\), such that \(E(A_i)=E(B_i)=0\) and \(Var(A_i)=Var(B_i)=\sigma^2>0\) for \(i=1,\cdots,k\). Moreover, assume they are mutually uncorrelated, that is, \(E(A_iA_l)=E(B_iB_l)=0\) for \(i\neq l\), and \(E(A_iB_l)=0\) for all \(i,l\). Define the stochastic process \(X=\{X_t,t\in\mathcal{R}\}\) by \(X_t=\sum_{i=1}^k(A_i\cos(\omega_i t)+B_i\sin(\omega_i t))\), where \(\omega_1,\cdots,\omega_k\) are real constants. Show that \(X\) is weakly stationary.

Proof. First, consider the expectation. For any \(t\in\mathcal{R}\), we have \[\begin{equation} \begin{split} E(X_t)&=E(\sum_{i=1}^k(A_i\cos(\omega_it)+B_i\sin(\omega_it)))\\ &=\sum_{i=1}^k(\cos(\omega_it)E(A_i)+\sin(\omega_it)E(B_i))=0 \end{split} \tag{21.1} \end{equation}\] As a function of \(t\), \(E(X_t)\) is a constant.

Then consider the covariance between \(X_t\) and \(X_{t+r}\) for any \(t,t+r\in\mathcal{R}\). Since \(E(X_t)=E(X_{t+r})=0\), we have \[\begin{equation} \begin{split} Cov&(X_t,X_{t+r})=E(X_tX_{t+r})\\ &=E[\{\sum_{i=1}^k(A_i\cos(\omega_it)+B_i\sin(\omega_it))\}\times\{\sum_{i=1}^k(A_i\cos(\omega_i(t+r))+B_i\sin(\omega_i(t+r)))\}]\\ &=\sum_{i=1}^k\{E(A_i^2)\cos(\omega_it)\cos(\omega_i(t+r))+E(B_i^2)\sin(\omega_it)\sin(\omega_i(t+r))\}\\ &=\sigma^2\sum_{i=1}^k\cos(\omega_ir) \end{split} \tag{21.2} \end{equation}\] where all cross terms vanish by the uncorrelatedness assumptions and the last equality uses \(\cos(a)\cos(b)+\sin(a)\sin(b)=\cos(b-a)\). The covariance is a function of \(r\) only. Thus, the stochastic process \(X\) is weakly stationary.
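
As a numerical sanity check, the sketch below simulates the process with \(A_i,B_i\) drawn i.i.d. \(N(0,\sigma^2)\) (one convenient choice satisfying the stated moment assumptions; the values of \(\sigma\) and \(\omega_i\) are illustrative) and compares the empirical covariance at a fixed lag \(r\), for several starting times \(t\), against \(\sigma^2\sum_{i=1}^k\cos(\omega_i r)\).

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.5
omega = np.array([0.7, 1.3, 2.9])          # illustrative frequencies
n_rep = 200_000                            # Monte Carlo replicates

# A_i, B_i ~ N(0, sigma^2), mutually uncorrelated as required
A = rng.normal(0.0, sigma, size=(n_rep, omega.size))
B = rng.normal(0.0, sigma, size=(n_rep, omega.size))

def X(t):
    """One draw of X_t per replicate."""
    return (A * np.cos(omega * t) + B * np.sin(omega * t)).sum(axis=1)

r = 0.8
theory = sigma**2 * np.cos(omega * r).sum()    # sigma^2 * sum_i cos(omega_i r)
for t in (0.0, 2.0, 5.0):
    emp = np.mean(X(t) * X(t + r))             # E(X_t) = 0, so this estimates the covariance
    print(f"t = {t}: empirical {emp:.3f}   theory {theory:.3f}")
```

The printed covariance is essentially the same for every \(t\), as weak stationarity requires.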

Exercise 21.2 (Weakly but not strongly stationary process) Consider a discrete-time, real-valued stochastic process \(X=\{X_n: n\geq 1\}\) defined by \(X_n=\cos(nU)\), where \(U\) is uniformly distributed on \((-\pi,\pi)\). Show that \(X\) is weakly stationary but not strongly stationary.

Proof. First, consider the expectation of \(X_n\) for any \(n\in\mathbb{N}_+\). We have \[\begin{equation} \begin{split} E(X_n)&=E(\cos(nU))\\ &=\int_{-\pi}^{\pi}\cos(nu)\cdot\frac{1}{2\pi}du=0 \end{split} \tag{21.3} \end{equation}\] which is constant as a function of \(n\).

Then consider the covariance between \(X_n\) and \(X_{n+m}\) for any \(n\in\mathbb{N}_+\) and \(m\in\mathbb{N}\). Since \(E(X_n)=E(X_{n+m})=0\), for \(m\geq 1\) we have \[\begin{equation} \begin{split} Cov&(X_n,X_{n+m})=E(X_nX_{n+m})\\ &=\int_{-\pi}^{\pi}\cos(nu)\cos((n+m)u)\cdot\frac{1}{2\pi}du\\ &=\frac{1}{4\pi}[\int_{-\pi}^{\pi}(\cos((2n+m)u)+\cos(mu))du]\\ &=\frac{1}{4\pi}[\frac{\sin((2n+m)u)}{2n+m}|_{-\pi}^{\pi}+\frac{\sin(mu)}{m}|_{-\pi}^{\pi}]\\ &=0 \end{split} \tag{21.4} \end{equation}\] while for \(m=0\) the same calculation gives \(Cov(X_n,X_n)=\frac{1}{4\pi}\int_{-\pi}^{\pi}(\cos(2nu)+1)du=\frac{1}{2}\). In either case the covariance depends only on \(m\). Therefore, the stochastic process is weakly stationary.
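
The covariance formula can be checked by Monte Carlo; the snippet below (with illustrative values of \(n\) and \(m\)) estimates \(E(X_nX_{n+m})\) directly from draws of \(U\).

```python
import numpy as np

rng = np.random.default_rng(1)
U = rng.uniform(-np.pi, np.pi, size=2_000_000)   # U ~ Uniform(-pi, pi)

for n in (1, 4):
    for m in (0, 1, 3):
        # E(X_n) = 0, so Cov(X_n, X_{n+m}) = E(X_n X_{n+m})
        cov = np.mean(np.cos(n * U) * np.cos((n + m) * U))
        print(f"n = {n}, m = {m}: Cov ~ {cov:.4f}")
```

The estimates are approximately \(1/2\) when \(m=0\) and \(0\) otherwise, independently of \(n\).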

The process is not strongly stationary. For example, consider the event \([X_1\leq -1+\epsilon,X_3\geq 1-\epsilon]\) for some small \(\epsilon>0\). This event has zero probability: \(X_1\leq -1+\epsilon\) happens only when \(U\) is near \(-\pi\) or \(\pi\), while \(X_3\geq 1-\epsilon\) happens only when \(U\) is near \(-\frac{2\pi}{3}\), \(0\) or \(\frac{2\pi}{3}\), so we can pick \(\epsilon\) small enough that these regions do not overlap. However, the shifted event \([X_2\leq -1+\epsilon,X_4\geq 1-\epsilon]\) has positive probability, because \(U\) near \(\frac{\pi}{2}\) makes both conditions hold. If \(X\) were strongly stationary, the joint distribution of \((X_1,X_3)\) would equal that of \((X_2,X_4)\) and the two events would have the same probability; this contradiction shows \(X\) is not strongly stationary.
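
A Monte Carlo sketch of this contrast, with the illustrative choice \(\epsilon=0.05\):

```python
import numpy as np

rng = np.random.default_rng(2)
U = rng.uniform(-np.pi, np.pi, size=2_000_000)
eps = 0.05
X = {n: np.cos(n * U) for n in (1, 2, 3, 4)}

# Event for (X_1, X_3): empty intersection of the two U-regions, probability 0.
p13 = np.mean((X[1] <= -1 + eps) & (X[3] >= 1 - eps))
# Shifted event for (X_2, X_4): U near +/- pi/2 satisfies both, probability > 0.
p24 = np.mean((X[2] <= -1 + eps) & (X[4] >= 1 - eps))

print(f"P(X1 <= -1+eps, X3 >= 1-eps) ~ {p13:.5f}")
print(f"P(X2 <= -1+eps, X4 >= 1-eps) ~ {p24:.5f}")
```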

Exercise 21.3 (Find correlation function from spectral density) Consider a weakly stationary process \(X=\{X_t:t\in\mathcal{R}\}\) with zero mean and unit variance. Find the correlation function of \(X\) if the spectral density function \(f\) of \(X\) is given by:

    a. \(f(u)=0.5\exp(-|u|)\), \(u\in\mathcal{R}\)
    b. \(f(u)=\phi(\alpha^2+u^2)^{-1}\), \(u\in\mathcal{R}\)
    c. \(f(u)=\frac{1}{2}\sigma(\pi\alpha)^{-1}\exp(-u^2/(4\alpha))\), \(u\in\mathcal{R}\)

Proof. Since the variance equals 1, the correlation function coincides with the covariance function. Using the one-to-one correspondence between the spectral density and the covariance function, \[\begin{equation} c(t)=\int_{-\infty}^{\infty}\exp(itu)f(u)du \tag{21.5} \end{equation}\] we can obtain the correlation function in each case by recognizing \(f\) as a (scaled) probability density and using the corresponding characteristic function.

For part (a), \(f\) is exactly the \(Laplace(0,1)\) density, so with \(U_1\sim Laplace(0,1)\) we have \[\begin{equation} \begin{split} c(t)&=\int_{-\infty}^{\infty}0.5\exp(-|u|+itu)du\\ &=E(\exp(itU_1))=\frac{1}{1+t^2} \end{split} \tag{21.6} \end{equation}\] Therefore, in this case, \(c(t)=\frac{1}{1+t^2}\).

For part (b), \(f(u)=\frac{\phi\pi}{\alpha}\cdot\frac{\alpha}{\pi(\alpha^2+u^2)}\) is \(\frac{\phi\pi}{\alpha}\) times the \(Cauchy(0,\alpha)\) density, so with \(U_2\sim Cauchy(0,\alpha)\) we have \[\begin{equation} \begin{split} c(t)&=\int_{-\infty}^{\infty}\frac{\phi\exp(itu)}{\alpha^2+u^2}du\\ &=\frac{\phi\pi}{\alpha}E(\exp(itU_2))=\frac{\phi\pi}{\alpha}\exp(-\alpha|t|) \end{split} \tag{21.7} \end{equation}\] Therefore, \(c(t)=\frac{\phi\pi}{\alpha}\exp(-\alpha|t|)\).

Finally, for part (c), \(f(u)=\frac{\sigma}{2\pi\alpha}\exp(-u^2/(4\alpha))\) is \(\frac{\sigma}{\sqrt{\pi\alpha}}\) times the \(N(0,2\alpha)\) density, so with \(U_3\sim N(0,2\alpha)\) we have \[\begin{equation} \begin{split} c(t)&=\int_{-\infty}^{\infty}\frac{\sigma\exp(itu-u^2/(4\alpha))}{2\pi\alpha}du\\ &=\frac{\sigma}{\sqrt{\pi\alpha}}E(\exp(itU_3))=\frac{\sigma}{\sqrt{\pi\alpha}}\exp(-\alpha t^2) \end{split} \tag{21.8} \end{equation}\] Therefore, \(c(t)=\frac{\sigma}{\sqrt{\pi\alpha}}\exp(-\alpha t^2)\).
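
All three inversion results can be verified by numerical integration. Since each \(f\) is even, \(c(t)=2\int_0^{\infty}\cos(tu)f(u)du\); the constants \(\phi,\alpha,\sigma\) below are arbitrary test values.

```python
import numpy as np
from scipy.integrate import quad

alpha, phi, sigma, t = 1.7, 0.4, 0.9, 1.3      # arbitrary test values

cases = {
    "(a)": (lambda u: 0.5 * np.exp(-np.abs(u)),
            1.0 / (1.0 + t**2)),
    "(b)": (lambda u: phi / (alpha**2 + u**2),
            phi * np.pi / alpha * np.exp(-alpha * abs(t))),
    "(c)": (lambda u: sigma / (2.0 * np.pi * alpha) * np.exp(-u**2 / (4.0 * alpha)),
            sigma / np.sqrt(np.pi * alpha) * np.exp(-alpha * t**2)),
}

for name, (f, closed_form) in cases.items():
    # f is even, so c(t) = 2 * int_0^inf cos(t*u) f(u) du (Fourier-weighted quadrature)
    numeric = 2.0 * quad(f, 0.0, np.inf, weight="cos", wvar=t)[0]
    print(f"{name}: numeric {numeric:.6f}   closed form {closed_form:.6f}")
```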

Exercise 21.4 (Strong and weak stationarity for Gaussian process) A stochastic process \(X=\{X_t:t\in\mathcal{R}\}\) is called the Gaussian process with mean \(\mu\) and covariance kernel \(\kappa(\cdot,\cdot)\) if for any \(t_1,\cdots,t_n\) and any \(n\geq 1\), \((X_{t_1},\cdots,X_{t_n})^T\sim N(\mu\mathbf{1},\mathbf{K})\), where \(\mathbf{K}=(\kappa(t_i,t_j))_{i,j=1}^n\). Show that the strong and weak stationarity are equivalent for a Gaussian process.

Proof. For any stochastic process with finite second moments, strong stationarity implies weak stationarity. Therefore, for a Gaussian process (whose moments are all finite), we only need to show that weak stationarity implies strong stationarity.

Consider a weakly stationary Gaussian process \(X\) with mean function \(m(\cdot)\) and covariance function \(C(\cdot,\cdot)\). By definition, for any index points \(t_1,\cdots,t_k\) the finite dimensional distribution is \[\begin{equation} F_{X_{t_1},\cdots,X_{t_k}}=\Phi(m(\mathbf{t}),C(\mathbf{t})) \tag{21.9} \end{equation}\] where \(\Phi(m(\mathbf{t}),C(\mathbf{t}))\) denotes the distribution function of a multivariate normal distribution with mean vector \(m(\mathbf{t})=(m(t_1),\cdots,m(t_k))^T\) and covariance matrix \(C(\mathbf{t})\), whose \((i,j)\)th entry is \(C(t_i,t_j)\) for \(1\leq i,j\leq k\). Similarly, for any shift \(t_0\), the distribution of \((X_{t_1+t_0},\cdots,X_{t_k+t_0})\) is also multivariate normal, with mean vector \(m_{t_0}(\mathbf{t})=(m(t_1+t_0),\cdots,m(t_k+t_0))^T\) and covariance matrix \(C_{t_0}(\mathbf{t})\), whose \((i,j)\)th entry is \(C(t_i+t_0,t_j+t_0)\). By weak stationarity, \(m(\cdot)\) is constant and \(C(s,t)\) depends only on \(t-s\), so \(m(\mathbf{t})=m_{t_0}(\mathbf{t})\) and \(C(\mathbf{t})=C_{t_0}(\mathbf{t})\). Since both distributions are multivariate normal, they are fully determined by these two quantities, and hence \[\begin{equation} F_{X_{t_1},\cdots,X_{t_k}}=F_{X_{t_1+t_0},\cdots,X_{t_k+t_0}} \tag{21.10} \end{equation}\] for any \(k\in\mathbb{N}_+\) and \(t_0\in\mathcal{T}\). Thus, the Gaussian process is also strongly stationary.
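
The key step, that a constant mean function and a lag-dependent covariance function force every shifted mean vector and covariance matrix to coincide with the unshifted ones, can be illustrated with a small sketch; the squared-exponential kernel used here is just one illustrative stationary choice.

```python
import numpy as np

mean_const = 2.0
C = lambda h: np.exp(-0.5 * h**2)      # covariance as a function of the lag only

def fdd(times):
    """Mean vector and covariance matrix of the Gaussian f.d.d. at the given times."""
    t = np.asarray(times, dtype=float)
    mu = np.full(t.size, mean_const)
    K = C(t[:, None] - t[None, :])
    return mu, K

t0 = 3.7
mu1, K1 = fdd([0.0, 1.2, 2.5])
mu2, K2 = fdd([0.0 + t0, 1.2 + t0, 2.5 + t0])
print(np.allclose(mu1, mu2), np.allclose(K1, K2))   # True True: the f.d.d.s. coincide
```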

Exercise 21.5 (Necessary and sufficient condition for a Gaussian process to be Markovian) By definition, a continuous-time real-valued stochastic process \(X=\{X_t:t\in\mathcal{R}\}\) is called a Markov process if for all \(n\), all \(x,x_1,\cdots,x_{n-1}\), and all increasing sequences \(t_1<\cdots<t_n\) of index points, \[\begin{equation} Pr(X_{t_n}\leq x|X_{t_1}=x_1,\cdots,X_{t_{n-1}}=x_{n-1})=Pr(X_{t_{n}}\leq x|X_{t_{n-1}}=x_{n-1}) \tag{21.11} \end{equation}\] Let \(Z\) be a real-valued Gaussian process. Show that \(Z\) is a Markov process if and only if \[\begin{equation} E(Z_{t_n}|Z_{t_1}=x_1,\cdots,Z_{t_{n-1}}=x_{n-1})=E(Z_{t_{n}}|Z_{t_{n-1}}=x_{n-1}) \tag{21.12} \end{equation}\]

Proof. (\(\Longrightarrow\)) This direction is immediate: if \(Z\) is a Markov process, then by definition, for all \(n\), all \(x,x_1,\cdots,x_{n-1}\), and any sequence of index points \(t_1<\cdots<t_n\), we have \(Pr(Z_{t_n}\leq x|Z_{t_1}=x_1,\cdots,Z_{t_{n-1}}=x_{n-1})=Pr(Z_{t_n}\leq x|Z_{t_{n-1}}=x_{n-1})\). Taking expectations with respect to these identical conditional distributions gives \(E(Z_{t_n}|Z_{t_1}=x_1,\cdots,Z_{t_{n-1}}=x_{n-1})=E(Z_{t_n}|Z_{t_{n-1}}=x_{n-1})\). This establishes necessity.

(\(\Longleftarrow\)) Since the f.d.d.s. of a Gaussian process are multivariate normal, we can write down the explicit form of the distributions of \(Z_{t_n}|Z_{t_1},\cdots,Z_{t_{n-1}}\) and \(Z_{t_n}|Z_{t_{n-1}}\) using the conditioning properties of the multivariate normal distribution. Denote the joint distribution of \((Z_{t_n},Z_{t_{n-1}})\) by \[\begin{equation} \begin{pmatrix} Z_{t_n}\\Z_{t_{n-1}}\end{pmatrix}\sim N(\begin{pmatrix} \mu_{t_n}\\ \mu_{t_{n-1}}\end{pmatrix},\begin{pmatrix} \Sigma_{11} & \Sigma_{12}\\ \Sigma_{21} & \Sigma_{22}\end{pmatrix}) \tag{21.13} \end{equation}\] Then, \(Z_{t_n}|Z_{t_{n-1}}=x_{n-1}\sim N(\mu_{t_n}+\Sigma_{12}\Sigma_{22}^{-1}(x_{n-1}-\mu_{t_{n-1}}),\Sigma_{11}-\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21})\) for any \(x_{n-1}\).

Similarly, denote the joint distribution of \((Z_{t_n},\mathbf{Z}_{t_{n-1}})\), where \(\mathbf{Z}_{t_{n-1}}=(Z_{t_{n-1}},\cdots,Z_{t_1})^T\), by \[\begin{equation} \begin{pmatrix} Z_{t_n}\\\mathbf{Z}_{t_{n-1}}\end{pmatrix}\sim N(\begin{pmatrix} \mu_{t_n}\\ \boldsymbol{\mu}_{t_{n-1}}\end{pmatrix},\begin{pmatrix} \Lambda_{11} & \Lambda_{12}\\ \Lambda_{21} & \Lambda_{22}\end{pmatrix}) \tag{21.14} \end{equation}\] Then, \(Z_{t_n}|\mathbf{Z}_{t_{n-1}}=\mathbf{x}_{n-1}\sim N(\mu_{t_n}+\Lambda_{12}\Lambda_{22}^{-1}(\mathbf{x}_{n-1}-\boldsymbol{\mu}_{t_{n-1}}),\Lambda_{11}-\Lambda_{12}\Lambda_{22}^{-1}\Lambda_{21})\) for any vector \(\mathbf{x}_{n-1}=(x_{n-1},\cdots,x_1)^T\).

Now, from the assumption, we know \[\begin{equation} \mu_{t_n}+\Sigma_{12}\Sigma_{22}^{-1}(x_{n-1}-\mu_{t_{n-1}})=\mu_{t_n}+\Lambda_{12}\Lambda_{22}^{-1}(\mathbf{x}_{n-1}-\boldsymbol{\mu}_{t_{n-1}}) \tag{21.15} \end{equation}\] for any \(\mathbf{x}_{n-1}\) whose first component is \(x_{n-1}\). In particular, we can choose \(\mathbf{x}_{n-1}=\boldsymbol{\mu}_{t_{n-1}}+\Lambda_{21}\), whose first component is \(x_{n-1}=\mu_{t_{n-1}}+\Sigma_{21}\) because the first entry of \(\Lambda_{21}\) is \(Cov(Z_{t_n},Z_{t_{n-1}})=\Sigma_{21}\). Substituting into (21.15) gives \[\begin{equation} \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}=\Lambda_{12}\Lambda_{22}^{-1}\Lambda_{21} \tag{21.16} \end{equation}\] so the two conditional variances coincide. Since the conditional distributions of \(Z_{t_n}\) given \(\mathbf{Z}_{t_{n-1}}\) and given \(Z_{t_{n-1}}\) are both normal, with equal means (by assumption) and equal variances (by (21.16)), they are identical. Therefore, the defining condition (21.11) of a Markov process is satisfied and \(Z\) is indeed a Markov process. This establishes sufficiency.
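
As a concrete illustration of the sufficiency argument, the sketch below uses the Brownian-motion kernel \(\kappa(s,t)=\min\{s,t\}\) (see Exercise 21.7) with a zero mean and checks numerically that the full-history conditional mean puts all of its weight on \(Z_{t_{n-1}}\) and that the two conditional variances agree. The time points are illustrative.

```python
import numpy as np

times = np.array([0.5, 1.0, 1.7, 2.4])                 # t_1 < ... < t_{n-1}
t_n = 3.0
kappa = lambda s, t: np.minimum(s, t)                   # Brownian-motion kernel

# Blocks of the joint covariance of (Z_{t_n}, Z_{t_{n-1}}, ..., Z_{t_1}), zero mean.
past = times[::-1]                                      # most recent index first
Lam11 = kappa(t_n, t_n)
Lam12 = kappa(t_n, past)
Lam22_inv = np.linalg.inv(kappa(past[:, None], past[None, :]))

weights = Lam12 @ Lam22_inv                             # coefficients of the full-history conditional mean
var_full = Lam11 - Lam12 @ Lam22_inv @ Lam12            # conditional variance given the whole past
var_last = Lam11 - kappa(t_n, times[-1]) ** 2 / kappa(times[-1], times[-1])

print(np.round(weights, 6))    # ~ [1. 0. 0. 0.]: the mean depends on Z_{t_{n-1}} only
print(var_full, var_last)      # both 0.6: the conditional variances coincide
```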

Exercise 21.6 (Condition for Markovian) Show that any stochastic process \(X=\{X_n,n=0,\cdots\}\) with independent increments is a Markov process.

Proof. Consider a stochastic process \(X=\{X_n:n=0,1,\cdots\}\) with independent increments. Then for any \(n\geq 1\) and any \(s\), we have \[\begin{equation} \begin{split} &P(X_n=s|X_0=x_0,X_1=x_1,\cdots,X_{n-1}=x_{n-1})\\ &=P(X_n-X_{n-1}=s-x_{n-1}|X_0=x_0,X_1-X_0=x_1-x_0,\\ &\quad X_2-X_1=x_2-x_1,\cdots,X_{n-1}-X_{n-2}=x_{n-1}-x_{n-2})\\ &=P(X_n-X_{n-1}=s-x_{n-1})\quad (\text{by independence of the increments})\\ &=P(X_n-X_{n-1}=s-x_{n-1}|X_{n-1}=x_{n-1})\\ &=P(X_n=s|X_{n-1}=x_{n-1}) \end{split} \tag{21.17} \end{equation}\] where the conditioning events are rewritten in terms of increments in the second line, and the fourth line again uses the fact that the increment \(X_n-X_{n-1}\) is independent of \(X_{n-1}=X_0+\sum_{i=1}^{n-1}(X_i-X_{i-1})\). Therefore, by the definition of a Markov process, \(X\) is indeed a Markov process.
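
A simple \(\pm 1\) random walk is one example of an independent-increment process; the Monte Carlo sketch below checks that the conditional distribution of \(X_3\) given \(X_2\) does not change when we additionally condition on \(X_1\).

```python
import numpy as np

rng = np.random.default_rng(3)
steps = rng.choice([-1, 1], size=(1_000_000, 3))   # independent +/- 1 increments
X = np.cumsum(steps, axis=1)                       # X_1, X_2, X_3 with X_0 = 0

cond_freq = lambda mask: np.mean(X[mask, 2] == 1)  # estimate of P(X_3 = 1 | mask)
m_a = (X[:, 0] == 1) & (X[:, 1] == 0)
m_b = (X[:, 0] == -1) & (X[:, 1] == 0)
# All three are approximately 1/2: the extra conditioning on X_1 is irrelevant.
print(cond_freq(m_a), cond_freq(m_b), cond_freq(X[:, 1] == 0))
```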

Exercise 21.7 (View Brownian motion as a special Gaussian process) Let \(W=\{W_t:t\geq 0\}\) be a Brownian motion (see Definition 2.5). Show that a Brownian motion can be viewed as a Gaussian process with mean 0 and \(Cov(W_s,W_t)=\min\{s,t\}\).

Proof. Consider any finite dimensional distribution of \(W\), denoted \(F_{W_{t_1},\cdots,W_{t_k}}(w_{t_1},\cdots,w_{t_k})\); w.l.o.g. we can assume \(t_1<\cdots<t_k\), and for notational simplicity we suppress \(t\) and write the f.d.d. as \(F_{W_1,\cdots,W_k}(w_1,\cdots,w_k)\). First, by the independent increments property and \(W_t-W_s\sim N(0,t-s)\) for \(0\leq s\leq t\), since \((W_1,\cdots,W_k)\) can be expressed as linear combinations of the independent normally distributed random variables \(W_1,W_2-W_1,\cdots,W_k-W_{k-1}\), every f.d.d. of \(W\) is multivariate normal. The mean is \(E(W_t)=0\) and the variance is \(Var(W_t)=t\), while the covariance is, assuming w.l.o.g. \(s<t\), \[\begin{equation} \begin{split} Cov(W_s,W_t)&=E(W_sW_t)=E(W_s(W_t-W_s+W_s))\\ &=E(W_s^2)+E(W_s)E(W_t-W_s)=s \end{split} \tag{21.18} \end{equation}\] where the last line uses the independence of \(W_s\) and \(W_t-W_s\). Thus, Brownian motion is a Gaussian process with mean function 0 and covariance function \(Cov(W_s,W_t)=\min\{s,t\}\).
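
The covariance structure can be checked by simulating Brownian paths from independent Gaussian increments on a fine grid; the step size, horizon, and number of paths below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, n_steps, n_paths = 0.01, 300, 20_000
# Independent N(0, dt) increments, cumulated into paths observed at dt, 2*dt, ...
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(increments, axis=1)
times = dt * np.arange(1, n_steps + 1)

for s, t in [(0.5, 1.0), (1.5, 2.0), (2.5, 0.7)]:
    i, j = np.searchsorted(times, s), np.searchsorted(times, t)
    emp = np.mean(W[:, i] * W[:, j])       # E(W_s) = 0, so this estimates Cov(W_s, W_t)
    print(f"Cov(W_{s}, W_{t}) ~ {emp:.3f}   min(s, t) = {min(s, t)}")
```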

Exercise 21.8 (Moments of Brownian motion) Show that for a Brownian motion \(E(|W_s-W_t|^{2n})=C_n|s-t|^n\), where \(C_n=\frac{(2n)!}{2^nn!}\).

Proof. Suppose \(Y\sim N(0,\nu)\). We compute \(E(|Y|^{2n})\) as follows: \[\begin{equation} \begin{split} E(|Y|^{2n})&=\frac{1}{\sqrt{2\pi\nu}}\int_{-\infty}^{\infty}|y|^{2n}\exp(-\frac{y^2}{2\nu})dy\\ &=\frac{2}{\sqrt{2\pi\nu}}\int_{0}^{\infty}y^{2n}\exp(-\frac{y^2}{2\nu})dy\quad (y=\sqrt{2\nu z})\\ &=\frac{2}{\sqrt{2\pi\nu}}\int_{0}^{\infty}(2\nu z)^{n}e^{-z}\frac{\nu dz}{\sqrt{2\nu z}}\\ &=\frac{\nu^n2^n}{\sqrt{\pi}}\int_{0}^{\infty}z^{n-\frac{1}{2}}e^{-z}dz\\ &=\frac{\nu^n2^n}{\sqrt{\pi}}\Gamma(n+\frac{1}{2})\\ &=\frac{(2n)!}{2^nn!}\nu^n=C_n\nu^n \end{split} \tag{21.19} \end{equation}\] where the last equality uses \(\Gamma(n+\frac{1}{2})=\frac{(2n)!\sqrt{\pi}}{4^nn!}\).

Now, since \(W_s-W_t\sim N(0,|s-t|)\), applying the above result with \(\nu=|s-t|\) immediately gives \(E(|W_s-W_t|^{2n})=C_n|s-t|^n\), where \(C_n=\frac{(2n)!}{2^nn!}\).
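
The moment formula is easy to verify numerically; the snippet below compares \(C_n\nu^n\) with the exact \(2n\)-th moment of a centered normal with variance \(\nu\) (here \(\nu=0.64\) stands in for \(|s-t|\)).

```python
import numpy as np
from math import factorial
from scipy.stats import norm

v = 0.64                                        # plays the role of |s - t|
for n in (1, 2, 3):
    C_n = factorial(2 * n) / (2**n * factorial(n))
    exact = norm(loc=0.0, scale=np.sqrt(v)).moment(2 * n)   # 2n-th moment of N(0, v)
    print(f"n = {n}: C_n * v^n = {C_n * v**n:.4f},   E|Y|^(2n) = {exact:.4f}")
```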