2.9 Gaussian Processes

We will often deal with Gaussian temporal processes, which are stationary processes whose finite-dimensional joint distributions are Gaussian. That is, the series $$Y_1,\dots,Y_n$$ is jointly distributed as $$\mathcal{N}(\mathbf{1}\mu, \Sigma(\gamma))$$, where $$\Sigma(\gamma)$$ is the covariance matrix determined by the autocovariance function $$\gamma$$, with $$\gamma(k) = \text{Cov}(Y_j, Y_{j+k})$$ for all integers $$j$$ and $$k$$ (by stationarity, this depends only on the lag $$k$$, not on $$j$$). Note that we also have $$\gamma(0) = \text{Cov}(Y_j, Y_j) = \text{Var}(Y_j)$$ for all $$j$$.
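As a concrete illustration, we can build the matrix $$\Sigma(\gamma)$$ entry by entry from an autocovariance function and draw one realization of the joint Gaussian. Here the AR(1)-style choice $$\gamma(k) = \phi^{|k|}$$ and the values of $$\mu$$, $$\phi$$, and $$n$$ are illustrative assumptions, not taken from the text.

```python
import numpy as np

def gamma(k, phi=0.6):
    """Assumed autocovariance: AR(1)-style gamma(k) = phi^|k| with gamma(0) = 1."""
    return phi ** abs(k)

n, mu = 200, 2.0

# Sigma[i, j] = gamma(i - j); symmetric because gamma depends only on |k|.
Sigma = np.array([[gamma(i - j) for j in range(n)] for i in range(n)])

# One draw of (Y_1, ..., Y_n) ~ N(1*mu, Sigma(gamma)).
rng = np.random.default_rng(0)
y = rng.multivariate_normal(np.full(n, mu), Sigma)
```

Any valid autocovariance function could be substituted for `gamma`, as long as the resulting $$\Sigma(\gamma)$$ is positive semidefinite.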

From the Cauchy-Schwarz inequality, together with stationarity (which gives $$\text{Var}(Y_j) = \text{Var}(Y_{j+k}) = \gamma(0)$$), we can see that

$$\begin{eqnarray*}
\text{Cov}(Y_j,Y_{j+k})^2 & \leq & \text{Var}(Y_j)\,\text{Var}(Y_{j+k}) \;=\; \gamma(0)^2\\
|\gamma(k)| \;=\; |\text{Cov}(Y_j,Y_{j+k})| & \leq & \gamma(0)
\end{eqnarray*}$$

so the autocovariance at any lag is bounded in magnitude by the variance.
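This bound can be checked numerically. The biased sample autocovariance (with divisor $$n$$) is positive semidefinite and therefore obeys the same inequality $$|\hat\gamma(k)| \leq \hat\gamma(0)$$ in every sample. The AR(1) simulation below is an illustrative assumption, not a construction from the text.

```python
import numpy as np

# Simulate an AR(1) series y_t = phi * y_{t-1} + e_t (illustrative choice).
rng = np.random.default_rng(42)
n, phi = 1_000, 0.7
y = np.empty(n)
y[0] = rng.standard_normal()
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.standard_normal()

# Biased sample autocovariances: gamma_hat(k) = (1/n) * sum (y_j - ybar)(y_{j+k} - ybar).
yc = y - y.mean()
gamma_hat = [np.dot(yc[: n - k], yc[k:]) / n for k in range(100)]

# Every |gamma_hat(k)| is bounded by gamma_hat(0), mirroring |gamma(k)| <= gamma(0).
bound_holds = all(abs(g) <= gamma_hat[0] + 1e-12 for g in gamma_hat)
```

Note that the divisor $$n$$ (rather than $$n-k$$) is what guarantees positive semidefiniteness, and hence the bound, for the sample version.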

In practice, we generally assume $$\gamma(k)\rightarrow 0$$ as $$k\rightarrow\infty$$, so that the dependence between observations $$Y_j$$ and $$Y_{j+k}$$ weakens as the lag $$k$$ grows. If it is the case that $$\gamma(k)=0$$ for all $$k > m$$, then the time series is referred to as an $$m$$-dependent series.
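A standard example of an $$m$$-dependent series is a moving-average process. The MA(1) process $$Y_t = e_t + \theta e_{t-1}$$ (with $$e_t$$ i.i.d. standard normal) is 1-dependent: $$\gamma(0) = 1 + \theta^2$$, $$\gamma(1) = \theta$$, and $$\gamma(k) = 0$$ for $$k > 1$$. The sketch below, with the illustrative choice $$\theta = 0.5$$, compares sample autocovariances to these theoretical values.

```python
import numpy as np

# Simulate an MA(1) process Y_t = e_t + theta * e_{t-1}, which is 1-dependent.
rng = np.random.default_rng(0)
theta, n = 0.5, 100_000
e = rng.standard_normal(n + 1)
y = e[1:] + theta * e[:-1]

def sample_acov(y, k):
    """Biased sample autocovariance at lag k (divisor n)."""
    yc = y - y.mean()
    return np.dot(yc[: len(y) - k], yc[k:]) / len(y)

# Theory: gamma(0) = 1 + theta^2 = 1.25, gamma(1) = theta = 0.5,
# and gamma(k) = 0 for every k > 1; the estimates should be close.
g0, g1, g5 = sample_acov(y, 0), sample_acov(y, 1), sample_acov(y, 5)
```

With $$n$$ this large, the estimates at lags beyond 1 should be close to zero up to sampling error of order $$1/\sqrt{n}$$.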