Chapter 3 Markov Chain: Definition and Basic Properties (Lecture on 01/12/2021)
The next few chapters will be mainly about discrete-time, discrete-state-space stochastic processes, mainly in the context of Markov chains.
Let \(X=\{X(t,\omega):t\in T,\omega\in\Omega\}\), \(X(t,\omega)\in S\), and consider the case where both \(T\) and \(S\) are discrete. Define the random variable \(X_t(\omega)=X(t,\omega)\) and consider the sequence of random variables \(\{X_0,X_1,\cdots\}\) taking values in some countable set \(S\), called the state space. Each \(X_n\) is a discrete random variable that takes one of \(N\) possible values, where \(N=|S|\); we allow \(N=\infty\). This is the setup for a discrete time, discrete state stochastic process.
Definition 3.1 (Markov Chain) The process \(X=\{X_0,X_1,\cdots\}\) is a Markov chain if it satisfies the Markov condition: \[\begin{equation} P(X_{n}=s|X_0=x_0,X_1=x_1,\cdots,X_{n-1}=x_{n-1})=P(X_{n}=s|X_{n-1}=x_{n-1}),\quad\forall s \tag{3.1} \end{equation}\]
The Markov property described in this way is equivalent to\[\begin{equation} P(X_{n}=s|X_{n_1}=x_{n_1},\cdots,X_{n_k}=x_{n_k})=P(X_{n}=s|X_{n_k}=x_{n_k}),\quad\forall s \tag{3.2} \end{equation}\] for all \(n_1<n_2<\cdots<n_k\leq n-1\).
We have assumed that \(X\) takes values in some countable set \(S\). Since \(S\) is countable, it can be put in one-to-one correspondence with some subset \(S^{\prime}\) of the integers. Thus, without loss of generality, we can say the following: if \(X_n=i\), the chain is in the \(i\)th state at the \(n\)th time point.
The evolution of a chain is described by its transition probabilities \(P(X_{n+1}=j|X_n=i)\). This probability may in general depend on \(n\), \(i\), and \(j\). We will restrict our attention to the time-homogeneous case, in which the transition probabilities do not depend on \(n\); we then write \(p_{ij}=P(X_{n+1}=j|X_n=i)\) and collect these into the transition probability matrix \(\mathbf{P}=\{p_{ij}\}\).
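A time-homogeneous chain can be simulated using only the current state, which is exactly the Markov condition (3.1) in action. Below is a minimal sketch; the 3-state matrix `P` is a made-up example, not from the notes.

```python
import random

# Hypothetical 3-state chain: P[i][j] = P(X_{n+1}=j | X_n=i),
# the same matrix at every step (time homogeneity).
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]

def simulate(P, x0, n_steps, seed=0):
    """Draw a sample path X_0, X_1, ..., X_{n_steps}."""
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n_steps):
        i = path[-1]
        # The next state is drawn using row i of P only: the past
        # before the current state is never consulted (Markov condition).
        path.append(rng.choices(range(len(P)), weights=P[i])[0])
    return path

print(simulate(P, x0=0, n_steps=10))
```

The whole history `path` is kept only for output; the draw itself reads just `path[-1]`.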
Theorem 3.1 (Properties of Transition Probability Matrix) If \(\mathbf{P}\) is a transition probability matrix, then
- \(0\leq p_{ij}\leq 1\), \(\forall i,j\).
- \(\sum_{j}p_{ij}=1\), \(\forall i\).
Theorem 3.2 (Chapman-Kolmogorov Equation) Write \(p_{ij}(m,n)=P(X_n=j|X_m=i)\) for the transition probability from time \(m\) to time \(n\). Then \[\begin{equation} p_{ij}(m,m+n+r)=\sum_{k}p_{ik}(m,m+n)p_{kj}(m+n,m+n+r),\quad \forall r \tag{3.3} \end{equation}\]
Since the transition probability matrix is \(P(m,m+n+r)=\{p_{ij}(m,m+n+r)\}_{i,j=1}^{|S|}\), the Chapman-Kolmogorov equation tells us \[\begin{equation} P(m,m+n+r)=P(m,m+n)P(m+n,m+n+r),\quad \forall n,m,r \tag{3.4} \end{equation}\] Specifically, we can take \(r=n=1\) and we have \[\begin{equation} P(m,m+2)=P(m,m+1)P(m+1,m+2),\quad \forall m \tag{3.5} \end{equation}\] If we further assume time homogeneity, then \(P(m,m+1)=P(m+1,m+2)=P\) and (3.5) becomes \[\begin{equation} P(m,m+2)=P^2 \tag{3.6} \end{equation}\] In general, we have \[\begin{equation} P(m,m+n)=P^n,\quad \forall n \tag{3.7} \end{equation}\] That is, if the one-step transition probability matrix is \(P=\{p_{ij}\}_{i,j=1}^{|S|}\) with \(p_{ij}=P(X_{n+1}=j|X_n=i)\), then the \(n\)-step transition probability is \(P(X_{m+n}=j|X_m=i)=(P^n)_{ij}\), where \((P^n)_{ij}\) denotes the \((i,j)\)th entry of \(P^n\).
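Equation (3.7) can be verified numerically: forming \(P^n\) by repeated multiplication and checking the Chapman-Kolmogorov factorization \(P^5 = P^2 P^3\). This is a sketch with a hypothetical 3-state matrix, using plain lists rather than a linear-algebra library.

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def n_step(P, n):
    """n-step transition matrix P^n, per equation (3.7)."""
    result = [[float(i == j) for j in range(len(P))] for i in range(len(P))]  # identity
    for _ in range(n):
        result = matmul(result, P)
    return result

P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]

# Chapman-Kolmogorov: going 5 steps equals going 2 steps then 3 steps.
lhs = n_step(P, 5)
rhs = matmul(n_step(P, 2), n_step(P, 3))
print(max(abs(lhs[i][j] - rhs[i][j]) for i in range(3) for j in range(3)) < 1e-12)
```

Note that each \(P^n\) is itself a stochastic matrix, so Theorem 3.1 holds for every power.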
Definition 3.4 (Persistent State) State \(i\) is called persistent (or recurrent) if \(P(X_n=i\, \text{for some}\, n\geq 1|X_0=i)=1\). That is, the probability that the chain eventually returns to \(i\), having started from \(i\), is 1.
If this probability is strictly less than 1, state \(i\) is called transient.
We will be interested in the first passage probabilities, defined as \(f_{ij}(n)=P(X_1\neq j,\cdots,X_{n-1}\neq j,X_n=j|X_0=i)\). This is the probability that state \(j\) is first visited from state \(i\) at time \(n\). Writing \(f_{ij}=\sum_{n=1}^{\infty}f_{ij}(n)\), this is the probability that state \(j\) is ever visited from state \(i\). If \(f_{ij}=1\), we are interested in the constraints this implies on the transition probabilities.
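The probabilities \(f_{ij}(n)\) satisfy a one-step recursion: to first hit \(j\) at time \(n>1\), the chain must step to some \(k\neq j\) and then first hit \(j\) from \(k\) in the remaining \(n-1\) steps, so \(f_{ij}(1)=p_{ij}\) and \(f_{ij}(n)=\sum_{k\neq j}p_{ik}f_{kj}(n-1)\). A sketch of this computation, with a hypothetical 3-state matrix:

```python
def first_passage_probs(P, j, n_max):
    """Return f with f[n-1][i] = f_ij(n), the probability that a chain
    started at i first visits j at time n, for n = 1, ..., n_max.
    Recursion: f_ij(1) = p_ij;  f_ij(n) = sum_{k != j} p_ik * f_kj(n-1)."""
    S = range(len(P))
    f = [[P[i][j] for i in S]]  # base case n = 1
    for _ in range(n_max - 1):
        prev = f[-1]  # prev[k] = f_kj(n-1)
        f.append([sum(P[i][k] * prev[k] for k in S if k != j) for i in S])
    return f

P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]

# Partial sums of f_01(n) approximate f_01, the probability of ever
# visiting state 1 from state 0; for this chain it tends to 1.
f = first_passage_probs(P, j=1, n_max=200)
print(sum(f[n][0] for n in range(200)))
```

Since this example chain is finite with all transitions possible in a few steps, every state is persistent and the partial sums approach 1; for a transient target the same sums would converge to a limit below 1.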