Chapter 8 States and Chains
Suppose we have a Markov chain which is currently in state i. It is natural to ask questions such as the following:
Are there any states that we cannot ever get to?
Once we leave state i, are we guaranteed to get back?
If we are certain to return to i, how long does this take on average?
If it is possible to return to i, for what values of n is it possible to return in n steps?
To answer these questions, we study properties of a Markov chain, and in particular introduce classes of states.
8.1 Communication of States
In this section, we formalise the notion of one state being accessible from another.
A state i is said to communicate with a state j if there is a non-zero probability that a Markov chain currently in state i will move to state j in the future. Mathematically, p_{ij}^{(n)} > 0 for some n \geq 0. This is denoted by i→j.
That is to say, state i can communicate with state j if it is possible to move from i to j.
Note in Definition 8.1.1 that n=0 is permitted. It follows that any state i is said to communicate with itself: i→i necessarily.

Note that if X_t = 2 for some t, that is, the Markov chain is in state 2, then one can move to state 4, for example by the path X_t = 2, X_{t+1} = 1, X_{t+2} = 4 (other routes are available). Therefore 2→4. Similarly 1→4 and 3→4.
However since p_{44} = 1, or equivalently p_{41} = p_{42} = p_{43} = 0, it is impossible for the Markov chain to leave state 4. That is, state 4 cannot communicate with any of the states 1, 2, 3.
States i and j are said to intercommunicate if i→j and j→i. This is denoted by i↔j.
Considering again the Markov chain governing the company website of Example 7.4.2, seen in Example 8.1.2, one can easily observe that 1↔2, 2↔3 and 1↔3.
However since state 4 does not communicate with states 1, 2 or 3, it follows that state 4 does not intercommunicate with any of the states 1, 2 or 3.
We have introduced the notions of communication and intercommunication, as we anticipate that properties of states will be shared by those that intercommunicate with each other. In this vein, one could group together all states that can intercommunicate to partition the Markov chain into communicating classes.
We introduce two notions that capture collections of states with strong communication properties.
A set C of states is called irreducible if i↔j, for all i,j∈C.
A Markov chain is itself said to be irreducible if the set of all its states is irreducible.
A set C of states is said to be closed if p_{ij} = 0 for all i∈C and j∉C.
Once a Markov chain reaches a closed set C, it will never subsequently leave C.
Consider the Markov chain represented by the following diagram:
Show that the set {1,2} is both irreducible and closed.

From the Markov chain diagram, one can read off the transition matrix as
P = \begin{pmatrix} \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 & 0 \\ \frac{1}{4} & \frac{3}{4} & 0 & 0 & 0 & 0 \\ \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & 0 & 0 \\ \frac{1}{4} & 0 & \frac{1}{4} & \frac{1}{4} & 0 & \frac{1}{4} \\ 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \\ 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \end{pmatrix}.
Note that p_{12} = \frac{1}{2} > 0 so there is a path from state 1 to state 2, and p_{21} = \frac{1}{4} > 0 so there is a path from state 2 to state 1. Therefore 1↔2, that is, {1,2} is irreducible.
Also note p_{13} = p_{14} = p_{15} = p_{16} = p_{23} = p_{24} = p_{25} = p_{26} = 0, so by Definition 8.1.6 the set {1,2} is closed.
Therefore {1,2} is a closed and irreducible set.

Are there any other irreducible and closed sets in the Markov chain of Example 8.1.7? What about sets that are only irreducible, and sets that are only closed?
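These conditions can also be checked mechanically from P. Below is a minimal Python sketch (an illustration only, assuming numpy is available; the helper name communicates and the convention of storing state k at index k−1 are ours): it tests i→j by checking whether p_{ij}^{(n)} > 0 for some n, then verifies both properties of the set {1,2}.

```python
import numpy as np

# Transition matrix of Example 8.1.7, with state k stored at index k - 1.
P = np.array([
    [1/2, 1/2, 0,   0,   0,   0  ],
    [1/4, 3/4, 0,   0,   0,   0  ],
    [1/4, 1/4, 1/4, 1/4, 0,   0  ],
    [1/4, 0,   1/4, 1/4, 0,   1/4],
    [0,   0,   0,   0,   1/2, 1/2],
    [0,   0,   0,   0,   1/2, 1/2],
])

def communicates(P, i, j):
    """i -> j: is p_ij^(n) > 0 for some n >= 0?"""
    Pn = np.eye(P.shape[0])  # n = 0 term, so i -> i always holds
    for _ in range(P.shape[0] + 1):
        if Pn[i, j] > 0:
            return True
        Pn = Pn @ P  # if j is reachable at all, a path of length <= #states exists
    return False

C = {0, 1}  # the set {1, 2} in the notes' labelling
outside = set(range(P.shape[0])) - C

# Irreducible: i <-> j for all i, j in C.
print(all(communicates(P, i, j) for i in C for j in C))  # True

# Closed: p_ij = 0 for every i in C and every j outside C.
print(all(P[i, j] == 0 for i in C for j in outside))     # True
```

The same checks can be used to hunt for the other closed or irreducible sets asked about above.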
The terminology introduced in both Definition 8.1.5 and Definition 8.1.6 will be used freely for the remainder of the course.
If a closed set C of states contains only one state i, that is p_{ii} = 1 and p_{ij} = 0 for all j≠i, we call i an absorbing state.
Find an absorbing state among the Markov chains we have seen in previous examples.
8.2 Recurrence
A state i of a Markov chain is called recurrent if P[X_n = i \text{ for some } n \geq 1 \mid X_0 = i] = 1.
The essence of this definition is that a Markov chain that is currently in some recurrent state is certain to return to that state again in the future.
A state of a Markov chain that is not recurrent is called transient.
A Markov chain that is currently in some transient state is not certain to return to that state again in the future.
Consider the Markov chain governing Mary Berry's choice of Nottingham coffee shop of Example 7.1.3, that is, the Markov chain described by the diagram
Show that Latte Da is a recurrent state.

Suppose that Mary Berry visits Latte Da on her t-th trip to Nottingham. Mathematically, in terms of the Markov chain: X_t = \text{Latte Da}. First we calculate the probability that Mary Berry doesn't visit Latte Da on her next N visits to Nottingham.
Using first the Markov property and then time-homogeneity,
\begin{align*}
&P(\text{Mary Berry doesn't visit Latte Da on next } N \text{ visits} \mid X_t = \text{Latte Da}) \\
&= P(X_{t+N} = X_{t+N-1} = \dots = X_{t+1} = \text{Deja Brew} \mid X_t = \text{Latte Da}) \\
&= P(X_{t+N} = \text{Deja Brew} \mid X_{t+N-1} = \text{Deja Brew}) \times P(X_{t+N-1} = \text{Deja Brew} \mid X_{t+N-2} = \text{Deja Brew}) \times \dots \\
&\qquad \dots \times P(X_{t+2} = \text{Deja Brew} \mid X_{t+1} = \text{Deja Brew}) \times P(X_{t+1} = \text{Deja Brew} \mid X_t = \text{Latte Da}) \\
&= P(X_1 = \text{Deja Brew} \mid X_0 = \text{Deja Brew}) \times P(X_1 = \text{Deja Brew} \mid X_0 = \text{Deja Brew}) \times \dots \\
&\qquad \dots \times P(X_1 = \text{Deja Brew} \mid X_0 = \text{Deja Brew}) \times P(X_1 = \text{Deja Brew} \mid X_0 = \text{Latte Da}) \\
&= \frac{2}{3} \times \frac{2}{3} \times \dots \times \frac{2}{3} \times \frac{5}{6} \\
&= \frac{5}{6} \left( \frac{2}{3} \right)^{N-1}.
\end{align*}
Letting N \to \infty, the probability that Mary Berry never returns to Latte Da is \lim_{N \to \infty} \frac{5}{6} \left( \frac{2}{3} \right)^{N-1} = 0. Hence the Markov chain is certain to return to Latte Da, and Latte Da is a recurrent state.
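As a sanity check, one can simulate the chain and compare against this formula. Below is a minimal Monte Carlo sketch in Python; it assumes the two-shop chain with the transition probabilities used above, P(Deja Brew ∣ Latte Da) = 5/6 and P(Deja Brew ∣ Deja Brew) = 2/3, and the helper returns_within is a name of our own choosing.

```python
import random

# Estimate the probability that Mary Berry returns to Latte Da within N trips,
# starting from Latte Da, using the transition probabilities 5/6 and 2/3 above.
def returns_within(N):
    state = "Latte Da"
    for _ in range(N):
        if state == "Latte Da":
            state = "Deja Brew" if random.random() < 5/6 else "Latte Da"
        else:
            state = "Deja Brew" if random.random() < 2/3 else "Latte Da"
        if state == "Latte Da":
            return True
    return False

trials = 100_000
for N in (1, 5, 20):
    estimate = sum(returns_within(N) for _ in range(trials)) / trials
    exact = 1 - (5/6) * (2/3) ** (N - 1)  # from the calculation above
    print(N, round(estimate, 3), round(exact, 3))
```

The estimates approach 1 as N grows, in line with Latte Da being recurrent.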
Consider again the Markov chain governing the company website of Example 7.4.2, seen in Example 8.1.2. Show that state 1 is a transient state.
Suppose the user is on the Home Page of the website, that is X_t = 1 for some t. The user could click on the link to the Staff Page, that is, state 4 of the Markov chain. At this point it is impossible for the user to return to the Home Page, or state 1. That is to say, it is not certain that the Markov chain will ever return to state 1. Therefore state 1 is transient.
8.3 Mean Recurrence Times
The mean recurrence time of a state i, denoted \mu_i, is given by \mu_i = \begin{cases} \sum_{n \geq 1} n f_{ii}^{(n)}, & \text{if state } i \text{ is recurrent}, \\ \infty, & \text{if state } i \text{ is transient.} \end{cases}
Suppose i is a recurrent state. It follows from Definition 8.3.1 that the mean recurrence time is the average time that it takes for the Markov chain currently in state i to return to i. This can be seen by noting that the summation is over all possibilities for how long it could take the Markov chain to return as required, and that f_{ii}^{(n)} is the probability that the Markov chain moves from state i to state i in exactly n steps.
Calculate the mean recurrence time \mu_1 for the Markov chain governing the company website of Example 7.4.2.
Substituting the known values f_{11}^{(1)} = 0 and f_{11}^{(n)} = \frac{1}{3} \cdot \left( \frac{1}{2} \right)^{n-2} for n \geq 2 from Example 7.4.4 into Definition 8.3.1, we obtain \mu_1 = \sum_{n \geq 1} n f_{11}^{(n)} = f_{11}^{(1)} + \sum_{n=2}^{\infty} n f_{11}^{(n)} = 0 + \sum_{n=2}^{\infty} n \cdot \frac{1}{3} \cdot \left( \frac{1}{2} \right)^{n-2} = \frac{1}{3} \sum_{n=2}^{\infty} \frac{n}{2^{n-2}}. Using computer code we calculate \sum_{n=2}^{\infty} \frac{n}{2^{n-2}} = 6, and so \mu_1 = \frac{1}{3} \cdot 6 = 2.
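The sum can be evaluated in a couple of lines of Python; a minimal sketch, truncating the geometrically decaying tail:

```python
# Numerical evaluation of the sum in Example 8.3.2. The terms decay
# geometrically, so truncating at n = 200 is far beyond double precision needs.
total = sum(n * (1/2) ** (n - 2) for n in range(2, 200))
print(total)      # ≈ 6.0
print(total / 3)  # mu_1 ≈ 2.0
```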
Note that even if state i is recurrent, the mean recurrence time may still be ∞.
Consider a recurrent state i. The state i is said to be positive recurrent if \mu_i < \infty, or null recurrent if \mu_i = \infty.
It follows from Example 8.3.2 that state 1 in the Markov chain governing the company website is positive recurrent since \mu_1 = 2 < \infty.
If i↔j, then i is positive recurrent if and only if j is positive recurrent.
This lemma provides us with a shortcut to show positive recurrence of a large number of states.
Combining Example 8.1.4, Example 8.3.4 and Lemma 8.3.5 shows that states 2 and 3 are also positive recurrent in the running company website Markov chain example.
Note that Lemma 8.3.5 does not tell us anything about the mean recurrence time of intercommunicating recurrent states beyond finiteness. Namely, knowing that \mu_1 = 2 in Example 8.3.2 does not provide any new information about \mu_2 and \mu_3 beyond what we would have known given \mu_1 < \infty.
If a Markov chain has a finite number of states, then all of its recurrent states are positive recurrent.
8.4 Periodicity
Consider a Markov chain in some state i. Consider the values of n for which p_{ii}^{(n)} > 0, that is, the positive integers n for which it is possible for the Markov chain to return to i in n steps. Throughout this section we denote this collection of values by \{a_1, a_2, a_3, \ldots\}.
The period of state i is given by d_i = \gcd(a_1, a_2, a_3, \ldots).
Recall the scenario of Week 6 Questions, Questions 1 to 8:
Every year Ria chooses exactly two apprentices to compete in the fictitious competition Nottingham's Got Mathematicians. Apprentices are recommended to Ria by Daniel and Lisa. Initially Daniel and Lisa recommend one candidate each. However if Daniel selects the apprentice who finishes second among Ria's nominees, then this opportunity for recommendation is given to Lisa the following year. Similarly if Lisa chooses the candidate who finishes second among Ria's nominees, then this opportunity for recommendation is given to Daniel the following year. This rule is repeated every year, even if Daniel or Lisa chooses both candidates.
Generally Lisa is better at picking competitors: a Lisa-endorsed candidate beats a Daniel-endorsed candidate 75\% of the time.
This scenario can be modelled by the Markov chain:
Calculate d_1, the period of state 1.

Consider the possible paths that start and end in state 1: \begin{align*} 1 &\rightarrow 0 \rightarrow 1 \\ 1 &\rightarrow 2 \rightarrow 1 \\ 1 &\rightarrow 0 \rightarrow 1 \rightarrow 0 \rightarrow 1 \\ 1 &\rightarrow 0 \rightarrow 1 \rightarrow 2 \rightarrow 1 \\ 1 &\rightarrow 2 \rightarrow 1 \rightarrow 0 \rightarrow 1 \\ 1 &\rightarrow 2 \rightarrow 1 \rightarrow 2 \rightarrow 1 \\ 1 &\rightarrow 0 \rightarrow 1 \rightarrow 0 \rightarrow 1 \rightarrow 0 \rightarrow 1 \\ &\vdots \end{align*} The lengths of these paths are respectively 2, 2, 4, 4, 4, 4, 6, \ldots. Therefore d_1 = \gcd ( 2,2,4,4,4,4,6, \ldots) = 2.
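Rather than enumerating return paths by hand, periods can be computed from powers of the transition matrix. A minimal Python sketch follows (assuming numpy; the matrix entries are read off the diagram of Example 8.4.2 and should be treated as an assumption here, although only the pattern of non-zero entries affects the period):

```python
from math import gcd
import numpy as np

# Transition matrix read off the diagram of Example 8.4.2 (an assumption, as
# the diagram is not reproduced here). Only the positions of the non-zero
# entries matter for computing periods, not the exact probabilities.
P = np.array([
    [0,   1, 0  ],
    [1/4, 0, 3/4],
    [0,   1, 0  ],
])

def period(P, i, max_n=20):
    """gcd of all n <= max_n with p_ii^(n) > 0 (returns 0 if no return occurs)."""
    d = 0
    Pn = np.eye(P.shape[0])
    for n in range(1, max_n + 1):
        Pn = Pn @ P
        if Pn[i, i] > 0:
            d = gcd(d, n)  # gcd(0, n) == n, so the first return sets d = n
    return d

print([period(P, i) for i in range(3)])  # [2, 2, 2]
```

The printed periods agree with Example 8.4.2 and, as we will see in Lemma 8.4.3, with each other.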
This definition goes a long way towards answering the question "If it is possible to return to i, for what values of n is it possible to return in n steps?" identified at the opening of the chapter. Namely, for a given value n, a return from state i to state i in n steps is possible only if d_i divides n exactly; conversely, returns are possible for all sufficiently large multiples of d_i.
If i \leftrightarrow j, then i and j have the same period: d_i = d_j.
Consider the Markov chain of Example 8.4.2. Calculate d_0 and d_2, the periods of states 0 and 2.
Clearly states 0,1,2 intercommunicate, that is, 0 \leftrightarrow 1 and 1 \leftrightarrow 2. From Example 8.4.2, we know d_1 = 2 and so by Lemma 8.4.3 it follows that d_0=d_2=2.
A state i is said to be aperiodic if d_i = 1.
Since trivially 1 divides all positive integers n, this definition amounts to saying that it is possible to return to state i in any sufficiently large number of steps, with no divisibility restriction. Aperiodicity is often a feature of the most mathematically interesting Markov chains, and will be a key assumption when it comes to talking about steady states.
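For instance, any state with a positive self-loop probability is aperiodic, since a return in a single step is possible. Reusing the period function from the sketch above (and numpy) on a small hypothetical two-state chain:

```python
# A hypothetical 2-state chain: state 0 has a self-loop, so d_0 = 1.
# State 1 returns in 2 steps (1->0->1) or 3 steps (1->0->0->1), so d_1 = 1 too.
Q = np.array([
    [1/2, 1/2],
    [1,   0  ],
])
print(period(Q, 0), period(Q, 1))  # 1 1
```

Note that state 1 has no self-loop, yet it is still aperiodic: the gcd of its possible return times 2 and 3 is 1.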