Chapter 9 States, Recurrence and Periodicity
Suppose we have a Markov chain which is currently in state $i$. It is natural to ask questions such as the following:
Are there any states that we cannot ever get to?
Once we leave state $i$, are we guaranteed to get back?
If we are certain to return to $i$, how long does this take on average?
If it is possible to return to $i$, for what values of $n$ is it possible to return in $n$ steps?
To answer these questions, we study properties of a Markov chain, and in particular introduce classes of states.
9.1 Communication of States
In this section, we formalise the notion of one state being accessible from another.
A state $i$ is said to communicate with a state $j$ if there is a non-zero probability that a Markov chain currently in state $i$ will move to state $j$ in the future. Mathematically, $p_{ij}(n) > 0$ for some $n \geq 0$. This is denoted by $i \to j$.
That is to say, state $i$ communicates with state $j$ if it is possible to move from $i$ to $j$.
Note in Definition 9.1.1 that $n = 0$ is permitted. It follows that any state communicates with itself: $i \to i$ necessarily, since $p_{ii}(0) = 1 > 0$.

Note that if $X_n = 2$ for some $n$, that is, the Markov chain is in state 2, then one can move to state 4, for example by following the links shown in the diagram (other routes are available). Therefore $2 \to 4$. Similarly $1 \to 4$ and $3 \to 4$.
However, since $p_{44} = 1$, or equivalently $p_{4j} = 0$ for all $j \neq 4$, it is impossible for the Markov chain to leave state 4. That is, state 4 cannot communicate with any of the states 1, 2 or 3.
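For chains with more than a handful of states, communication can also be checked by computer. The following is a minimal Python sketch, assuming a hypothetical four-state transition matrix in the spirit of Example 9.1.2 (the actual probabilities of Example 8.4.2 are not reproduced here); states are indexed from 0, so state 4 of the text corresponds to index 3.

```python
import numpy as np

# Hypothetical transition matrix for a four-state chain in the spirit of
# Example 9.1.2: the last state is absorbing. These are placeholder
# values, not those of Example 8.4.2.
P = np.array([
    [0.0, 0.5, 0.3, 0.2],
    [0.4, 0.0, 0.4, 0.2],
    [0.3, 0.5, 0.0, 0.2],
    [0.0, 0.0, 0.0, 1.0],
])

def communicates(P, i, j):
    """Return True if i -> j, i.e. p_ij(n) > 0 for some n >= 0."""
    if i == j:
        return True                 # n = 0 is permitted: p_ii(0) = 1
    reachable, frontier = {i}, {i}
    while frontier:                 # breadth-first search along positive entries
        frontier = {k for s in frontier for k in range(P.shape[0])
                    if P[s, k] > 0 and k not in reachable}
        reachable |= frontier
    return j in reachable

print(communicates(P, 1, 3))   # True:  2 -> 4 in the labelling of the text
print(communicates(P, 3, 0))   # False: the absorbing state cannot leave
```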
States $i$ and $j$ are said to intercommunicate if $i \to j$ and $j \to i$. This is denoted by $i \leftrightarrow j$.
Considering again the Markov Chain governing the company website of Example 8.4.2, seen in Example 9.1.2, one can easily observe that states 1, 2 and 3 intercommunicate with one another.
However, since state 4 does not communicate with states 1, 2 or 3, it follows that state 4 does not intercommunicate with any of the states 1, 2 or 3.
We have introduced the notions of communication and intercommunication, as we anticipate that properties of states will be shared by those that intercommunicate with each other. In this vein, one could group together all states that can intercommunicate to partition the Markov chain into communicating classes.
We introduce two notions that capture collections of states with strong communication properties.
A set of states $C$ is called irreducible if $i \leftrightarrow j$ for all $i, j \in C$.
A Markov chain is itself said to be irreducible if the set of all its states is irreducible.
A set of states $C$ is said to be closed if for any $i \in C$ and $j \notin C$, we have $p_{ij} = 0$.
Once a Markov chain reaches a state of a closed set $C$, it will subsequently never leave $C$.
Consider the Markov chain represented by the following:
Show that the set $C$ is both irreducible and closed.

From the Markov chain diagram, one can read the transition matrix for the Markov chain as
Note that the one-step transition probabilities between the states of $C$ are positive, so there is a path from each state of $C$ to the other and a path back again. Therefore $i \leftrightarrow j$ for all $i, j \in C$; that is, $C$ is irreducible.
Also note that $p_{ij} = 0$ for every $i \in C$ and $j \notin C$, so by Definition 9.1.6 the set $C$ is closed.
Therefore $C$ is a closed and irreducible set.
Are there any other irreducible and closed sets in the Markov chain of Example 9.1.7? What about sets that are only irreducible, and sets that are only closed?
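Both defining conditions are straightforward to verify by computer. The sketch below checks Definition 9.1.5 and Definition 9.1.6 directly; the three-state matrix is a placeholder chosen so that the set $\{0, 1\}$ works, since the matrix of Example 9.1.7 is not reproduced here.

```python
import numpy as np

def reachability(P):
    """R[i, j] is True iff j is accessible from i, i.e. i -> j."""
    n = P.shape[0]
    A = (P > 0).astype(float) + np.eye(n)    # identity allows n = 0
    return np.linalg.matrix_power(A, n) > 0  # covers all paths of length <= n

def is_irreducible(P, C):
    """C is irreducible if i <-> j for all i, j in C (Definition 9.1.5)."""
    R = reachability(P)
    return all(R[i, j] and R[j, i] for i in C for j in C)

def is_closed(P, C):
    """C is closed if p_ij = 0 for every i in C and j outside C (Definition 9.1.6)."""
    outside = [j for j in range(P.shape[0]) if j not in C]
    return all(P[i, j] == 0 for i in C for j in outside)

# Placeholder three-state chain in which C = {0, 1} is closed and irreducible.
P = np.array([
    [0.5, 0.5, 0.0],
    [0.7, 0.3, 0.0],
    [0.2, 0.3, 0.5],
])
print(is_irreducible(P, [0, 1]), is_closed(P, [0, 1]))   # True True
```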
The terminology introduced in Definition 9.1.5 and Definition 9.1.6 will be used freely for the remainder of the course.
If a closed set of states contains only one state $i$, that is $C = \{i\}$, so that $p_{ii} = 1$ and $p_{ij} = 0$ for all $j \neq i$, we call $i$ an absorbing state.
Find an absorbing state among the Markov chains we have seen in previous examples.
9.2 Recurrence
A state $i$ of a Markov chain is called recurrent if $\Pr(X_n = i \text{ for some } n \geq 1 \mid X_0 = i) = 1$.
The essence of this definition is that a Markov chain that is currently in some recurrent state is certain to return to that state again in the future.
A state of a Markov chain that is not recurrent is called transient.
A Markov chain that is currently in some transient state is not certain to return to that state again in the future.
Consider the Markov Chain governing Mary Berry's choice of Nottingham coffee shop of Example 8.1.3, that is, the Markov chain described by the diagram
Show that Latte Da is a recurrent state.

Suppose that Mary Berry visits Latte Da on her trip to Nottingham; mathematically, in terms of the Markov chain, $X_0 = \text{Latte Da}$. First we calculate the probability that Mary Berry does not visit Latte Da on any of her next $n$ visits to Nottingham.
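One way to complete the argument is sketched below; here $a$ denotes the probability of moving from Latte Da to Deja Brew on a single trip and $q$ the probability of staying at Deja Brew, placeholder symbols standing in for the actual values of Example 8.1.3. Avoiding Latte Da for $n$ trips means leaving on the first trip and then staying away, so
\[
\Pr(\text{Latte Da not visited in the next } n \text{ trips}) = a \, q^{\,n-1} \longrightarrow 0 \quad \text{as } n \to \infty,
\]
since $0 \leq q < 1$. Hence the probability that Mary Berry eventually returns to Latte Da is $1$, and Latte Da is recurrent.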
Consider again the Markov Chain governing the company website of Example 8.4.2, seen in Example 9.1.2. Show that state 1 is a transient state.
Suppose the user is on the Home Page of the website, that is, $X_n = 1$ for some $n$. The user could click on the link to the Staff Page, that is, state 4 of the Markov chain. At this point it is impossible for the user to return to the Home Page, or state 1. That is to say, it is not certain that the Markov chain will ever return to state 1. Therefore state 1 is transient.
If $i \leftrightarrow j$, then state $i$ is recurrent if and only if state $j$ is recurrent.
It follows from Example 9.2.3 and Lemma 9.2.5 that the state Deja Brew in the Mary Berry coffee shop example is also recurrent.
Indeed, Lemma 9.2.5 indicates that it makes sense to label communicating classes as either recurrent or transient: if one state in a communicating class is recurrent/transient, then all the states in that class must be recurrent/transient respectively. This leads to the following definition.
An irreducible Markov chain is said to be recurrent if it contains at least one recurrent state.
An irreducible Markov chain being recurrent as per Definition 9.2.6 is equivalent to every state of the Markov chain being recurrent.
9.3 Mean Recurrence Times
The mean recurrence time of a state $i$, denoted $\mu_i$, is given by $\mu_i = \sum_{n=1}^{\infty} n f_{ii}(n)$.
Suppose $i$ is a recurrent state. It follows from Definition 9.3.1 that the mean recurrence time $\mu_i$ is the average time that it takes for the Markov chain currently in state $i$ to return to $i$. This can be seen by noting that the summation is over all possibilities for how long it could take the Markov chain to return as required, and that $f_{ii}(n)$ is the probability that the Markov chain currently in state $i$ first returns to state $i$ in exactly $n$ steps.
Calculate the mean recurrence time for the Markov Chain governing the company website of Example 8.4.2.
Substituting the known values of $f_{ii}(n)$ from Example 8.4.4 into Definition 9.3.1, we obtain an infinite series for the mean recurrence time. Using computer code to evaluate this series, we calculate the mean recurrence time and find that it is finite.
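The computation referred to here can be carried out along the following lines. This is a sketch under stated assumptions: the two-state matrix is a placeholder rather than the matrix of Example 8.4.2, and the infinite series of Definition 9.3.1 is truncated at a large cut-off rather than summed exactly.

```python
import numpy as np

def mean_recurrence_time(P, i, n_max=10_000):
    """Approximate mu_i = sum_{n >= 1} n * f_ii(n), truncated at n_max.

    f_ii(n), the probability of a first return to i after exactly n
    steps, is built up from the transitions that avoid state i.
    """
    others = [j for j in range(P.shape[0]) if j != i]
    Q = P[np.ix_(others, others)]   # moves between states other than i
    r = P[i, others]                # step out of i
    v = P[others, i].copy()         # v[j] = P(first hit i from j in one step)

    mu = 1 * P[i, i]                # n = 1 term: f_ii(1) = p_ii
    for n in range(2, n_max + 1):
        mu += n * (r @ v)           # f_ii(n) = r . v
        v = Q @ v                   # extend the i-avoiding paths by one step
    return mu

# Placeholder two-state chain; its exact mean return time to state 0
# is 1.8, which the truncated series reproduces.
P = np.array([[0.6, 0.4],
              [0.5, 0.5]])
print(mean_recurrence_time(P, 0))   # approximately 1.8
```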
Note that even if state $i$ is recurrent, the mean recurrence time $\mu_i$ may still be infinite.
Consider a recurrent state $i$. The state $i$ is said to be positive recurrent if $\mu_i < \infty$, or null recurrent if $\mu_i = \infty$.
It follows from Example 9.3.2 that the state considered there in the Markov chain governing the company website is positive recurrent, since its mean recurrence time is finite.
If $i \leftrightarrow j$, then $i$ is positive recurrent if and only if $j$ is positive recurrent.
This lemma provides us with a shortcut to show positive recurrence of a large number of states.
Combining Example 9.1.4, Example 9.3.4 and Lemma 9.3.5 shows that the states intercommunicating with the state of Example 9.3.4 are also positive recurrent in the running company website Markov chain example.
An irreducible Markov chain is said to be positive recurrent if it contains at least one positive recurrent state.
An irreducible Markov chain being positive recurrent as per Definition 9.3.7 is equivalent to every state of the Markov chain being positive recurrent.
Note that Lemma 9.3.5 does not tell us anything about the mean recurrence times of intercommunicating recurrent states beyond finiteness. Namely, knowing the value of the mean recurrence time computed in Example 9.3.2 does not provide any new information about the mean recurrence times of the intercommunicating states, beyond the fact that they are finite.
If a Markov chain has a finite number of states, then all of its recurrent states are positive recurrent.
An irreducible Markov chain with a finite number of states is positive recurrent.
9.4 Periodicity
Consider a Markov chain in some state $i$. Consider the values of $n \geq 1$ for which $p_{ii}(n) > 0$, that is, the positive integers $n$ for which it is possible for the Markov chain to return to $i$ in $n$ steps. Throughout this section we denote this collection of values by $N_i$.
The period $d(i)$ of state $i$ is given by $d(i) = \gcd(N_i)$, the greatest common divisor of the set of possible return times.
Recall the scenario of Week 6 Questions, Questions 1 to 8:
Every year Ria chooses exactly two apprentices to compete in the fictitious competition Nottingham’s Got Mathematicians. Apprentices are recommended to Ria by Daniel and Lisa. Initially Daniel and Lisa recommend one candidate each. However, if Daniel selects the apprentice who finishes second among Ria’s nominees, then this opportunity for recommendation is given to Lisa the following year. Similarly, if Lisa chooses the candidate who finishes second among Ria’s nominees, then this opportunity for recommendation is given to Daniel the following year. This rule is repeated every year, even if Daniel or Lisa chooses both the candidates.
Generally Lisa is better at picking competitors: a Lisa endorsed candidate beats a Daniel endorsed candidate with a fixed probability greater than one half.
This scenario can be modelled by the Markov chain:
Calculate the period $d(i)$ of one of the states $i$ of this chain.

Consider all the possible paths that start and end in the chosen state. The lengths of these paths are the possible return times, forming the set $N_i$; taking their greatest common divisor gives the period $d(i)$.
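The same calculation can be automated. The sketch below recovers the set $N_i$ of possible return times up to a cut-off and takes its greatest common divisor; the two-state matrix is a placeholder for illustration, not the chain of this example.

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, i, n_max=50):
    """d(i) = gcd{ n >= 1 : p_ii(n) > 0 }, using return times up to n_max.

    Truncating at n_max does not change the gcd once enough return
    times have been seen, which holds for small examples like these.
    """
    returns = []
    Pn = np.eye(P.shape[0])
    for n in range(1, n_max + 1):
        Pn = Pn @ P                 # Pn now holds the n-step probabilities
        if Pn[i, i] > 0:
            returns.append(n)       # n is a possible return time, n in N_i
    return reduce(gcd, returns) if returns else 0

# Placeholder chain that flips state on every step: a return to either
# state is possible only after an even number of steps, so d(i) = 2.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(period(P, 0))   # 2
```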
This definition goes a long way towards answering the question “If it is possible to return to $i$, for what values of $n$ is it possible to return in $n$ steps?” identified at the opening of the chapter. Namely, for a given value $n$, it is possible to return from state $i$ to state $i$ in $n$ steps only if $d(i)$ divides $n$ exactly (and conversely, return is possible for every sufficiently large multiple of $d(i)$).
If $i \leftrightarrow j$, then $i$ and $j$ have the same period: $d(i) = d(j)$.
Consider the Markov chain of Example 9.4.2. Calculate the periods of the remaining states.
Clearly the states of this chain all intercommunicate. From Example 9.4.2 we know the period of one of them, and so by Lemma 9.4.3 it follows that the remaining states have the same period.
A state $i$ is said to be aperiodic if $d(i) = 1$.
A Markov chain is aperiodic if all of its states are aperiodic.
Since $1$ trivially divides all positive integers $n$, aperiodicity places no divisibility restriction on the possible return times; indeed, for an aperiodic state it is possible to return in $n$ steps for every sufficiently large $n$. Aperiodicity is often a feature of the most mathematically interesting Markov chains, and will be a key assumption when it comes to talking about steady states.
An absorbing state is aperiodic: if $p_{ii} = 1$ then $p_{ii}(n) = 1 > 0$ for every $n \geq 1$, so $N_i = \{1, 2, 3, \dots\}$ and $d(i) = \gcd(N_i) = 1$.