Chapter 10 Equilibrium Distributions
Consider the Markov chain governing Mary Berry's choice of Nottingham coffee shop from Example 8.1.3, that is, the Markov chain described by the diagram

Suppose that Mary Berry visits Latte Da on her first trip to Nottingham, that is, the chain starts in state Latte Da. The probabilities that Mary Berry visits either coffee shop on her $n$th visit to Nottingham are displayed in the following table:

This calculation can be verified using the following R code:
x <- matrix(c(1/6, 5/6, 1/3, 2/3), nrow = 2, byrow = TRUE)  # transition matrix
y <- matrix(c(1, 0), nrow = 1)  # initial distribution: start in Latte Da
for (i in 1:25) {
  y <- y %*% x  # distribution after i steps
}
y
Alternatively, suppose that Mary Berry visits Deja Brew on her first trip to Nottingham. This change of initial state means the probabilities that Mary Berry visits either coffee shop on her $n$th visit to Nottingham are now as in the following table:

Note that the initial state doesn't appear to affect the long-term behaviour of the Markov chain: the probability that Mary Berry goes to Latte Da on a suitably distant visit to Nottingham is approximately $2/7 \approx 0.286$, irrespective of which coffee shop she tries first. Equivalently, the probability that Mary Berry goes to Deja Brew on a suitably distant visit to Nottingham is approximately $5/7 \approx 0.714$, irrespective of which coffee shop she tries first. Mathematically, one could say that the limiting behaviour of the Markov chain seems to have distribution $(2/7, 5/7)$.
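This insensitivity to the initial state can be checked numerically. A sketch in R, using the transition matrix from the code above and running 25 steps from each possible starting coffee shop:

```r
# Transition matrix of Example 8.1.3 (state 1 = Latte Da, state 2 = Deja Brew)
P <- matrix(c(1/6, 5/6, 1/3, 2/3), nrow = 2, byrow = TRUE)

# Distribution after 25 steps, starting from each coffee shop in turn
from_latte <- matrix(c(1, 0), nrow = 1)
from_deja  <- matrix(c(0, 1), nrow = 1)
for (i in 1:25) {
  from_latte <- from_latte %*% P
  from_deja  <- from_deja  %*% P
}
from_latte  # approximately (2/7, 5/7)
from_deja   # approximately (2/7, 5/7)
```

Both starting points give distributions agreeing to many decimal places after only 25 steps.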
This type of long-term or limiting behaviour of Markov chains will be the focus of this chapter.
10.1 Definition of Equilibrium Distributions
Throughout this chapter, consider a Markov chain with state space $S$ and transition matrix $P$.
Let $e_i$ be the row vector where a $1$ is in the $i$th position and $0$s are everywhere else. Then performing the matrix multiplication $e_i P$ results in the $i$th row of $P$, that is, the vector whose entry in the $j$th position is the probability $p_{ij}$ of moving to state $j$ from state $i$.
More generally, calculating $e_i P^n$ gives the $i$th row of $P^n$, that is, the vector whose entry in the $j$th position is the probability of moving to state $j$ in $n$ steps from state $i$.
However this can be generalised further: think of a row vector $\mu$, which we seek to multiply by $P$ on the right, as describing the probability mass function of $X_n$. So the row vector $\mu$ would be defined by $\mu_i = P(X_n = i)$. In this context, $e_i$ would represent $X_n = i$ being a certain event, with all other states $j \neq i$ being impossible. It then follows that $\mu P$ describes the probability mass function of $X_{n+1}$. More generally, $\mu P^m$ describes the probability mass function of $X_{n+m}$.
Convince yourself, by direct calculation using $\mu$ and $P$, that $\mu P$ gives the row vector whose $j$th entry is $\sum_{i \in S} \mu_i p_{ij}$.
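This identity can also be verified numerically. A sketch using the two-state chain of Example 8.1.3 and an arbitrary starting distribution $\mu$, chosen here purely for illustration:

```r
P  <- matrix(c(1/6, 5/6, 1/3, 2/3), nrow = 2, byrow = TRUE)
mu <- c(0.4, 0.6)  # an arbitrary probability mass function over the two states

# j-th entry of mu %*% P, computed two ways:
via_matrix <- as.vector(mu %*% P)                   # matrix multiplication
via_sum    <- c(sum(mu * P[, 1]), sum(mu * P[, 2])) # the sum over i of mu_i * p_ij
all.equal(via_matrix, via_sum)  # TRUE
```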
The above discussion is crucial to the interpretation of the following definition:
A row vector $\pi = (\pi_i : i \in S)$ is called an equilibrium or stationary distribution of the Markov chain if
1.) $\pi_i \geq 0$ for all states $i$ in $S$;
2.) $\sum_{i \in S} \pi_i = 1$;
3.) $\pi P = \pi$.
The first two conditions of Definition 10.1.1 are consequences of $\pi_i$ representing a probability that the Markov chain is in state $i$. The third condition of Definition 10.1.1 describes the Markov chain reaching a steady state.
Let $\pi$ be an equilibrium distribution for some Markov chain and consider the quantities $\pi P^2$ and $\pi P^3$ respectively. Applying the identity $\pi P = \pi$ from Definition 10.1.1, calculate: $\pi P^2 = (\pi P)P = \pi P = \pi$ and $\pi P^3 = (\pi P^2)P = \pi P = \pi$. More generally, we can deduce that $\pi P^n = \pi$ for any positive integer $n$. This is to say that if $P(X_0 = i) = \pi_i$ for all states $i$, then $P(X_n = i) = \pi_i$ for all positive integers $n$.
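This invariance can be checked numerically for the chain of Example 8.1.3, whose equilibrium distribution is $(2/7, 5/7)$ (computed in Example 10.2.1 below):

```r
P      <- matrix(c(1/6, 5/6, 1/3, 2/3), nrow = 2, byrow = TRUE)
pi_vec <- c(2/7, 5/7)  # equilibrium distribution of this chain

out <- pi_vec
for (n in 1:10) {
  out <- out %*% P  # after n iterations this is pi %*% P^n
}
max(abs(out - pi_vec))  # essentially zero: the distribution never moves
```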
10.2 Calculating Equilibrium Distributions
It is often possible to calculate equilibrium distributions for Markov chains by using the transition matrix to deduce equations for $\pi$ that can be solved simultaneously alongside the ever-present equation $\sum_{i \in S} \pi_i = 1$ from Definition 10.1.1. R can be used to solve these simultaneous equations. This is all illustrated by the following example.
Find the equilibrium distribution of the Markov chain governing Mary Berry's choice of Nottingham coffee shop in Example 8.1.3.
The transition matrix of the Markov chain is given by
$$P = \begin{pmatrix} 1/6 & 5/6 \\ 1/3 & 2/3 \end{pmatrix}.$$
Therefore the equilibrium distribution $\pi = (\pi_1, \pi_2)$ will satisfy $\pi P = \pi$, that is,
$$\pi_1 = \tfrac{1}{6}\pi_1 + \tfrac{1}{3}\pi_2, \qquad \pi_2 = \tfrac{5}{6}\pi_1 + \tfrac{2}{3}\pi_2,$$
alongside $\pi_1 + \pi_2 = 1$.
These simultaneous equations can be solved using R.
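One way to carry out the calculation is to stack the stationarity equations $(P^\top - I)\pi^\top = 0$ with the normalisation constraint $\sum_i \pi_i = 1$ and solve the resulting overdetermined system by least squares; a sketch:

```r
P <- matrix(c(1/6, 5/6, 1/3, 2/3), nrow = 2, byrow = TRUE)

# Stack the stationarity equations with the normalisation sum(pi) = 1
A <- rbind(t(P) - diag(2), rep(1, 2))
b <- c(0, 0, 1)
pi_hat <- qr.solve(A, b)  # least-squares solve of the 3x2 system
pi_hat  # 0.2857143 0.7142857, i.e. (2/7, 5/7)
```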
We obtain $\pi_1 = 2/7$ and $\pi_2 = 5/7$. The equilibrium distribution of the Markov chain is $\pi = (2/7, 5/7)$.

It is possible that the system of linear equations mentioned above does not solve to give a unique solution for $\pi$. Indeed it is possible that there is no solution at all. We would like to understand when the equilibrium distribution exists, and if it does exist, when it is unique. This is tackled by the following theorem.
An irreducible Markov chain has a unique equilibrium distribution if and only if it is positive recurrent.
As well as solving the linear system above, there is another common identity that can be used to find the equilibrium distribution supposing we know the mean recurrence times of each state, and vice versa.
An irreducible Markov chain with a unique equilibrium distribution $\pi$ satisfies $\pi_i = 1/\mu_i$, where $\mu_i$ is the mean recurrence time of state $i$.
In practice it is usually easier to calculate the equilibrium distribution of an irreducible Markov chain than it is to calculate the mean recurrence times of every state.
Considering again the Markov chain of Example 8.1.3, calculate how long on average (in terms of trips to Nottingham) the owners of Latte Da and Deja Brew will have to wait to see Mary Berry again.
Mathematically, the question is asking us to calculate the mean recurrence times $\mu_{LD}$ and $\mu_{DB}$ of Latte Da and Deja Brew respectively. By Theorem 10.2.3 and the equilibrium distribution $(2/7, 5/7)$ found in Example 10.2.1, $\mu_{LD} = 1/(2/7) = 3.5$ trips and $\mu_{DB} = 1/(5/7) = 1.4$ trips.
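As a sanity check, mean recurrence times can also be estimated empirically by simulating a long trajectory of the chain and dividing the number of steps by the number of visits to each state; a sketch, where `n_steps` is an arbitrary simulation length chosen for illustration:

```r
set.seed(1)
P <- matrix(c(1/6, 5/6, 1/3, 2/3), nrow = 2, byrow = TRUE)

n_steps <- 100000
state  <- 1L              # start in Latte Da
visits <- integer(2)      # visit counts for each state
for (t in 1:n_steps) {
  state <- sample(1:2, 1, prob = P[state, ])  # one step of the chain
  visits[state] <- visits[state] + 1L
}
n_steps / visits  # approximately (3.5, 1.4)
```

The empirical estimates agree with the exact values $7/2$ and $7/5$ to within simulation noise.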
Sacred Heart Hospital have a fleet of ambulances across Greater Sacramento. The hospital owns four garages. At the beginning of each day every ambulance leaves its garage, completes nearby jobs transferring patients, and at the end of the day the ambulance parks in the garage nearest to its location. This may or may not be the same garage that the ambulance started the day in.
Based on data gathered through long term observation of the ambulances, the daily movement between garages is modelled by a Markov chain described by
Dr. Cox regularly leaves his stethoscope in the garage. How many working days will he expect to wait before he can pick it up again?

Mathematically, we are being asked to calculate the mean recurrence time of each state. To do this, we will first prove that a unique equilibrium distribution exists and then find it, before applying Theorem 10.2.3 to obtain the mean recurrence times.
Note that all states of the Markov chain clearly intercommunicate, that is, the Markov chain is irreducible. Furthermore, since the Markov chain has only four states, by Lemma 9.3.9 the Markov chain is positive recurrent. Therefore, by Theorem 10.2.2, there is a unique equilibrium distribution.
The transition matrix $P$ of the Markov chain can be read off the model above. The unique equilibrium distribution is the vector $\pi$ satisfying $\pi P = \pi$. In addition, from Definition 10.1.1, we know $\pi_i \geq 0$ for all states $i$ and $\sum_i \pi_i = 1$. Solving these equations simultaneously gives the equilibrium distribution $\pi$, and Theorem 10.2.3 then yields the mean recurrence times $\mu_i = 1/\pi_i$ for each garage.

Consider a Markov chain that has a transient state and isn't irreducible. Can this Markov chain still have a unique equilibrium distribution? To help you answer the question, consider the Markov chain describing the company website of Example 8.4.2.
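The computation in Example 10.2.5 can be organised into a reusable helper. A sketch, with a purely hypothetical four-state transition matrix standing in for the garage matrix of the example (which is not reproduced here):

```r
# Equilibrium distribution of a finite, irreducible, positive-recurrent
# chain: stack the stationarity equations with sum(pi) = 1 and solve.
equilibrium <- function(P) {
  k <- nrow(P)
  A <- rbind(t(P) - diag(k), rep(1, k))
  qr.solve(A, c(rep(0, k), 1))
}

# Hypothetical 4-state matrix, for illustration only
P <- matrix(c(0.10, 0.30, 0.40, 0.20,
              0.20, 0.20, 0.50, 0.10,
              0.30, 0.30, 0.30, 0.10,
              0.25, 0.25, 0.25, 0.25), nrow = 4, byrow = TRUE)

pi_hat <- equilibrium(P)
mu_hat <- 1 / pi_hat  # mean recurrence times, by Theorem 10.2.3
```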
10.3 Limiting Behaviour
Note that in Example 10.2.1, we have recovered the results of the tables seen at the top of the chapter. This is not a coincidence.
Consider a Markov chain that is irreducible, aperiodic and positive recurrent. Let $i$ and $j$ be states of the Markov chain. Then the $n$-step transition probability $p_{ij}(n)$ tends towards $\pi_j$, the $j$th entry of the unique equilibrium distribution, as $n$ tends towards infinity:
$$p_{ij}(n) \to \pi_j \quad \text{as } n \to \infty.$$
In real terms, this theorem states that the probability of the Markov chain being in state $j$ sufficiently far in the future is given by $\pi_j$, where $\pi$ is the equilibrium distribution. Equivalently, suppose we simulate the Markov chain $N$ times: after enough time-steps have passed, approximately $\pi_j N$ of the Markov chains will be in state $j$. This discussion is entirely irrespective of the initial states of the Markov chains.
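This frequency interpretation can be illustrated by simulating many independent copies of the coffee-shop chain of Example 8.1.3 (any irreducible, aperiodic, positive-recurrent chain would do); `N` and the number of steps are arbitrary choices for the sketch:

```r
set.seed(42)
P <- matrix(c(1/6, 5/6, 1/3, 2/3), nrow = 2, byrow = TRUE)

N <- 10000             # number of independent copies of the chain
states <- rep(1L, N)   # every copy starts in state 1 (Latte Da)
for (step in 1:50) {
  # advance every copy one step: move to state 1 with probability P[s, 1]
  u <- runif(N)
  states <- ifelse(u < P[states, 1], 1L, 2L)
}
mean(states == 1)  # approximately 2/7, as Theorem 10.3.1 predicts
```

Rerunning with `states <- rep(2L, N)` (all copies starting in Deja Brew) gives essentially the same proportions, illustrating that the limit does not depend on the initial state.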
Consider again the ambulances of Example 10.2.5. Given that a large proportion of the medical emergencies happen in the vicinity of Garage C, and knowing that it is critical that the ambulances get to an emergency promptly, do you anticipate any problems that Sacred Heart Hospital might encounter, and if so, can you make a recommendation to the management to pre-emptively alleviate them?
By Theorem 10.3.1 and using the calculation of Example 10.2.5, after a large number of days have passed one can expect that a proportion $\pi_C$ of the ambulances will be in Garage C. Knowing that a large proportion of the medical emergencies occur in the vicinity of Garage C, it is likely that ambulances from other garages may end up having to attend emergencies in the neighbourhood of Garage C. This extra journey would cost valuable time. To avoid this, it would be sensible to periodically transfer some ambulances to Garage C from the other garages.