# 7 Markov Chain Monte Carlo

The phrase “Markov chain Monte Carlo” encompasses a broad array of techniques that have in common a few key ideas. The setup for all the techniques that we will discuss in this book is as follows:

1. We want to sample from some complicated density or probability mass function $$\pi$$. Often, this density is the result of a Bayesian computation and so can be interpreted as a posterior density. The presumption here is that we can evaluate $$\pi$$ but we cannot sample from it.

2. We know that certain stochastic processes called Markov chains will converge to a stationary distribution, provided one exists and certain conditions are satisfied. Simulating from such a Markov chain for a long enough time will eventually give us a sample from the chain’s stationary distribution.

3. Given the functional form of the density $$\pi$$, we want to construct a Markov chain that has $$\pi$$ as its stationary distribution.

4. We then simulate from the Markov chain, so that the sequence of values $$\{x_n\}$$ generated by the chain converges in distribution to the density $$\pi$$.
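The four steps above can be sketched with a random-walk Metropolis sampler, one of the simplest ways to construct a Markov chain with stationary distribution $$\pi$$. This is only an illustrative sketch under stated assumptions: the function names are invented here, and the standard-normal target (known only up to a normalizing constant) stands in for a "complicated" density we can evaluate but not sample from directly.

```python
import math
import random

def metropolis_sample(log_pi, n, x0=0.0, scale=1.0, seed=0):
    """Random-walk Metropolis: returns n draws from a Markov chain
    whose stationary distribution has (unnormalized) log-density log_pi."""
    rng = random.Random(seed)
    x = x0
    draws = []
    for _ in range(n):
        # Propose a symmetric random-walk move around the current state.
        y = x + rng.gauss(0.0, scale)
        # Accept with probability min(1, pi(y)/pi(x)).  Only the ratio
        # of pi values appears, so the normalizing constant of pi
        # never needs to be known -- the key point of step 1.
        if math.log(rng.random()) < log_pi(y) - log_pi(x):
            x = y
        draws.append(x)
    return draws

# Hypothetical target: a standard normal, specified only up to a constant.
draws = metropolis_sample(lambda x: -0.5 * x * x, n=20000)
mean = sum(draws) / len(draws)
second_moment = sum(d * d for d in draws) / len(draws)
```

After enough iterations, the chain's draws behave approximately like a (correlated) sample from $$\pi$$, so sample averages such as `mean` estimate the corresponding expectations under $$\pi$$ (step 4). Why this acceptance rule yields $$\pi$$ as the stationary distribution is exactly what the rest of this chapter develops.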

In order for all of these ideas to make sense, we first need some background on Markov chains. The rest of this chapter defines these terms, the conditions under which they make sense, and gives examples of how the techniques can be implemented in practice.