Chapter 16 Dynamic simulations

This chapter on dynamic simulations extends the previous Chapter 15 on basic simulations. Rather than using simple simulations that merely explicate a problem description, we now venture into the terrain of more dynamic ones. The term “dynamic” refers to the fact that we explicitly allow for changes in parameters and the constructs they represent. As these changes are typically incremental, our simulations will proceed in a step-wise fashion and store sequences of parameter values.
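The basic pattern of such a step-wise simulation can be sketched as follows. This is a minimal illustration (all names and values are hypothetical, not taken from the chapter): a parameter changes incrementally on each step, and we store the entire sequence of values, rather than only a final result.

```python
def simulate(n_steps=10, p_start=1.0, rate=0.10):
    """Run a step-wise simulation and return the sequence of parameter values."""
    values = [p_start]          # store the initial value
    p = p_start
    for _ in range(n_steps):
        p = p * (1 + rate)      # incremental change: e.g., 10% growth per step
        values.append(p)        # record the parameter value at every step
    return values

trajectory = simulate(n_steps=5)
print(trajectory)               # initial value followed by 5 updated values
```

Storing the full trajectory (rather than overwriting `p`) is what allows us to describe and measure changes after the simulation has run.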

Conceptually, we distinguish between two broad types of systems and their corresponding variables and dynamics: changing agents and changing environments. This provides a first glimpse of the phenomenon of learning and of representing environmental uncertainty (in the form of multi-armed bandits with risky options). As both of these terms hint at large and important families of models, we will only cover the general principles here.
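To give a flavor of how these two ingredients combine, the following sketch pairs an uncertain environment (a two-armed bandit with a safe and a risky option) with a changing agent (one that updates its payoff estimates from experience). All names and payoff values here are hypothetical choices for illustration, not the chapter's actual models:

```python
import random

def pull(arm, rng):
    """Environment: a hypothetical 2-armed bandit with a safe and a risky option."""
    if arm == 0:
        return 1.0                                 # safe option: always pays 1.0
    return 2.5 if rng.random() < 0.5 else 0.0      # risky option: expected value 1.25

def run_agent(n_trials=1000, epsilon=0.1, seed=1):
    """Agent: epsilon-greedy choice with incrementally updated payoff estimates."""
    rng = random.Random(seed)
    counts = [0, 0]                 # how often each arm was chosen
    estimates = [0.0, 0.0]          # running mean payoff per arm
    total = 0.0
    for _ in range(n_trials):
        if rng.random() < epsilon:  # explore: choose a random arm
            arm = rng.randrange(2)
        else:                       # exploit: choose the currently best-looking arm
            arm = 0 if estimates[0] >= estimates[1] else 1
        r = pull(arm, rng)
        counts[arm] += 1
        estimates[arm] += (r - estimates[arm]) / counts[arm]  # incremental mean
        total += r
    return estimates, counts, total / n_trials

estimates, counts, mean_reward = run_agent()
```

Note how the agent's estimates and the environment's payoffs change on every trial, which is precisely the step-wise, sequence-storing character of dynamic simulations.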


Recommended background readings for this chapter include:

Page, S. E. (2018). The model thinker: What you need to know to make data work for you. Basic Books.

Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction (2nd ed.). MIT Press.


i2ds: Preflexions

  • What are dynamics?

  • What is the difference between an agent and its environment?

  • Which aspects or elements of a simulation can be dynamic?

  • What do we need to describe or measure to understand changes?

  • How does this change the way in which we construct our simulations?