## 16.1 Introduction

Economic simulation games (see Wikipedia) promise to let players practice skills in a playful setting. Importantly, the conditions and actions of players are processed by rules to yield plausible consequences and outcomes. To be both believable and instructive, such games need to contain a modicum of truth (Figure 16.2).

Let us first clarify some key terminology. Concepts that are relevant in the present context include:

• the distinction between simulations and modeling,
• situations under risk vs. under uncertainty,
• solving problems by description vs. by experience.

### 16.1.1 What are simulations?

According to Wikipedia, a computer simulation is the process of running a model. Thus, it does not make sense to “build a simulation.” Instead, we can “build a model,” and then either “run the model” or “run a simulation.”

Despite this agreement, people also call a collection of coded assumptions and rules a simulation, rendering the boundaries between models and simulations blurry again. For our purposes, simulations simply are particular types of models of a situation. Thus, all simulations are models, but there are models (e.g., a mathematical formula) that we would not want to call a simulation.

Conceptually, simulations create concrete observations (i.e., data/cases) from a more abstract description of a problem, task, or situation. Note that, in data analysis, we typically do the reverse: We start with data (concrete observations) and create a more abstract description of the data.

Creating a simulation typically requires that we represent both a situation and the actions or rules specified in a problem description. Conceptually, a representation involves (a) mapping the essential elements of a problem to corresponding elements in a model, and (b) specifying the processes that can operate upon both sets of elements. Once these representational constructs are in place, we can run the simulation repeatedly to observe what happens.

If the description contains probabilistic parameters, the simulation can include random elements. In many contexts, the term “simulation” is used when models contain random elements. For instance, in statistical simulations, we often start by drawing random values from some well-defined distribution and then compute the result of some algorithm based on these values.
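To make this concrete, here is a minimal sketch of such a statistical simulation in Python (the language, seed, and distribution parameters are illustrative choices, not prescribed by the text): we draw random values from a well-defined distribution and then compute the result of an algorithm based on them.

```python
import random
import statistics

random.seed(42)  # fix the seed so that repeated runs are reproducible

# Draw 10,000 random values from a well-defined distribution
# (here: a normal distribution with mean 100 and sd 15):
values = [random.gauss(100, 15) for _ in range(10_000)]

# Compute the result of some algorithm based on these values
# (here: the sample mean and standard deviation):
m = statistics.mean(values)
sd = statistics.stdev(values)

print(f"mean = {m:.2f}, sd = {sd:.2f}")  # both should lie close to 100 and 15
```

Because the sample is large, the computed statistics approximate the parameters of the generating distribution, which illustrates the move from an abstract description to concrete observations and back.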

#### Goals of simulations

Creating simulations typically serves two inter-related purposes:

1. Explicate premises and processes: If we cannot solve a problem analytically (e.g., mathematically), we can simulate it to inspect its solution. By implementing both the premises and a mechanism, simulations explicate the assumptions and consequences of a theory. In doing so, we often notice details or hidden premises that remained implicit or unspecified in the problem formulation.

2. Generate possible results (outputs): This method can be used to explore the consequences of a set of rules or regularities. Such simulations are instantiations that show what happens if certain starting conditions are met and some mechanism is executed. Changing the premises or parts of the process shows how these changes affect the results.
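The second goal can be sketched as follows (a hypothetical example, assuming a biased coin as the mechanism): wrapping the simulation in a function with an explicit parameter lets us vary a premise and observe how the results change.

```python
import random

def prop_heads(p_heads, n=10_000, seed=1):
    """Simulate n flips of a coin with P(heads) = p_heads
    and return the observed proportion of heads."""
    rng = random.Random(seed)  # local generator for reproducibility
    flips = [rng.random() < p_heads for _ in range(n)]
    return sum(flips) / n

# Changing a premise (the coin's bias) changes the results:
for p in (0.25, 0.50, 0.75):
    print(p, prop_heads(p))
```

Each observed proportion tracks the corresponding premise, so systematically varying `p_heads` explores the space of possible outputs.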

#### Mental vs. silicon simulations

The idea of using a model or simulation for predicting outcomes is not new and has often been viewed as an analogy for human thinking, reasoning, and problem solving. However, rather than thinking things through in our minds, we now also have the computing capacities and tools for quickly building quite powerful computer simulations (e.g., of entire ecosystems or climate phenomena). But out-sourcing simulations from our brains to computing machines does not free us from thinking. The bigger and more complex our simulations get, the more important are solid programming skills and a sound reflection on their pitfalls and limitations.

#### Crutches vs. useful tools

Note that simulations are often portrayed as a poor alternative to analytic solutions. When we lack the (formal, e.g., mathematical) means to solve something in an analytic fashion, we can use simulations as a crutch that generates results without providing a clear conceptual and theoretical understanding of the process. However, simulations can also be used in a sensible and responsible fashion (e.g., by systematically varying inputs) to explore systems or theories that are (currently?) too complex to be tackled by analytic approaches (e.g., interactions between neurons in a brain, or sets of physical laws jointly governing weather or climate phenomena). As the merits of any method generally depend on its uses and the availability of alternative approaches, we should welcome simulations as one possible way of solving problems.

### 16.1.2 Static simulations

This chapter on basic simulations focuses on static situations: The environment and the range of moves or options is fixed by the problem description. All relevant information (e.g., the rules of the game or scope of actions) is known a priori.

Such situations can be well-defined, but still contain probabilistic elements. The psychology of judgment and decision making refers to such situations as risky choice or decision making under risk. A typical situation under risk is repeatedly throwing a random device (e.g., a coin or a die). Here, the set of all possible outcomes and their probabilities can be defined (e.g., by description) or measured (e.g., by experiencing a series of events). Although any particular outcome cannot be known in advance, we can still make observations and predictions regarding the long-term development of outcomes. The mathematics of such problems is governed by the axioms and calculus of probability theory.
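A minimal sketch of such a situation under risk (illustrative seed and sample size): any single throw of a fair die is unpredictable, but the long-run relative frequency of each outcome approaches its probability.

```python
import random

rng = random.Random(123)  # seeded so the simulated sequence is reproducible

n = 12_000
rolls = [rng.randint(1, 6) for _ in range(n)]  # n throws of a fair die

# Any single roll cannot be known in advance, but the long-run
# relative frequency of any outcome (e.g., a six) approaches 1/6:
prop_six = rolls.count(6) / n
print(f"proportion of sixes after {n} rolls: {prop_six:.4f}")
```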

By contrast, situations are considered to be under uncertainty when either the set of possible outcomes or their probabilities cannot be known in advance. Most real-life situations are actually under uncertainty, rather than under risk. The boundaries between situations under risk and those under uncertainty are often blurred, as they partly depend on our ability to capture, express, and define the relevant elements. When considering more complex phenomena (e.g., systems that contain interactions between one or more agents and a reactive and responsive environment), all aspects of a situation may technically still be under risk, yet their dynamic interplay can get too messy and opaque to provide an analytic solution to the problem. Thus, while analytic solutions may always be desirable, creating simulations may be our only realistic option for keeping track of such environments, their agent(s), or both. (We will examine such situations in Chapter 17 on Dynamic simulations.)

However, even static situations in which the range of options and probabilities is relatively small and well-defined can often get complex and complicated. Experts in probability may be able to solve them analytically, but the rest of us can tackle them by running a simulation.

For simulating risky situations, we must address the following general issues:

• Creating data structures for representing environments and events
• Keeping track of environmental states, actions, and outcomes
• Managing sampling processes (i.e., randomness and iterations)
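These three issues can be illustrated in a small sketch (a hypothetical urn environment, chosen purely for illustration): a data structure represents the environment, a record keeps track of outcomes, and a seeded loop manages the sampling process.

```python
import random

rng = random.Random(2024)  # seeded generator to manage randomness

# 1. A data structure representing the environment (an urn of marbles):
urn = ["red"] * 3 + ["blue"] * 7

# 2. A record for keeping track of outcomes across iterations:
counts = {"red": 0, "blue": 0}

# 3. A sampling process (random draws, repeated many times):
n_draws = 10_000
for _ in range(n_draws):
    marble = rng.choice(urn)  # draw one marble, with replacement
    counts[marble] += 1

print(counts)  # roughly 30% red and 70% blue draws
```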

### 16.1.3 Overview

Conceptually, simulations create concrete observations (i.e., cases or data) from a more abstract description. If this description contains probabilistic parameters, the simulation can include random elements.

In the following, we introduce two basic types of simulations:

1. Enumerating cases essentially explicates information that is contained in a problem description. This most basic type of simulation will be used to illustrate so-called “Bayesian situations,” like the notorious mammography problem (Eddy, 1982; see Section 16.2).

2. Sampling cases employs controlled randomness for solving probabilistic problems by repeated observations. This slightly more sophisticated type of simulation is used to illustrate the Monty Hall problem (vos Savant, 1990; see Section 16.3).
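The first type can be sketched by enumerating cases for a Bayesian situation. The following Python snippet uses parameters that roughly follow the mammography problem (a base rate of 1%, a sensitivity of 80%, and a false-positive rate of 9.6%); treat the exact numbers as illustrative assumptions.

```python
# Enumerate a population of concrete cases from a probabilistic description:
population = 100_000
cancer     = round(population * 0.01)    # 1,000 women with cancer
no_cancer  = population - cancer         # 99,000 women without cancer
true_pos   = round(cancer * 0.80)        # 800 correctly test positive
false_pos  = round(no_cancer * 0.096)    # 9,504 falsely test positive

# P(cancer | positive test) = true positives / all positives:
p_cancer_given_pos = true_pos / (true_pos + false_pos)
print(f"{p_cancer_given_pos:.3f}")  # prints 0.078, i.e., under 8%
```

Counting cases makes the Bayesian answer transparent: most positive tests stem from the much larger cancer-free group, so the posterior probability stays low despite the test's high sensitivity.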

Conceptually, both types of simulations exploit the continuum between a probabilistic description and its frequentist interpretation as the cumulative outcomes of many repeated random events. As our examples and the exercises (in Section 16.5) will show, the details of the assumed premises and processes matter for the outcomes that we will find — and simulations are particularly helpful for explicating implicit premises and processes.
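The second type, sampling cases, can be sketched for the Monty Hall problem, assuming its standard rules (the host always opens an unpicked door hiding a goat); the function name and seed are arbitrary choices for this illustration.

```python
import random

def monty_hall(switch, n=100_000, seed=7):
    """Estimate the probability of winning the Monty Hall game
    by simulating n rounds, either switching doors or staying."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        doors = [1, 2, 3]
        car = rng.choice(doors)   # the car is hidden behind one door
        pick = rng.choice(doors)  # the player picks a door
        # The host opens a door that hides a goat and was not picked:
        opened = rng.choice([d for d in doors if d != car and d != pick])
        if switch:  # switch to the remaining closed door
            pick = next(d for d in doors if d != pick and d != opened)
        wins += (pick == car)
    return wins / n

print(monty_hall(switch=True))   # close to 2/3
print(monty_hall(switch=False))  # close to 1/3
```

Accumulating many repeated random rounds turns the probabilistic description into observed frequencies, which is exactly the continuum this section describes.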

#### Problems considered

Examples of problems that can be “solved” by basic simulations include:

• Bayesian situations, like the mammography problem or cab problem
• so-called cognitive illusions, like the engineer/lawyer problem or the Linda problem (e.g., Tversky & Kahneman, 1974)
• notorious brain teasers, like the Monty Hall dilemma or the three prisoners problem (e.g., Bar-Hillel, 1980)
• other mathematical or probability puzzles, and
• alleged biases in the human perception of randomness

The prominence of the problems on this list shows that simulations can address a substantial proportion of the problems discussed in the psychology of human judgment and decision making.

#### General goals

By implementing these problems, we pursue some general goals that reach beyond the details of any particular problem. These goals include:

1. Explicating a problem’s premises and structure (by implementing it)
2. Solving the problem (by repeatedly simulating its assumed process)
3. Evaluating the solution’s uncertainty or robustness (e.g., its variability)

Additional motivations for running simulations are that they can provide new insights into a problem and be a lot of fun — but that partly depends on your idea of fun, of course.

### References

Bar-Hillel, M. (1980). The base-rate fallacy in probability judgments. Acta Psychologica, 44(3), 211–233. https://doi.org/10.1016/0001-6918(80)90046-3
Craik, K. J. W. (1943). The nature of explanation. Cambridge University Press.
Dudeney, H. E. (1917). Amusements in mathematics. Nelson & Sons. www.gutenberg.org/files/16713/16713.txt
Eddy, D. M. (1982). Probabilistic reasoning in clinical medicine: Problems and opportunities. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty (pp. 249–267). Cambridge University Press. https://doi.org/10.1017/CBO9780511809477.019
Gardner, M. (1988). Time travel and other mathematical bewilderments. W.H. Freeman.
Gentner, D., & Stevens, A. L. (Eds.). (1983). Mental models. Lawrence Erlbaum Associates.
Gigerenzer, G. (2014). Risk savvy: How to make good decisions. Penguin.
Hahn, U., & Warren, P. A. (2009). Perceptions of randomness: Why three heads are better than four. Psychological Review, 116(2), 454–461. https://doi.org/10.1037/a0015241
Johnson-Laird, P. N. (1983). Mental models: Towards a cognitive science of language, inference and consciousness. Cambridge University Press.
Kahneman, D., & Tversky, A. (1972a). On prediction and judgement. ORI Research Monographs, 1(12), 430–454.
Kahneman, D., & Tversky, A. (1972b). Subjective probability: A judgment of representativeness. Cognitive Psychology, 3(3), 430–454. https://doi.org/10.1016/0010-0285(72)90016-3
Mosteller, F. (1965). Fifty challenging problems in probability with solutions. Addison-Wesley.
Neth, H., & Gigerenzer, G. (2015). Heuristics: Tools for an uncertain world. In R. Scott & S. Kosslyn (Eds.), Emerging trends in the social and behavioral sciences. Wiley Online Library. https://doi.org/10.1002/9781118900772.etrds0394
vos Savant, M. (1990). Ask Marilyn. Parade Magazine, September 9, p. 15.
Selvin, S. (1975). A problem in probability. American Statistician, 29, 67. https://doi.org/10.1080/00031305.1975.10479121
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124