2  Introduction: Credibility, Models, and Parameters

The goal of this chapter is to introduce the conceptual framework of Bayesian data analysis. Bayesian data analysis has two foundational ideas. The first idea is that Bayesian inference is reallocation of credibility across possibilities. The second foundational idea is that the possibilities, over which we allocate credibility, are parameter values in meaningful mathematical models.

2.1 Bayesian inference is reallocation of credibility across possibilities

Load the primary packages for this chapter.

library(tidyverse)
library(brms)
library(tidybayes)

The first step toward making Figure 2.1 is putting together a data object.

# Save this to simplify the code to come
stage_levels <- c("Prior", "Posterior")

d <- crossing(iteration = 1:3,
              stage     = factor(stage_levels, levels = stage_levels)) |> 
  expand_grid(Possibilities = LETTERS[1:4]) |> 
  mutate(Credibility = c(rep(0.25, times = 4),
                         0, rep(1/3, times = 3),
                         0, rep(1/3, times = 3),
                         rep(c(0, 0.5), each = 2),
                         rep(c(0, 0.5), each = 2),
                         rep(0, times = 3), 1))

When making data with many repetitions in the rows, it’s good to have the tidyr::expand_grid() function up your sleeve. See the tidyr documentation to learn more.
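As a quick aside not covered in the text, the main practical difference between tidyr::crossing() and tidyr::expand_grid() is that the former sorts and de-duplicates its inputs, whereas the latter returns them exactly as given. Here’s a minimal sketch:

# `crossing()` sorts and de-duplicates its inputs
crossing(x = c(2, 1, 1), y = c("b", "a"))     # 4 rows: x in {1, 2}, y in {a, b}

# `expand_grid()` keeps the inputs exactly as given
expand_grid(x = c(2, 1, 1), y = c("b", "a"))  # 6 rows, original order preserved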

We can take a look at the top few rows of the data with the head() function.

head(d)
# A tibble: 6 × 4
  iteration stage     Possibilities Credibility
      <int> <fct>     <chr>               <dbl>
1         1 Prior     A                   0.25 
2         1 Prior     B                   0.25 
3         1 Prior     C                   0.25 
4         1 Prior     D                   0.25 
5         1 Posterior A                   0    
6         1 Posterior B                   0.333
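Though Kruschke didn’t show this, it’s easy to confirm that the credibility values within each panel of the upcoming figure sum to 1:

# Sanity check: credibility sums to 1 within each iteration/stage pair
d |> 
  group_by(iteration, stage) |> 
  summarise(total = sum(Credibility), .groups = "drop")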

Before we attempt Figure 2.1, we’ll need two supplemental data frames. The first one, d_text, will supply the coordinates for the annotation in the plot. The second, d_arrow, will supply the coordinates for the arrows.

d_text <- tibble(
  Possibilities = "B",
  Credibility   = 0.75,
  label         = str_c(LETTERS[1:3], " is\nimpossible"),
  iteration     = 1:3,
  stage         = factor("Posterior", levels = stage_levels))

d_arrow <- tibble(Possibilities = LETTERS[1:3],
                  iteration     = 1:3) |> 
  expand_grid(Credibility = c(0.01, 0.6)) |> 
  mutate(stage = factor("Posterior", levels = stage_levels))

Now we’re ready to code our version of Figure 2.1.

d |>
  ggplot(aes(x = Possibilities, y = Credibility)) +
  geom_col(color = "grey30", fill = "grey30") +
  # Annotation in the bottom row
  geom_text(data = d_text, 
            aes(label = label)) +
  # Arrows in the bottom row
  geom_line(data = d_arrow,
            arrow = arrow(length = unit(0.30, "cm"), 
                          ends = "first", type = "closed")) +
  facet_grid(stage ~ iteration) +
  theme(axis.ticks.x = element_blank(),
        panel.grid = element_blank(),
        strip.text.x = element_blank())

We will take a similar approach to make our version of Figure 2.2. But this time, we’ll define our supplemental data sets directly within geom_text() and geom_line(). It’s good to have both methods up your sleeve. Also notice how we fed our primary data set directly into ggplot() this time, without saving it first.

# Primary data
crossing(stage         = factor(stage_levels, levels = stage_levels),
         Possibilities = LETTERS[1:4]) |> 
  mutate(Credibility = c(rep(0.25, times = 4),
                         rep(0,    times = 3), 
                         1)) |>
  
  # Plot!
  ggplot(aes(x = Possibilities, y = Credibility)) +
  geom_col(color = "grey30", fill = "grey30") +
  # Annotation in the bottom panel
  geom_text(data = tibble(
    Possibilities = "B",
    Credibility   = 0.8,
    label         = "D is\nresponsible",
    stage         = factor("Posterior", levels = stage_levels)
  ), aes(label = label)
  ) +
  # The arrow
  geom_line(data = tibble(
    Possibilities = LETTERS[c(4, 4)],
    Credibility   = c(0.25, 0.99),
    stage         = factor("Posterior", levels = stage_levels)),
  arrow = arrow(length = unit(0.30, "cm"), ends = "last", type = "closed"),
  color = "grey92") +
  facet_wrap(~ stage, ncol = 1) +
  theme(axis.ticks.x = element_blank(),
        panel.grid = element_blank())

2.1.1 Data are noisy and inferences are probabilistic.

Now on to Figure 2.3. I’m pretty sure the curves in the plot are Gaussian, which we’ll make with the dnorm() function. After a little trial and error, their standard deviations look to be 1.2. However, it’s tricky placing those curves alongside the probabilities, because the probabilities for the four discrete sizes (i.e., 1 through 4) are in a different metric than the Gaussian density curves. Since the probability metric for the four discrete sizes is the primary metric of the plot, we need to rescale the curves using a little algebra, which we do in the data code below. After that, the code for the plot is relatively simple.

# Data
d <- tibble(mu = 1:4,
            p  = 0.25) |> 
  expand_grid(x = seq(from = -2, to = 6, by = 0.1)) |> 
  mutate(density = dnorm(x, mean = mu, sd = 1.2)) |> 
  mutate(d_max = max(density)) |> 
  mutate(rescale = p / d_max) |> 
  mutate(density = density * rescale)
  
# Plot!
d |> 
  ggplot(aes(x = x)) +
  geom_col(data = d |> distinct(mu, p),
           aes(x = mu, y = p),
           fill = "grey67", width = 1/3) +
  geom_line(aes(y = density, group = mu)) +
  scale_x_continuous(breaks = 1:4) +
  scale_y_continuous(breaks = 0:5 / 5) +
  coord_cartesian(xlim = c(0, 5),
                  ylim = c(0, 1)) +
  labs(x = "Possibilities", 
       y = "Credibility",
       title = "Prior") +
  theme(axis.ticks.x = element_blank(),
        panel.grid = element_blank())
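As a quick check on the rescaling algebra (this isn’t in the text), each rescaled curve should now peak at exactly the height of its bar, 0.25:

# Sanity check: each rescaled density curve peaks at its bar height
d |> 
  group_by(mu) |> 
  summarise(peak = max(density), .groups = "drop")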

We can use the same basic method to make the bottom panel of Figure 2.3. Kruschke gave the relative probability values on page 21. Notice that they sum perfectly to 1. The only other notable change from the previous plot is our addition of a geom_point() section, the data for which we define on the fly.
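You can confirm the sum for yourself:

# Kruschke's posterior probability values do indeed sum to 1
sum(c(0.11, 0.56, 0.31, 0.02))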

# Data
d <- tibble(mu = 1:4,
            p  = c(0.11, 0.56, 0.31, 0.02)) |> 
  expand_grid(x = seq(from = -2, to = 6, by = 0.1)) |> 
  mutate(density = dnorm(x, mean = mu, sd = 1.2)) |> 
  mutate(d_max = max(density)) |> 
  mutate(rescale = p / d_max) |> 
  mutate(density = density * rescale)

# Plot!
d |> 
  ggplot() +
  geom_col(data = d |> distinct(mu, p),
           aes(x = mu, y = p),
           fill = "grey67", width = 1/3) +
  geom_line(aes(x = x, y = density, group = mu)) +
  geom_point(data = tibble(x = c(1.75, 2.25, 2.75), y = 0),
             aes(x = x, y = y),
             size = 3, color = "grey33", alpha = 3/4) +
  scale_x_continuous(breaks = 1:4) +
  scale_y_continuous(breaks = 0:5 / 5) +
  coord_cartesian(xlim = c(0, 5),
                  ylim = c(0, 1)) +
  labs(x = "Possibilities", 
       y = "Credibility",
       title = "Posterior") +
  theme(axis.ticks.x = element_blank(),
        panel.grid = element_blank())

In summary, the essence of Bayesian inference is reallocation of credibility across possibilities. The distribution of credibility initially reflects prior knowledge about the possibilities, which can be quite vague. Then new data are observed, and the credibility is re-allocated. Possibilities that are consistent with the data garner more credibility, while possibilities that are not consistent with the data lose credibility. Bayesian analysis is the mathematics of re-allocating credibility in a logically coherent and precise way. (p. 22)

2.2 Possibilities are parameter values in descriptive models

“A key step in Bayesian analysis is defining the set of possibilities over which credibility is allocated. This is not a trivial step, because there might always be possibilities beyond the ones we include in the initial set” (p. 22, emphasis added).

In the last section, we used the dnorm() function to make curves following the normal distribution (a.k.a. the Gaussian distribution). Here we’ll do that again, but also use the rnorm() function to simulate actual data from that same normal distribution. Behold Figure 2.4.a.

# Set the seed to make the simulation reproducible
set.seed(2)
# Simulate the data with `rnorm()`
d <- tibble(x = rnorm(2000, mean = 10, sd = 5))

# Plot!
ggplot(data = d, aes(x = x)) +
  geom_histogram(aes(y = after_stat(density)),
                 binwidth = 1, fill = "grey67", 
                 color = "grey92", linewidth = 1/10) +
  geom_line(data = tibble(x = seq(from = -6, to = 26, by = 0.01)),
            aes(x = x, y = dnorm(x, mean = 10, sd = 5)),
            color = "grey33") +
  coord_cartesian(xlim = c(-5, 25)) +
  labs(x = "Data Values", 
       y = "Data Probability",
       subtitle = "The candidate normal distribution\nhas a mean of 10 and SD of 5.") +
  theme(panel.grid = element_blank())

Did you notice how we made the data for the density curve within geom_line()? That’s one way to do it, and in our next plot we’ll take a more elegant approach with the stat_function() function. Here’s our Figure 2.4.b.

ggplot(data = d, aes(x = x)) +
  geom_histogram(aes(y = after_stat(density)),
                 binwidth = 1, fill = "grey67",
                 color = "grey92", linewidth = 1/8) +
  stat_function(fun = dnorm, n = 101, args = list(mean = 8, sd = 6),
                color = "grey33", linetype = 2) +
  coord_cartesian(xlim = c(-5, 25)) +
  labs(x = "Data Values", 
       y = "Data Probability",
       subtitle = "The candidate normal distribution\nhas a mean of 8 and SD of 6.") +
  theme(panel.grid = element_blank())
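The text compares the two candidate distributions only by eye, but one rough way to quantify which candidate describes the simulated data better is to sum the log-likelihood of the data under each set of parameter values. Since the data were generated with a mean of 10 and SD of 5, the first candidate should score higher:

# Summed log-likelihood of the data under each candidate distribution
d |> 
  summarise(candidate_1 = sum(dnorm(x, mean = 10, sd = 5, log = TRUE)),
            candidate_2 = sum(dnorm(x, mean = 8,  sd = 6, log = TRUE)))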

2.3 The steps of Bayesian data analysis

In general, Bayesian analysis of data follows these steps:

  1. Identify the data relevant to the research questions. What are the measurement scales of the data? Which data variables are to be predicted, and which data variables are supposed to act as predictors?
  2. Define a descriptive model for the relevant data. The mathematical form and its parameters should be meaningful and appropriate to the theoretical purposes of the analysis.
  3. Specify a prior distribution on the parameters. The prior must pass muster with the audience of the analysis, such as skeptical scientists.
  4. Use Bayesian inference to re-allocate credibility across parameter values. Interpret the posterior distribution with respect to theoretically meaningful issues (assuming that the model is a reasonable description of the data; see next step).
  5. Check that the posterior predictions mimic the data with reasonable accuracy (i.e., conduct a “posterior predictive check”). If not, then consider a different descriptive model.

Perhaps the best way to explain these steps is with a realistic example of Bayesian data analysis. The discussion that follows is abbreviated for purposes of this introductory chapter, with many technical details suppressed. (p. 25)

I will show you a few more details than Kruschke did in the text. But just as he did, we’ll cover this workflow in much more detail in the chapters to come.

In order to recreate Figure 2.5, we need to generate the data and fit a model to those data. In his HtWtDataGenerator.R script, Kruschke provided the code for a function that will generate height/weight data of the kind in his text. Here we make a slightly revised version of the function called height_weight_generator(), which returns the data in a data frame.

height_weight_generator <- function(n, male_prob = 0.5, seed = 47405) {

  # Male MVN distribution parameters
  height_male_mu <- 69.18;    height_male_sd <- 2.87
  log_weight_male_mu <- 5.14; log_weight_male_sd <- 0.17
  male_rho <- 0.42
  male_mean <- c(height_male_mu, log_weight_male_mu)
  male_sigma <- matrix(c(height_male_sd^2, male_rho * height_male_sd * log_weight_male_sd,
                         male_rho * height_male_sd * log_weight_male_sd, log_weight_male_sd^2),
                       nrow = 2)

  # Female cluster 1 MVN distribution parameters
  height_female_mu_1 <- 63.11;    height_female_sd_1 <- 2.76
  log_weight_female_mu_1 <- 5.06; log_weight_female_sd_1 <- 0.24
  female_rho_1 <- 0.41
  female_mean_1 <- c(height_female_mu_1, log_weight_female_mu_1)
  female_sigma_1 <- matrix(c(height_female_sd_1^2, female_rho_1 * height_female_sd_1 * log_weight_female_sd_1,
                             female_rho_1 * height_female_sd_1 * log_weight_female_sd_1, log_weight_female_sd_1^2),
                           nrow = 2)

  # Female cluster 2 MVN distribution parameters
  height_female_mu_2 <- 64.36;    height_female_sd_2 <- 2.49
  log_weight_female_mu_2 <- 4.86; log_weight_female_sd_2 <- 0.14
  female_rho_2 <- 0.44
  female_mean_2 <- c(height_female_mu_2, log_weight_female_mu_2)
  female_sigma_2 <- matrix(c(height_female_sd_2^2, female_rho_2 * height_female_sd_2 * log_weight_female_sd_2,
                             female_rho_2 * height_female_sd_2 * log_weight_female_sd_2, log_weight_female_sd_2^2),
                           nrow = 2)

  # Female cluster probabilities
  female_prob_1 <- 0.46
  female_prob_2 <- 1 - female_prob_1

  # Randomly generate data values from those MVN distributions
  if (!is.null(seed)) set.seed(seed)
  datamatrix <- matrix(0, nrow = n, ncol = 3)
  colnames(datamatrix) <- c("male", "height", "weight")
  male_val <- 1; female_val <- 0  # Arbitrary coding values
  for (i in 1:n) {
    # Generate `sex`
    sex <- sample(c(male_val, female_val), size = 1, replace = TRUE, prob = c(male_prob, 1 - male_prob))
    if (sex == male_val) datum <- MASS::mvrnorm(n = 1, mu = male_mean, Sigma = male_sigma)
    if (sex == female_val) {
      female_cluster <- sample(c(1, 2), size = 1, replace = TRUE, prob = c(female_prob_1, female_prob_2))
      if (female_cluster == 1) datum <- MASS::mvrnorm(n = 1, mu = female_mean_1, Sigma = female_sigma_1)
      if (female_cluster == 2) datum <- MASS::mvrnorm(n = 1, mu = female_mean_2, Sigma = female_sigma_2)
    }
    datamatrix[i, ] <- c(sex, round(c(datum[1], exp(datum[2])), digits = 1))
  }

  data.frame(datamatrix)
}

Now that we have the height_weight_generator() function, all we need to do is determine how many cases to generate and the probability that any given case is drawn from the male distribution. These are controlled by the n and male_prob arguments, respectively.

d <- height_weight_generator(n = 57, male_prob = 0.5, seed = 2)

head(d)
  male height weight
1    0   62.6  109.4
2    0   63.3  154.3
3    1   71.8  155.0
4    0   67.9  146.2
5    0   64.4  135.3
6    0   66.8  119.0
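Before fitting the model, it can help to glance at the marginal structure of the simulated data. This quick check isn’t in the text:

# Count the cases and summarize height and weight by sex
d |> 
  group_by(male) |> 
  summarise(n           = n(),
            mean_height = mean(height),
            mean_weight = mean(weight))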

We’re about ready for the model, which we will fit with the Hamiltonian Monte Carlo (HMC) method via the brms package. We’ll introduce brms more fully in later chapters, and the package’s documentation and vignettes offer a good introduction, too.

The traditional use of diffuse and noninformative priors is discouraged with HMC, as is the uniform distribution for sigma. Instead, we’ll use weakly regularizing priors for the intercept and slope and a half Cauchy with a fairly large scale parameter for σ.
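Though Kruschke didn’t discuss this, if you’re ever unsure which parameter classes a given brms model expects priors for, you can ask with the get_prior() function before fitting:

# List the parameter classes (and their brms defaults) available for priors
get_prior(weight ~ 1 + height, data = d, family = gaussian)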

fit2.1 <- brm(
  data = d, 
  family = gaussian,
  weight ~ 1 + height,
  prior = c(prior(normal(0, 100), class = Intercept),
            prior(normal(0, 100), class = b),
            prior(cauchy(0, 10),  class = sigma)),
  chains = 4, cores = 4, iter = 2000, warmup = 1000,
  seed = 2,
  file = "fits/fit02.01")

If you wanted a quick model summary, you could execute print(fit2.1). Again, we’ll walk through that and other diagnostics in greater detail in later chapters. For now, here’s how we might make Figure 2.5.a.

# Extract the posterior draws
draws <- as_draws_df(fit2.1)

# This will subset the output
n_lines <- 150

# Plot!
draws |> 
  slice(1:n_lines) |> 

  ggplot() +
  geom_abline(aes(intercept = b_Intercept, slope = b_height, group = .draw),
              color = "grey50", linewidth = 1/4, alpha = .3) +
  geom_point(data = d,
             aes(x = height, y = weight),
             shape = 1) +
  labs(x = "Height in inches",
       y = "Weight in pounds",
       # The `eval(substitute(paste()))` trick came from: https://www.r-bloggers.com/value-of-an-r-object-in-an-expression/
       subtitle = eval(substitute(paste("Data with", n_lines, "credible regression lines")))) +
  coord_cartesian(xlim = c(55, 80),
                  ylim = c(50, 250)) +
  theme(panel.grid = element_blank())

For Figure 2.5.b, we’ll mark off the mode and 95% highest density interval (HDI) with help from the handy stat_histinterval() function from the tidybayes package.

draws |> 
  ggplot(aes(x = b_height, y = 0)) +
  stat_histinterval(point_interval = mode_hdi, .width = 0.95,
                    fill = "grey67", slab_color = "grey92",
                    breaks = 40, slab_linewidth = 0.2, outline_bars = TRUE) +
  scale_y_continuous(NULL, breaks = NULL) +
  coord_cartesian(xlim = c(0, 8)) +
  labs(x = expression(beta[1]~(slope)),
       title = "The posterior distribution",
       subtitle = "The mode and 95% HPD intervals are\nthe dot and horizontal line at the bottom.") +
  theme(panel.grid = element_blank())
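If you’d like the numeric values behind that plot, tidybayes also provides mode_hdi() as a standalone function:

# Posterior mode and 95% HDI for the slope
draws |> 
  mode_hdi(b_height, .width = 0.95)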

To make Figure 2.6, we use the brms::predict() function, which we’ll cover more fully in the pages to come.

nd <- tibble(height = seq(from = 53, to = 81, length.out = 20))

predict(fit2.1, newdata = nd) |>
  data.frame() |>
  bind_cols(nd) |> 

  ggplot(aes(x = height)) +
  geom_pointrange(aes(y = Estimate, ymin = Q2.5, ymax = Q97.5),
                  color = "grey67", shape = 20) +
  geom_point(data = d, 
             aes(y = weight),
             alpha = 2/3) +
  labs(x = "Height in inches",
       y = "Weight in pounds",
       subtitle = "Data with the percentile-based 95% intervals and\nthe means of the posterior predictions") +
  theme(panel.grid = element_blank())

The posterior predictions might be easier to depict with a ribbon and line instead.

nd <- tibble(height = seq(from = 53, to = 81, length.out = 30))

predict(fit2.1, newdata = nd) |>
  data.frame() |>
  bind_cols(nd) |> 

  ggplot(aes(x = height)) +
  geom_ribbon(aes(ymin = Q2.5, ymax = Q97.5), fill = "grey75") +
  geom_line(aes(y = Estimate),
            color = "grey92") +
  geom_point(data =  d, 
             aes(y = weight),
             alpha = 2/3) +
  labs(x = "Height in inches",
       y = "Weight in pounds",
       subtitle = "Data with the percentile-based 95% intervals and\nthe means of the posterior predictions") +
  theme(panel.grid = element_blank())
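Step 5 in the list above called for a posterior predictive check. We’ll cover those properly in later chapters, but as a preview, brms offers a quick default version with the pp_check() function:

# Compare the observed data with draws from the posterior predictive distribution
pp_check(fit2.1, ndraws = 100)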

“We have seen the five steps of Bayesian analysis in a fairly realistic example. This book explains how to do this sort of analysis for many different applications and types of descriptive models” (p. 30). Are you intrigued and excited? You should be. Welcome to Bayes, friends!

Session info

sessionInfo()
R version 4.4.3 (2025-02-28)
Platform: aarch64-apple-darwin20
Running under: macOS Ventura 13.4

Matrix products: default
BLAS:   /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libRblas.0.dylib 
LAPACK: /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libRlapack.dylib;  LAPACK version 3.12.0

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

time zone: America/Chicago
tzcode source: internal

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] tidybayes_3.0.7 brms_2.22.0     Rcpp_1.0.14     lubridate_1.9.3
 [5] forcats_1.0.0   stringr_1.5.1   dplyr_1.1.4     purrr_1.0.4    
 [9] readr_2.1.5     tidyr_1.3.1     tibble_3.2.1    ggplot2_3.5.1  
[13] tidyverse_2.0.0

loaded via a namespace (and not attached):
 [1] tidyselect_1.2.1     svUnit_1.0.6         farver_2.1.2        
 [4] loo_2.8.0            fastmap_1.1.1        TH.data_1.1-2       
 [7] tensorA_0.36.2.1     digest_0.6.37        estimability_1.5.1  
[10] timechange_0.3.0     lifecycle_1.0.4      StanHeaders_2.32.10 
[13] survival_3.8-3       magrittr_2.0.3       posterior_1.6.0     
[16] compiler_4.4.3       rlang_1.1.5          tools_4.4.3         
[19] utf8_1.2.4           yaml_2.3.8           knitr_1.49          
[22] labeling_0.4.3       bridgesampling_1.1-2 htmlwidgets_1.6.4   
[25] pkgbuild_1.4.4       curl_6.0.1           multcomp_1.4-26     
[28] abind_1.4-8          withr_3.0.2          grid_4.4.3          
[31] stats4_4.4.3         xtable_1.8-4         colorspace_2.1-1    
[34] inline_0.3.19        emmeans_1.10.3       scales_1.3.0        
[37] MASS_7.3-64          cli_3.6.4            mvtnorm_1.2-5       
[40] rmarkdown_2.29       generics_0.1.3       RcppParallel_5.1.7  
[43] rstudioapi_0.16.0    tzdb_0.4.0           rstan_2.32.6        
[46] splines_4.4.3        bayesplot_1.11.1     parallel_4.4.3      
[49] matrixStats_1.4.1    vctrs_0.6.5          V8_4.4.2            
[52] Matrix_1.7-2         sandwich_3.1-1       jsonlite_1.8.9      
[55] hms_1.1.3            arrayhelpers_1.1-0   ggdist_3.3.2        
[58] glue_1.8.0           codetools_0.2-20     distributional_0.5.0
[61] stringi_1.8.4        gtable_0.3.6         QuickJSR_1.1.3      
[64] munsell_0.5.1        pillar_1.10.2        htmltools_0.5.8.1   
[67] Brobdingnag_1.2-9    R6_2.6.1             evaluate_1.0.1      
[70] lattice_0.22-6       backports_1.5.0      rstantools_2.4.0    
[73] coda_0.19-4.1        gridExtra_2.3        nlme_3.1-167        
[76] checkmate_2.3.2      xfun_0.49            zoo_1.8-12          
[79] pkgconfig_2.0.3