22 Nominal Predicted Variable

This chapter considers data structures that have a nominal predicted variable. When the nominal predicted variable has only two possible values, this reduces to the case of the dichotomous predicted variable considered in the previous chapter. In the present chapter, we generalize to cases in which the predicted variable has three or more categorical values…

The traditional treatment of this sort of data structure is called multinomial logistic regression or conditional logistic regression. We will consider Bayesian approaches to these methods. As usual, in Bayesian software it is easy to generalize the traditional models so they are robust to outliers, allow different variances within levels of a nominal predictor, and have hierarchical structure to share information across levels or factors as appropriate. (p. 649)

22.1 Softmax regression

“The key descriptor of the [models in this chapter is their] inverse-link function, which is the softmax function (which will be defined below). Therefore, [Kruschke] refer[red] to the method as softmax regression instead of multinomial logistic regression” (p. 650).

Say we have a metric predictor \(x\) and a nominal criterion with \(k\) categories, each with its own linear propensity \(\lambda_k\). We can express the basic linear model as

\[\lambda_k = \beta_{0, k} + \beta_{1, k} x,\]

for which the subscripts \(k\) indicate there’s a linear model for each of the \(k\) categories. We call the possible set of \(k\) outcomes \(S\). Taking the case where \(k = 3\), we’d have

\[\begin{align*} \lambda_{[1]} & = \beta_{0, [1]} + \beta_{1, [1]} x, \\ \lambda_{[2]} & = \beta_{0, [2]} + \beta_{1, [2]} x, \text{and} \\ \lambda_{[3]} & = \beta_{0, [3]} + \beta_{1, [3]} x. \end{align*}\]

In this scenario, what we want to know are the probabilities of the outcomes corresponding to \(\lambda_{[1]}\), \(\lambda_{[2]}\), and \(\lambda_{[3]}\). The probability of a given outcome \(k\) follows the formula

\[\phi_k = \operatorname{softmax}_S (\{\lambda_k\}) = \frac{\exp (\lambda_k)}{\sum_{c \in S} \exp (\lambda_c)}\]

In words, [the equation] says that the probability of outcome \(k\) is the exponentiated linear propensity of outcome \(k\) relative to the sum of exponentiated linear propensities across all outcomes in the set \(S\). You may be wondering, Why exponentiate? Intuitively, we have to go from propensities that can have negative values to probabilities that can only have non-negative values, and we have to preserve order. The exponential function satisfies that need. (p. 650)
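If it helps, here’s the softmax function expressed as a few lines of R. This is a bare-bones sketch for building intuition; it skips the numerical-stability tweaks (e.g., subtracting `max(lambda)` before exponentiating) a more careful implementation would include.

```r
softmax <- function(lambda) {
  # exponentiate the propensities and normalize so they sum to 1
  exp(lambda) / sum(exp(lambda))
}

# three made-up propensities
softmax(c(0, 1.5, -0.5))
```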

There are indeterminacies in the system of equations Kruschke covered in this section, the upshot of which is that we’ll end up making one of the \(k\) categories the reference category, which we term \(r\). Continuing on with our univariable model, we choose convenient constants for the parameters of \(r\): \(\beta_{0, r} = 0\) and \(\beta_{1, r} = 0\). As such, the regression coefficients for the remaining categories are relative to those for \(r\).
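Here’s a quick numeric look at that indeterminacy (the lambda values are made up for illustration). Shifting every propensity by the same constant returns the very same probabilities, which is why fixing the coefficients for the reference category at 0 costs us nothing.

```r
lambda <- c(0, 1.5, -0.5)

# probabilities from the original propensities
exp(lambda) / sum(exp(lambda))

# shift every propensity by the same constant; the probabilities are unchanged
exp(lambda + 2) / sum(exp(lambda + 2))
```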

Kruschke saved the data for Figure 22.1 in the SoftmaxRegData1.csv and SoftmaxRegData2.csv files.

## Observations: 475
## Variables: 3
## $ X1 <dbl> -0.08714736, -0.72256565, 0.17918961, -1.15975176, -0.72711762, 0.53341559, -0.1893265…
## $ X2 <dbl> -1.08134218, -1.58386308, 0.97179045, 0.50262438, 1.37570446, 1.77465062, -0.53727640,…
## $ Y  <dbl> 2, 1, 3, 3, 3, 3, 1, 4, 2, 2, 3, 2, 4, 4, 4, 1, 2, 3, 3, 3, 3, 2, 1, 1, 3, 2, 3, 2, 4,…
## Observations: 475
## Variables: 3
## $ X1 <dbl> -0.08714736, -0.72256565, 0.17918961, -1.15975176, -0.72711762, 0.53341559, -0.1893265…
## $ X2 <dbl> -1.08134218, -1.58386308, 0.97179045, 0.50262438, 1.37570446, 1.77465062, -0.53727640,…
## $ Y  <dbl> 2, 2, 3, 3, 3, 3, 2, 3, 2, 1, 3, 2, 4, 3, 3, 2, 2, 1, 1, 3, 3, 4, 2, 2, 3, 2, 3, 4, 4,…

Let’s bind the two data frames together and plot in bulk.
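Here’s a sketch of what that might look like. I’m assuming the two files were read in as `d1` and `d2` (`d1` is the name that shows up in the brm() output later on; `d2` is just my label for the second file); adjust the file paths to wherever you saved the data.

```r
library(tidyverse)

d1 <- read_csv("SoftmaxRegData1.csv")
d2 <- read_csv("SoftmaxRegData2.csv")

# bind and plot in bulk
bind_rows(d1, d2, .id = "data") %>% 
  ggplot(aes(x = X1, y = X2, label = Y, color = factor(Y))) +
  geom_text(size = 3, show.legend = FALSE) +
  facet_wrap(~ data)
```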

22.1.1 Softmax reduces to logistic for two outcomes.

“When there are only two outcomes, the softmax formulation reduces to the logistic regression of Chapter 21” (p. 653)
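Just to make that explicit, consider a two-outcome set \(S = \{1, 2\}\) with outcome 1 as the reference, so \(\lambda_{[1]} = 0\). Then

\[\begin{align*} \phi_{[2]} & = \frac{\exp (\lambda_{[2]})}{\exp (\lambda_{[1]}) + \exp (\lambda_{[2]})} \\ & = \frac{\exp (\lambda_{[2]})}{1 + \exp (\lambda_{[2]})} \\ & = \frac{1}{1 + \exp (-\lambda_{[2]})} = \operatorname{logistic} (\lambda_{[2]}), \end{align*}\]

which is the inverse-link function we used throughout the last chapter.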

22.1.2 Independence from irrelevant attributes.

An important property of the softmax function of Equation 22.2 is known as independence from irrelevant attributes (Luce, 1959, 2008). The model implies that the ratio of probabilities of two outcomes is the same regardless of what other possible outcomes are included in the set. Let \(S\) denote the set of possible outcomes. Then, from the definition of the softmax function, the ratio of outcomes \(j\) and \(k\) is

\[\frac{\phi_j}{\phi_k} = \frac{\exp (\lambda_j) / \sum_{c \in S} \exp (\lambda_c)}{\exp (\lambda_k) / \sum_{c \in S} \exp (\lambda_c)}\]

The summation in the denominators cancels and has no effect on the ratio of probabilities. Obviously if we changed the set of outcomes \(S\) to any other set \(S^*\) that still contains outcomes \(j\) and \(k\), the summation \(\sum_{c \in S^*}\) would still cancel and have no effect on the ratio of probabilities. (p. 654)

Just to walk that denominators-canceling business out a little further,

\[\begin{align*} \frac{\phi_j}{\phi_k} & = \frac{\exp (\lambda_j) / \sum_{c \in S} \exp (\lambda_c)}{\exp (\lambda_k) / \sum_{c \in S} \exp (\lambda_c)} \\ & = \frac{\exp (\lambda_j)}{\exp (\lambda_k)}. \end{align*}\]

Thus even in the case of a very different set of possible outcomes \(S^\text{very different}\), it remains that \(\frac{\phi_j}{\phi_k} = \frac{\exp (\lambda_j)}{\exp (\lambda_k)}\).

Getting more applied, here’s a tibble presentation of Kruschke’s commute example with three modes of transportation.

## # A tibble: 3 x 3
##   mode      preference `chance %`
##   <chr>          <int>      <dbl>
## 1 walking            3       50  
## 2 bicycling          2       33.3
## 3 bussing            1       16.7

Sticking with the example, if we take bicycling out of the picture, the preference values remain, but the chance % values change.

## # A tibble: 2 x 3
##   mode    preference `chance %`
##   <chr>        <dbl>      <dbl>
## 1 walking          3         75
## 2 bussing          1         25

Though the individual chance % values change, the walking/bussing ratio remains the same 3 to 1, which is just what the independence-from-irrelevant-attributes property implies.
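In case you’d like to see how those tables might have been put together, here’s one way to compute the chance % values from the raw preference weights.

```r
library(tidyverse)

# the full set of three options
tibble(mode       = c("walking", "bicycling", "bussing"),
       preference = c(3, 2, 1)) %>% 
  mutate(`chance %` = 100 * preference / sum(preference))

# drop bicycling; the walking/bussing ratio is unchanged, but the percentages shift
tibble(mode       = c("walking", "bussing"),
       preference = c(3, 1)) %>% 
  mutate(`chance %` = 100 * preference / sum(preference))
```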

22.2 Conditional logistic regression

Softmax regression conceives of each outcome as an independent change in log odds from the reference outcome, and a special case of that is dichotomous logistic regression. But we can generalize logistic regression another way, which may better capture some patterns of data. The idea of this generalization is that we divide the set of outcomes into a hierarchy of two-set divisions, and use a logistic to describe the probability of each branch of the two-set divisions. (p. 655)

The model follows the generic equation

\[\begin{align*} \phi_{S^* | S} & = \operatorname{logistic} (\lambda_{S^* | S}), \\ \lambda_{S^* | S} & = \beta_{0, S^* | S} + \beta_{1, S^* | S} x, \end{align*}\]

where the conditional response probability (i.e., the goal of the analysis) is \(\phi_{S^* | S}\). \(S^*\) and \(S\) denote the subset of outcomes and larger set of outcomes, respectively, and \(\lambda_{S^* | S}\) is the propensity based on some linear model.
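To make the hierarchy idea concrete, here is one possible division structure for four outcomes (the specific splits are just for illustration): first split \(\{1\}\) versus \(\{2, 3, 4\}\), then \(\{2\}\) versus \(\{3, 4\}\), then \(\{3\}\) versus \(\{4\}\). The outcome probabilities then compose as

\[\begin{align*} \phi_1 & = \phi_{\{1\} | S}, \\ \phi_2 & = \phi_{\{2\} | \{2, 3, 4\}} \left (1 - \phi_{\{1\} | S} \right ), \\ \phi_3 & = \phi_{\{3\} | \{3, 4\}} \left (1 - \phi_{\{2\} | \{2, 3, 4\}} \right ) \left (1 - \phi_{\{1\} | S} \right ), \text{and} \\ \phi_4 & = \left (1 - \phi_{\{3\} | \{3, 4\}} \right ) \left (1 - \phi_{\{2\} | \{2, 3, 4\}} \right ) \left (1 - \phi_{\{1\} | S} \right ), \end{align*}\]

where each conditional probability on the right comes from its own logistic regression, as in the generic equation above.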

Kruschke saved the data for Figure 22.3 in the CondLogistRegData1.csv and CondLogistRegData2.csv files.

## Observations: 475
## Variables: 3
## $ X1 <dbl> -0.08714736, -0.72256565, 0.17918961, -1.15975176, -0.72711762, 0.53341559, -0.1893265…
## $ X2 <dbl> -1.08134218, -1.58386308, 0.97179045, 0.50262438, 1.37570446, 1.77465062, -0.53727640,…
## $ Y  <dbl> 2, 1, 3, 1, 3, 3, 2, 3, 2, 4, 1, 2, 2, 3, 4, 2, 2, 4, 2, 3, 4, 2, 1, 1, 1, 2, 1, 2, 3,…
## Observations: 475
## Variables: 3
## $ X1 <dbl> -0.08714736, -0.72256565, 0.17918961, -1.15975176, -0.72711762, 0.53341559, -0.1893265…
## $ X2 <dbl> -1.08134218, -1.58386308, 0.97179045, 0.50262438, 1.37570446, 1.77465062, -0.53727640,…
## $ Y  <dbl> 4, 4, 3, 4, 2, 3, 4, 3, 4, 4, 2, 4, 4, 3, 3, 4, 4, 4, 4, 3, 4, 4, 1, 1, 2, 4, 3, 4, 3,…

Let’s bind the two data frames together and plot in bulk.
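The code mirrors the block from the softmax section. Here I’m assuming the two files came in as `d3` and `d4`, which are just my labels.

```r
library(tidyverse)

d3 <- read_csv("CondLogistRegData1.csv")
d4 <- read_csv("CondLogistRegData2.csv")

bind_rows(d3, d4, .id = "data") %>% 
  ggplot(aes(x = X1, y = X2, label = Y, color = factor(Y))) +
  geom_text(size = 3, show.legend = FALSE) +
  facet_wrap(~ data)
```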

22.3 Implementation in brms

22.3.1 Softmax model.

Kruschke pointed out in his Figure 22.4 and the surrounding prose that we speak of the categorical distribution when fitting softmax models. Our brms paradigm will be much the same. To fit a softmax model with the brm() function, you specify family = categorical. The default is to use the logit link. In his Parameterization of Response Distributions in brms vignette, Bürkner clarified:

The categorical family is currently only implemented with the multivariate logit link function and has density

\[f(y) = \mu_y = \frac{\exp (\eta_y)}{\sum_{k = 1}^K \exp (\eta_k)}\]

Note that \(\eta\) does also depend on the category \(k\). For reasons of identifiability, \(\eta_1\) is set to \(0\).

Though there’s no explicit softmax talk in that vignette, you can find it documented in his code here, starting in line 1891.

22.3.2 Conditional logistic model.

The conditional logistic regression models are not natively supported in brms at this time. However, if you follow issue #560, you’ll see there are ways to fit them using the nonlinear syntax. If you compare the syntax Bürkner used in that thread on January 30th to the JAGS syntax Kruschke showed on pages 661 and 662, you’ll see they appear to follow contrasting parameterizations.

I think that’s about as far as I’m going with this model type at this time. If you work through the solution, please share your code.

22.3.3 Results: Interpreting the regression coefficients.

22.3.3.1 Softmax model.

Load brms.
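```r
library(brms)
```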

Along with Kruschke, we’ll be modeling the d1 data. In case it’s not clear, the X1 and X2 variables are already in a standardized metric.
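Here’s a quick way to confirm that, which is presumably close to what produced the little summary below.

```r
d1 %>% 
  pivot_longer(cols = c(X1, X2)) %>% 
  group_by(name) %>% 
  summarise(mean = mean(value) %>% round(digits = 1),
            sd   = sd(value)   %>% round(digits = 1))
```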

## # A tibble: 2 x 3
##   name   mean    sd
##   <chr> <dbl> <dbl>
## 1 X1        0     1
## 2 X2        0     1

This will make it easier to set the priors. Here we’ll just use the rather wide priors Kruschke indicated on page 662.
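Here’s a sketch of what the fitting code might look like. The formula, family, and sampler settings follow from the summary output below; the normal(0, 20) priors are my stand-in for Kruschke’s “rather wide” priors, and the `cores` and `seed` arguments and the `fit22.1` name are just my choices.

```r
fit22.1 <-
  brm(data = d1,
      family = categorical(link = logit),
      Y ~ 0 + intercept + X1 + X2,
      prior = c(prior(normal(0, 20), class = b, dpar = mu2),
                prior(normal(0, 20), class = b, dpar = mu3),
                prior(normal(0, 20), class = b, dpar = mu4)),
      iter = 2000, warmup = 1000, chains = 4, cores = 4,
      seed = 22)
```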

Since it’s the default, we didn’t have to include the (link = logit) bit in the family argument. I’m just being explicit for the sake of pedagogy. Take a look at the parameter summary.
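(Using the `fit22.1` name from my sketch above.)

```r
print(fit22.1)
```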

##  Family: categorical 
##   Links: mu2 = logit; mu3 = logit; mu4 = logit 
## Formula: Y ~ 0 + intercept + X1 + X2 
##    Data: d1 (Number of observations: 475) 
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
##          total post-warmup samples = 4000
## 
## Population-Level Effects: 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## mu2_intercept     3.37      0.59     2.25     4.57 1.00     1610     1593
## mu2_X1            5.57      0.72     4.25     7.08 1.00     2606     2483
## mu2_X2            0.81      0.49    -0.13     1.77 1.00     1767     2266
## mu3_intercept     2.04      0.66     0.76     3.33 1.00     1716     1604
## mu3_X1            0.71      0.57    -0.43     1.80 1.00     1961     2340
## mu3_X2            5.98      0.70     4.68     7.43 1.00     2358     2141
## mu4_intercept    -0.43      0.89    -2.22     1.24 1.00     2158     2248
## mu4_X1           12.38      1.15    10.25    14.72 1.00     2567     2542
## mu4_X2            3.55      0.63     2.35     4.84 1.00     1836     2325
## 
## Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample 
## is a crude measure of effective sample size, and Rhat is the potential 
## scale reduction factor on split chains (at convergence, Rhat = 1).

As indicated in the formulas above, we get posteriors for each level of Y except Y == 1, which serves as the reference category. The values for \(\beta_{i, k = 1}\) are all fixed at \(0\).

Here’s how we might make the histograms in Figure 22.5.
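Here’s one way to get there, assuming the tidyverse is loaded and using the `fit22.1` object from my sketch above; the aesthetics are nothing fancy.

```r
posterior_samples(fit22.1) %>% 
  select(starts_with("b_")) %>% 
  pivot_longer(everything()) %>% 
  
  ggplot(aes(x = value)) +
  geom_histogram(bins = 30) +
  facet_wrap(~ name, scales = "free_x")
```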

Because the \(\beta\) values for \(\lambda_{[1]}\) are all fixed to 0, we left those plots out of our version of the figure. If you really wanted them, you’d have to enter the corresponding cells into the data before plotting. If you summarize each parameter by its posterior mean, round(), and wrangle a little, you can arrange the results in a similar way to how the equations for \(\lambda_2\) through \(\lambda_4\) are displayed on the left side of Figure 22.5.
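Here’s roughly how I did that wrangling, pulling the posterior means with fixef() (again, `fit22.1` is my object name and the tidyverse is assumed loaded).

```r
fixef(fit22.1) %>% 
  as_tibble(rownames = "parameter") %>% 
  separate(parameter, into = c("lambda", "coefficient"), sep = "_") %>% 
  mutate(lambda      = str_replace(lambda, "mu", "lambda[") %>% str_c("]"),
         coefficient = recode(coefficient,
                              intercept = "beta[0]", X1 = "beta[1]", X2 = "beta[2]"),
         Estimate    = round(Estimate, digits = 1)) %>% 
  select(lambda, coefficient, Estimate) %>% 
  pivot_wider(names_from = coefficient, values_from = Estimate)
```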

## # A tibble: 3 x 4
## # Groups:   lambda [3]
##   lambda    `beta[0]` `beta[1]` `beta[2]`
##   <chr>         <dbl>     <dbl>     <dbl>
## 1 lambda[2]       3.4       5.6       0.8
## 2 lambda[3]       2         0.7       6  
## 3 lambda[4]      -0.4      12.4       3.6

As Kruschke mentioned in the text, “the estimated parameter values should be near the generating values, but not exactly the same because the data are merely a finite random sample” (pp. 662–663). Furthermore,

interpreting the parameters is always contextualized relative to the model. For the softmax model, the regression coefficient for outcome \(k\) on predictor \(x_j\) indicates the rate at which the log odds of that outcome increase relative to the reference outcome for a one unit increase in \(x_j\), assuming that a softmax model is a reasonable description of the data. (p. 663)

Unfortunately, this makes the parameters difficult to interpret directly. Kruschke didn’t show a plot like this, but it might be helpful for understanding what the model implies about the outcome probabilities across the predictor space. Here we’ll use the fitted() function to return the conditional probabilities of all four response options for Y based on various combinations of X1 and X2.
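Here’s the kind of code I have in mind, sketched over a hypothetical grid of predictor values and again using my `fit22.1` object. I’m assuming fitted() returns its usual three-dimensional array (observations by summary statistics by response categories) when applied to a categorical model.

```r
# a grid of hypothetical predictor values
nd <- crossing(X1 = seq(from = -2, to = 2, length.out = 50),
               X2 = seq(from = -2, to = 2, length.out = 50))

f <- fitted(fit22.1, newdata = nd)

# keep the posterior-mean slice, attach the grid, and plot
f[, "Estimate", ] %>% 
  as_tibble() %>% 
  set_names(str_c("Y = ", 1:4)) %>% 
  bind_cols(nd) %>% 
  pivot_longer(starts_with("Y ="), values_to = "probability") %>% 
  
  ggplot(aes(x = X1, y = X2, fill = probability)) +
  geom_raster() +
  facet_wrap(~ name)
```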

Now use that plot while you walk through the final paragraph in this subsection.

It is easy to transform the estimated parameter values to a different reference category. Recall from Equation 22.3 (p. 651) that arbitrary constants can be added to all the regression coefficients without changing the model prediction. Therefore, to change the parameter estimates so they are relative to outcome \(R\), we simply subtract \(\beta_{j, R}\) from \(\beta_{j, k}\) for all predictors \(j\) and all outcomes \(k\). We do this at every step in the MCMC chain. For example, in Figure 22.5, consider the regression coefficient on \(x_1\) for outcome 2. Relative to reference outcome 1, this coefficient is positive, meaning that the probability of outcome 2 increases relative to outcome 1 when \(x_1\) increases. You can see this in the data graph, as the region of 2’s falls to the right side (positive \(x_1\) direction) of the region of 1’s. But if the reference outcome is changed to outcome 4, then the coefficient on \(x_1\) for outcome 2 changes to a negative value. Algebraically this happens because the coefficient on \(x_1\) for outcome 4 is larger than for outcome 2, so when the coefficient for outcome 4 is subtracted, the result is a negative value for the coefficient on outcome 2. Visually, you can see this in the data graph, as the region of 2’s falls to the left side (negative \(x_1\) direction) of the region of 4’s. Thus, interpreting regression coefficients in a softmax model is rather different than in linear regression. In linear regression, a positive regression coefficient implies that \(y\) increases when the predictor increases. But not in softmax regression, where a positive regression coefficient is only positive with respect to a particular reference outcome. (p. 664, emphasis added)

22.3.3.2 Conditional logistic model.

I’m not pursuing this model type at this time. If you work through the solution, please share your code.

22.4 Generalizations and variations of the models

These models can be generalized to include different kinds of predictors, variants robust to outliers, and model comparison via information criteria and so forth. Also, you can find a couple more examples with softmax regression in Chapter 10 of McElreath’s Statistical Rethinking.

Session info

## R version 3.6.0 (2019-04-26)
## Platform: x86_64-apple-darwin15.6.0 (64-bit)
## Running under: macOS High Sierra 10.13.6
## 
## Matrix products: default
## BLAS:   /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRlapack.dylib
## 
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] tidybayes_1.1.0 brms_2.10.3     Rcpp_1.0.2      forcats_0.4.0   stringr_1.4.0   dplyr_0.8.3    
##  [7] purrr_0.3.3     tidyr_1.0.0     tibble_2.1.3    ggplot2_3.2.1   tidyverse_1.2.1 readr_1.3.1    
## 
## loaded via a namespace (and not attached):
##   [1] nlme_3.1-139              matrixStats_0.55.0        xts_0.11-2               
##   [4] lubridate_1.7.4           threejs_0.3.1             httr_1.4.0               
##   [7] rstan_2.19.2              tools_3.6.0               backports_1.1.5          
##  [10] utf8_1.1.4                R6_2.4.0                  DT_0.9                   
##  [13] lazyeval_0.2.2            colorspace_1.4-1          withr_2.1.2              
##  [16] prettyunits_1.0.2         processx_3.4.1            tidyselect_0.2.5         
##  [19] gridExtra_2.3             Brobdingnag_1.2-6         compiler_3.6.0           
##  [22] cli_1.1.0                 rvest_0.3.4               HDInterval_0.2.0         
##  [25] arrayhelpers_1.0-20160527 xml2_1.2.0                shinyjs_1.0              
##  [28] labeling_0.3              colourpicker_1.0          scales_1.0.0             
##  [31] dygraphs_1.1.1.6          callr_3.3.2               ggridges_0.5.1           
##  [34] StanHeaders_2.19.0        digest_0.6.21             rmarkdown_1.13           
##  [37] base64enc_0.1-3           pkgconfig_2.0.3           htmltools_0.4.0          
##  [40] htmlwidgets_1.5           rlang_0.4.1               readxl_1.3.1             
##  [43] rstudioapi_0.10           shiny_1.3.2               svUnit_0.7-12            
##  [46] generics_0.0.2            zoo_1.8-6                 jsonlite_1.6             
##  [49] crosstalk_1.0.0           gtools_3.8.1              inline_0.3.15            
##  [52] magrittr_1.5              loo_2.1.0                 bayesplot_1.7.0          
##  [55] Matrix_1.2-17             munsell_0.5.0             fansi_0.4.0              
##  [58] abind_1.4-5               lifecycle_0.1.0           stringi_1.4.3            
##  [61] yaml_2.2.0                ggstance_0.3.2            pkgbuild_1.0.5           
##  [64] plyr_1.8.4                grid_3.6.0                parallel_3.6.0           
##  [67] promises_1.1.0            crayon_1.3.4              miniUI_0.1.1.1           
##  [70] lattice_0.20-38           haven_2.1.0               hms_0.4.2                
##  [73] ps_1.3.0                  zeallot_0.1.0             knitr_1.23               
##  [76] pillar_1.4.2              igraph_1.2.4.1            markdown_1.1             
##  [79] shinystan_2.5.0           stats4_3.6.0              reshape2_1.4.3           
##  [82] rstantools_2.0.0          glue_1.3.1.9000           evaluate_0.14            
##  [85] modelr_0.1.4              vctrs_0.2.0               httpuv_1.5.2             
##  [88] cellranger_1.1.0          gtable_0.3.0              assertthat_0.2.1         
##  [91] xfun_0.10                 mime_0.7                  xtable_1.8-4             
##  [94] broom_0.5.2               coda_0.19-3               later_1.0.0              
##  [97] rsconnect_0.8.15          viridisLite_0.3.0         shinythemes_1.1.2        
## [100] bridgesampling_0.7-2