1.3 Bayesian reports: Decision theory under uncertainty
The Bayesian framework allows reporting the full posterior distribution. However, some situations demand reporting a specific value of the posterior distribution (point estimate), an informative interval (set), point or interval predictions, and/or selecting a specific model. Decision theory offers an elegant framework for deciding which posterior values are optimal to report (J. O. Berger 2013).
The point of departure is a loss function, a non-negative real-valued function whose arguments are the unknown state of nature (\(\mathbf{\Theta}\)) and the set of feasible actions (\(\mathcal{A}\)), that is, \[\begin{equation} L(\mathbf{\theta}, a):\mathbf{\Theta}\times \mathcal{A}\rightarrow \mathbb{R}^+. \end{equation}\]
This function is a mathematical expression of the loss incurred by making mistakes, in particular, by selecting action \(a\in\mathcal{A}\) when \(\mathbf{\theta}\in\mathbf{\Theta}\) is the true state of nature. In our case, the unknown state of nature can be parameters, functions of them, future or unobserved realizations, models, etc.
From a Bayesian perspective, we should choose the action (\(a^*(\mathbf{y})\)) that minimizes the posterior expected loss, which is the posterior risk function (\(\mathbb{E}[L(\mathbf{\theta}, a)|\mathbf{y}]\)),
\[\begin{equation} a^*(\mathbf{y})=\underset{a \in \mathcal{A}}{\mathrm{argmin}} \ \mathbb{E}[L(\mathbf{\theta}, a)|\mathbf{y}], \end{equation}\]
where \(\mathbb{E}[L(\mathbf{\theta}, a)|\mathbf{y}]= \int_{\mathbf{\Theta}} L(\mathbf{\theta}, a)\pi(\mathbf{\theta}|\mathbf{y})d\mathbf{\theta}\).13
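When the posterior expected loss has no closed form, it can be approximated by Monte Carlo using posterior draws, and the optimal action can be found by searching over a grid of candidate actions. The following sketch illustrates the idea; the Gamma posterior draws, the loss function, and the grid are placeholders chosen only for illustration.
# Minimal sketch: approximate the posterior expected loss by Monte Carlo
# and minimize it over a grid of candidate actions (placeholder inputs).
PostRisk <- function(a, draws, loss) mean(loss(draws, a)) # approximates E[L(theta, a) | y]
draws <- rgamma(10000, shape = 2, scale = 0.5)            # placeholder posterior draws
loss <- function(theta, a) (theta - a)^2                  # e.g., quadratic loss
grid <- seq(min(draws), max(draws), length.out = 1000)    # candidate actions
risk <- sapply(grid, PostRisk, draws = draws, loss = loss)
aStar <- grid[which.min(risk)]                            # approximate optimal action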
Different loss functions imply different optimal decisions. We illustrate this assuming \(\theta \in \mathbb{R}\).
- The quadratic loss function, \(L({\theta},a)=[{\theta}-a]^2\), gives the posterior mean, \(a^*(\mathbf{y})=\mathbb{E}[{\theta}|\mathbf{y}]\), as the optimal decision; that is,
\[\begin{equation} \mathbb{E}[{\theta}|\mathbf{y}] = \underset{a \in \mathcal{A}}{\mathrm{argmin}} \ \int_{{\Theta}} [{\theta}-a]^2\pi({\theta}|\mathbf{y})d{\theta}. \end{equation}\]
To obtain this result, use the first-order condition: differentiate the risk function with respect to \(a\), interchange the order of differentiation and integration, and set the derivative equal to zero. Then, \(-2\int_{{\Theta}} [{\theta}-a^*]\pi({\theta}|\mathbf{y})d{\theta}=0\) implies that \(a^*(\mathbf{y})=a^*\int_{{\Theta}} \pi({\theta}|\mathbf{y})d{\theta}=\int_{{\Theta}} {\theta}\pi({\theta}|\mathbf{y})d{\theta}=\mathbb{E}[{\theta}|\mathbf{y}]\); that is, the posterior mean is the Bayesian optimal action. This means that we should report the posterior mean as the point estimate of \(\theta\) when facing the quadratic loss function.
- The generalized quadratic loss function, \(L({\theta},a)=w({\theta})[{\theta}-a]^2\), where \(w({\theta})>0\) is a weighting function, gives a weighted posterior mean as the optimal decision rule. Following the same steps as in the previous result, we get \(a^*(\mathbf{y})=\frac{\mathbb{E}[w({\theta})\times{\theta}|\mathbf{y}]}{\mathbb{E}[w({\theta})|\mathbf{y}]}\). Observe that the weighted average is driven by the weighting function \(w({\theta})\).
- The absolute error loss function, \(L({\theta},a)=|{\theta}-a|\), gives the posterior median as the optimal action (Exercise 5).
- The generalized absolute error loss function,
\[\begin{equation} L(\theta,a)=\begin{Bmatrix} K_0(\theta-a), \theta-a\geq 0\\ K_1(a-\theta), \theta-a < 0 \end{Bmatrix}, K_0, K_1 >0, \end{equation}\]
implies the following risk function,
\[\begin{align} \mathbb{E}[L(\theta, a)|\mathbf{y}]&=\int_{-\infty}^a K_1(a-\theta)\pi(\theta|\mathbf{y})d\theta + \int_a^{\infty} K_0(\theta-a)\pi(\theta|\mathbf{y})d\theta. \end{align}\]
Differentiating with respect to \(a\), interchanging the order of differentiation and integration, and equating to zero,
\[\begin{align} K_1\int_{-\infty}^{a^*} \pi(\theta|\mathbf{y})d\theta-K_0\int_{a^*}^{\infty} \pi(\theta|\mathbf{y})d\theta&=0, \end{align}\]
then, \(\int_{-\infty}^{a^*} \pi(\theta|\mathbf{y})d\theta=\frac{K_0}{K_0+K_1}\); that is, any \(K_0/(K_0+K_1)\) quantile of \(\pi(\theta|\mathbf{y})\) is an optimal Bayesian point estimate of \(\theta\).
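A quick numerical check of these three rules, using the same Monte Carlo idea sketched above: under the quadratic, absolute, and generalized absolute loss functions, the grid minimizers should be close to the posterior mean, the posterior median, and the \(K_0/(K_0+K_1)\) quantile of the draws, respectively. The Gamma draws below are placeholders.
# Minimal sketch: numerical check of the optimal point estimates (placeholder posterior draws).
draws <- rgamma(50000, shape = 2, scale = 0.5)
K0 <- 2; K1 <- 1
grid <- seq(min(draws), max(draws), length.out = 2000)
RiskQua <- sapply(grid, function(a) mean((draws - a)^2))  # quadratic loss
RiskAbs <- sapply(grid, function(a) mean(abs(draws - a))) # absolute loss
RiskGen <- sapply(grid, function(a) mean(ifelse(draws >= a, K0*(draws - a), K1*(a - draws)))) # generalized absolute loss
c(grid[which.min(RiskQua)], mean(draws))                   # close to the posterior mean
c(grid[which.min(RiskAbs)], median(draws))                 # close to the posterior median
c(grid[which.min(RiskGen)], quantile(draws, K0/(K0 + K1))) # close to the K0/(K0+K1) quantile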
We can also use decision theory under uncertainty in hypothesis testing. In particular, when testing \(H_0:\theta\in\Theta_0\) versus \(H_1:\theta\in\Theta_1\), where \(\Theta=\Theta_0 \cup \Theta_1\) and \(\Theta_0 \cap \Theta_1=\emptyset\), there are two actions of interest, \(a_0\) and \(a_1\), where \(a_j\) denotes not rejecting \(H_j\), \(j=\left\{0,1\right\}\).
Given the \(0-K_j\) loss function,
\[\begin{equation} L(\theta,a_j)=\begin{Bmatrix} 0, & \theta\in\Theta_j\\ K_j, & \theta\in\Theta_i, i\neq j \end{Bmatrix}, \end{equation}\]
where there is no loss if the right decision is made, for instance, not rejecting \(H_0\) when \(\theta\in\Theta_0\), and the loss is \(K_j\) when an error is made. For instance, a type I error, rejecting the null hypothesis (\(H_0\)) when it is true (\(\theta\in\Theta_0\)), implies a loss equal to \(K_1\) because action \(a_1\) (not rejecting \(H_1\)) is chosen.
The posterior expected loss associated with decision \(a_j\), that is, not rejecting \(H_j\), is \(\mathbb{E}[L(\theta,a_j)|\mathbf{y}]=0\times P(\Theta_j|\mathbf{y}) + K_jP(\Theta_i|\mathbf{y})=K_jP(\Theta_i|\mathbf{y})\), \(j\neq i\). Therefore, the Bayes optimal decision is the one with the smallest posterior expected loss: the null hypothesis is rejected (action \(a_1\) is taken) when \(K_0P(\Theta_1|\mathbf{y}) > K_1P(\Theta_0|\mathbf{y})\). Given our framework \((\Theta=\Theta_0 \cup \Theta_1, \ \Theta_0 \cap \Theta_1=\emptyset)\), we have \(P(\Theta_0|\mathbf{y})=1-P(\Theta_1|\mathbf{y})\), and as a consequence the condition becomes \(P(\Theta_1|\mathbf{y})>\frac{K_1}{K_1+K_0}\); that is, the rejection region of the Bayesian test is \(R=\left\{\mathbf{y}:P(\Theta_1|\mathbf{y})>\frac{K_1}{K_1+K_0}\right\}\).
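Operationally, the test only requires the posterior probability of \(\Theta_1\). A minimal sketch of the decision rule; the posterior probability and losses below are placeholder values.
# Minimal sketch: Bayesian test under the 0-Kj loss function (placeholder inputs).
RejectH0 <- function(ProbH1, K0, K1) ProbH1 > K1 / (K0 + K1) # rejection rule
RejectH0(ProbH1 = 0.90, K0 = 1, K1 = 1)                      # TRUE: reject the null hypothesis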
Decision theory also helps to construct interval (region) estimates. Let \(\Theta_{C(\mathbf{y})}\subset \Theta\) be a credible set for \(\theta\), and consider \(L(\theta,\Theta_{C(\mathbf{y})})=1-\mathbb{I}\left\{\theta\in \Theta_{C(\mathbf{y})}\right\}\), where
\[\begin{equation} \mathbb{I}\left\{\theta\in \Theta_{C(\mathbf{y})}\right\}=\begin{Bmatrix}1, & \theta\in \Theta_{C(\mathbf{y})}\\ 0, & \theta\notin \Theta_{C(\mathbf{y})} \end{Bmatrix}. \end{equation}\]
Then,
\[\begin{equation} L(\theta,\Theta_{C(\mathbf{y})})=\begin{Bmatrix}0, & \theta\in \Theta_{C(\mathbf{y})}\\ 1, & \theta\notin \Theta_{C(\mathbf{y})} \end{Bmatrix}. \end{equation}\]
That is, the 0–1 loss function is equal to zero if \(\theta\in \Theta_{C(\mathbf{y})}\), and one if \(\theta\notin \Theta_{C(\mathbf{y})}\). Then, the posterior risk function is \(1-P(\theta\in \Theta_{C(\mathbf{y})}|\mathbf{y})\).
Given a measure of credibility (\(\alpha(\mathbf{y})\)) that defines the level of trust that \(\theta\in \Theta_{C(\mathbf{y})}\), we can measure the accuracy of the report by \(L(\theta, \alpha(\mathbf{y}))=[\mathbb{I}\left\{\theta\in \Theta_{C(\mathbf{y})}\right\}-\alpha(\mathbf{y})]^2\). This loss function can be used to suggest a choice of the report \(\alpha(\mathbf{y})\). Given that this is a quadratic loss function, the optimal action is the posterior mean, that is, \(\mathbb{E}[\mathbb{I}\left\{\theta\in \Theta_{C(\mathbf{y})}\right\}|\mathbf{y}]=P(\theta\in \Theta_{C(\mathbf{y})}|\mathbf{y})\). This probability can be calculated from the posterior distribution, \(P(\theta\in \Theta_{C(\mathbf{y})}|\mathbf{y})=\int_{\Theta_{C(\mathbf{y})}}\pi(\theta|\mathbf{y})d\theta\), and it is a measure of the belief that \(\theta\in \Theta_{C(\mathbf{y})}\) given the prior beliefs and sample information.
The set \(\Theta_{C(\mathbf{y})}\subset\Theta\) is a \(100(1-\alpha)\%\) credible set with respect to \(\pi(\theta|\mathbf{y})\) if \(P(\theta\in \Theta_{C(\mathbf{y})}|\mathbf{y})=\int_{\Theta_{C(\mathbf{y})}}\pi(\theta|\mathbf{y})d\theta=1-\alpha\).
Two common ways to report credible sets are the symmetric credible set and the highest posterior density (HPD) set. The former is based on the \(100\frac{\alpha}{2}\%\) and \(100(1-\frac{\alpha}{2})\%\) percentiles of the posterior distribution. The latter is a \(100(1-\alpha)\%\) credible set with the property that it has the smallest size among all \(100(1-\alpha)\%\) credible sets for \(\theta\) based on the posterior distribution; that is, \(C(\mathbf{y})=\left\{\theta:\pi(\theta|\mathbf{y})\geq k(\alpha)\right\}\), where \(k(\alpha)\) is the largest number such that \(\int_{\theta:\pi(\theta|\mathbf{y})\geq k(\alpha)}\pi(\theta|\mathbf{y})d\theta=1-\alpha\). HPD sets can be a collection of disjoint intervals when working with multimodal posterior densities. In addition, they have the limitation of not necessarily being invariant under transformations.
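The following sketch computes both reports from posterior draws: the symmetric interval uses the \(\alpha/2\) and \(1-\alpha/2\) quantiles, and the HPD interval is approximated by the shortest interval containing \(100(1-\alpha)\%\) of the sorted draws, which is, in essence, what the HDInterval::hdi function used in the application below does for a vector of draws from a unimodal density. The Gamma draws are placeholders.
# Minimal sketch: symmetric and HPD credible intervals from posterior draws (placeholder draws).
draws <- rgamma(50000, shape = 2, scale = 0.5)
alpha <- 0.05
SymCI <- quantile(draws, c(alpha/2, 1 - alpha/2))    # symmetric credible interval
SortedDraws <- sort(draws)
M <- floor((1 - alpha) * length(SortedDraws))        # draws spanned by each candidate interval
Width <- SortedDraws[(M + 1):length(SortedDraws)] - SortedDraws[1:(length(SortedDraws) - M)]
iStar <- which.min(Width)                            # shortest candidate interval
HPD <- c(SortedDraws[iStar], SortedDraws[iStar + M]) # approximate HPD interval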
Decision theory can also be used to perform prediction (point, set, or probabilistic). Suppose that there is a loss function \(L(Y_0,a)\) involving the prediction of \(Y_0\). Then, the predictive risk is \(\mathbb{E}_{Y_0}[L(Y_0,a)]=\int_{\mathcal{Y}_0}L(Y_0,a)\pi(Y_0|\mathbf{y})dY_0\), where \(\pi(Y_0|\mathbf{y})\) is the predictive density function, and we make the optimal choice for prediction by minimizing this risk function given a specific loss function.
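For instance, under a quadratic loss function for prediction, the optimal report is the mean of the predictive distribution, which can be approximated by composition: draw the parameter from its posterior and then draw the future observation from the likelihood. A minimal sketch for a Poisson likelihood with a Gamma posterior; the shape and scale values are placeholders.
# Minimal sketch: optimal point prediction under quadratic loss via posterior predictive draws.
an <- 2; bn <- 0.6                                   # placeholder posterior shape and scale
LambdaDraws <- rgamma(50000, shape = an, scale = bn) # posterior draws of lambda
Y0Draws <- rpois(50000, lambda = LambdaDraws)        # posterior predictive draws of Y0
mean(Y0Draws)                                        # approximate optimal point prediction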
Although Bayesian model averaging (BMA) allows incorporating model uncertainty in a regression framework, sometimes it is desirable to select just one model. A compelling alternative is the model with the highest posterior model probability; this model is the best alternative for prediction under a 0–1 loss function (Clyde and George 2004).
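A minimal sketch of model selection under a 0–1 loss function, assuming the marginal likelihoods \(p(\mathbf{y}|M_j)\) of the competing models are already available; the numerical values are placeholders.
# Minimal sketch: choose the model with the highest posterior model probability (placeholder values).
MargLik <- c(M1 = 1.2e-2, M2 = 5.0e-4)                     # marginal likelihoods p(y | Mj)
PriorProb <- c(M1 = 0.5, M2 = 0.5)                         # prior model probabilities
PostProb <- MargLik * PriorProb / sum(MargLik * PriorProb) # posterior model probabilities
names(which.max(PostProb))                                 # optimal model under a 0-1 loss function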
Example: Health insurance (continued)
We show some optimal rules in the health insurance example. In particular, we obtain the best point estimates of \(\lambda\) under the quadratic, absolute, and generalized absolute loss functions. For the latter, we assume that underestimating \(\lambda\) is twice as costly as overestimating it, that is, \(K_0=2\) and \(K_1=1\).
Taking into account that the posterior distribution of \(\lambda\) is \(G(\alpha_0+\sum_{i=1}^N y_i, \beta_0/(\beta_0N+1))\), and using the hyperparameters from empirical Bayes, we have that \(\mathbb{E}[\lambda|\mathbf{y}]=\alpha_n\beta_n=1.20\), the median is 1.19, and the 2/3 quantile is 1.26. These are the optimal point estimates under the quadratic, absolute, and generalized absolute loss functions, respectively.
In addition, we test the null hypothesis \(H_0: \lambda \in [0,1)\) versus \(H_1: \lambda \in [1,\infty)\). Setting \(K_0=K_1=1\), we should reject the null hypothesis because \(P(\lambda \in [1,\infty)|\mathbf{y})=0.90>K_1/(K_0+K_1)=0.5\).
We get that the 95% symmetric credible interval is (0.91, 1.53), and the 95% highest posterior density interval is (0.90, 1.51). Finally, the optimal point prediction under a quadratic loss function is 1.20, the mean of the posterior predictive distribution. The optimal model under a 0–1 loss function is the one using the hyperparameters from the empirical Bayes procedure, since its posterior model probability is approximately 1, whereas the posterior model probability of the model using vague hyperparameters is approximately 0.
an <- sum(y) + a0EB # Posterior shape parameter
bn <- b0EB / (N*b0EB + 1) # Posterior scale parameter
S <- 1000000 # Number of posterior draws
Draws <- rgamma(S, shape = an, scale = bn) # Posterior draws
###### Point estimation ########
OptQua <- an*bn # Mean: optimal choice under the quadratic loss function
OptQua
## [1] 1.200952
OptAbs <- qgamma(0.5, shape = an, scale = bn) # Median: optimal choice under the absolute loss function
OptAbs
## [1] 1.194034
# Setting K0 = 2 and K1 = 1, that is, underestimating lambda is twice as costly as overestimating it.
K0 <- 2; K1 <- 1
OptGenAbs <- quantile(Draws, K0/(K0 + K1)) # K0/(K0+K1) quantile: optimal choice under the generalized absolute loss function
OptGenAbs
## 66.66667% 
##  1.262986
###### Hypothesis test ########
# H0: lambda in [0, 1) vs H1: lambda in [1, Inf)
K0 <- 1; K1 <- 1
ProbH0 <- pgamma(1, shape = an, scale = bn) # Posterior probability of H0
ProbH0
## [1] 0.09569011
ProbH1 <- 1 - ProbH0 # Posterior probability of H1
ProbH1
## [1] 0.9043099
# We should reject H0 given that ProbH1 > K1 / (K0 + K1)
###### Credible intervals ########
LimInf <- qgamma(0.025, shape = an, scale = bn) # Lower bound of the symmetric credible interval
LimInf
## [1] 0.9114851
LimSup <- qgamma(0.975, shape = an, scale = bn) # Upper bound of the symmetric credible interval
LimSup
## [1] 1.529724
HDI <- HDInterval::hdi(Draws, credMass = 0.95) # Highest posterior density credible interval
HDI
##     lower     upper 
## 0.8971505 1.5125911 
## attr(,"credMass")
## [1] 0.95
###### Predictive optimal choices ########
p <- bn / (bn + 1) # Probability parameter of the negative binomial predictive density
OptPred <- p/(1-p)*an # Optimal point prediction under a quadratic loss function
OptPred
## [1] 1.200952
# Given a 0-1 loss function for model selection, the optimal model is the one using the empirical Bayes hyperparameters, whose posterior model probability is approximately equal to 1.