5.3 Inference for model parameters

The assumptions on which a generalized linear model is constructed allow us to specify the asymptotic distribution of the random vector $\hat{\boldsymbol{\beta}}$ through the theory of maximum likelihood estimation. Again, the distribution is derived conditionally on the predictors' sample $\mathbf{X}_1, \ldots, \mathbf{X}_n$. In other words, we assume that the randomness of $Y$ comes only from $Y | (X_1 = x_1, \ldots, X_p = x_p)$ and not from the predictors.

For ease of exposition, we will focus only on the logistic model rather than on the general case. The conceptual differences are not large, but the simplification in terms of notation and the gains in intuition are important.

There is an important difference between the inference results for the linear model and for logistic regression:

  • In linear regression the inference is exact. This is due to the nice properties of the normal distribution, least squares estimation, and linearity. As a consequence, the distributions of the coefficients are perfectly known, assuming that the assumptions hold.
  • In generalized linear models the inference is asymptotic. This means that the distributions of the coefficients are unknown except for large sample sizes $n$, for which we have approximations.166 The reason is the higher complexity of the model in terms of nonlinearity. This is the usual situation for the majority of regression models.167

5.3.1 Distributions of the fitted coefficients

The distribution of $\hat{\boldsymbol{\beta}}$ is given by the asymptotic theory of the MLE:

$$
\hat{\boldsymbol{\beta}} \stackrel{a}{\sim} \mathcal{N}_{p+1}\left(\boldsymbol{\beta}, \mathcal{I}(\boldsymbol{\beta})^{-1}\right), \tag{5.26}
$$

where $\stackrel{a}{\sim} [\cdot]$ means "asymptotically distributed as $[\cdot]$ when $n \to \infty$" and

$$
\mathcal{I}(\boldsymbol{\beta}) := -\mathbb{E}\left[\frac{\partial^2 \ell(\boldsymbol{\beta})}{\partial \boldsymbol{\beta} \, \partial \boldsymbol{\beta}'}\right]
$$

is the Fisher information matrix. The name comes from the fact that it measures the information available in the sample for estimating $\boldsymbol{\beta}$. The "larger" the matrix is (in the sense of larger eigenvalues), the more precise the estimation of $\boldsymbol{\beta}$ is, because that results in smaller variances in (5.26).

The inverse of the Fisher information matrix is168

$$
\mathcal{I}(\boldsymbol{\beta})^{-1} = (\mathbf{X}' \mathbf{V} \mathbf{X})^{-1}, \tag{5.27}
$$

where $\mathbf{V} = \mathrm{diag}(V_1, \ldots, V_n)$ and $V_i = \mathrm{logistic}(\eta_i)(1 - \mathrm{logistic}(\eta_i))$, with $\eta_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip}$. In the case of multiple linear regression, $\mathcal{I}(\boldsymbol{\beta})^{-1} = \sigma^2 (\mathbf{X}' \mathbf{X})^{-1}$ (see (2.11)), so the presence of $\mathbf{V}$ here is a consequence of the heteroscedasticity of the logistic model.
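Expression (5.27) can be checked numerically. Below is a minimal sketch, assuming the nasa model from the case study in Section 5.3.4 is available; the unknown $\mathbf{V}$ is replaced by the plug-in estimate $\hat{\mathbf{V}}$ discussed below.

# Check of (5.27) with the plug-in estimate V_hat (assumes the nasa model)
X <- model.matrix(nasa)             # design matrix, first column of ones
p_hat <- fitted(nasa)               # logistic(eta_hat_i), i = 1, ..., n
V_hat <- diag(p_hat * (1 - p_hat))  # V_hat = diag(V_hat_1, ..., V_hat_n)
solve(t(X) %*% V_hat %*% X)         # (X' V_hat X)^{-1}
vcov(nasa)                          # the same matrix, as computed by R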

The interpretation of (5.26) and (5.27) gives some useful insights on what concepts affect the quality of the estimation:

  • Bias. The estimates are asymptotically unbiased.

  • Variance. It depends on:

    • Sample size $n$. Hidden inside $\mathbf{X}' \mathbf{V} \mathbf{X}$. As $n$ grows, the precision of the estimators increases.
    • Weighted predictor sparsity $((\mathbf{X}' \mathbf{V} \mathbf{X})^{-1})$. The more "disperse"169 the predictors are, the more precise $\hat{\boldsymbol{\beta}}$ is. When $p = 1$, $\mathbf{X}' \mathbf{V} \mathbf{X}$ is a weighted version of $s_x^2$.

Figure 5.7 helps to visualize these insights.

The precision of $\hat{\boldsymbol{\beta}}$ is affected by the value of $\boldsymbol{\beta}$, which is hidden inside $\mathbf{V}$. This contrasts sharply with the linear model, where the precision of the least squares estimator was not affected by $\boldsymbol{\beta}$ (see (2.11)). The reason is partially due to the heteroscedasticity of logistic regression, which implies a dependence of the variance of $Y$ on the logistic curve, and hence on $\boldsymbol{\beta}$.

Similarly to linear regression, the problem with (5.26) and (5.27) is that $\mathbf{V}$ is unknown in practice because it depends on $\boldsymbol{\beta}$. Plugging the estimate $\hat{\boldsymbol{\beta}}$ into $\boldsymbol{\beta}$ in $\mathbf{V}$ gives the estimator $\hat{\mathbf{V}}$. Now we can use $\hat{\mathbf{V}}$ to get

$$
\frac{\hat{\beta}_j - \beta_j}{\widehat{\mathrm{SE}}(\hat{\beta}_j)} \stackrel{a}{\sim} \mathcal{N}(0, 1), \quad \widehat{\mathrm{SE}}(\hat{\beta}_j)^2 := v_j, \tag{5.28}
$$

where $v_j$ is the $j$-th element of the diagonal of $(\mathbf{X}' \hat{\mathbf{V}} \mathbf{X})^{-1}$.

The LHS of (5.28) is the Wald statistic for $\beta_j$, $j = 0, \ldots, p$. These statistics are employed for building marginal confidence intervals and hypothesis tests, in a completely analogous way to how the $t$-statistics operate in linear regression.

Figure 5.7: Illustration of the randomness of the fitted coefficients $(\hat{\beta}_0, \hat{\beta}_1)$ and the influence of $n$, $(\beta_0, \beta_1)$, and $s_x^2$. The predictors' sample $x_1, \ldots, x_n$ is fixed and new responses $Y_1, \ldots, Y_n$ are generated each time from a simple logistic model $Y | X = x \sim \mathrm{Ber}(\mathrm{logistic}(\beta_0 + \beta_1 x))$. Application available here.
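The scheme behind Figure 5.7 is simple to replicate. The following minimal sketch uses hypothetical values for $n$ and $(\beta_0, \beta_1)$:

# Sketch of the simulation behind Figure 5.7 (hypothetical settings)
set.seed(123456)
n <- 100; beta0 <- 0.5; beta1 <- 1
x <- rnorm(n)  # predictors' sample, fixed throughout the simulation
# Each replicate draws new responses Y_i ~ Ber(logistic(beta0 + beta1 * x_i))
coefs <- t(replicate(500, {
  y <- rbinom(n, size = 1, prob = 1 / (1 + exp(-(beta0 + beta1 * x))))
  coef(glm(y ~ x, family = "binomial"))
}))
colMeans(coefs)       # close to (beta0, beta1): asymptotic unbiasedness
apply(coefs, 2, var)  # shrinks if n or the dispersion of x grows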

5.3.2 Confidence intervals for the coefficients

Thanks to (5.28), we can obtain the $100(1 - \alpha)\%$ CI for the coefficient $\beta_j$, $j = 0, \ldots, p$:

$$
\left(\hat{\beta}_j \pm \widehat{\mathrm{SE}}(\hat{\beta}_j) \, z_{\alpha/2}\right), \tag{5.29}
$$

where $z_{\alpha/2}$ is the $\alpha/2$-upper quantile of the $\mathcal{N}(0, 1)$. If we are interested in the CI for $e^{\beta_j}$, we can simply take the exponential of the above CI.170 So the $100(1 - \alpha)\%$ CI for $e^{\beta_j}$, $j = 0, \ldots, p$, is

$$
e^{\left(\hat{\beta}_j \pm \widehat{\mathrm{SE}}(\hat{\beta}_j) \, z_{\alpha/2}\right)}.
$$

Of course, this CI is not the same as $\left(e^{\hat{\beta}_j} \pm e^{\widehat{\mathrm{SE}}(\hat{\beta}_j) z_{\alpha/2}}\right)$, which is not a valid CI for $e^{\beta_j}$!
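As a sanity check, (5.29) and its exponentiated version are straightforward to compute by hand. A minimal sketch, again assuming the nasa model from Section 5.3.4 is available:

# Wald CIs (5.29) by hand, alpha = 0.05 (assumes the nasa model)
alpha <- 0.05
z <- qnorm(1 - alpha / 2)         # upper alpha/2 quantile of N(0, 1)
beta_hat <- coef(nasa)
se_hat <- sqrt(diag(vcov(nasa)))  # SE_hat(beta_hat_j) = sqrt(v_j)
ci <- cbind(beta_hat - z * se_hat, beta_hat + z * se_hat)
ci       # matches confint.default(nasa)
exp(ci)  # CI for exp(beta_j): exponentiate the limits, not the estimates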

5.3.3 Testing on the coefficients

The distributions in (5.28) also allow us to conduct formal hypothesis tests on the coefficients $\beta_j$, $j = 0, \ldots, p$. For example, the test for significance:

$$
H_0: \beta_j = 0
$$

for $j = 0, \ldots, p$. The test of $H_0: \beta_j = 0$ with $1 \leq j \leq p$ is especially interesting, since it allows us to answer whether the variable $X_j$ has a significant effect on $Y$. The statistic used for testing for significance is the Wald statistic

$$
\frac{\hat{\beta}_j - 0}{\widehat{\mathrm{SE}}(\hat{\beta}_j)},
$$

which is asymptotically distributed as a $\mathcal{N}(0, 1)$ under the null hypothesis. $H_0$ is tested against the two-sided alternative hypothesis $H_1: \beta_j \neq 0$.
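The Wald statistics and their $p$-values can be reproduced by hand from the fitted model, matching the "z value" and "Pr(>|z|)" columns of summary shown below. A minimal sketch, assuming the nasa model is available:

# Wald tests for H0: beta_j = 0 by hand (assumes the nasa model)
beta_hat <- coef(nasa)
se_hat <- sqrt(diag(vcov(nasa)))
z <- (beta_hat - 0) / se_hat           # Wald statistics
2 * pnorm(abs(z), lower.tail = FALSE)  # two-sided asymptotic p-values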

Is the CI for $\beta_j$ below (above) $0$ at level $\alpha$?

  • Yes $\rightarrow$ reject $H_0$ at level $\alpha$. Conclude that $X_j$ has a significant negative (positive) effect on $Y$ at level $\alpha$.
  • No $\rightarrow$ the criterion is not conclusive.

The tests for significance are built into the summary function. However, due to discrepancies between summary and confint, a note of caution is required when applying the previous rule of thumb for rejecting $H_0$ in terms of the CI.

The significances given in summary and the output of confint are slightly incoherent, and the previous rule of thumb does not apply. The reason is that confint, which for glm objects relies on the profile-likelihood method implemented in MASS, uses a more sophisticated construction of the CI for $\beta_j$, and not the asymptotic distribution behind the Wald statistic.

By changing confint to confint.default, which implements the Wald-based CI (5.29), the results of the latter are completely coherent with the significances in summary, and the rule of thumb remains completely valid. For the contents of this course we prefer confint.default due to its better interpretability. This point is exemplified in the next section.

5.3.4 Case study application

Let's compute the summary of the nasa model in order to address the significance of the coefficients. In view of the fitted logistic curve and the summary of the model, we can conclude that the temperature was decreasing the probability of an O-ring incident (Q2). Indeed, the confidence intervals for the coefficients show a significantly negative effect of temp at level $\alpha = 0.05$:

# Summary of the model
summary(nasa)
## 
## Call:
## glm(formula = fail.field ~ temp, family = "binomial", data = challenger)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.0566  -0.7575  -0.3818   0.4571   2.2195  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)  
## (Intercept)   7.5837     3.9146   1.937   0.0527 .
## temp         -0.4166     0.1940  -2.147   0.0318 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 28.267  on 22  degrees of freedom
## Residual deviance: 20.335  on 21  degrees of freedom
## AIC: 24.335
## 
## Number of Fisher Scoring iterations: 5

# Confidence intervals at 95%
confint.default(nasa)
##                   2.5 %      97.5 %
## (Intercept) -0.08865488 15.25614140
## temp        -0.79694430 -0.03634877

# Confidence intervals at other levels
confint.default(nasa, level = 0.90)
##                    5 %        95 %
## (Intercept)  1.1448638 14.02262275
## temp        -0.7358025 -0.09749059

# Confidence intervals for the factors affecting the odds
exp(confint.default(nasa))
##                 2.5 %       97.5 %
## (Intercept) 0.9151614 4.223359e+06
## temp        0.4507041 9.643039e-01

The coefficient of temp is significant at $\alpha = 0.05$ and the intercept is not (though it is at $\alpha = 0.10$). The 95% confidence interval for $\beta_0$ is $(-0.0887, 15.2561)$ and for $\beta_1$ is $(-0.7969, -0.0363)$. For $e^{\beta_0}$ and $e^{\beta_1}$, the CIs are $(0.9151, 4.2233 \times 10^6)$ and $(0.4507, 0.9643)$, respectively. Therefore, we can say with 95% confidence that:

  • When temp=0, the probability of fail.field=1 is not significantly larger than the probability of fail.field=0 (using the CI for $\beta_0$, which contains $0$). fail.field=1 is between $0.9151$ and $4.2233 \times 10^6$ times more likely than fail.field=0 (using the CI for $e^{\beta_0}$).
  • temp has a significantly negative effect on the probability of fail.field=1 (using the CI for $\beta_1$, which is below $0$). Indeed, each unit increase in temp produces a reduction of the odds of fail.field by a factor between $0.4507$ and $0.9643$ (using the CI for $e^{\beta_1}$).

This completes the answers to Q1 and Q2.

We conclude by illustrating the incoherence of summary and confint.

# Significances with asymptotic approximation for the standard errors
summary(nasa)
## 
## Call:
## glm(formula = fail.field ~ temp, family = "binomial", data = challenger)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.0566  -0.7575  -0.3818   0.4571   2.2195  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)  
## (Intercept)   7.5837     3.9146   1.937   0.0527 .
## temp         -0.4166     0.1940  -2.147   0.0318 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 28.267  on 22  degrees of freedom
## Residual deviance: 20.335  on 21  degrees of freedom
## AIC: 24.335
## 
## Number of Fisher Scoring iterations: 5

# CIs with asymptotic approximation -- coherent with summary
confint.default(nasa, level = 0.95)
##                   2.5 %      97.5 %
## (Intercept) -0.08865488 15.25614140
## temp        -0.79694430 -0.03634877
confint.default(nasa, level = 0.99)
##                  0.5 %      99.5 %
## (Intercept) -2.4994971 17.66698362
## temp        -0.9164425  0.08314945

# CIs with profile likelihood -- incoherent with summary
confint(nasa, level = 0.95) # unlike with confint.default, the intercept is significant
##                  2.5 %     97.5 %
## (Intercept)  1.3364047 17.7834329
## temp        -0.9237721 -0.1089953
confint(nasa, level = 0.99) # unlike with confint.default, temp is significant
##                  0.5 %      99.5 %
## (Intercept) -0.3095128 22.26687651
## temp        -1.1479817 -0.02994011

  166. They work quite well in practice and deliver many valuable insights.↩︎

  167. The linear model is an exception in terms of exact and simple inference, not the rule.↩︎

  168. Recall expression (5.23) for the general case of $\mathcal{I}(\boldsymbol{\beta})$.↩︎

  169. Understood as a small $\|(\mathbf{X}' \mathbf{V} \mathbf{X})^{-1}\|$.↩︎

  170. Because $e^{\hat{\beta}_j - \widehat{\mathrm{SE}}(\hat{\beta}_j) z_{\alpha/2}} \leq e^{\beta_j} \leq e^{\hat{\beta}_j + \widehat{\mathrm{SE}}(\hat{\beta}_j) z_{\alpha/2}} \iff \hat{\beta}_j - \widehat{\mathrm{SE}}(\hat{\beta}_j) z_{\alpha/2} \leq \beta_j \leq \hat{\beta}_j + \widehat{\mathrm{SE}}(\hat{\beta}_j) z_{\alpha/2}$, since the exponential is a monotonically increasing function.↩︎