3 Ridge Regression
3.1 Introduction
Ridge regression was proposed as an alternative to ordinary least squares (OLS) that accepts some bias in exchange for a reduction in variance, with the aim of improving the mean squared error (\(MSE\)).
\(MSE\big(\hat{\beta}\big) = var \big(\hat{\beta}\big) + \underbrace{\left(\mathbb E\big(\hat{\beta}\big)-\beta \right)^2}_{bias^2}\)
This method is particularly useful when the \(OLS\) estimator is unstable (has high variance). This can happen in the presence of high collinearity between the predictors, or when the number of predictors is large in comparison to the number of observations.
The ridge estimator is obtained by maximising the likelihood subject to a restriction on the size of the coefficients. This is equivalent to minimising the residual sum of squares with an added penalisation term:
\(\underbrace{\sum^{n}_{i=1} \left(y_i-\beta_0-\sum^{p}_{j=1} \beta_j x_{ij}\right)^2 }_{\text {residual sum of squares}}+ \lambda \underbrace{\sum^{p}_{j=1} \beta_j^2}_{\ell_2-penalisation}\)
This is called an \(\ell_2\) penalisation because it is based on the \(\ell_2\) norm, \(\| \beta \|_2 = \sqrt{\sum_{j=1}^p \beta_j^2}\). The amount of penalisation is controlled by the parameter \(\lambda\), which can be estimated by cross-validation: essentially, we look for the \(\lambda\) that gives the best trade-off between bias and variance, i.e. the one that minimises the \(MSE\).
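To see the trade-off in action, here is a minimal illustrative sketch (not part of the practical) that simulates two highly collinear predictors and compares OLS with the closed-form ridge solution \(\hat{\beta}_{ridge} = (X^\top X + \lambda I)^{-1}X^\top y\) on centred data; the sample size, noise level and \(\lambda\) are arbitrary choices.
#illustration only: OLS vs the closed-form ridge estimator on collinear data
set.seed(1)
n  <- 50
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.05)     #x2 is almost a copy of x1 (high collinearity)
y  <- 1 + 2*x1 + 2*x2 + rnorm(n)

Xc <- scale(cbind(x1, x2), center = TRUE, scale = FALSE)  #centre the predictors
yc <- y - mean(y)                                         #and the outcome
lambda <- 10                                              #arbitrary penalty, for illustration

beta.ols   <- solve(t(Xc) %*% Xc) %*% t(Xc) %*% yc
beta.ridge <- solve(t(Xc) %*% Xc + lambda * diag(2)) %*% t(Xc) %*% yc

round(cbind(OLS = drop(beta.ols), ridge = drop(beta.ridge)), 3)
#ridge shrinks the slopes towards zero; with nearly redundant predictors
#this reduces their variance over repeated samples, at the cost of some bias
Note that \(\lambda = 0\) gives back the OLS solution, while \(\lambda \to \infty\) shrinks all coefficients towards zero.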
3.3 Practical session
Task 1 - Fit a linear model with Ridge penalisation
We will use the fat dataset from library(faraway). We want to compare the OLS and ridge estimates of a linear model to predict body fat (variable brozek), using the other available variables, except for siri, density and free.
Let’s first read the data
#install.packages(c("glmnet", "faraway"))
library(glmnet) #function for ridge regression
library(faraway) #has the dataset fat
set.seed(1233)
data("fat")
head(fat)
## brozek siri density age weight height adipos free neck chest abdom hip
## 1 12.6 12.3 1.0708 23 154.25 67.75 23.7 134.9 36.2 93.1 85.2 94.5
## 2 6.9 6.1 1.0853 22 173.25 72.25 23.4 161.3 38.5 93.6 83.0 98.7
## 3 24.6 25.3 1.0414 22 154.00 66.25 24.7 116.0 34.0 95.8 87.9 99.2
## 4 10.9 10.4 1.0751 26 184.75 72.25 24.9 164.7 37.4 101.8 86.4 101.2
## 5 27.8 28.7 1.0340 24 184.25 71.25 25.6 133.1 34.4 97.3 100.0 101.9
## 6 20.6 20.9 1.0502 24 210.25 74.75 26.5 167.0 39.0 104.5 94.4 107.8
## thigh knee ankle biceps forearm wrist
## 1 59.0 37.3 21.9 32.0 27.4 17.1
## 2 58.7 37.3 23.4 30.5 28.9 18.2
## 3 59.6 38.9 24.0 28.8 25.2 16.6
## 4 60.1 37.3 22.8 32.4 29.4 18.2
## 5 63.2 42.2 24.0 32.2 27.7 17.7
## 6 66.0 42.0 25.6 35.7 30.6 18.8
We will first fit the linear regression using ridge. The package glmnet implements the ridge estimation. We need to define the design matrix of the model, \(X\), and identify the outcome, \(Y\).
############################################################
#RIDGE REGRESSION
############################################################
#we need to define the model equation
X <- model.matrix(brozek ~ age + weight +
                    height + adipos +
                    neck + chest +
                    abdom + hip + thigh +
                    knee + ankle +
                    biceps + forearm +
                    wrist, data=fat)[,-1]
#and the outcome
Y <- fat[,"brozek"]
First we need to find the amount of penalty, \(\lambda\), by cross-validation. We will search for the \(\lambda\) that gives the minimum \(MSE\) in the interval from \(\lambda = \exp(-5) \approx 0.007\) to \(\lambda = \exp(8) \approx 2981\).
The function cv.glmnet() fits a GLM with penalisation; setting alpha = 0 gives the ridge estimates for different levels of penalisation. The \(\lambda\) chosen by cross-validation is stored in lambda.min.
#Penalty type (alpha=0 is ridge)
cv.lambda <- cv.glmnet(x=X, y=Y,
                       alpha = 0,
                       lambda = exp(seq(-5,8,.1))) #don't need to specify the range
                                                   #but you can do it anyways

plot(cv.lambda)                                    #MSE for several lambdas
cv.lambda$lambda.min                               #best lambda
## [1] 0.0407622
The minimum \(MSE\) is achieved when \(\lambda =\) 0.0407622.
We can also see the impact of different \(\lambda\)s on the estimated coefficients: when \(\lambda\) is very high, the coefficients are shrunk towards zero.
#ridge path
plot(cv.lambda$glmnet.fit,
"lambda", label=FALSE)
With the chosen \(\lambda =\) 0.0407622, we can now obtain the ridge estimates:
#now get the coefs with
#the lambda found above
lmin        <- cv.lambda$lambda.min
ridge.model <- glmnet(x=X, y=Y,
                      alpha  = 0,
                      lambda = lmin)
ridge.model$beta
## 14 x 1 sparse Matrix of class "dgCMatrix"
## s0
## age 0.06507401
## weight -0.06789165
## height -0.06028540
## adipos 0.10207562
## neck -0.45063815
## chest -0.01601489
## abdom 0.81945358
## hip -0.18402114
## thigh 0.22019252
## knee -0.01128917
## ankle 0.12891779
## biceps 0.13023719
## forearm 0.41741050
## wrist -1.51553984
And finally, we compare them with the OLS estimates. Notice that in this case the coefficients are similar, because the penalisation was low.
#OLS
ols.regression <- lm(brozek ~ age + weight +
                       height + adipos +
                       neck + chest +
                       abdom + hip + thigh +
                       knee + ankle +
                       biceps + forearm +
                       wrist, data=fat)
#OLS vs RIDGE
round(cbind(OLS   = coef(ols.regression),
            ridge = c(ridge.model$a0,               #intercept for ridge
                      as.vector(ridge.model$beta))), 4)
## OLS ridge
## (Intercept) -15.1901 -13.0910
## age 0.0569 0.0651
## weight -0.0813 -0.0679
## height -0.0531 -0.0603
## adipos 0.0610 0.1021
## neck -0.4450 -0.4506
## chest -0.0309 -0.0160
## abdom 0.8790 0.8195
## hip -0.2031 -0.1840
## thigh 0.2274 0.2202
## knee -0.0010 -0.0113
## ankle 0.1572 0.1289
## biceps 0.1485 0.1302
## forearm 0.4297 0.4174
## wrist -1.4793 -1.5155
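If we want to check whether this small amount of penalisation actually pays off in prediction, an optional extra step (not in the original practical) is to compare the out-of-sample MSE of OLS and ridge on a random train/test split, reusing the X, Y and lmin defined above; the 70/30 split and the seed are arbitrary choices.
#optional check: out-of-sample MSE of OLS vs ridge on a random 70/30 split
set.seed(42)                                            #arbitrary seed
train <- sample(seq_len(nrow(fat)), size = round(0.7 * nrow(fat)))

ols.train   <- lm(brozek ~ age + weight + height + adipos + neck + chest +
                    abdom + hip + thigh + knee + ankle + biceps + forearm + wrist,
                  data = fat[train, ])
ridge.train <- glmnet(x = X[train, ], y = Y[train], alpha = 0, lambda = lmin)

pred.ols   <- predict(ols.train,   newdata = fat[-train, ])
pred.ridge <- predict(ridge.train, newx    = X[-train, ])

c(MSE.OLS   = mean((Y[-train] - pred.ols)^2),
  MSE.ridge = mean((Y[-train] - pred.ridge)^2))
#with such a light penalty the two test MSEs are expected to be very close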
Task 2 - Ridge logistic model
With the lowbwt.csv dataset, we want to fit a ridge logistic regression to predict low birthweight (< 2500 g), using age, lwt, race, smoke, ptl, ht, ui and ftv.
Let’s read the data and make sure that race and ftv are factor predictors, recoding ftv into the categories 0, 1 and 2+.
library(glmnet)
lowbwt <- read.csv("https://www.dropbox.com/s/1odxxsbwd5anjs8/lowbwt.csv?dl=1",
                   stringsAsFactors = TRUE)
lowbwt$race <- factor(lowbwt$race,
                      levels = c(1,2,3),
                      labels = c("White", "Black", "Other"))
#number of previous appointments
lowbwt$ftv[lowbwt$ftv>1] <- 2                #categories 0, 1, 2+
lowbwt$ftv <- factor(lowbwt$ftv,
                     levels = c(0,1,2),
                     labels = c("0", "1", "2+"))
head(lowbwt)
## id low age lwt race smoke ptl ht ui ftv bwt
## 1 85 0 19 182 Black 0 0 0 1 0 2523
## 2 86 0 33 155 Other 0 0 0 0 2+ 2551
## 3 87 0 20 105 White 1 0 0 0 1 2557
## 4 88 0 21 108 White 1 0 0 1 2+ 2594
## 5 89 0 18 107 White 1 0 0 1 0 2600
## 6 91 0 21 124 Other 0 0 0 0 0 2622
Let’s first get the MLE estimates for the model for low using all the predictors.
lowbwt.logit <- glm(low ~ age +
                      lwt + race +
                      smoke + ptl +
                      ht + ui + ftv,
                    family=binomial, data=lowbwt)
summary(lowbwt.logit)
##
## Call:
## glm(formula = low ~ age + lwt + race + smoke + ptl + ht + ui +
## ftv, family = binomial, data = lowbwt)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.9907 -0.8247 -0.5154 0.9791 2.1624
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.590930 1.213568 0.487 0.62630
## age -0.026945 0.037414 -0.720 0.47141
## lwt -0.015748 0.006963 -2.262 0.02371 *
## raceBlack 1.254016 0.529382 2.369 0.01784 *
## raceOther 0.820827 0.451042 1.820 0.06878 .
## smoke 0.865245 0.414450 2.088 0.03683 *
## ptl 0.604607 0.357445 1.691 0.09075 .
## ht 1.894689 0.705977 2.684 0.00728 **
## ui 0.732822 0.460442 1.592 0.11148
## ftv1 -0.299057 0.463918 -0.645 0.51917
## ftv2+ 0.178370 0.450777 0.396 0.69233
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 234.67 on 188 degrees of freedom
## Residual deviance: 200.61 on 178 degrees of freedom
## AIC: 222.61
##
## Number of Fisher Scoring iterations: 4
We will use cv.glmnet() to find the best \(\lambda\).
lowbwt.complete <- lowbwt[complete.cases(lowbwt), ]

#we need to define the model equation
X.low <- model.matrix(low ~ age +
                        lwt + race +
                        smoke + ptl +
                        ht + ui + ftv,
                      data=lowbwt.complete)[,-1]
#and the outcome
Y.low <- lowbwt.complete[,"low"]

cv.lambda <- cv.glmnet(x=X.low, y=Y.low,
                       alpha = 0,
                       family = binomial)
plot(cv.lambda)
cv.lambda$lambda.min
## [1] 0.178367
lmin <- cv.lambda$lambda.min
And now, using the chosen \(\lambda\), we can get the ridge estimates.
ridge.model <- glmnet(x=X.low, y=Y.low,
                      alpha = 0,
                      family = binomial,
                      lambda = lmin)
ridge.model$beta
## 10 x 1 sparse Matrix of class "dgCMatrix"
## s0
## age -0.01894794
## lwt -0.00647133
## raceBlack 0.44325516
## raceOther 0.27673349
## smoke 0.36274052
## ptl 0.37692639
## ht 0.84447200
## ui 0.43631641
## ftv1 -0.21840476
## ftv2+ -0.01265976
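To mirror the comparison made in Task 1, we can put the MLE and the ridge coefficients side by side (this comparison is not in the original script; both sets of coefficients are on the log-odds scale).
#MLE (glm) vs ridge logistic coefficients, both on the log-odds scale
round(cbind(MLE   = coef(lowbwt.logit),
            ridge = c(ridge.model$a0,
                      as.vector(ridge.model$beta))), 4)
#with this amount of penalisation the ridge coefficients are
#noticeably shrunk towards zero compared with the MLE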
TRY IT YOURSELF:
- Plot the ridge path
See the solution code
plot(cv.lambda$glmnet.fit, xvar="lambda")
- What would you get if we used lambda = 0 in the glmnet() function?
See the solution code
#The estimates would be the same as the MLE
#obtained with the glm()
ridge.model <- glmnet(x=X.low, y=Y.low,
                      alpha = 0,
                      family = binomial,
                      lambda = 0)
ridge.model$beta
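As a quick check of this claim (not part of the original solution), we can look at the largest absolute difference between the two sets of coefficients; it should be close to zero, although not exactly zero, because glmnet fits by an iterative algorithm with a convergence tolerance (and its documentation recommends a decreasing sequence of \(\lambda\) values rather than a single value for best accuracy).
#largest absolute difference between the glm() MLE and glmnet() with lambda = 0
max(abs(coef(lowbwt.logit) -
          c(ridge.model$a0, as.vector(ridge.model$beta))))
#this should be very small, but not exactly zero, due to glmnet's
#numerical optimisation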
3.4 Exercises
Solve the following exercises:
The dataset lowbwt.csv has the birth weight of 189 newborns and information about the mother’s pregnancy.
Fit a linear model to predict birth weight (variable bwt) using all other predictors except for low and id. Get the ridge estimates and compare the coefficients with the OLS.
See the solution code
library(glmnet)
set.seed(2100)
#read the data
bwt.data <- read.csv("https://www.dropbox.com/s/1odxxsbwd5anjs8/lowbwt.csv?dl=1",
                     stringsAsFactors = TRUE)
bwt.data$race <- factor(bwt.data$race,
                        levels = c(1,2,3),
                        labels = c("White", "Black", "Other"))
head(bwt.data)

#Fit the OLS linear model
OLS.bwt <- lm(bwt ~ age + race +
                smoke + ptl + ht + ui + ftv,
              data = bwt.data)
summary(OLS.bwt)

# Define the model
X <- model.matrix(bwt ~ age +
                    race + smoke +
                    ptl + ht + ui + ftv,
                  data = bwt.data)[, -1]
Y <- bwt.data[, "bwt"]

#Let's find the lambda by cross-validation that gives the minimum MSE
cv.lambda <- cv.glmnet(x=X, y=Y,
                       alpha = 0,
                       type.measure = "mse",        #adding type.measure="mse"
                       lambda = exp(seq(-6,8,0.1))) #will plot the lambda vs MSE
plot(cv.lambda)
cv.lambda$lambda.min
#the minimum MSE is achieved when lambda = 0.033

# path diagram
plot(cv.lambda$glmnet.fit, "lambda", label = FALSE)

# with the lambda that achieves the minimum MSE we can now obtain the ridge estimates
lmin        <- cv.lambda$lambda.min
ridge.model <- glmnet(x=X, y=Y,
                      alpha = 0,
                      lambda = lmin)
ridge.model$beta

# Compare the ridge estimates with the OLS estimates
round(cbind(OLS   = coef(OLS.bwt),
            ridge = c(ridge.model$a0, as.vector(ridge.model$beta))), 4)
The dataset prostate, available in the package faraway, contains information on 97 men who were about to receive a radical prostatectomy. These data come from a study examining the correlation between the prostate specific antigen (lpsa) and a number of other clinical measures. Fit the ridge estimators for a linear model to predict lpsa using all other predictors. Plot the path diagram.
See the solution code
library(faraway)
library(glmnet)
set.seed(2000)
data("prostate")
head(prostate)
# Define the model
X <- model.matrix(lpsa ~ lcavol + lweight +
                    age + lbph + svi +
                    lcp + gleason + pgg45,
                  data = prostate)[, -1]
Y <- prostate[, "lpsa"]

#Let's find the lambda by cross-validation that gives the minimum MSE
cv.lambda <- cv.glmnet(x=X, y=Y,
                       alpha = 0,
                       lambda = exp(seq(-6,8,0.1)))
plot(cv.lambda)
cv.lambda$lambda.min
#the minimum MSE is achieved when lambda = 0.033

# path diagram
plot(cv.lambda$glmnet.fit, "lambda", label = FALSE)

# with the lambda that achieves the minimum MSE we can now obtain the ridge estimates
lmin        <- cv.lambda$lambda.min
ridge.model <- glmnet(x=X, y=Y,
                      alpha = 0,
                      lambda = lmin)
ridge.model$beta

# Compare the ridge estimates with the OLS estimates
ols.lm <- lm(lpsa ~ ., data = prostate)
round(cbind(OLS   = coef(ols.lm),
            ridge = c(ridge.model$a0, as.vector(ridge.model$beta))), 4)
The \(\lambda\) penalty that was used is very close to 0, therefore the ridge estimates are similar to the OLS estimates.
- Fit the same model as in 2) but with a high penalisation value (high \(\lambda\)). What is the impact on the coefficients?
See the solution code
library(faraway)   #the prostate dataset is in faraway
library(glmnet)
set.seed(2000)

data("prostate")
head(prostate)

# Define the model (this is the same as the model in 2)
X <- model.matrix(lpsa ~ lcavol +
                    lweight + age +
                    lbph + svi +
                    lcp + gleason +
                    pgg45,
                  data = prostate)[, -1]
Y <- prostate[, "lpsa"]

# ridge estimates for a high lambda (e.g., lambda=15)
ridge.model2 <- glmnet(x=X, y=Y,
                       alpha = 0,
                       lambda = 15)
ridge.model2$beta

# Compare the ridge estimates with the OLS estimates
ols.lm <- lm(lpsa ~ ., data = prostate)
round(cbind(OLS   = coef(ols.lm),
            ridge = c(ridge.model2$a0, as.vector(ridge.model2$beta))), 4)
The coefficients get shrunk towards 0 but they don’t reach 0!
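To make this point concrete, an illustrative extra check (not part of the original solution, reusing the X and Y defined just above) fits the ridge model at a few increasingly severe penalties and reports the largest absolute slope at each one.
#largest |slope| at increasingly severe penalties: it shrinks but never hits 0
lambdas <- c(1000, 100, 15, 1, 0.1)                      #decreasing, as glmnet expects
fit.seq <- glmnet(x = X, y = Y, alpha = 0, lambda = lambdas)
shrink  <- apply(abs(as.matrix(fit.seq$beta)), 2, max)   #max |slope| per lambda
round(setNames(shrink, paste0("lambda=", fit.seq$lambda)), 5)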