18.3 Quantitative prediction tasks
Forecasting some probability vs. predicting values by some model (e.g., linear regression model).
Goal: Predicting some value (on a scale).
Method: Linear regression.
18.3.1 Introduction
This section provides a gentle introduction to linear regression models. Rather than providing a statistical treatment — which typically uses the simple LR model to transition into the realms of multiple predictors, different types of variables and their possible transformations, and non-linear extensions in the GLM framework — we aim to illustrate the principles behind linear regression and key intuitions that are important for any kind of statistical modeling.
Specific goals:
- Understand relation between correlation and simple linear regression models
- Understand common measures to assess model accuracy
- Understand different elements and uses of a model
- Understand the difference between explaining existing data (model fitting) and predicting new data
Historical data by Galton (the origin of correlation and “regression”):
# install.packages("HistData")
library(HistData)
# ?GaltonFamilies
dim(GaltonFamilies) # 934 x 8
#> [1] 934 8
# as_tibble(GaltonFamilies)
# View(GaltonFamilies)
GF <- as_tibble(GaltonFamilies)
# Note: The more limited dataset Galton contains
# abstracted/processed data of 928 parent/child pairs.
# dim(Galton)
Goal: Study the relationship of height between generations, specifically: Association between parents’ and children’s height.
Note: We are tempted to interpret this as causal (i.e., parents’ height as a cause, children’s height as an effect). However, if a relationship exists, we can also view it from the opposite direction: To what extent can parents’ height be predicted by children’s height?
The variable midparentHeight is a weighted average of father and mother, with values of mother multiplied by a factor of 1.08. To account for the fact that men are on average taller than women, Galton “transmuted” (Galton, 1886, p. 247) women into men (see Hanley, 2004, for details). We can verify the calculation of midparentHeight and see the effects of this manipulation by comparing the histograms for all three variables:
# Verify definition of midparentHeight:
mpH <- (GF$father + (1.08 * GF$mother))/2
all.equal(GF$midparentHeight, mpH)
#> [1] TRUE
# Compare histograms of 3 distributions:
ggplot(GF) +
geom_histogram(aes(x = midparentHeight), fill = "grey67", binwidth = 1) +
geom_histogram(aes(x = father), fill = "deepskyblue", alpha = .50, binwidth = 1) +
geom_histogram(aes(x = mother), fill = "deeppink", alpha = .50, binwidth = 1) +
labs(title = "Distribution of height values (for fathers, mothers, vs. midparents)",
x = "Height (in inches)", y = "Frequency count",
caption = "[Data from HistData::GaltonFamilies]") +
theme_ds4psy()
Scatterplot of raw values of childHeight as a function of midparentHeight:
Specific task: Explain or predict the values of childHeight by values of midparentHeight.
18.3.2 A basic model
We want to predict the true values of childHeight (i.e., the criterion values \(y_i\)).
What would the simplest model be?
A null model for this data: Predict the mean criterion value \(\bar{y}\) for all data points \(x_i\):
For all values of \(x_i\), predict \(\bar{y} = \frac{1}{n}\sum_{i=1}^{n}{y_i}\).
Graphically, this basic model is a horizontal line with a \(y\)-intercept given by the mean criterion value:
#> [1] 69.20677
Predicting the mean criterion value \(\bar{y}\) for all possible data points \(x_i\) provides a veritable model. Although this model is primitive and inflexible, it is very simple, as it employs only one parameter (i.e., a \(y\)-intercept set to the mean). Nonetheless, it makes a specific prediction for any old or new data point, albeit the rather boring prediction of the same value.
How good is this model? We typically evaluate models by the accuracy of their predictions. Given some true criterion values, we can measure model accuracy by quantifying the deviations between these true values \(y_i\) and the corresponding predicted values \(\hat{y_i}\).
Illustration of some deviations between true criterion values and model values (for a random sample of 10 data points):
As deviations are computed from differences (i.e., true value \(y_i\) minus predicted value \(\bar{y}\)) that can be positive or negative, they are typically squared (so that all differences are positive and larger differences weigh more than smaller ones) and summed (over all data points). Thus, the accuracy of our basic model, or the amount of deviation between the true data values and the model predictions, can be quantified as the total sum of squares (TSS):
\[\text{TSS} = \sum_{i=1}^{n}{(y_i - \bar{y})^2}\]
As the value of TSS increases with larger deviations between true and predicted data values, smaller values of TSS indicate better models (i.e., ideally, \(\text{TSS} \rightarrow 0\)). We can express this in R as follows:
# Measures of model accuracy:
# (see @JamesEtAl2021, p. 69ff.):
# Total sum of squares (TSS): ----
TSS <- function(true_vals, pred_vals = NA){

  # Special case:
  if (is.na(pred_vals)){
    pred_vals <- mean(true_vals)
  }

  sum((true_vals - pred_vals)^2)
}
and then compute the accuracy of our basic model as:
TSS(true_vals = GF$midparentHeight)
#> [1] 3030.886
Thus, the accuracy of our model is measured to be 3030.89. This is definitely not close to our ideal of a deviation value of zero (\(\text{TSS} \rightarrow 0\)). So can we find better models?
As TSS is our first measure of model accuracy, it is important to understand the role of such measures: As the fit between data and model predictions is expressed as a numeric value, we can vary aspects of the model to see whether this improves or worsens the quality of our model (as quantified by this particular measure).
When changing models, we need to distinguish between two aspects:
varying parameters within a given model type: Changing the value(s) of the parameter(s) provided by the current model type to optimize its fit; vs.
varying model types: Changing to a different model type (with different numbers or types of parameters).
As our basic model contains only one parameter (the \(y\)-intercept), we could try to predict different values (i.e., different variants of the same model type) and compare the accuracy of these alternative models to that of our basic model:
TSS(true_vals = GF$midparentHeight, pred_vals = 66)
#> [1] 12635.58
TSS(true_vals = GF$midparentHeight, pred_vals = 68)
#> [1] 4391.072
TSS(true_vals = GF$midparentHeight, pred_vals = 70)
#> [1] 3618.568
We see that other predictions do not further improve on our basic model (given this measure and model type). Given the nature of our model and our measure, using the mean value as our only model parameter optimizes this particular accuracy measure (mathematically, the mean is the value that minimizes the sum of squared deviations), i.e., it provides the best model when accepting the measure (TSS) and the constraints of our model type (i.e., a horizontal line).
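As a quick check of this claim, we can search for the single predicted value that minimizes TSS. Here is a minimal sketch using optimize() (the search interval and the object name best_const are illustrative assumptions):
# Sketch: Find the constant prediction that minimizes TSS for this data;
# it should coincide with the mean of the true values:
best_const <- optimize(f = function(v) TSS(true_vals = GF$midparentHeight, pred_vals = v),
                       interval = c(60, 80))
best_const$minimum        # constant prediction minimizing TSS
mean(GF$midparentHeight)  # compare: the mean of the true values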
18.3.3 Linear models
How can we improve on this basic model?
A more powerful model family: Create linear models with 2 parameters: intercept and slope.
\[y = a + bx = b_0 + b_1 x\]
Generate many random models: Each model generates “predictions” for criterion values.
Example: \(y = 0 + 1 x\)
Quantify the deviation of the model predictions from the corresponding data point (truth).
Goal: Optimize the model by minimizing its deviation measure (i.e., maximizing its accuracy).
Model accuracy
To optimize, we need a measure to quantify model accuracy:
For linear models, a basic accuracy measure follows the logic of quantifying the amount of deviation between true and predicted values. This deviation is given by computing the so-called residuals (i.e., the difference between true values \(y_i\) and predicted values \(\hat{y_i}\)).
Illustration of residual values (for a random sample of 10 data points):
Again, residual values exist for every data point and can be positive or negative. As before, we square them and sum the results over all data points. The resulting accuracy measure is known as the residual sum of squares (RSS, see James et al., 2021, p. 69):
\[\text{RSS} = \sum_{i=1}^{n}{(y_i - \hat{y_i})^2}\]
Note that TSS (above) is but a special case of RSS (predicting the single value \(\bar{y}\) for all values of \(x_i\), rather than predicting different values \(\hat{y_i}\) for different values of \(x_i\)) and that RSS will later be used to compute the residual standard error (RSE) and combined with TSS to obtain the \(R^2\)-measure.
Again, we can easily define an R function for the residual sum of squares (RSS), when providing both the true and predicted values as arguments:
# Measures of model accuracy:
# (see @JamesEtAl2021, p. 69ff.):
# Residual sum of squares (RSS): ----
RSS <- function(true_vals, pred_vals){

  sum((true_vals - pred_vals)^2)
}
Compute RSS for various models…
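For instance, here is a minimal sketch that computes RSS for a few candidate linear models predicting childHeight from midparentHeight (the parameter values and the name candidates are arbitrary illustrations):
# Sketch: Compute RSS for some candidate linear models y = b0 + b1 * x:
candidates <- data.frame(b0 = c(0, 20, 40),
                         b1 = c(1.0, 0.7, 0.4))

for (i in 1:nrow(candidates)){
  preds <- candidates$b0[i] + candidates$b1[i] * GF$midparentHeight
  print(RSS(true_vals = GF$childHeight, pred_vals = preds))
}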
A more systematic approach: Rather than randomly generating parameters and assessing the fit of the corresponding model to the data, we could ask: What would be good values for intercept and slope?
The intercept value is arbitrary in the sense that it depends on the scale of the criterion. If both the criterion and the predictor were standardized, the intercept would be zero.
Slope represents the linear association between the single predictor and the criterion variable.
Correlation
A measure of linear association: Pearson correlation:
(cor_1 <- cor(x = GF$midparentHeight, y = GF$childHeight, method = "pearson"))
#> [1] 0.3209499
Linear regression
Linear regression model:
# Linear regression model (raw values):
lr_1 <- lm(childHeight ~ midparentHeight, data = GF)
summary(lr_1)
#>
#> Call:
#> lm(formula = childHeight ~ midparentHeight, data = GF)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -8.9570 -2.6989 -0.2155 2.7961 11.6848
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 22.63624 4.26511 5.307 1.39e-07 ***
#> midparentHeight 0.63736 0.06161 10.345 < 2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 3.392 on 932 degrees of freedom
#> Multiple R-squared: 0.103, Adjusted R-squared: 0.102
#> F-statistic: 107 on 1 and 932 DF, p-value: < 2.2e-16
plot(x = GF$midparentHeight, y = GF$childHeight,
pch = 20, col = adjustcolor("black", .25),
main = "Height of child by height of midparent (raw values)",
xlab = "Height of midparent", ylab = "Height of child")
# Plot linear regression line:
abline(lr_1, col = "deepskyblue", lwd = 2)
Interpretation:
Interpreting model coefficients:
Intercept: Can be difficult to interpret, as predictor values of zero often (and here) make no sense. It is important to understand that it determines the overall height of the regression line and that it is expressed in units of the criterion variable \(Y\).
Coefficient of predictor: Indicates the amount of association between this predictor and the criterion, given any other predictors. Its value depends on the scale of the variables and is hard to interpret unless the variables were standardized (see below). Importantly, we need to consider the value and its SE: Roughly, if the value differs from 0 by more than about 2 SEs, it is statistically significant (i.e., it is unlikely that the observed effect is due to chance alone; see the sketch below).
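A minimal sketch of how this can be checked from the model output (the object name coef_table is a placeholder; the column names are those of the coefficient table returned by summary()):
# Sketch: Extract coefficient estimates and standard errors from the summary
# and compute their ratio (which corresponds to the reported t value):
coef_table <- summary(lr_1)$coefficients
coef_table[ , "Estimate"] / coef_table[ , "Std. Error"]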
As above, we can quantify model accuracy by measuring the deviation of our predicted values (on the linear regression curve) from the true values by RSS:
# Compute model predictions (from model coefficients):
lr_1_predictions <- coefficients(lr_1)[1] + (coefficients(lr_1)[2] * GF$midparentHeight)
# Note: Model predictions are also provided by lr_1$fitted.values:
all.equal(lr_1_predictions, lr_1$fitted.values, check.attributes = FALSE)
#> [1] TRUE
RSS(true_vals = GF$childHeight, pred_vals = lr_1_predictions)
#> [1] 10721.47
Note that this RSS value is much lower (i.e., better) than the corresponding TSS value of our basic model (i.e., predicting the mean childHeight value for all data points). (Below, we will use the difference between TSS and RSS to compute a key measure of model accuracy known as the \(R^2\)-statistic.)
The values of RSE, \(R^2\), and the \(F\)-statistic in the summary() output provide overall measures of model accuracy or quality (and will be defined and explained below).
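A sketch of how these overall measures can be extracted directly from the summary object (using the element names provided by summary.lm(); the object name s_1 is a placeholder):
# Sketch: Extracting overall measures of model fit from the summary object:
s_1 <- summary(lr_1)
s_1$sigma       # residual standard error (RSE)
s_1$r.squared   # R^2 (proportion of variance explained)
s_1$fstatistic  # F-statistic (value, numerator df, denominator df)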
Standardizing model variables
Standardizing model variables (both criterion and predictor) with the scale() function:
# z-Standardization:
z_childHeight <- scale(GF$childHeight)
# Histogram:
hist(z_childHeight, col = "grey90", border = "white")
# Show mean value and +/- 1 SD:
abline(v = mean(z_childHeight), col = "deeppink", lwd = 2, lty = 1)
abline(v = mean(z_childHeight) - sd(z_childHeight), col = "deepskyblue", lwd = 1, lty = 2)
abline(v = mean(z_childHeight) + sd(z_childHeight), col = "deepskyblue", lwd = 1, lty = 2)
z_midparentHeight <- scale(GF$midparentHeight)
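As a quick check (a sketch, assuming the usual definition of z-scores), we can verify what scale() computes here:
# Sketch: Verify that scale() computes z = (x - mean(x)) / sd(x):
z_manual <- (GF$childHeight - mean(GF$childHeight)) / sd(GF$childHeight)
all.equal(as.numeric(z_childHeight), z_manual)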
Standardized model (both criterion and predictor):
# Linear regression model (standardized):
lr_2 <- lm(scale(childHeight) ~ scale(midparentHeight), data = GF)
summary(lr_2)
#>
#> Call:
#> lm(formula = scale(childHeight) ~ scale(midparentHeight), data = GF)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -2.5025 -0.7540 -0.0602 0.7812 3.2646
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 7.570e-16 3.101e-02 0.00 1
#> scale(midparentHeight) 3.209e-01 3.102e-02 10.35 <2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 0.9476 on 932 degrees of freedom
#> Multiple R-squared: 0.103, Adjusted R-squared: 0.102
#> F-statistic: 107 on 1 and 932 DF, p-value: < 2.2e-16
Interpretation:
Standardizing the predictor and criterion variables changes their scale and facilitates the interpretation of the coefficients. Importantly, we now need to interpret values in units of standard deviations.
In the \(z\)-standardized model lr_2, the intercept value is (near) zero and the regression coefficient of the linear predictor corresponds to the Pearson correlation cor_1 (computed above):
near(coefficients(lr_2)[1], 0)
#> (Intercept)
#> TRUE
near(coefficients(lr_2)[2], cor_1)
#> scale(midparentHeight)
#> TRUE
By comparing the summary() outputs for both models, we see that the model’s residual standard error (RSE) changes when rescaling the criterion variable. However, \(R^2\) and \(F\)-statistic are unchanged by the standardization from lr_1 to lr_2.
Key elements of a model
summary(lr_1)
#>
#> Call:
#> lm(formula = childHeight ~ midparentHeight, data = GF)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -8.9570 -2.6989 -0.2155 2.7961 11.6848
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 22.63624 4.26511 5.307 1.39e-07 ***
#> midparentHeight 0.63736 0.06161 10.345 < 2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 3.392 on 932 degrees of freedom
#> Multiple R-squared: 0.103, Adjusted R-squared: 0.102
#> F-statistic: 107 on 1 and 932 DF, p-value: < 2.2e-16
# Note:
# str(lr_1) # shows list structure with 12 elements
# Important elements: ------
coefficients(lr_1)
#> (Intercept) midparentHeight
#> 22.6362405 0.6373609
lr_1$coefficients
#> (Intercept) midparentHeight
#> 22.6362405 0.6373609
# Different types of values: -----
# in data:
GF$midparentHeight # predictor values
#> [1] 75.43 75.43 75.43 75.43 73.66 73.66 73.66 73.66 72.06 72.06 72.06 72.06
#> [13] 72.06 72.06 72.06 69.09 69.09 69.09 69.09 69.09 69.09 73.72 73.72 73.72
#> [25] 73.72 73.72 73.72 73.72 72.91 72.91 72.91 72.89 72.37 70.48 70.48 70.48
#> [37] 70.48 70.48 70.48 70.48 70.48 69.94 72.68 72.68 72.68 72.68 72.41 72.41
#> [49] 72.41 71.60 71.60 71.60 71.60 71.60 71.60 71.60 71.60 71.60 71.33 71.33
#> [61] 71.33 71.33 71.33 71.33 71.06 71.06 71.06 70.62 73.61 73.61 73.61 73.61
#> [73] 73.61 73.61 73.61
#> [ reached getOption("max.print") -- omitted 859 entries ]
GF$childHeight # criterion values
#> [1] 73.2 69.2 69.0 69.0 73.5 72.5 65.5 65.5 71.0 68.0 70.5 68.5 67.0 64.5 63.0
#> [16] 72.0 69.0 68.0 66.5 62.5 62.5 69.5 76.5 74.0 73.0 73.0 70.5 64.0 70.5 68.0
#> [31] 66.0 66.0 65.5 74.0 70.0 68.0 67.0 67.0 66.0 63.5 63.0 65.0 71.0 62.0 68.0
#> [46] 67.0 71.0 70.5 66.7 72.0 70.5 70.2 70.2 69.2 68.7 66.5 64.5 63.5 74.0 73.0
#> [61] 71.5 62.5 66.5 62.3 66.0 64.5 64.0 62.7 73.2 73.0 72.7 70.0 69.0 68.5 68.0
#> [ reached getOption("max.print") -- omitted 859 entries ]
# in model:
lr_1$fitted.values
#> 1 2 3 4 5 6 7 8
#> 70.71237 70.71237 70.71237 70.71237 69.58424 69.58424 69.58424 69.58424
#> 9 10 11 12 13 14 15 16
#> 68.56447 68.56447 68.56447 68.56447 68.56447 68.56447 68.56447 66.67150
#> 17 18 19 20 21 22 23 24
#> 66.67150 66.67150 66.67150 66.67150 66.67150 69.62249 69.62249 69.62249
#> 25 26 27 28 29 30 31 32
#> 69.62249 69.62249 69.62249 69.62249 69.10622 69.10622 69.10622 69.09348
#> 33 34 35 36 37 38 39 40
#> 68.76205 67.55744 67.55744 67.55744 67.55744 67.55744 67.55744 67.55744
#> 41 42 43 44 45 46 47 48
#> 67.55744 67.21326 68.95963 68.95963 68.95963 68.95963 68.78754 68.78754
#> 49 50 51 52 53 54 55 56
#> 68.78754 68.27128 68.27128 68.27128 68.27128 68.27128 68.27128 68.27128
#> 57 58 59 60 61 62 63 64
#> 68.27128 68.27128 68.09919 68.09919 68.09919 68.09919 68.09919 68.09919
#> 65 66 67 68 69 70 71 72
#> 67.92711 67.92711 67.92711 67.64667 69.55238 69.55238 69.55238 69.55238
#> 73 74 75
#> 69.55238 69.55238 69.55238
#> [ reached getOption("max.print") -- omitted 859 entries ]
lr_1$residuals
#> 1 2 3 4 5 6
#> 2.48762699 -1.51237301 -1.71237301 -1.71237301 3.91575578 2.91575578
#> 7 8 9 10 11 12
#> -4.08424422 -4.08424422 2.43553321 -0.56446679 1.93553321 -0.06446679
#> 13 14 15 16 17 18
#> -1.56446679 -4.06446679 -5.56446679 5.32849508 2.32849508 1.32849508
#> 19 20 21 22 23 24
#> -0.17150492 -4.17150492 -4.17150492 -0.12248587 6.87751413 4.37751413
#> 25 26 27 28 29 30
#> 3.37751413 3.37751413 0.87751413 -5.62248587 1.39377645 -1.10622355
#> 31 32 33 34 35 36
#> -3.10622355 -3.09347633 -3.26204866 6.44256343 2.44256343 0.44256343
#> 37 38 39 40 41 42
#> -0.55743657 -0.55743657 -1.55743657 -4.05743657 -4.55743657 -2.21326168
#> 43 44 45 46 47 48
#> 2.04036946 -6.95963054 -0.95963054 -1.95963054 2.21245690 1.71245690
#> 49 50 51 52 53 54
#> -2.08754310 3.72871923 2.22871923 1.92871923 1.92871923 0.92871923
#> 55 56 57 58 59 60
#> 0.42871923 -1.77128077 -3.77128077 -4.77128077 5.90080667 4.90080667
#> 61 62 63 64 65 66
#> 3.40080667 -5.59919333 -1.59919333 -5.79919333 -1.92710589 -3.42710589
#> 67 68 69 70 71 72
#> -3.92710589 -4.94666709 3.64762382 3.44762382 3.14762382 0.44762382
#> 73 74 75
#> -0.55237618 -1.05237618 -1.55237618
#> [ reached getOption("max.print") -- omitted 859 entries ]
# Note:
# (1) The fitted.values are equal to the predicted criterion values, given
# the raw values of the predictor and the model (coefficients):
model_predictions <- lr_1$coefficients[1] + lr_1$coefficients[2] * GF$midparentHeight
all.equal(model_predictions, lr_1$fitted.values, check.attributes = FALSE)
#> [1] TRUE
# (2) The residuals are equal to raw values (of criterion) minus fitted.values:
model_errors <- GF$childHeight - lr_1$fitted.values
all.equal(model_errors, lr_1$residuals)
#> [1] TRUE
Fitting vs. predicting
Fitting a model to an entire dataset does not really “predict” anything. Instead, we fit a linear model to the data to construct a simple (model-based) explanation of the dataset.
The model’s “predictions” are its fitted values:
# The predict() function uses a model to generate predictions: ------
predict(object = lr_1) # with data used to fit the model: fitted.values == predicted values
#> 1 2 3 4 5 6 7 8
#> 70.71237 70.71237 70.71237 70.71237 69.58424 69.58424 69.58424 69.58424
#> 9 10 11 12 13 14 15 16
#> 68.56447 68.56447 68.56447 68.56447 68.56447 68.56447 68.56447 66.67150
#> 17 18 19 20 21 22 23 24
#> 66.67150 66.67150 66.67150 66.67150 66.67150 69.62249 69.62249 69.62249
#> 25 26 27 28 29 30 31 32
#> 69.62249 69.62249 69.62249 69.62249 69.10622 69.10622 69.10622 69.09348
#> 33 34 35 36 37 38 39 40
#> 68.76205 67.55744 67.55744 67.55744 67.55744 67.55744 67.55744 67.55744
#> 41 42 43 44 45 46 47 48
#> 67.55744 67.21326 68.95963 68.95963 68.95963 68.95963 68.78754 68.78754
#> 49 50 51 52 53 54 55 56
#> 68.78754 68.27128 68.27128 68.27128 68.27128 68.27128 68.27128 68.27128
#> 57 58 59 60 61 62 63 64
#> 68.27128 68.27128 68.09919 68.09919 68.09919 68.09919 68.09919 68.09919
#> 65 66 67 68 69 70 71 72
#> 67.92711 67.92711 67.92711 67.64667 69.55238 69.55238 69.55238 69.55238
#> 73 74 75
#> 69.55238 69.55238 69.55238
#> [ reached getOption("max.print") -- omitted 859 entries ]
We can, however, also use a model for genuine predictions or predicting criterion values for new data values (i.e., predictor values not used for model fitting).
To do so, the predict() function receives both a model and new data values (as the object and newdata arguments, respectively):
# Using model to predict criterion values for new data (predictor values):
(my_heights <- seq(50, 80, by = 5))
#> [1] 50 55 60 65 70 75 80
predict(object = lr_1, newdata = data.frame(midparentHeight = my_heights))
#> 1 2 3 4 5 6 7
#> 54.50429 57.69109 60.87789 64.06470 67.25150 70.43831 73.62511
Note that the newdata argument requires a data frame with the same variable names as those used as predictors in the model.
To illustrate the difference between explaining existing data and genuinely predicting new data, we can split our data into two parts:
- GF_train contains \(p = 50\)% of the observations (rows) of GF;
- GF_test contains the other \((1 - p) = 50\)% of the observations (rows) of GF.
We use the sample() function to draw a random subset of rows from GF and then numerical indexing on GF to extract two equally sized subsets:
# Sample 50% of 1:nrow(GF) without replacement:
N <- nrow(GF)
set.seed(246) # reproducible randomness
ix_train <- sort(sample(x = 1:N, size = N/2, replace = FALSE))

# Split data GF into 2 complementary subsets:
GF_train <- GF[ ix_train, ]
GF_test  <- GF[-ix_train, ]

# Check subset sizes:
dim(GF_train) # 467 x 8
#> [1] 467 8
dim(GF_test) # 467 x 8
#> [1] 467 8
Construct a model by fitting a simple LR to GF_train:
# Linear regression model (unstandardized):
lr_3 <- lm(childHeight ~ midparentHeight, data = GF_train)
summary(lr_3)
#>
#> Call:
#> lm(formula = childHeight ~ midparentHeight, data = GF_train)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -8.311 -2.778 -0.100 2.856 11.375
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 23.12173 6.43589 3.593 0.000362 ***
#> midparentHeight 0.63486 0.09292 6.832 2.63e-11 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 3.481 on 465 degrees of freedom
#> Multiple R-squared: 0.09123, Adjusted R-squared: 0.08928
#> F-statistic: 46.68 on 1 and 465 DF, p-value: 2.629e-11
# Alternative direction: ------
# Linear regression model (unstandardized):
lr_3b <- lm(childHeight ~ midparentHeight, data = GF_test)
summary(lr_3b)
#>
#> Call:
#> lm(formula = childHeight ~ midparentHeight, data = GF_test)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -8.656 -2.561 -0.276 2.610 9.316
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 22.59630 5.62405 4.018 6.85e-05 ***
#> midparentHeight 0.63342 0.08127 7.794 4.30e-14 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 3.278 on 465 degrees of freedom
#> Multiple R-squared: 0.1155, Adjusted R-squared: 0.1136
#> F-statistic: 60.74 on 1 and 465 DF, p-value: 4.298e-14
Using the model lr_3 for predicting criterion values in the GF_test data:
GF_test_predictions <- predict(lr_3, newdata = GF_test)
Evaluating the accuracy of model predictions
The accuracy of linear regression models is typically evaluated by two measures that are based on the TSS and RSS measures from above (see James et al., 2021, p. 69ff., for details):
- The residual standard error (RSE) estimates the standard deviation of the irreducible error \(\epsilon\). It quantifies the extent to which our predictions would miss the true values on average, even if we knew the true LR line (i.e., the population values \(\beta_0\) and \(\beta_1\)).
The residual standard error (RSE) is based on the residual sum of squares (RSS, see above and James et al., 2021, p. 69):
\[\text{RSS} = \sum_{i=1}^{n}{(y_i - \hat{y_i})^2}\]
and
\[\text{RSE} = \sqrt{\frac{1}{n-2} \cdot \text{RSS}}\]
As RSE quantifies the lack of fit of a model in units of \(Y\), lower values are better (ideally, \(RSE \rightarrow 0\)).
- The proportion of variance explained (\(R^2\)): Whereas RSE provides an absolute measure of model fit (in units of the criterion \(Y\)), the \(R^2\) statistic provides a relative measure of fit as the proportion of the total squared deviation that is accounted for by the model, or the proportion of variance explained:
\[R^2 = \frac{\text{TSS} - \text{RSS}}{\text{TSS}}\]
with TSS defined as (see above):
\[\text{TSS} = \sum_{i=1}^{n}{(y_i - \bar{y})^2}\]
In contrast to error measures, \(R^2\) ranges from \(0\) to \(1\), with higher values indicating better model fits (ideally, \(R^2 \rightarrow 1\)).
We can easily define R functions for the residual sum of squares (RSS), the residual standard error (RSE), and the \(R^2\)-statistic, when providing both the true and predicted values as arguments:
# Measures of model accuracy:
# (see @JamesEtAl2021, p. 69ff.):
# Residual standard error (RSE): ----
RSE <- function(true_vals, pred_vals){

  n   <- length(true_vals)
  rss <- RSS(true_vals, pred_vals) # defined above

  sqrt(1/(n - 2) * rss)
}

# R^2 statistic: ----
Rsq <- function(true_vals, pred_vals){

  tss <- TSS(true_vals)            # defined above
  rss <- RSS(true_vals, pred_vals) # defined above

  (tss - rss)/tss
}
Checking the accuracy of the original model lr_1:
RSE(true_vals = GF$childHeight, pred_vals = lr_1$fitted.values)
#> [1] 3.391713
Rsq(true_vals = GF$childHeight, pred_vals = lr_1$fitted.values)
#> [1] 0.1030088
# summary(lr_1) # contains the same values
Comparing to the accuracy of the standardized model lr_2:
RSE(true_vals = scale(GF$childHeight), pred_vals = lr_2$fitted.values)
#> [1] 0.9476041
Rsq(true_vals = scale(GF$childHeight), pred_vals = lr_2$fitted.values)
#> [1] 0.1030088
# summary(lr_2) # contains the same values
We see that the residual standard error (RSE) changes when re-scaling the criterion, but \(R^2\) and \(F\)-statistic remain the same.
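A quick check of this relation (a sketch): Dividing the raw model’s RSE by the standard deviation of the criterion should yield (approximately) the RSE of the standardized model:
# Sketch: Raw RSE, rescaled by the SD of the criterion variable:
RSE(true_vals = GF$childHeight, pred_vals = lr_1$fitted.values) / sd(GF$childHeight)
# compare with the RSE of lr_2 reported above (0.9476)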
Checking the accuracy of the model lr_3 (fitted to the GF_train subset):
# True vs. predicted values (training data):
true_train_vals <- GF_train$childHeight
true_pred_vals  <- lr_3$fitted.values
# Model accuracy:
RSE(true_train_vals, true_pred_vals)
#> [1] 3.481212
Rsq(true_train_vals, true_pred_vals)
#> [1] 0.09122942
# summary(lr_3)
Next: Checking prediction accuracy for unseen TEST data GF_test…
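A minimal sketch of how this check could look (reusing the accuracy functions and the GF_test_predictions computed above; the variable names are placeholders, and no outputs are shown):
# Sketch: True vs. predicted values (test data):
true_test_vals <- GF_test$childHeight
test_pred_vals <- GF_test_predictions  # predict(lr_3, newdata = GF_test), see above

# Prediction accuracy for unseen data:
RSE(true_test_vals, test_pred_vals)
Rsq(true_test_vals, test_pred_vals)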
Distinguish:
explaining old data vs. predicting new data
ToDo:
Next steps:
- Elements of a model (e.g., str())
- Fitting vs. predicting (explaining old vs. predicting new data)
- Regression to the mean (Galton’s discovery/point)
- Regression to the mean (Galton’s discovery/point)
- Comparing models (ANOVA)
- Multiple predictors (collinearity?)
18.3.4 Comparing models
Using the GaltonFamilies (GF) data, we can try alternative predictors:
Rather than using the composite variable midparentHeight as our predictor, we can compare and contrast regressions using either father or mother as a single predictor (using standardized models):
# Using data from GF:
# Above, we computed:
# (0) Linear regression model on midparentHeight (standardized):
summary(lr_2)
#>
#> Call:
#> lm(formula = scale(childHeight) ~ scale(midparentHeight), data = GF)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -2.5025 -0.7540 -0.0602 0.7812 3.2646
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 7.570e-16 3.101e-02 0.00 1
#> scale(midparentHeight) 3.209e-01 3.102e-02 10.35 <2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 0.9476 on 932 degrees of freedom
#> Multiple R-squared: 0.103, Adjusted R-squared: 0.102
#> F-statistic: 107 on 1 and 932 DF, p-value: < 2.2e-16
# Consider 2 alternatives:
# (1) Linear regression model on father height (standardized):
lr_2_f <- lm(scale(childHeight) ~ scale(father), data = GF)
summary(lr_2_f)
#>
#> Call:
#> lm(formula = scale(childHeight) ~ scale(father), data = GF)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -2.8737 -0.7648 -0.0636 0.7477 3.3374
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) -3.581e-16 3.156e-02 0.000 1
#> scale(father) 2.660e-01 3.158e-02 8.425 <2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 0.9645 on 932 degrees of freedom
#> Multiple R-squared: 0.07078, Adjusted R-squared: 0.06978
#> F-statistic: 70.99 on 1 and 932 DF, p-value: < 2.2e-16
# (2) Linear regression model on mother height (standardized):
lr_2_m <- lm(scale(childHeight) ~ scale(mother), data = GF)
summary(lr_2_m)
#>
#> Call:
#> lm(formula = scale(childHeight) ~ scale(mother), data = GF)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -2.6632 -0.7326 -0.0405 0.8001 3.3436
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 4.653e-16 3.207e-02 0.000 1
#> scale(mother) 2.013e-01 3.209e-02 6.275 5.36e-10 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 0.9801 on 932 degrees of freedom
#> Multiple R-squared: 0.04053, Adjusted R-squared: 0.0395
#> F-statistic: 39.37 on 1 and 932 DF, p-value: 5.362e-10
Both alternatives show significant correlations between predictor and criterion, but appear to be worse than the combined model lr_2. This raises the question:
- Are the models different from each other?
Comparing models:
First, ensure that models of interest are based on the same cases (i.e., no missing or NA values on individual variables):
sum(is.na(GF$midparentHeight))
#> [1] 0
sum(is.na(GF$father))
#> [1] 0
sum(is.na(GF$mother))
#> [1] 0
No missing values here, so we can compare model fits. (Otherwise, we would need to filter the data to only contain cases with values on all predictors.)
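A minimal sketch of such a filtering step (assuming base R’s complete.cases(); the names vars and GF_complete are placeholders):
# Sketch: Restrict the data to cases with values on all relevant variables:
vars <- c("childHeight", "midparentHeight", "father", "mother")
GF_complete <- GF[complete.cases(GF[ , vars]), ]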
Comparing models essentially asks: Is one model better than another? To answer this, we first need to ask: Better in which respect? Explanatory power in terms of accuracy vs. other factors (complexity)?
Here: Complexity is the same, hence we can only compare these models in terms of accuracy. Two possible approaches:
- Amount of variance explained: Does one model explain more variance than another?
# Comparing models: Rsq
summary(lr_2)$r.squared
#> [1] 0.1030088
summary(lr_2_f)$r.squared
#> [1] 0.0707765
summary(lr_2_m)$r.squared
#> [1] 0.04053053
- \(F\)-test between models: Differences in accuracy between models (based on the same data) can be checked by conducting an \(F\)-test between models, using the anova() function:
# Comparing models: F-test between models
anova(lr_2, lr_2_f)
#> Analysis of Variance Table
#>
#> Model 1: scale(childHeight) ~ scale(midparentHeight)
#> Model 2: scale(childHeight) ~ scale(father)
#> Res.Df RSS Df Sum of Sq F Pr(>F)
#> 1 932 836.89
#> 2 932 866.97 0 -30.073
anova(lr_2, lr_2_m)
#> Analysis of Variance Table
#>
#> Model 1: scale(childHeight) ~ scale(midparentHeight)
#> Model 2: scale(childHeight) ~ scale(mother)
#> Res.Df RSS Df Sum of Sq F Pr(>F)
#> 1 932 836.89
#> 2 932 895.19 0 -58.292
anova(lr_2_f, lr_2_m)
#> Analysis of Variance Table
#>
#> Model 1: scale(childHeight) ~ scale(father)
#> Model 2: scale(childHeight) ~ scale(mother)
#> Res.Df RSS Df Sum of Sq F Pr(>F)
#> 1 932 866.97
#> 2 932 895.19 0 -28.22
Interpretation:
The three models do not differ much in overall fit, but lr_2 explains a higher proportion of variance than the others. (Note that these models are not nested, which is why the anova() comparisons above show 0 Df and no actual \(F\)-test.)
Can we boost the performance by using more complex models (e.g., including multiple predictors)? Next: Creating linear models with more than one predictor…
18.3.5 Multiple predictors
Using both father and mother as individual predictors:
# Using data from GF:
# Above, we computed:
# (0) Linear regression model on midparentHeight (standardized):
summary(lr_2)
#>
#> Call:
#> lm(formula = scale(childHeight) ~ scale(midparentHeight), data = GF)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -2.5025 -0.7540 -0.0602 0.7812 3.2646
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 7.570e-16 3.101e-02 0.00 1
#> scale(midparentHeight) 3.209e-01 3.102e-02 10.35 <2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 0.9476 on 932 degrees of freedom
#> Multiple R-squared: 0.103, Adjusted R-squared: 0.102
#> F-statistic: 107 on 1 and 932 DF, p-value: < 2.2e-16
# Consider alternative model with 2 predictors:
# (1) Linear regression model on father and mother height (standardized):
lr_2_fm <- lm(scale(childHeight) ~ scale(father) + scale(mother), data = GF)
summary(lr_2_fm)
#>
#> Call:
#> lm(formula = scale(childHeight) ~ scale(father) + scale(mother),
#> data = GF)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -2.5472 -0.7658 -0.0609 0.7728 3.2671
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 1.059e-16 3.098e-02 0.000 1
#> scale(father) 2.548e-01 3.106e-02 8.204 7.66e-16 ***
#> scale(mother) 1.859e-01 3.106e-02 5.987 3.05e-09 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 0.9469 on 931 degrees of freedom
#> Multiple R-squared: 0.1052, Adjusted R-squared: 0.1033
#> F-statistic: 54.74 on 2 and 931 DF, p-value: < 2.2e-16
The overall model fit (as measured by RSE and \(R^2\)) is almost identical, which is not surprising, given that the midparentHeight predictor of lr_2 is a linear combination of mother and father.
Deciding between models is tricky:
Conceptually, they make different points (e.g., lr_2_fm quantifies the partial relationships of childHeight to both the father and mother predictors, while lr_2 glosses over the differences between parents).
At the same time, lr_2_fm is more complex without gaining anything in terms of accuracy.
Comparing model fit while also considering model complexity: Using information criteria (e.g., Akaike’s Information Criterion, AIC):
AIC(lr_2, lr_2_fm)
#> df AIC
#> lr_2 3 2554.042
#> lr_2_fm 4 2553.733
Here, hardly any difference.
Generally, models with lower AIC are preferred.
Hence, both conceptual clarity and AIC suggest that lr_2_fm may be better here (although our models essentially make the same predictions).
Using multiple predictors raises an important new question:
- Are predictors highly correlated?
Here: Are the heights of father and mother highly correlated? This would be the case if height were a systematic criterion in human mate search.
Important assumption of LR with multiple predictors: Avoiding collinearity between predictors (so-called multicollinearity).
- Correlation between predictor variables:
cor(GF$father, GF$mother, method = "pearson")
#> [1] 0.06036612
Value close to zero: No substantial correlation. (Incidentally, this point was addressed (and negated) by Galton as well.)
- Checking multicollinearity of a model: The variance inflation factor (VIF) for each variable should have a value close to 1, and not above 10:
library(car)
car::vif(lr_2_fm)
#> scale(father) scale(mother)
#> 1.003657 1.003657
Here: No indication of multicollinearity or variance inflation.
18.3.5.1 Practice
Here are some tasks to check your understanding of predicting continuous variables using linear regression models:
- Scatterplot and linear model: Create the scatterplot and the corresponding linear regression line for the GaltonFamilies data (from HistData) and the standardized linear regression model lr_2 (for explaining the criterion childHeight by one predictor midparentHeight).
Solution:
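One possible solution (a sketch; the intermediate data frame GF_z and the plot styling are assumptions and may differ from the book’s version):
# Data frame with standardized variables (names are placeholders):
GF_z <- data.frame(z_mpH = as.numeric(scale(GF$midparentHeight)),
                   z_cH  = as.numeric(scale(GF$childHeight)))

# Scatterplot with the corresponding linear regression line:
ggplot(GF_z, aes(x = z_mpH, y = z_cH)) +
  geom_point(color = "black", alpha = 1/4, size = 2) +
  geom_smooth(method = "lm", se = FALSE, color = "deepskyblue") +
  labs(title = "Height of child by height of midparent (standardized values)",
       x = "Height of midparent (z)", y = "Height of child (z)",
       caption = "Data from HistData::GaltonFamilies") +
  theme_ds4psy()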
- Cross-validation: Reverse the roles of GF_train and GF_test by first fitting a linear model to the GF_test data and then predicting new values for the GF_train data (see the sketch below).
18.3.6 Older parts
The following notes are based on Chapter 23: Model basics of the r4ds book (Wickham & Grolemund, 2017). This chapter — and the following ones on 24: Model building and 25: Many models — provide a decent introduction to general modeling principles (e.g., starting with scatterplot, model evaluation by distance function, random vs. optimized models, different optimization methods, different types of linear models, e.g., with categorical predictors and interactions). However, the statistical principles are somewhat obscured by using tidyverse syntax and the modelr package. Hence, we recommend translating the important concepts of the chapter into base R.
Introduction
The two parts of modeling:
- Defining a family of models that express a precise, but generic, pattern that you want to capture (e.g., as an equation that contains free parameters). For instance, linear models (that predict y from x) have the general form y = p1 + p2 * x and two parameters: an intercept p1 and a slope p2.
- Generating a fitted model by finding the model from the family that is the closest to our data (assuming some distance measure).
Thus, the “best” model is not “true,” but the “closest” based on some criterion of distance.
Code from Chapter 23: Model basics of the r4ds book (Wickham & Grolemund, 2017):
# setup:
library(tidyverse)
library(modelr)
library(ds4psy)
library(unikn)
options(na.action = na.warn)
# data:
dat <- modelr::sim1
dim(dat) # 30 x 2
#> [1] 30 2
Figure 18.1 shows a scatterplot of the data:

Figure 18.1: A scatterplot of the to-be-predicted data.
Random models
Generate n linear models of form y = p1 + p2 * x:
n <- 100
set.seed(246) # for reproducible randomness.

# Sample n random values for p1 and p2:
models <- tibble(
  p1 = runif(n, -20, 20),
  p2 = runif(n, -5, 5)
)
ggplot(sim1, aes(x, y)) +
geom_abline(aes(intercept = p1, slope = p2), data = models, alpha = 1/5) +
geom_point(color = "steelblue", size = 2, alpha = 2/3) +
labs(title = "The data to predict and random linear models", caption = "Data from modelr::sim1") +
theme_ds4psy()
Note usage of set.seed() for reproducible randomness.
Some random linear models appear to be ok, but most of them are pretty bad (i.e., do not describe the locations and relations of the data points very well).
We can improve the quality of our models by narrowing down the range of parameters from which we sample:
set.seed(123) # for reproducible randomness.
# Sample n random values for p1 and p2:
models <- tibble(
  p1 = runif(n, 0, 10),
  p2 = runif(n, 0, 5)
)
ggplot(sim1, aes(x, y)) +
geom_abline(aes(intercept = p1, slope = p2), data = models, alpha = 1/5) +
geom_point(color = "steelblue", size = 2, alpha = 2/3) +
labs(title = "The data to predict and random linear models", caption = "Data from modelr::sim1") +
theme_ds4psy()
The new set of models is much better. But to determine which is the best candidate among them, we need some way of comparing them to the data in a quantitative fashion.
To measure the performance of a model, we need to quantify the distance between the model and the actual data points.
A function that computes the predictions of a model (i.e., the model’s y values for some input data data$x), given the model parameters mp:
# mp ... model parameters

# Model predictions:
model1 <- function(model, data) {
  model[1] + model[2] * data$x
}
# Generate predictions for various model parameters and dat:
model1(c(6, 3), dat)
#> [1] 9 9 9 12 12 12 15 15 15 18 18 18 21 21 21 24 24 24 27 27 27 30 30 30 33
#> [26] 33 33 36 36 36
model1(c(6, 2), dat)
#> [1] 8 8 8 10 10 10 12 12 12 14 14 14 16 16 16 18 18 18 20 20 20 22 22 22 24
#> [26] 24 24 26 26 26
model1(c(5, 3), dat)
#> [1] 8 8 8 11 11 11 14 14 14 17 17 17 20 20 20 23 23 23 26 26 26 29 29 29 32
#> [26] 32 32 35 35 35
model1(c(5, 2), dat)
#> [1] 7 7 7 9 9 9 11 11 11 13 13 13 15 15 15 17 17 17 19 19 19 21 21 21 23
#> [26] 23 23 25 25 25
Adding a distance function (the root of the mean-squared deviation): The following function computes the differences between actual and predicted values, squares them, averages them, and then takes the square root:
distance <- function(model, data) {
  diff <- data$y - model1(model, data)
  sqrt(mean(diff ^ 2))
}
# Measure distance for models and dat:
distance(c(6, 3), dat)
#> [1] 7.803283
distance(c(6, 2), dat)
#> [1] 2.60544
distance(c(5, 3), dat)
#> [1] 6.920964
distance(c(5, 2), dat)
#> [1] 2.190166
Alternative distance function (using 2 model parameters as two distinct arguments and only using the data of dat):
dist <- function(p1, p2) {
  distance(c(p1, p2), data = dat)
}
# Checks:
all.equal(dist(6, 3), distance(c(6, 3), dat))
#> [1] TRUE
all.equal(dist(6, 2), distance(c(6, 2), dat))
#> [1] TRUE
We can now compute the distance of each model in models:
# using a for loop:
md <- rep(NA, nrow(models))

for (i in 1:nrow(models)){
  md[i] <- dist(models$p1[i], models$p2[i])
}
md

# Or directly:
md_2 <- purrr::map2_dbl(models$p1, models$p2, dist)
# Verify equality:
all.equal(md, md_2)
In one step: Compute the distance of all models (generated above) and arrange by lowest distance:
models <- models %>%
  mutate(dist = purrr::map2_dbl(p1, p2, dist)) %>%
  arrange(dist)

models # with a column for dist
#> # A tibble: 100 × 3
#> p1 p2 dist
#> <dbl> <dbl> <dbl>
#> 1 4.15 2.05 2.13
#> 2 1.88 2.33 2.41
#> 3 2.32 2.20 2.43
#> 4 6.53 1.61 2.48
#> 5 5.51 2.05 2.50
#> 6 6.93 1.60 2.50
#> # … with 94 more rows
We can now select the top_n models (i.e., those with the lowest overall distance to the sim1 data) and compare them to the data:
# Plot the top models:
top_n <- 10

# Subset of models to plot:
m2p <- models[1:top_n, ] # base R solution (given that models are sorted by dist)
m2p <- dplyr::filter(models, rank(dist) <= top_n) # dplyr solution
# m2p

pt <- paste0("The data to predict and the top ", top_n, " random linear models")
ggplot(sim1, aes(x, y)) +
geom_abline(aes(intercept = p1, slope = p2, color = dist),
data = m2p, alpha = 4/5) +
geom_point(color = "steelblue", size = 2, alpha = 2/3) +
labs(title = pt, caption = "Data from modelr::sim1") +
scale_color_gradient(low = "gold", high = Bordeaux) +
theme_ds4psy()
An alternative way of looking at the models is to plot the parameter pair p1 and p2 of each model as a scatterplot. To gain an impression of a model’s performance, we can color the corresponding point by dist (i.e., models with a lower distance value will be colored in a lighter color). Additionally, the following command first draws the subset of the top_n models in a larger size and brighter color, effectively highlighting those models in the full scatterplot:
ggplot(models, aes(p1, p2)) +
geom_point(data = filter(models, rank(dist) <= top_n),
size = 4, color = Seegruen) +
geom_point(aes(color = dist), size = 2) +
scale_color_gradient(low = "gold", high = Bordeaux) +
theme_ds4psy()
Grid search
Rather than generating model parameters randomly (within constraints), we can be more systematic and generate an evenly spaced grid of parameter values (within the same constraints). The following creates such a grid search for \(20 \times 20\) parameter pairs:
grids <- expand.grid(
  p1 = seq(0, 10, length = 20),
  p2 = seq(0, 5, length = 20)
) %>%
  mutate(dist = purrr::map2_dbl(p1, p2, dist)) %>%
  arrange(dist)
# Data with grid of linear models:
ggplot(sim1, aes(x, y)) +
geom_abline(aes(intercept = p1, slope = p2, color = dist),
data = grids, alpha = 1/5) +
geom_point(color = "steelblue", size = 2, alpha = 2/3) +
labs(title = "Grid search of linear models",
caption = "Data from modelr::sim1") +
scale_color_gradient(low = "gold", high = Bordeaux) +
theme_ds4psy()
# Parameters (as scatterplot):
ggplot(grids, aes(p1, p2)) +
geom_point(data = filter(grids, rank(dist) <= top_n),
size = 4, colour = Seegruen) +
geom_point(aes(colour = dist), size = 2) +
scale_color_gradient(low = "gold", high = Bordeaux) +
theme_ds4psy()
We could repeat this process for a finer grid to incrementally zoom in to the best parameters.
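A sketch of such a zooming step (the ranges around the best parameter pair and the names best_p1, best_p2, and grids_2 are illustrative assumptions):
# Sketch: A finer grid around the best parameter pair found so far
# (grids is sorted by dist, so its first row holds the best pair):
best_p1 <- grids$p1[1]
best_p2 <- grids$p2[1]

grids_2 <- expand.grid(
  p1 = seq(best_p1 - 1.0, best_p1 + 1.0, length = 20),
  p2 = seq(best_p2 - 0.5, best_p2 + 0.5, length = 20)
) %>%
  mutate(dist = purrr::map2_dbl(p1, p2, dist)) %>%
  arrange(dist)

head(grids_2, 3)  # best parameter pairs on the finer grid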
Minimizing distance
An important part of quantitative modeling is to find parameters that would optimize (i.e., maximize or minimize) some measure.
There are multiple ways of doing this.
If we are considering a one-dimensional range of values, the optimize() function (of the stats package) searches this range (from some lower to some upper value) for a minimum (or maximum) of the first argument of some function f:
curve(expr = (x - 1/3)^2, from = -1, to = 2, col = "deepskyblue", lwd = 2, ylab = "y")
# As a function:
fn <- function(x, a) {(x - a)^2} # x as 1st parameter (to be optimized)

(mn <- optimize(fn, c(-1, 2), a = 1/3))
#> $minimum
#> [1] 0.3333333
#>
#> $objective
#> [1] 0
(mx <- optimize(fn, c(-1, 2), a = 1/3, maximum = TRUE))
#> $maximum
#> [1] 1.999924
#>
#> $objective
#> [1] 2.777525
(mx_2 <- optimize(fn, c(-1, 1), a = 1/3, maximum = TRUE)) # Note limited range
#> $maximum
#> [1] -0.999959
#>
#> $objective
#> [1] 1.777668
# Not as lists:
# (mx <- unlist(mx)) # as named vector
points(x = mn$minimum, y = mn$objective, col = "olivedrab3", pch = 16, cex = 2) # using mn
points(x = mx$maximum, y = mx$objective, col = "brown2", pch = 17, cex = 2) # using mx
points(x = mx_2$maximum, y = mx_2$objective, col = "steelblue", pch = 18, cex = 2) # using mx_2
Examples based on the documentation of optimize():
# Example 2:
fn <- function(x) {x^3 * (x - 1)}
plot(fn, col = "deepskyblue", lwd = 2)
# Minimum:
(mn <- optimize(fn, lower = 0, upper = 2))
#> $minimum
#> [1] 0.7500078
#>
#> $objective
#> [1] -0.1054687
points(x = mn$minimum, y = mn$objective, col = "olivedrab3", pch = 16, cex = 2) # using mn
# See where a function is evaluated:
optimize(function(x) x^3 * (print(x) - 1), lower = 0, upper = 2)
#> [1] 0.763932
#> [1] 1.236068
#> [1] 0.472136
#> [1] 0.6666667
#> [1] 0.7777988
#> [1] 0.7471242
#> [1] 0.7502268
#> [1] 0.7499671
#> [1] 0.7500078
#> [1] 0.7500485
#> [1] 0.7500078
#> $minimum
#> [1] 0.7500078
#>
#> $objective
#> [1] -0.1054687
# Warning:
# A "wrong" solution with an unlucky interval and piecewise constant f():
f  <- function(x) ifelse(x > -1, ifelse(x < 4, exp(-1/abs(x - 1)), 10), 10)
fp <- function(x) { print(x); f(x) }

plot(f, -2, 5, ylim = 0:1, col = "red")
optimize(fp, c(-4, 20)) # fails to find the minimum
#> [1] 5.167184
#> [1] 10.83282
#> [1] 14.33437
#> [1] 16.49845
#> [1] 17.83592
#> [1] 18.66253
#> [1] 19.1734
#> [1] 19.48913
#> [1] 19.68427
#> [1] 19.80487
#> [1] 19.8794
#> [1] 19.92547
#> [1] 19.95393
#> [1] 19.97153
#> [1] 19.9824
#> [1] 19.98913
#> [1] 19.99328
#> [1] 19.99585
#> [1] 19.99743
#> [1] 19.99841
#> [1] 19.99902
#> [1] 19.99939
#> [1] 19.99963
#> [1] 19.99977
#> [1] 19.99986
#> [1] 19.99991
#> [1] 19.99995
#> [1] 19.99995
#> $minimum
#> [1] 19.99995
#>
#> $objective
#> [1] 10
optimize(fp, c(-7, 20)) # finds minimum
#> [1] 3.313082
#> [1] 9.686918
#> [1] -0.6261646
#> [1] 1.244956
#> [1] 1.250965
#> [1] 0.771827
#> [1] 0.2378417
#> [1] 1.000451
#> [1] 0.9906964
#> [1] 0.9955736
#> [1] 0.9980122
#> [1] 0.9992315
#> [1] 0.9998411
#> [1] 0.9996083
#> [1] 0.9994644
#> [1] 0.9993754
#> [1] 0.9993204
#> [1] 0.9992797
#> [1] 0.9992797
#> $minimum
#> [1] 0.9992797
#>
#> $objective
#> [1] 0
optimize(fp, c(-2, 5))
#> [1] 0.6737621
#> [1] 2.326238
#> [1] -0.3475242
#> [1] 0.9935203
#> [1] 1.075221
#> [1] 1.034342
#> [1] 1.013931
#> [1] 1.003726
#> [1] 0.998623
#> [1] 1.001174
#> [1] 0.9998987
#> [1] 1.000538
#> [1] 1.000294
#> [1] 1.000143
#> [1] 1.00005
#> [1] 0.999992
#> [1] 0.9999513
#> [1] 0.9999513
#> $minimum
#> [1] 0.9999513
#>
#> $objective
#> [1] 0
Here, we explore two different methods (both from the stats package that comes with R):
General intuition: Given the scatterplot of our data (shown in Figure 18.1), we have a pretty good idea that we are looking for a linear model.
1. General-purpose optimization (using optim())
2. Fitting a linear model (using lm())
- ad 1: Using optim() to determine the best parameter pair:
Intuition: Given fn = distance and data, optim() will use an algorithm to find the parameter combination par that minimizes the value of distance.
# sim1
best <- optim(par = c(0, 0), fn = distance, data = sim1)
best$par # vector of the 2 best parameters (i.e., minimizing fn for data)
#> [1] 4.222248 2.051204
ggplot(sim1, aes(x, y)) +
geom_point(color = "steelblue", size = 2, alpha = 2/3) +
geom_abline(intercept = best$par[1], slope = best$par[2], color = "gold") +
labs(title = "The best linear model to predict the data",
caption = "Data from modelr::sim1") +
theme_ds4psy()
- ad 2: Using lm() for fitting a linear model:
sim1_mod <- lm(y ~ x, data = sim1)
coef(sim1_mod)
#> (Intercept) x
#> 4.220822 2.051533
For fitting other datasets and types of functions, using glm() or nlm() may be indicated.
18.3.7 ToDo 2
Quantitative prediction/regression: Generalize and use a more meaningful dataset:
Candidate data: father.son or galton (both from the UsingR package)
See Chapter 3: Linear regression of James et al. (2021).