Chapter 5 Linear Models
This chapter does not give a complete introduction to linear regression. It focuses on critical issues related to travel-urban form models. Only the VMT-urban form models are discussed here, because the response in ordinary linear models is a continuous variable; mode choice models are covered in the chapter on generalized linear models.
5.1 Assumptions
5.1.1 Additive and linearity
For linear models, the most important assumptions are that the relationship between the response and the predictors is additive and linear. The gravity law discloses that travel distance has a multiplicative (inverse) relationship with the ‘masses’ of two places. If population size is a representative of the built environment, the additive relationship will not hold. Previous studies also show that the effect sizes of the built environment with respect to travel are small and complex. There is not sufficient evidence to support or reject the linearity hypothesis.
5.1.2 Independent Identically Distributed (IID)
Another essential assumption is that the random errors are independent and identically distributed (IID). The random error, estimated by the residual, is the difference between the observed $y$ and the fitted $\hat y$, where $\hat y$ is a linear combination of the predictors $X$. The residuals represent the part that cannot be explained by the model.
$$e = y - \hat{y}$$
The expected value, the variances, and the covariances of the random errors are the first and second moments of the residuals. ‘Identical’ means that the random errors should have zero mean and constant variance. The homogeneity of variance is also called homoscedasticity.
$$E(\varepsilon) = 0, \qquad Var(\varepsilon) = \sigma^2$$
‘Independent’ requires that the random errors be uncorrelated. That is,
$$Cov[\varepsilon_i, \varepsilon_j] = 0, \quad i \neq j$$
Once the IID conditions are satisfied, the Gauss-Markov theorem (Theorem 5.1) shows that the least-squares method gives the minimum-variance unbiased estimators (MVUE), also called the best linear unbiased estimators (BLUE). These conditions are not strict and make regression methods widely applicable.
Theorem 5.1 (Gauss - Markov theorem) For the regression model (1.1) with the assumptions E(ε)=0, Var(ε)=σ2, and uncorrelated errors, the least-squares estimators are unbiased and have minimum variance when compared with all other unbiased estimators that are linear combinations of the yi. (Montgomery et al., 2021)
Another version is: under Models II - VII, if $\lambda'\beta$ is estimable and $\hat\beta$ is any solution to the normal equations, then $\lambda'\hat\beta$ is a linear unbiased estimator of $\lambda'\beta$ and, under Model II, the variance of $\lambda'\hat\beta$ is uniformly less than that of any other linear unbiased estimator of $\lambda'\beta$ (IX, Theorem E13, p. 38).
Unfortunately, many of the predictors are correlated with each other. Moreover, the observations from various cities, regions, or counties are unlikely to be identically distributed; this non-constant variance is called heteroscedasticity. Related contents are in the sections on diagnosis and validation.
5.1.3 Normality
When conducting hypothesis tests and constructing confidence intervals, the required assumption is $y|x \sim N(X\beta, \sigma^2 I)$. Maximum likelihood estimation also requires this assumption.
Evidence has demonstrated that travel distance is not normally distributed. Zipf’s law also suggests that travel distance follows a power-law distribution. Using a logarithmic transformation, the skewed distribution can be converted to an approximately normal distribution.
Some quantitative methods can examine the normality of the transformed distributions.
5.2 Estimations
5.2.1 Least Squares
- Ordinary Least Squares
The least-squares method can be used to estimate the coefficients $\beta$ in equation (1.1). The dimension of $X$ is $n \times p$, which means the data contain $n$ observations and $p-1$ predictors. The $p \times 1$ vector of least-squares estimators is denoted by $\hat\beta$, and the solution to the normal equations is
$$\hat\beta = (X'X)^{-1}X'y$$
and
$$\hat\sigma^2 = \frac{1}{n-p}(y - X\hat\beta)'(y - X\hat\beta)$$
This requires that $X'X$ be invertible; that is, the covariates are linearly independent and $X$ has rank $p$ (V., Definition, p. 22).
Given the estimated coefficients, the model can give the fitted values of response as:
$$\hat y = X\hat\beta = X(X'X)^{-1}X'y = Hy$$
where $H = X(X'X)^{-1}X'$ is the hat matrix and $e = y - \hat y = y - X\hat\beta = (I - H)y$.
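The formulas above translate directly into a few lines of linear algebra. Below is a minimal numpy sketch under simulated data; the design, coefficients, and sample size are illustrative assumptions, not part of any travel-urban form dataset.

```python
# Minimal OLS sketch: normal equations, hat matrix, residuals (simulated data).
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3                          # n observations, intercept plus p-1 = 2 predictors
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, 0.5, -0.3])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

XtX_inv = np.linalg.inv(X.T @ X)       # (X'X)^{-1}; assumes X has full column rank
beta_hat = XtX_inv @ X.T @ y           # least-squares estimator
H = X @ XtX_inv @ X.T                  # hat matrix
y_hat = H @ y                          # fitted values
e = y - y_hat                          # residuals, e = (I - H) y
sigma2_hat = e @ e / (n - p)           # unbiased estimate of sigma^2
print(beta_hat, sigma2_hat)
```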
- Generalized Least Squares
When the observations are not independent or have unequal variances, the covariance matrix of the errors is no longer the identity matrix, and the regression assumption $V[\varepsilon] = \sigma^2 I$ does not hold. Let $V$ be a known $n \times n$ positive definite matrix with $V[\varepsilon] = \sigma^2 V$. Then there exists an $n \times n$ symmetric matrix $K$ with rank $n$ such that $V = KK'$. Let
$$z = K^{-1}y, \quad B = K^{-1}X, \quad \text{and} \quad \eta = K^{-1}\varepsilon$$
The linear model becomes $z = B\beta + \eta$ with $V[\eta] = \sigma^2 I$. If the model is of full rank, that is, $\mathrm{rank}(X) = p$, then $X'V^{-1}X$ is invertible and the generalized least-squares solution is
$$\hat\beta_{GLS} = (X'V^{-1}X)^{-1}X'V^{-1}y$$
and
$$\hat\sigma^2_{GLS} = \frac{1}{n-p}(y - X\hat\beta_{GLS})'V^{-1}(y - X\hat\beta_{GLS})$$
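A sketch of GLS through the whitening transformation described above: $V$ is assumed known, and a matrix square root of $V$ plays the role of $K$ (a Cholesky factor is used here for convenience; the AR(1)-style covariance is purely illustrative).

```python
# GLS via whitening: z = K^{-1} y, B = K^{-1} X, then OLS on the whitened model.
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
rho = 0.6
V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # known covariance structure
L = np.linalg.cholesky(V)              # V = L L', L plays the role of K
eps = L @ rng.normal(size=n)
y = X @ np.array([2.0, 1.0]) + eps

K_inv = np.linalg.inv(L)
z, B = K_inv @ y, K_inv @ X            # whitened model: z = B beta + eta, Var(eta) = sigma^2 I
beta_gls = np.linalg.solve(B.T @ B, B.T @ z)
resid = y - X @ beta_gls
sigma2_gls = resid @ np.linalg.solve(V, resid) / (n - p)
print(beta_gls, sigma2_gls)
```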
5.2.2 Standardized coefficients
The value of $\hat\beta_j$ represents the average change in the mean of $Y$ for a one-unit change in $x_j$, holding all other predictors fixed. However, the units of the predictors $X$ are very different, so the values of the coefficients are not directly comparable.
Unit normal scaling or unit length scaling can convert $\hat\beta_j$ into dimensionless regression coefficients, called standardized regression coefficients. Let
$$z_{ij} = \frac{x_{ij} - \bar x_j}{\sqrt{\sum_{i=1}^n (x_{ij} - \bar x_j)^2}}, \qquad y_i^0 = \frac{y_i - \bar y}{\sqrt{\sum_{i=1}^n (y_i - \bar y)^2}}$$
$$\hat b = (Z'Z)^{-1}Z'y^0, \quad \text{or} \quad \hat b_j = \hat\beta_j\sqrt{\frac{\sum_{i=1}^n (x_{ij} - \bar x_j)^2}{\sum_{i=1}^n (y_i - \bar y)^2}}, \quad j = 1, 2, \dots, (p-1), \quad \text{and} \quad \hat\beta_0 = \bar y - \sum_{j=1}^{p-1}\hat\beta_j\bar x_j$$
Note that $Z'Z$ is the correlation matrix.
$$Z'Z = \begin{bmatrix} 1 & r_{12} & r_{13} & \dots & r_{1k} \\ r_{21} & 1 & r_{23} & \dots & r_{2k} \\ r_{31} & r_{32} & 1 & \dots & r_{3k} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ r_{k1} & r_{k2} & r_{k3} & \dots & 1 \end{bmatrix}, \qquad Z'y^0 = \begin{bmatrix} r_{1y} \\ r_{2y} \\ r_{3y} \\ \vdots \\ r_{ky} \end{bmatrix}$$
where
$$r_{ij} = \frac{\sum_{u=1}^n (x_{ui} - \bar x_i)(x_{uj} - \bar x_j)}{\sqrt{\sum_{u=1}^n (x_{ui} - \bar x_i)^2\sum_{u=1}^n (x_{uj} - \bar x_j)^2}}, \qquad r_{jy} = \frac{\sum_{u=1}^n (x_{uj} - \bar x_j)(y_u - \bar y)}{\sqrt{\sum_{u=1}^n (x_{uj} - \bar x_j)^2\sum_{u=1}^n (y_u - \bar y)^2}}$$
where $r_{ij}$ is the simple correlation between $x_i$ and $x_j$, and $r_{jy}$ is the simple correlation between $x_j$ and $y$.
It seems that standardized regression coefficients are comparable. However, the value of $\hat b_j$ still depends on the other predictors in the model. Therefore, comparisons across different models remain problematic.
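A short sketch of unit length scaling and the conversion $\hat b_j = \hat\beta_j\sqrt{S_{x_jx_j}/S_{yy}}$ given above. The data and scales are simulated assumptions chosen so that the two predictors have very different units.

```python
# Standardized coefficients via unit length scaling (simulated data).
import numpy as np

rng = np.random.default_rng(2)
n = 200
X = rng.normal(size=(n, 2)) * [1.0, 10.0]        # predictors on very different scales
y = 3 + 0.8 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(size=n)

Xc = np.column_stack([np.ones(n), X])
beta = np.linalg.lstsq(Xc, y, rcond=None)[0]      # ordinary (unit-dependent) coefficients

Z = (X - X.mean(axis=0)) / np.sqrt(((X - X.mean(axis=0)) ** 2).sum(axis=0))
y0 = (y - y.mean()) / np.sqrt(((y - y.mean()) ** 2).sum())
b = np.linalg.lstsq(Z, y0, rcond=None)[0]         # standardized coefficients

# equivalent conversion from the unscaled fit
b_from_beta = beta[1:] * np.sqrt(((X - X.mean(axis=0)) ** 2).sum(axis=0)
                                 / ((y - y.mean()) ** 2).sum())
print(b, b_from_beta)
```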
5.2.3 Elasticity
Definition: Elasticity is commonly used to determine the relative importance of a variable in terms of its influence on the dependent variable. It is generally interpreted as the percentage change in the dependent variable induced by a 1% change in the independent variable (McCarthy, 2001).
$$e_i = \beta_i\frac{X_i}{Y_i} \approx \frac{\partial Y_i}{\partial X_i}\cdot\frac{X_i}{Y_i}$$
| Model | Marginal Effects | Elasticity |
|---|---|---|
| Linear | $\beta$ | $\beta X_i / Y_i$ |
| Log-linear | $\beta Y_i$ | $\beta X_i$ |
| Linear-log | $\beta / X_i$ | $\beta / Y_i$ |
| Log-log | $\beta Y_i / X_i$ | $\beta$ |
| Logit | $\beta p_i(1-p_i)$ | $\beta X_i(1-p_i)$ |
| Poisson | $\beta\lambda_i$ | $\beta X_i$ |
| NB | $\beta\lambda_i$ | $\beta X_i$ |
It is strange that Reid Ewing and Cervero (2010) use the formula $\beta\bar X(1 - \bar Y_n)$ for the logit model. In the Poisson and negative binomial models, $\lambda_i = \exp[x_i'\beta]$ (Greene, 2018, eq. 18-17, 21). For the truncated Poisson model, $\delta_i = \frac{1 - P_{i,0} - \lambda_i P_{i,0}}{(1 - P_{i,0})^2}\cdot\lambda_i\beta$ (Greene, 2018, eq. 18-23). The hurdle model gives separate marginal (partial) effects (Greene, 2018, example 18.20).
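To make the table concrete, here is a sketch of point elasticities evaluated at the sample means for the linear and log-log specifications. The VMT-density setup and all parameter values are simulated assumptions for illustration.

```python
# Elasticity at the means: linear model vs log-log model (simulated data).
import numpy as np

rng = np.random.default_rng(3)
n = 500
density = rng.lognormal(mean=2.0, sigma=0.5, size=n)
vmt = np.exp(3.0 - 0.2 * np.log(density) + rng.normal(scale=0.3, size=n))

# Linear model: elasticity = beta * X / Y, evaluated at the sample means
X_lin = np.column_stack([np.ones(n), density])
b_lin = np.linalg.lstsq(X_lin, vmt, rcond=None)[0]
elas_lin = b_lin[1] * density.mean() / vmt.mean()

# Log-log model: the slope itself is the (constant) elasticity
X_log = np.column_stack([np.ones(n), np.log(density)])
b_log = np.linalg.lstsq(X_log, np.log(vmt), rcond=None)[0]
elas_log = b_log[1]

print(round(elas_lin, 3), round(elas_log, 3))
```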
5.3 Inference
5.3.1 Analysis of Variance
Analysis of variance (ANOVA) is a fundamental approach in regression analysis. Despite its name, this method analyzes variation in means rather than the variances themselves (Casella & Berger, 2002, Ch. 11).
Once the linear relationship holds, the total variation of the response $y$ can be decomposed as
$$\begin{aligned} y'y &= y'Hy + y'(I-H)y \\ y'y &= \hat\beta'X'y + y'y - \hat\beta'X'y \\ y'y - n\bar y^2 &= \hat\beta'X'y - n\bar y^2 + y'y - \hat\beta'X'y \\ \sum(y - \bar y)^2 &= \sum(\hat y - \bar y)^2 + \sum(y - \hat y)^2 \\ SST &= SSR + SSE \end{aligned}$$
where $SST$ is the total sum of squares, $SSR$ is the regression sum of squares, and $SSE$ is the error sum of squares. $SSE = e'e$ represents the part unexplained by the model.
For the generalized least-squares method, $SST = y'V^{-1}y$, $SSR = \hat\beta'B'z = y'V^{-1}X(X'V^{-1}X)^{-1}X'V^{-1}y$, and $SSE = SST - SSR$.
5.3.2 Hypothesis Test
- Significance of Regression
The significance-of-regression test asks whether the linear relationship between the response and the predictors is adequate. The hypotheses for testing model adequacy are
$$H_0: \beta_0 = \beta_1 = \dots = \beta_{p-1} = 0, \qquad H_1: \text{at least one } \beta_j \neq 0,\ j = 0, 1, \dots, (p-1)$$
By Theorem D14 (XX, p. 90), if an $n \times 1$ random vector $y \sim N(\mu, I)$, then
$$y'y \sim \chi^2\left(n, \tfrac{1}{2}\mu'\mu\right)$$
Recall the assumption $y|x \sim N(X\beta, \sigma^2 I)$.
By the additive property of χ2 distribution,
$$\frac{(n-p)\,MSE}{\sigma^2} = \frac{y'(I-H)y}{\sigma^2} \sim \chi^2(n-p), \qquad \frac{(p-1)\,MSR}{\sigma^2} = \frac{y'Hy}{\sigma^2} \sim \chi^2(p-1)$$
Though $\sigma^2$ is usually unknown, by the relationship between the $\chi^2$ and $F$ distributions,
$$F_0 = \frac{MSR}{MSE} \sim F_{(p-1),(n-p),\lambda}$$
where $\lambda$ is the non-centrality parameter. This allows testing the hypotheses at a significance level $\alpha$: if the test statistic $F_0 > F_{\alpha,(p-1),(n-p)}$, then one can reject $H_0$.
If a VMT-urban form model includes many predictors but the adjusted $R^2$ is still low, the association between travel distance and the built environment might be spurious.
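A sketch of the significance-of-regression $F$ test, computed from the ANOVA identities above and the $F$ distribution in scipy. The design and coefficients are simulated assumptions.

```python
# Overall F test for regression: F0 = MSR / MSE (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, p = 120, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 0.4, 0.0, -0.2]) + rng.normal(size=n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
y_hat = X @ beta
sst = ((y - y.mean()) ** 2).sum()
ssr = ((y_hat - y.mean()) ** 2).sum()
sse = ((y - y_hat) ** 2).sum()

F0 = (ssr / (p - 1)) / (sse / (n - p))     # MSR / MSE
p_value = stats.f.sf(F0, p - 1, n - p)      # P(F > F0) under H0
print(F0, p_value)
```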
- Significance of Coefficients
For testing a specific coefficient, the hypothesis is
$$H_0: \beta_j = 0, \qquad H_1: \beta_j \neq 0$$
$\hat\beta$ is a linear combination of $y$. Based on the assumption $y|x \sim N(X\beta, \sigma^2 I)$, it can be shown that $\hat\beta \sim N(\beta, \sigma^2(X'X)^{-1})$ and
$$t_0 = \frac{\hat\beta_j}{se(\hat\beta_j)} = \frac{\hat\beta_j}{\sqrt{\hat\sigma^2 C_{jj}}} \sim t(n-p)$$
where $C_{jj}$ is the $j$th diagonal element of $(X'X)^{-1}$. If $|t_0| < t_{\alpha/2,(n-p)}$, the test fails to reject $H_0$ and this predictor can be removed from the model. This test is called a partial or marginal test because the test statistic for $\beta_j$ depends on all the other predictors in the model.
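A companion sketch for the coefficient $t$ tests, with standard errors taken from the diagonal of $\hat\sigma^2(X'X)^{-1}$ as in the formula above (the simulated design mirrors the previous sketch).

```python
# Partial t tests for individual coefficients (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, p = 120, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 0.4, 0.0, -0.2]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
sigma2 = ((y - X @ beta) ** 2).sum() / (n - p)
se = np.sqrt(sigma2 * np.diag(XtX_inv))
t0 = beta / se
p_values = 2 * stats.t.sf(np.abs(t0), n - p)   # two-sided partial tests
print(np.round(t0, 2), np.round(p_values, 3))
```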
5.4 Adequacy
The outcomes of estimation and inference alone cannot demonstrate a model’s performance. If the primary assumptions are violated, the estimates could be biased and the model could be useless. These problems can also occur when the model is not correctly specified. It is therefore necessary to diagnose and validate fitted models.
5.4.1 Goodness of fit
Goodness of fit tells us how well the model can explain the data. The coefficient of determination $R^2$ is a proportion used to assess the quality of the fitted model.
$$R^2 = \frac{SSR}{SST} = 1 - \frac{SSE}{SST}$$
When $R^2$ is close to 1, most of the variation in the response can be explained by the fitted model. Although $R^2$ is not the only criterion of a good model, it is reported in most published papers. Recall the discussion in Part I: aggregated data eliminate the differences among individuals, households, or neighborhoods. Under the new variance structure, $SSE$ will be much smaller than in a disaggregated model. The $R^2$ in many disaggregate studies is around 0.3, while the $R^2$ in some aggregate studies can reach 0.8. A seriously underfitting model’s outputs could be biased and unstable.
Adding predictors to the model never decreases $R^2$. Thus, models with different numbers of predictors are not comparable. Adjusted $R^2$ addresses this issue by introducing degrees of freedom, which denote the number of independent pieces of information available.
$$df_T = df_R + df_E, \qquad n - 1 = (p - 1) + (n - p)$$
Then the mean square (MS) of each sum of squares (SS) can be calculated as $MS = SS/df$. The mean square error estimates the error variance, $\hat\sigma^2 = MSE = SSE/(n-p)$, where $n-p$ is the corresponding degrees of freedom. The adjusted $R^2$ is
$$R^2_{adj} = 1 - \frac{MSE}{MST} = 1 - \frac{SSE/(n-p)}{SST/(n-1)}$$
Another similar measure is the $R^2$ for prediction based on PRESS. Recall that the PRESS statistic is the prediction error sum of squares, obtained by fitting the model to $n-1$ observations and predicting the omitted one.
$$PRESS = \sum_{i=1}^n (y_i - \hat y_{(i)})^2 = \sum_{i=1}^n \left(\frac{e_i}{1 - h_{ii}}\right)^2$$
A model with a smaller PRESS has better predictive ability. The $R^2$ for prediction is
$$R^2_{pred} = 1 - \frac{PRESS}{SST}$$
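The three fit measures can be computed together; the sketch below uses the shortcut $e_i/(1 - h_{ii})$ from the PRESS formula so that no refitting is needed. Data are simulated for illustration.

```python
# R^2, adjusted R^2, and PRESS-based prediction R^2 (simulated data).
import numpy as np

rng = np.random.default_rng(6)
n, p = 80, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([2.0, 1.0, -0.5]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
H = X @ XtX_inv @ X.T
e = (np.eye(n) - H) @ y

sst = ((y - y.mean()) ** 2).sum()
sse = (e ** 2).sum()
r2 = 1 - sse / sst
r2_adj = 1 - (sse / (n - p)) / (sst / (n - 1))
press = ((e / (1 - np.diag(H))) ** 2).sum()   # leave-one-out errors without refitting
r2_pred = 1 - press / sst
print(round(r2, 3), round(r2_adj, 3), round(r2_pred, 3))
```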
5.4.2 Residuals Analysis
The major assumptions, both IID and normality, concern the residuals. Residual diagnosis is an essential step in model validation.
Several scaled residuals can help with the diagnosis. Since $MSE$ estimates the error variance $\hat\sigma^2$ and $E[\varepsilon] = 0$, standardized residuals should approximately follow a standard normal distribution.
$$d_i = \frac{e_i}{\sqrt{MSE}} = \frac{e_i}{\sqrt{\frac{1}{n-p}\sum_{i=1}^n e_i^2}}, \quad i = 1, 2, \dots, n$$
Recall that the residual $e = y - \hat y = (I - H)y$ and the hat matrix $H = X(X'X)^{-1}X'$. Let $h_{ii}$ denote the $i$th diagonal element of the hat matrix. The studentized residuals can be expressed as
$$r_i = \frac{e_i}{\sqrt{MSE(1 - h_{ii})}}, \quad i = 1, 2, \dots, n$$
It can be shown that $0 \le h_{ii} \le 1$. An observation with $h_{ii}$ close to one returns a large value of $r_i$. An $x_i$ with a strong influence on the fitted values is called a leverage point.
Ideally, the scaled residual have zero mean and unit variance. Hence, an observation with |di|>3 or |ri|>3 is a potential outlier.
The prediction error sum of squares (PRESS) can also be used to detect outliers. This method predicts the $i$th fitted response by excluding the $i$th observation and examines the influence of this point. The corresponding PRESS residual is $e_{(i)} = e_i/(1 - h_{ii})$ with $V[e_{(i)}] = \sigma^2/(1 - h_{ii})$. Thus, if $MSE$ is a good estimate of $\sigma^2$, the scaled PRESS residuals are equivalent to the studentized residuals.
$$\frac{e_{(i)}}{\sqrt{V[e_{(i)}]}} = \frac{e_i/(1 - h_{ii})}{\sqrt{\sigma^2/(1 - h_{ii})}} = \frac{e_i}{\sqrt{\sigma^2(1 - h_{ii})}}$$
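A sketch of the standardized and studentized residuals defined above, flagging observations with $|r_i| > 3$ as potential outliers. The data, including one deliberately contaminated point, are simulated assumptions.

```python
# Scaled residuals and simple outlier flagging (simulated data with one outlier).
import numpy as np

rng = np.random.default_rng(7)
n = 60
x = rng.uniform(0, 10, size=n)
y = 1 + 0.5 * x + rng.normal(scale=0.5, size=n)
y[0] += 5                                    # inject an outlier for illustration

X = np.column_stack([np.ones(n), x])
H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)
e = y - H @ y
mse = (e ** 2).sum() / (n - 2)

d = e / np.sqrt(mse)                         # standardized residuals
r = e / np.sqrt(mse * (1 - h))               # studentized residuals
print(np.where(np.abs(r) > 3)[0])            # indices of potential outliers
```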
- Residual Plot
A residual plot shows the pattern of the residuals against the fitted values $\hat y$. If the assumptions are valid, the points should form a horizontal band (envelope) evenly distributed around the line $e = 0$.
A funnel shape in the residual plot shows that the error variance is a function of $\hat y$; a suitable transformation of the response or a predictor could stabilize the variance. A curved shape means the linearity assumption is not valid and implies that adding quadratic or higher-order terms might be appropriate.
- Normal Probability Plot
A histogram of the residuals can check the normality assumption. Because the probability distribution of VMT is highly right-skewed, a log transformation of VMT is reasonable.
A better way is a normal quantile-quantile (Q-Q) plot of the residuals. Under normality, the points should fall along a straight line. Looking only at $R^2$ and p-values cannot reveal this feature.
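A sketch of the two diagnostic plots described above, residuals versus fitted values and a normal Q-Q plot, comparing a raw skewed response with its log transform. The data-generating process is an illustrative assumption; matplotlib and scipy are assumed available.

```python
# Residual plot and normal Q-Q plot, before and after log-transforming y.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(8)
n = 200
x = rng.uniform(1, 10, size=n)
y = np.exp(0.5 + 0.2 * x + rng.normal(scale=0.4, size=n))   # right-skewed response

X = np.column_stack([np.ones(n), x])
for resp, label in [(y, "raw y"), (np.log(y), "log y")]:
    beta = np.linalg.lstsq(X, resp, rcond=None)[0]
    fitted = X @ beta
    resid = resp - fitted

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
    ax1.scatter(fitted, resid, s=10)
    ax1.axhline(0, color="grey")
    ax1.set(xlabel="fitted", ylabel="residual", title=f"Residual plot ({label})")
    stats.probplot(resid, dist="norm", plot=ax2)            # normal Q-Q plot
    ax2.set_title(f"Normal Q-Q ({label})")
plt.show()
```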
5.4.3 Heteroscedasticity
When the assumption of constant variance is violated, the linear model is heteroscedastic. Heteroscedasticity is common in urban studies. For example, cities of different sizes are not identical: small cities or rural areas might have homogeneous population densities, while large cities’ densities are more variable.
Recall the generalized least-squares estimates (5.7) and (5.8). If the residuals are independent but the variances are not constant, a simple linear model becomes $\varepsilon \sim MVN(0, \sigma^2 V)$, where
$$V = \begin{bmatrix} x_1^2 & 0 & \dots & 0 \\ 0 & x_2^2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & x_n^2 \end{bmatrix}, \qquad V^{-1} = \begin{bmatrix} \frac{1}{x_1^2} & 0 & \dots & 0 \\ 0 & \frac{1}{x_2^2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \frac{1}{x_n^2} \end{bmatrix}$$
Then $X'V^{-1}X = n$ and the generalized (weighted) least-squares solution is
$$\hat\beta_{1,WLS} = \frac{1}{n}\sum_{i=1}^n \frac{y_i}{x_i}$$
and
$$\hat\sigma^2_{WLS} = \frac{1}{n-1}\sum_{i=1}^n \frac{(y_i - \hat\beta_1 x_i)^2}{x_i^2}$$
In a heteroscedastic model, the OLS estimates of the coefficients are still unbiased but no longer efficient, and the estimates of their variances are biased. The corresponding hypothesis tests and confidence intervals would be misleading.
Another special case is a model with aggregated variables, as with geographic units. Let $u_j$ and $v_j$ be the response and predictors of the $j$th household in a neighborhood, and let $n_i$ be the sample size in neighborhood $i$. Then $y_i = \sum_{j=1}^{n_i} u_j / n_i$ and $X_i = \sum_{j=1}^{n_i} v_j / n_i$. In this case,
$$V = \begin{bmatrix} \frac{1}{n_1} & 0 & \dots & 0 \\ 0 & \frac{1}{n_2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \frac{1}{n_n} \end{bmatrix}, \qquad V^{-1} = \begin{bmatrix} n_1 & 0 & \dots & 0 \\ 0 & n_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & n_n \end{bmatrix}$$
Then $X'V^{-1}X = \sum_{i=1}^n n_i x_i^2$ and the WLS estimate of $\beta_1$ is
$$\hat\beta_{1,WLS} = \frac{\sum_{i=1}^n n_i x_i y_i}{\sum_{i=1}^n n_i x_i^2}$$
and
$$V[\hat\beta_{1,WLS}] = \frac{V\left[\sum_{i=1}^n n_i x_i y_i\right]}{\left(\sum_{i=1}^n n_i x_i^2\right)^2} = \frac{\sum_{i=1}^n n_i^2 x_i^2\sigma^2/n_i}{\left(\sum_{i=1}^n n_i x_i^2\right)^2} = \frac{\sigma^2}{\sum_{i=1}^n n_i x_i^2}$$
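A sketch of weighted least squares for neighborhood means: because $Var(y_i) = \sigma^2/n_i$ for group averages, the weights are the group sizes $n_i$. The simulated household data and variable names are illustrative assumptions.

```python
# WLS for aggregated (neighborhood-mean) data with weights equal to group sizes.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n_groups = 40
sizes = rng.integers(5, 50, size=n_groups)        # households per neighborhood
x_bar = rng.uniform(1, 10, size=n_groups)          # mean built-environment measure
# mean response per neighborhood: noise variance shrinks with group size
y_bar = 2 + 0.5 * x_bar + rng.normal(scale=1 / np.sqrt(sizes))

X = sm.add_constant(x_bar)
wls = sm.WLS(y_bar, X, weights=sizes).fit()        # weights proportional to 1/Var(y_i)
ols = sm.OLS(y_bar, X).fit()
print(wls.params, ols.params)                      # WLS should be more efficient
```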
Three procedures, Bartlett’s likelihood ratio test, the Goldfeld-Quandt test, and the Breusch-Pagan test, can be used to examine heteroscedasticity (Ravishanker & Dey, 2020, 8.1.3, pp. 288-290).
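A sketch of the Breusch-Pagan test using statsmodels; the data, with error variance growing in the predictor, are simulated assumptions.

```python
# Breusch-Pagan test for heteroscedasticity (simulated data).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(10)
n = 300
x = rng.uniform(1, 10, size=n)
y = 1 + 0.5 * x + rng.normal(scale=0.2 * x)        # heteroscedastic errors

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(fit.resid, X)
print(lm_stat, lm_pvalue)                          # a small p-value flags heteroscedasticity
```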
5.4.4 Autocorrelation
For spatio-temporal data, the observations often have some relationship over time or space. In these cases the assumption of independent errors is violated; a linear model with serially correlated errors is said to exhibit autocorrelation. Autocorrelation is also common in urban studies: neighboring geographic entities or stages may affect each other or share a similar environment.
Take a special case of time-series data as an example, assuming the model has constant variance: $E[\varepsilon] = 0$ but $Cov[\varepsilon_i, \varepsilon_j] = \sigma^2\rho^{|j-i|}$, $i, j = 1, 2, \dots, n$, with $|\rho| < 1$. The variance-covariance matrix is a Toeplitz matrix, as below:
$$V = \begin{bmatrix} 1 & \rho & \rho^2 & \dots & \rho^{n-1} \\ \rho & 1 & \rho & \dots & \rho^{n-2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \rho^{n-1} & \rho^{n-2} & \rho^{n-3} & \dots & 1 \end{bmatrix}, \qquad \{V^{-1}\}_{ij} = \begin{cases} \frac{1}{1-\rho^2} & \text{if } i = j = 1, n \\ \frac{1+\rho^2}{1-\rho^2} & \text{if } i = j = 2, \dots, n-1 \\ \frac{-\rho}{1-\rho^2} & \text{if } |j - i| = 1 \\ 0 & \text{otherwise} \end{cases}$$
This is a linear regression with first-order autoregressive errors (AR(1)). The estimates of $\hat\beta$ are the same as the GLS solutions: $\hat\beta_{GLS} = (X'V^{-1}X)^{-1}X'V^{-1}y$ and $\hat V[\hat\beta_{GLS}] = \hat\sigma^2_{GLS}(X'V^{-1}X)^{-1}$, where $\hat\sigma^2_{GLS} = \frac{1}{n-p}(y - X\hat\beta_{GLS})'V^{-1}(y - X\hat\beta_{GLS})$.
It can be verified that $V[\hat\beta_{GLS}] \le V[\hat\beta_{OLS}]$ always holds, with equality when $V = I$ or $\rho = 0$. This shows that $\hat\beta_{GLS}$ is the best linear unbiased estimator (BLUE).
This case can be extended to multiple regression models and to the autocorrelation of a stationary stochastic process at lag $k$. The Durbin-Watson test is used to test the null hypothesis $\rho = 0$.
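A sketch of checking serial correlation with the Durbin-Watson statistic and then refitting with the Toeplitz AR(1) covariance above via GLS. The AR(1) data-generating process and the plug-in estimate of $\rho$ are illustrative assumptions.

```python
# Durbin-Watson check, then GLS with an estimated AR(1) Toeplitz covariance.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(11)
n, rho = 200, 0.7
x = rng.normal(size=n)
eps = np.zeros(n)
for t in range(1, n):                              # generate AR(1) errors
    eps[t] = rho * eps[t - 1] + rng.normal(scale=0.5)
y = 1 + 0.5 * x + eps

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
print(durbin_watson(ols.resid))                    # values well below 2 suggest rho > 0

rho_hat = np.corrcoef(ols.resid[:-1], ols.resid[1:])[0, 1]
V = rho_hat ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
gls = sm.GLS(y, X, sigma=V).fit()                  # GLS with the Toeplitz matrix
print(gls.params)
```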
5.4.5 Multicollinearity
Multicollinearity, or near-linear dependence, refers to models with highly correlated predictors. When data are generated from an experimental design, the treatments $X$ can be fixed and orthogonal. But travel-urban form models are observational studies, and nothing can be controlled as in a laboratory. It is known that there are complex correlations among the built-environment predictors themselves.
Although the basic IID assumptions do not require that all predictors $X$ be independent, when the predictors are nearly linearly dependent the model is ill-conditioned and the least-squares estimators are unstable.
Multicollinearity inflates the variances and seriously harms model precision. If some predictors are exactly linearly dependent, the matrix $X'X$ is symmetric but non-invertible. By the spectral decomposition of a symmetric matrix, $X'X = P\Lambda P'$, where $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_p)$, the $\lambda_i$ are the eigenvalues of $X'X$, and $P$ is an orthogonal matrix whose columns are normalized eigenvectors. Then the total variance of $\hat\beta_{LS}$ is $\sigma^2\sum_{j=1}^p 1/\lambda_j$. If the predictors are nearly linearly dependent, so that $X'X$ is nearly singular, some $\lambda_j$ may be very small and the total variance of $\hat\beta_{LS}$ is highly inflated.
For the same reason, the correlation matrix $Z'Z$ from unit length scaling will have an inverse with inflated variances; that is, the diagonal elements of $(Z'Z)^{-1}$ are not all equal to one. These diagonal elements are called variance inflation factors (VIF) and can be used to examine multicollinearity. The VIF for a particular predictor is given in (5.29):
$$VIF_j = \frac{1}{1 - R_j^2}$$
where $R_j^2$ is the coefficient of determination from regressing $x_j$ on all the remaining predictors.
A common approach is to drop the predictor with the greatest VIF and refit the model until all VIFs are less than 10 (see the sketch below). However, dropping one or more predictors loses information that might be valuable for explaining the response. Because of the complexity among predictors, dropping the predictor with the greatest VIF is not always the best choice; sometimes removing a predictor with a moderate VIF makes all VIFs in the refitted model fall below 10. Moreover, there is no unique criterion for the VIF threshold. When the relationship between the predictors and the response is weak, or $R^2$ is low, VIFs below 10 may still harm estimation dramatically.
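A sketch of computing VIFs with statsmodels and iteratively dropping the predictor with the largest VIF until all fall below 10. The collinear predictors and the threshold of 10 follow the rule of thumb discussed above; the data are simulated.

```python
# VIF screening: drop the predictor with the largest VIF until all VIFs < 10.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(12)
n = 300
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.1, size=n)            # nearly collinear with x1
x3 = rng.normal(size=n)
X = sm.add_constant(np.column_stack([x1, x2, x3]))

names = ["const", "x1", "x2", "x3"]
while True:
    vifs = [variance_inflation_factor(X, j) for j in range(1, X.shape[1])]
    print(dict(zip(names[1:], np.round(vifs, 1))))
    if max(vifs) < 10:
        break
    worst = int(np.argmax(vifs)) + 1               # +1 skips the intercept column
    X = np.delete(X, worst, axis=1)
    names.pop(worst)
```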
Orthogonalization before fitting the model might be helpful. Other approaches such as ridge regression or principal components regression could deal with multicollinearity better.
5.4.5.1 Ridge Regression and Lasso
The least-squares method gives unbiased estimates of the regression coefficients. However, multicollinearity leads to inflated variances and makes the estimates unstable and unreliable. To obtain a smaller variance, a trade-off is to relax the requirement of unbiasedness. Denote by $\hat\beta_R$ a biased estimator whose variance is small enough that
$$MSE(\hat\beta_R) = E[\hat\beta_R - \beta]^2 = Var[\hat\beta_R] + Bias[\hat\beta_R]^2 < MSE(\hat\beta_{LS}) = Var[\hat\beta_{LS}]$$
Hoerl and Kennard (1970) proposed ridge regression to address the nonorthogonal problems.
$$\hat\beta_R = (X'X + kI)^{-1}X'y$$
where $k \ge 0$ is a selected constant called the biasing parameter. When $k = 0$, the ridge estimator reduces to the least-squares estimator.
When $X'X$ is nonsingular and $(X'X)^{-1}$ exists, the ridge estimator is a linear transformation of $\hat\beta_{LS}$; that is, $\hat\beta_R = Z_k\hat\beta_{LS}$, where $Z_k = (X'X + kI)^{-1}X'X$.
Recall that the total variance of $\hat\beta_{LS}$ is $\sigma^2\sum_{j=1}^p 1/\lambda_j$. The total variance of $\hat\beta_R$ is
$$\mathrm{tr}(Cov[\hat\beta_R]) = \sigma^2\sum_{j=1}^p \frac{\lambda_j}{(\lambda_j + k)^2}$$
Thus, introducing $k$ into the model avoids tiny denominators and eliminates the inflated variance. Choosing a proper value of $k$ is a balance between variance and bias. The bias in $\hat\beta_R$ is
$$Bias(\hat\beta_R)^2 = k^2\beta'(X'X + kI)^{-2}\beta$$
Hence, increasing $k$ reduces the variance but increases the bias. The ridge trace, a plot of $\hat\beta_R$ versus $k$, can help select a suitable value of $k$. First, at the chosen value of $k$, the estimates should be stable. Second, the estimated coefficients should have proper signs and reasonable values. Third, the SSE should also have a reasonable value.
Ridge regression will not give a greater $R^2$ than the least-squares method, because the total sum of squares is fixed and the ridge SSE is never smaller:
$$\begin{aligned} SSE(\hat\beta_R) &= (y - X\hat\beta_R)'(y - X\hat\beta_R) \\ &= (y - X\hat\beta_{LS})'(y - X\hat\beta_{LS}) + (\hat\beta_{LS} - \hat\beta_R)'X'X(\hat\beta_{LS} - \hat\beta_R) \\ &= SSE(\hat\beta_{LS}) + (\hat\beta_{LS} - \hat\beta_R)'X'X(\hat\beta_{LS} - \hat\beta_R) \\ &\ge SSE(\hat\beta_{LS}) \end{aligned}$$
The advantage of ridge regression is to obtain a suitable set of parameter estimates rather than to improve the fit. It may have better predictive ability than least squares. It can also be useful for variable selection: variables with an unstable ridge trace, or whose coefficients tend toward zero, can be removed from the model.
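A sketch of the ridge estimator $(X'X + kI)^{-1}X'y$ and a simple ridge trace over a grid of $k$. The centering and scaling choices, the simulated collinear design, and the $k$ grid are illustrative assumptions.

```python
# Ridge estimator and ridge trace over a grid of biasing parameters k.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(13)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)           # nearly collinear pair
X = np.column_stack([x1, x2, rng.normal(size=n)])
y = 1.0 * x1 + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n)

# work with standardized predictors and a centered response
Z = (X - X.mean(axis=0)) / X.std(axis=0)
yc = y - y.mean()

ks = np.logspace(-4, 2, 50)
trace = np.array([np.linalg.solve(Z.T @ Z + k * np.eye(Z.shape[1]), Z.T @ yc)
                  for k in ks])

plt.semilogx(ks, trace)                            # one line per coefficient
plt.xlabel("k (biasing parameter)")
plt.ylabel("ridge coefficients")
plt.title("Ridge trace")
plt.show()
```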
In many cases, the ridge trace diverges erratically and may revert back to the least-squares estimates. Jensen and Ramirez (2010, 2012) proposed the surrogate model to further improve ridge regression. The surrogate model chooses $k$ depending on the matrix $X$ and independently of $y$.
Using a compact singular value decomposition (SVD) $X = PD_\xi Q'$, the left-singular vectors $P$ satisfy $P'P = I$ and the right-singular vectors $Q$ are orthogonal. $D_\xi = \mathrm{diag}(\xi_1, \dots, \xi_p)$ contains the singular values in decreasing order. Then $X_k = PD\big((\xi_i^2 + k_i)^{1/2}\big)Q'$ and
$$\begin{aligned} X'X &= QD_\xi^2 Q' \\ X_k'X_k &= Q(D_\xi^2 + K)Q' & \text{generalized surrogate} \\ X_k'X_k &= QD_\xi^2 Q' + kI & \text{ordinary surrogate} \end{aligned}$$
and the surrogate solution $\hat\beta_S$ is
$$Q(D_\xi^2 + K)Q'\hat\beta_S = X_k'y = QD\big((\xi_i^2 + k_i)^{1/2}\big)P'y$$
Jensen and Ramirez proved that $SSE(\hat\beta_S) < SSE(\hat\beta_R)$ and that the surrogate model’s canonical traces are monotone in $k$.
5.4.5.2 Principal Components Regression (PCR)
Dimension Reduction Methods; Shrinkage Methods
Partial Least Squares (PLS)
With some synthetic variables, a disaggregated model’s $R^2$ can exceed 0.5. But the risk is that these techniques may merely describe the data themselves, and the results cannot be generalized. This is worthy of a deeper investigation.
5.6 Other Topics
5.6.2 SEM (Opt.)
Another attempt uses the method of structural equation modeling (SEM). Two studies capture higher elasticities of per capita VMT with respect to density (-0.38 and -0.238) (Cervero and Murakami, 2010; Reid Ewing, Hamidi, et al., 2014).
In general, modeling is case-by-case work. Researchers may arrive at different preferred models by weighing sensitivity and robustness, even given the same hypothesis and data. Published papers usually do not show the results of diagnosis and validation. Under this circumstance, comparing or summarizing these outcomes is unreliable.
References
McCarthy, P. S. (2001). Transportation Economics: Theory and Practice: A Case Study Approach. Boston: Blackwell.