10 Vector autoregression model
Vector autoregression (VAR) is a system of equations assuming that all variables are endogenously related (there are no exogenous variables)
It is a multivariate dynamic model with the number of equations equal to the number of endogenous variables \(k\)
The reduced form of the VAR(\(p\)) model is commonly used in applications, meaning that all RHS variables enter with \(p\) time lags (\(t-1\), \(t-2\), \(\dots\), \(t-p\))
A system of equations with two endogenous variables (\(k=2\)) and only one time lag (\(p=1\)) is called a bivariate VAR(\(1\)) model
\[\begin{equation} \begin{aligned} y_t&=\beta_{1,0}+\beta_{1,1}y_{t-1}+\beta_{1,2}x_{t-1}+u_{1,t} \\ x_t&=\beta_{2,0}+\beta_{2,1}y_{t-1}+\beta_{2,2}x_{t-1}+u_{2,t} \end{aligned} \tag{10.1} \end{equation}\]
Using matrix notation, the bivariate VAR(\(1\)) is
\[\begin{equation} \underbrace{\begin{bmatrix}y_t \\ x_t \end{bmatrix}}_{z_t}=\underbrace{\begin{bmatrix} \beta_{1,0} \\ \beta_{2,0} \end{bmatrix}}_{a_0}+ \underbrace{\begin{bmatrix} \beta_{1,1} & \beta_{1,2} \\ \beta_{2,1} & \beta_{2,2} \end{bmatrix}}_{A_1} \underbrace{\begin{bmatrix}y_{t-1} \\ x_{t-1} \end{bmatrix}}_{z_{t-1}}+\underbrace{\begin{bmatrix} u_{1,t} \\ u_{2,t} \end{bmatrix}}_{u_t} \tag{10.2} \end{equation}\]
For simplicity, the matrix equation is written compactly as
\[\begin{equation} z_t=a_0+A_1 z_{t-1}+u_t ~~~~~~~~~u_t\sim WN(0,~\Sigma) \tag{10.3} \end{equation}\]
In the matrix equation (10.3), \(z_t\) is a vector of the two time-series, \(a_0\) is a two-dimensional vector of constant terms, \(A_1\) is the \(k \times k\) matrix of coefficients on the two-dimensional vector of lagged time-series \(z_{t-1}\), \(u_t\) is a two-dimensional vector of error terms, and \(\Sigma\) is the \(k \times k\) covariance matrix of the error terms
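To make the system concrete, below is a minimal sketch that simulates a bivariate VAR(\(1\)) according to equation (10.3); all numerical values of \(a_0\), \(A_1\), and \(\Sigma\) are illustrative assumptions, not taken from the text

```python
# Minimal simulation of the bivariate VAR(1) in equation (10.3).
# All parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

T = 500                               # sample size (assumption)
a0 = np.array([0.5, 0.2])             # intercept vector a_0
A1 = np.array([[0.5, 0.1],
               [0.2, 0.4]])           # coefficient matrix A_1 (eigenvalues 0.6 and 0.3, so stationary)
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])        # error covariance matrix Sigma

# u_t ~ WN(0, Sigma): serially uncorrelated, contemporaneously correlated
u = rng.multivariate_normal(np.zeros(2), Sigma, size=T)

z = np.zeros((T, 2))                  # columns hold y_t and x_t
for t in range(1, T):
    z[t] = a0 + A1 @ z[t - 1] + u[t]  # z_t = a_0 + A_1 z_{t-1} + u_t
```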
Note that every equation in the system (10.1) is a special case of an ARDL(\(1,1\)) model without contemporaneous (instantaneous) terms, i.e. the non-lagged time-series on the RHS are omitted
To include contemporaneous terms, we would need to impose certain restrictions on the structure, as in a structural VAR (SVAR)
The main advantage of the reduced-form VAR(\(p\)) model is that each equation can be estimated separately by OLS, as sketched below
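A minimal sketch of this equation-by-equation OLS estimation; placeholder data is generated inline so the snippet runs on its own, but in practice `z` would hold the observed series

```python
# Equation-by-equation OLS for a bivariate VAR(1).
# The data below are placeholder white noise (assumption); in practice
# z would hold the observed series y_t and x_t.
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal((500, 2))

Y = z[1:]                                           # LHS: [y_t, x_t]
X = np.column_stack([np.ones(len(z) - 1), z[:-1]])  # RHS: [1, y_{t-1}, x_{t-1}]

# One least-squares fit per column of Y, i.e. one OLS regression per equation
B, *_ = np.linalg.lstsq(X, Y, rcond=None)           # B has shape (3, 2)
a0_hat = B[0]                                       # estimated intercepts beta_{1,0}, beta_{2,0}
A1_hat = B[1:].T                                    # estimated coefficient matrix A_1
```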
The number of estimated coefficients in a reduced-form VAR(\(p\)) model is \(k+pk^2\); for \(k=2\) and \(p=1\) this gives \(2+1\cdot 2^2=6\), matching the six coefficients in (10.1)
Additionally, \(k(k+1)/2\) terms have to be estimated in the covariance matrix \(\Sigma\) (e.g. for \(k=2\), two variances on the diagonal and one covariance off-diagonal)
It is assumed that the error terms \(u_{1,t}\) and \(u_{2,t}\) are white noise, i.e. each has zero mean, constant variance, and no serial correlation
However, the two error terms may be contemporaneously correlated across equations, so \(\Sigma\) is in general not a diagonal matrix: the diagonal elements are the error variances and the off-diagonal element is the contemporaneous covariance
The matrices \(A_1\) and \(\Sigma\) are the most important objects in this type of multivariate time-series analysis
The matrix \(\Sigma\) can be estimated using the residuals from both equations
\[\begin{equation} \hat{\Sigma}=\frac{1}{T-1} \begin{bmatrix}\sum_{t=1}^{T}\hat{u}^2_{1,t} & \sum_{t=1}^{T} \hat{u}_{1,t} \hat{u}_{2,t} \\ \sum_{t=1}^{T} \hat{u}_{1,t} \hat{u}_{2,t} & \sum_{t=1}^{T}\hat{u}^2_{2,t} \end{bmatrix} \tag{10.4} \end{equation}\]
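A minimal sketch of equation (10.4), computing \(\hat{\Sigma}\) from the OLS residuals of the two equations; placeholder data is used again so the snippet is self-contained

```python
# Estimate Sigma from the OLS residuals of both equations, as in (10.4).
# Placeholder data (assumption); the estimation step repeats the sketch above.
import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal((500, 2))

Y = z[1:]
X = np.column_stack([np.ones(len(z) - 1), z[:-1]])
B, *_ = np.linalg.lstsq(X, Y, rcond=None)

U = Y - X @ B                        # residual matrix, columns u1_hat and u2_hat
T_eff = U.shape[0]                   # number of usable observations
Sigma_hat = (U.T @ U) / (T_eff - 1)  # entrywise this matches equation (10.4)
```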
Consistent and efficient OLS estimates require that all time-series are stationary
The stationarity of each time-series can be checked with the Augmented Dickey-Fuller (ADF) test, as sketched below
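A minimal sketch of the ADF test using the `adfuller` function from statsmodels, applied to a simulated stationary AR(1) series (placeholder data, assumption)

```python
# ADF unit-root test on a single series via statsmodels.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
e = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + e[t]  # stationary AR(1) placeholder series

stat, pvalue, *_ = adfuller(y)    # H0: the series has a unit root
print(f"ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
# A small p-value rejects the unit-root null, supporting stationarity;
# in a VAR application the test would be run on every series in z_t.
```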
The estimated parameters of a VAR(\(1\)) model are not interpreted individually in the traditional way
Instead, the results of a VAR(\(1\)) model are interpreted in the context of (see the sketch after this list):
- Granger causality
- Impulse response function
- Cointegration
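As referenced above, here is a minimal sketch fitting a VAR(\(1\)) with the statsmodels `VAR` class on simulated placeholder data and illustrating the first two tools; the column names `y` and `x` and all numerical values are assumptions, and cointegration analysis (relevant when the series are non-stationary) is not shown

```python
# Fit a reduced-form VAR(1) with statsmodels and interpret the results
# via Granger causality and impulse responses. Data are simulated
# placeholders (assumption).
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)
T = 500
A1 = np.array([[0.5, 0.1],
               [0.2, 0.4]])
u = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 0.5]], size=T)
z = np.zeros((T, 2))
for t in range(1, T):
    z[t] = A1 @ z[t - 1] + u[t]

data = pd.DataFrame(z, columns=["y", "x"])
res = VAR(data).fit(1)  # OLS estimation, one lag

# Granger causality: does x help predict y beyond y's own lags?
print(res.test_causality(caused="y", causing="x").summary())

# Impulse response function over 10 periods (plotting needs matplotlib)
irf = res.irf(10)
irf.plot()
```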