Unit 10 GLPs

A GLP (general linear process) is a linear filter with a white noise input.

We take a white noise input, put it through a filter, and get something that isn't white noise.

$$\sum_{j=0}^{\infty} \psi_j a_{t-j} = X_t - \mu$$

GLPs are an infinite sum of white noise terms. This might seem weird now, but the concept will return later.

The $\psi_j$'s are called psi weights; these will be useful again later. AR, MA, and ARMA models are all special cases of GLPs, and the GLP representation will be useful when we study confidence intervals.
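Here is a minimal R sketch of the idea (the psi weights $\psi_j = 0.8^j$ and the truncation at $j = 24$ are made up for illustration): white noise goes in, `stats::filter()` applies the weights, and something that is not white noise comes out.

set.seed(1)
psi <- 0.8^(0:24)                   # a truncated set of psi weights (illustration only)
a   <- rnorm(300)                   # white noise input
mu  <- 0

# X_t - mu = psi_0 * a_t + psi_1 * a_{t-1} + ... , truncated at j = 24
x <- mu + stats::filter(a, filter = psi, method = "convolution", sides = 1)

plot(x, type = "l", main = "Truncated GLP with psi_j = 0.8^j")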

10.1 AR(1) Intro

An AR(1) model is just an AR(p) model in which p = 1.

We will go through the well-known forms of the AR(1) model.

$$X_t = \beta + \phi_1 X_{t-1} + a_t$$

$$\beta = (1 - \phi_1)\mu$$

$\beta$ is the moving average constant.

We are saying that the value of $X$ depends on some constant, the previous value of $X$, and some random white noise. It looks a lot like regression, except the "predictor" is a lagged copy of the series itself, which is a little weird.

We can rewrite this as

$$X_t = (1 - \phi_1)\mu + \phi_1 X_{t-1} + a_t$$

An AR(1) process is stationary if and only if $|\phi_1| < 1$.

We will deal with this condition a lot in the near future.
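Here is a minimal simulation of the model straight from the recursion (the values $\phi_1 = 0.7$ and $\mu = 50$ are made up for illustration); the realization wanders around $\mu$.

set.seed(1)
phi  <- 0.7                      # assumed value, just for illustration
mu   <- 50
beta <- (1 - phi) * mu           # the constant implied by this mean

n <- 200
x <- numeric(n)
a <- rnorm(n)
x[1] <- mu                       # start at the mean to avoid burn-in
for (t in 2:n) x[t] <- beta + phi * x[t - 1] + a[t]

plot(x, type = "l", main = "AR(1) with phi = 0.7, mu = 50")
abline(h = mu, lty = 2)          # the series wanders around mu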

10.1.1 AR(1) math zone

Is $E[X_t] = \mu$? Start from the model:

$$E[X_t] = E[(1 - \phi_1)\mu] + E[\phi_1 X_{t-1}] + E[a_t]$$

Since the process is stationary, $E[X_{t-1}] = E[X_t]$, so we can rewrite this as $$E[X_t] = (1 - \phi_1)\mu + \phi_1 E[X_t] + 0$$

$$E[X_t](1 - \phi_1) = (1 - \phi_1)\mu \quad\Longrightarrow\quad E[X_t] = \mu$$

The mean does not depend on $t$.
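To see that the mean doesn't drift with $t$, here is a rough check (the values $\phi_1 = 0.8$, $\mu = 10$, and the replication count are arbitrary): simulate many independent realizations and average across them at a few fixed time points.

set.seed(2)
phi <- 0.8; mu <- 10; n <- 100

sim_ar1 <- function() {
    x <- numeric(n)
    a <- rnorm(n)
    x[1] <- mu
    for (t in 2:n) x[t] <- (1 - phi) * mu + phi * x[t - 1] + a[t]
    x
}

xs <- replicate(5000, sim_ar1())      # n x 5000 matrix, one realization per column
rowMeans(xs)[c(10, 50, 100)]          # ensemble mean at t = 10, 50, 100: all near mu = 10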

The variance also does not depend on $t$, and if $|\phi_1| < 1$ the variance is finite:

$$\sigma_X^2 = \frac{\sigma_a^2}{1 - \phi_1^2}$$

and $\rho_k$ is $\phi_1$ raised to the power $k$:

$$\rho_k = \phi_1^k, \qquad k \ge 0$$
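A quick check of both formulas (assuming $\phi_1 = 0.9$ and $\sigma_a^2 = 1$, chosen just for illustration), using base R's `ARMAacf()` for the theoretical autocorrelations:

phi <- 0.9
sigma2_a <- 1

sigma2_a / (1 - phi^2)            # process variance sigma_X^2, about 5.26

ARMAacf(ar = phi, lag.max = 5)    # theoretical ACF at lags 0..5
phi^(0:5)                         # same numbers: rho_k = phi_1^k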

The spectral density of an AR(1) also does not depend on time; it is monotonically increasing or decreasing in $f$, depending on the sign of $\phi_1$:

$$S_X(f) = \frac{\sigma_a^2}{\sigma_X^2} \cdot \frac{1}{\left|1 - \phi_1 e^{-2\pi i f}\right|^2}$$
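A minimal sketch that evaluates this formula on a grid of frequencies (the value $\phi_1 = 0.9$ is just an example); for positive $\phi_1$ the curve decreases monotonically from its peak at $f = 0$.

# AR(1) spectral density from the formula above
ar1_spec <- function(f, phi, sigma2_a = 1) {
    sigma2_x <- sigma2_a / (1 - phi^2)
    (sigma2_a / sigma2_x) / Mod(1 - phi * exp(-2i * pi * f))^2
}

f <- seq(0, 0.5, length.out = 200)
plot(f, ar1_spec(f, phi = 0.9), type = "l", ylab = "S_X(f)",
     main = "AR(1) spectral density, phi = 0.9")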

10.1.2 The zero-mean form of AR(1)

With zero mean ($\mu = 0$), $X_t = \phi_1 X_{t-1} + a_t$, or equivalently $X_t - \phi_1 X_{t-1} = a_t$.

10.1.3 AR(1) with positive phi

Remember that we have $\rho_k = \phi_1^k$.

With positive $\phi_1$ we have the following (a quick simulation follows this list):

Realizations are wandering and aperiodic.

Autocorrelations are damped exponentials.

The spectral density peaks at $f = 0$.

Higher values of $\phi_1$ give stronger versions of these characteristics.
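A quick sketch with `arima.sim()` (the value $\phi_1 = 0.95$ and the series length are arbitrary) showing the wandering realization and the damped-exponential sample ACF:

set.seed(3)
x <- arima.sim(model = list(ar = 0.95), n = 200)

op <- par(mfrow = c(1, 2))
plot(x, main = "AR(1), phi = 0.95")        # wandering, aperiodic
acf(x, main = "Damped exponential ACF")
par(op)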

10.1.4 AR(1) with negative phi

With negative $\phi_1$ we have the following (again, a quick simulation follows this list):

Realizations are oscillating.

Autocorrelations are damped and alternating in sign (a negative number raised to increasing powers flips sign, you fool).

The spectral density has a peak at $f = 0.5$ (a cycle length of 2).

Higher magnitudes of $\phi_1$ give stronger versions of these characteristics.
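And the negative-phi version (again with an arbitrary value, $\phi_1 = -0.9$): the realization oscillates, the sample ACF alternates in sign, and the periodogram piles up near $f = 0.5$.

set.seed(4)
x <- arima.sim(model = list(ar = -0.9), n = 200)

op <- par(mfrow = c(1, 3))
plot(x, main = "AR(1), phi = -0.9")                     # oscillating realization
acf(x, main = "Alternating, damped ACF")
spec.pgram(x, taper = 0, main = "Periodogram: peak near f = 0.5")
par(op)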

10.1.5 Nonstationary

If $\phi_1 = 1$ or $|\phi_1| > 1$, the realizations are from a nonstationary process. With $\phi_1 = 1$ the realizations look okay (this is actually a special ARIMA model), but with $|\phi_1| > 1$ we get crazy explosive realizations. Check this out:

nonstarma <- function(n, phi) {
    # simulate x[t] = phi * x[t-1] + a[t] starting from x[1] = 0
    x <- rep(0, n)
    a <- rnorm(n)          # white noise
    for (k in 2:n) {
        x[k] <- phi * x[k - 1] + a[k]
    }
    tplot(x)               # tplot(): plotting helper defined earlier in these notes
}

nonstarma(n = 50, phi = 1.5) + th
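For comparison, the same function with $\phi_1 = 1$ (still using the `tplot()` helper and `th` theme from earlier in these notes) gives a wandering, random-walk-looking realization rather than an explosive one:

nonstarma(n = 50, phi = 1) + th     # phi = 1: wanders but does not explode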