5.2 The Kalman Filter

FUN FACT: The Kalman filter was developed by Rudolf Kalman while he worked at the Research Institute for Advanced Studies in Baltimore, MD.

For the sake of introducing the Kalman filter, let’s take a simple model sometimes referred to as the “local level” model, which has a state equation of

$$
x_t = \theta x_{t-1} + w_t
$$

and an observation equation of

$$
y_t = x_t + v_t
$$

where we assume $w_t \sim \mathcal{N}(0, \tau^2)$ and $v_t \sim \mathcal{N}(0, \sigma^2)$.
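
To make the model concrete, here is a minimal simulation sketch in Python, assuming illustrative values for $\theta$, $\tau^2$, $\sigma^2$, and the series length (none of these values come from the text):

```python
import numpy as np

# Simulate the local level model: x_t = theta * x_{t-1} + w_t, y_t = x_t + v_t.
# The parameter values and series length below are illustrative assumptions.
rng = np.random.default_rng(0)

theta, tau2, sigma2 = 0.9, 1.0, 4.0
n = 100

x = np.zeros(n)  # latent states x_t
y = np.zeros(n)  # observations y_t

x_prev = 0.0     # start the state process at zero
for t in range(n):
    x[t] = theta * x_prev + rng.normal(0.0, np.sqrt(tau2))  # state equation
    y[t] = x[t] + rng.normal(0.0, np.sqrt(sigma2))          # observation equation
    x_prev = x[t]
```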

The basic one-dimensional Kalman filtering algorithm is as follows. We start with an initial state $x_0^0$ and initial variance $P_0^0$. From here we compute
$$
x_1^0 = \theta x_0^0,\qquad P_1^0 = \theta^2 P_0^0 + \tau^2
$$
as our best guesses for $x_1$ and $P_1$ given our current state. Given our new observation $y_1$, we can then update our guess based on this new information to get
$$
x_1^1 = x_1^0 + K_1\,(y_1 - x_1^0),\qquad P_1^1 = (1 - K_1)\,P_1^0,
$$
where $K_1 = P_1^0 / (P_1^0 + \sigma^2)$.
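
As a worked example of this first step, the sketch below carries out the prediction and update once. The starting values, parameters, and the observation $y_1$ are made-up numbers for illustration, not values from the text.

```python
# One pass through the first prediction/update step.
# theta, tau2, sigma2, x00, P00, and y1 are illustrative assumptions.
theta, tau2, sigma2 = 0.9, 1.0, 4.0
x00, P00 = 0.0, 1.0            # initial state estimate and variance

# Predict x_1 and P_1 from the information available at time 0
x10 = theta * x00              # x_1^0 = theta * x_0^0
P10 = theta**2 * P00 + tau2    # P_1^0 = theta^2 * P_0^0 + tau^2

# Update with the first observation y_1
y1 = 2.5                       # hypothetical observation
K1 = P10 / (P10 + sigma2)      # Kalman gain K_1
x11 = x10 + K1 * (y1 - x10)    # filtered estimate x_1^1
P11 = (1 - K1) * P10           # filtered variance P_1^1

print(K1, x11, P11)            # K_1 ~ 0.31, x_1^1 ~ 0.78, P_1^1 ~ 1.25
```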

For the general case, we want to produce a new estimate of $x_t$, and we have the current state estimate $x_{t-1}^{t-1}$ and variance $P_{t-1}^{t-1}$. The one-step prediction is then
$$
x_t^{t-1} = \theta x_{t-1}^{t-1},\qquad P_t^{t-1} = \theta^2 P_{t-1}^{t-1} + \tau^2.
$$
Given the new information $y_t$, we can then update our estimate to get
$$
x_t^t = x_t^{t-1} + K_t\,(y_t - x_t^{t-1}),\qquad P_t^t = (1 - K_t)\,P_t^{t-1},
$$
where
$$
K_t = \frac{P_t^{t-1}}{P_t^{t-1} + \sigma^2}
$$
is the Kalman gain coefficient.
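
This recursion translates directly into a short loop. The sketch below is one way it might be implemented; the function name and interface are assumptions of mine, not something defined in the text.

```python
import numpy as np

def kalman_filter_1d(y, theta, tau2, sigma2, x0=0.0, P0=1.0):
    """Run the 1-D Kalman filter recursion over the observations y,
    returning the filtered means x_t^t and variances P_t^t."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    xf = np.zeros(n)   # filtered state estimates x_t^t
    Pf = np.zeros(n)   # filtered variances P_t^t
    x_prev, P_prev = x0, P0
    for t in range(n):
        # One-step prediction from time t-1 to time t
        x_pred = theta * x_prev
        P_pred = theta**2 * P_prev + tau2
        # Update with the new observation y_t
        K = P_pred / (P_pred + sigma2)        # Kalman gain K_t
        xf[t] = x_pred + K * (y[t] - x_pred)
        Pf[t] = (1 - K) * P_pred
        x_prev, P_prev = xf[t], Pf[t]
    return xf, Pf
```

With the series simulated earlier, `xf, Pf = kalman_filter_1d(y, theta, tau2, sigma2)` returns the filtered means and variances at every time point.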

If we look at the formula for the Kalman gain, it’s clear that if the measurement noise is high, so $\sigma^2$ is large, then the Kalman gain will be closer to 0, and the influence of the new data point $y_t$ will be small. If $\sigma^2$ is small, then the filtered value $x_t^t$ will be adjusted more in the direction of $y_t$. This is important to remember when tuning the Kalman filtering algorithm for specific applications. The general idea is

$$
\sigma^2\ \text{is large} \Longrightarrow \text{Trust the system}
$$
$$
\tau^2\ \text{is large} \Longrightarrow \text{Trust the data}
$$
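
As a quick numerical check of this intuition, the snippet below holds the prediction variance fixed at an illustrative value and shows how the gain shrinks as $\sigma^2$ grows:

```python
# Holding the prediction variance fixed at an illustrative value,
# a larger sigma^2 shrinks the Kalman gain toward 0.
P_pred = 1.0
for sigma2 in (0.1, 1.0, 10.0):
    K = P_pred / (P_pred + sigma2)
    print(f"sigma2 = {sigma2:5.1f}  ->  K = {K:.3f}")
# sigma2 =   0.1  ->  K = 0.909   (trust the data)
# sigma2 =   1.0  ->  K = 0.500
# sigma2 =  10.0  ->  K = 0.091   (trust the system)
```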