1.1 Example: Linear Models

Consider the simple linear model

$$
y = \beta_0 + \beta_1 x + \varepsilon.
$$

This model has unknown parameters $\beta_0$ and $\beta_1$. Given observations $(y_1, x_1), (y_2, x_2), \ldots, (y_n, x_n)$, we can combine these data with the likelihood principle, which gives us a procedure for producing model parameter estimates. The likelihood can be maximized to produce the maximum likelihood estimates

$$
\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}
\qquad\text{and}\qquad
\hat{\beta}_1 = \frac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^n (x_i - \bar{x})^2}.
$$

These statistics, $\hat{\beta}_0$ and $\hat{\beta}_1$, can then be interpreted, depending on the area of application, or used for other purposes, perhaps as inputs to other procedures. In this simple example, we can see how each component of the modeling process works.

| Component           | Implementation                   |
|---------------------|----------------------------------|
| Model               | Linear regression                |
| Principle/Technique | Likelihood principle             |
| Algorithm           | Maximization                     |
| Statistic           | $\hat{\beta}_0$, $\hat{\beta}_1$ |
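
To make the components concrete, here is a minimal sketch in Python that computes the closed-form estimates directly from the formulas above. The synthetic data, the NumPy dependency, and the "true" parameter values are assumptions for illustration only.

```python
import numpy as np

# Synthetic data for illustration; the "true" values 2.0 and 0.5 are arbitrary.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=50)

x_bar, y_bar = x.mean(), y.mean()

# Closed-form maximum likelihood estimates from the formulas above.
beta1_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
beta0_hat = y_bar - beta1_hat * x_bar

print(beta0_hat, beta1_hat)  # should be near 2.0 and 0.5
```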

In this example, the maximization of the likelihood was simple because the solution was available in closed form. However, in most other cases there will not be a closed-form solution, and some specific algorithm will be needed to maximize the likelihood.
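
As a sketch of that more common situation, the following recovers the same estimates by numerically minimizing the negative log-likelihood with `scipy.optimize.minimize`. The Gaussian error model with the variance fixed at 1, and the SciPy dependency, are assumptions of this example, not part of the text.

```python
import numpy as np
from scipy.optimize import minimize

# Same synthetic data as in the previous sketch.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=50)

def neg_log_likelihood(beta, x, y):
    # Gaussian errors with sigma fixed at 1: up to an additive constant,
    # the negative log-likelihood is half the residual sum of squares.
    beta0, beta1 = beta
    resid = y - (beta0 + beta1 * x)
    return 0.5 * np.sum(resid ** 2)

result = minimize(neg_log_likelihood, x0=np.zeros(2), args=(x, y))
print(result.x)  # agrees with the closed-form estimates to numerical tolerance
```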

Changing the implementation of a given component can lead to different outcomes further down the chain and can even produce completely different outputs. Identical estimates for the parameters in this model can be produced (in this case) by replacing the likelihood principle with the principle of least squares. However, changing the principle to produce, for example, maximum a posteriori estimates would have produced different statistics at the end.
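
The following sketch illustrates both claims under the same assumed Gaussian setup: ordinary least squares reproduces the maximum likelihood estimates, while a maximum a posteriori estimate under a hypothetical independent Gaussian prior on the coefficients (equivalent to a ridge penalty) produces different statistics from the same data. The prior precision `lam` is an arbitrary illustrative choice.

```python
import numpy as np

# Same synthetic data as before, in design-matrix form.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=50)
X = np.column_stack([np.ones_like(x), x])

# Principle of least squares: identical to maximum likelihood here.
beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

# MAP estimate under a hypothetical Gaussian prior on the coefficients,
# which amounts to a ridge penalty and shrinks the estimates.
lam = 10.0  # illustrative prior precision, not from the text
beta_map = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

print(beta_ls)   # matches the likelihood-based estimates
print(beta_map)  # a different statistic from the same data
```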