## 14.3 The likelihood ratio test

$t_{LR} = 2[l(\hat{\theta})-l(\theta_0)] \sim \chi^2_v$

where $v$ is the degrees of freedom.

This statistic compares the height of the log-likelihood at the sample estimate $\hat{\theta}$ with the height of the log-likelihood at the hypothesized population parameter $\theta_0$.
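As a minimal numeric sketch (not from the text), consider testing the mean of a normal sample with known $\sigma$; the sample size, seed, and true mean below are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma = 1.0
theta0 = 0.0                              # hypothesized mean under H0
x = rng.normal(0.3, sigma, size=100)      # data drawn with a true mean of 0.3

def loglik(theta):
    # log-likelihood l(theta) of the sample under N(theta, sigma^2)
    return np.sum(stats.norm.logpdf(x, loc=theta, scale=sigma))

theta_hat = x.mean()                      # MLE of the mean
t_lr = 2 * (loglik(theta_hat) - loglik(theta0))
p_value = stats.chi2.sf(t_lr, df=1)       # v = 1 free parameter restricted by H0
```

For this model, $t_{LR}$ reduces in closed form to $n(\bar{x}-\theta_0)^2/\sigma^2$, which makes the chi-squared calibration with one degree of freedom easy to check by hand.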

Alternatively,

This test considers a ratio of two maximizations,

\begin{aligned} L_r &= \text{maximized value of the likelihood under } H_0 \text{ (the reduced model)} \\ L_f &= \text{maximized value of the likelihood under } H_0 \cup H_a \text{ (the full model)} \end{aligned}

Then, the likelihood ratio is:

$\Lambda = \frac{L_r}{L_f}$

which cannot exceed 1, since $L_f$ is always at least as large as $L_r$: $L_r$ is the result of a maximization over a restricted set of the parameter values.
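A tiny sketch of this bound (the counts and the restriction $p = 0.5$ are assumptions for illustration): the binomial likelihood maximized at the unrestricted MLE $\hat{p} = k/n$ can only be larger than the likelihood at any restricted value.

```python
from scipy import stats

k, n = 7, 10                            # 7 successes in 10 trials
p_hat = k / n                           # unrestricted MLE of p
L_f = stats.binom.pmf(k, n, p_hat)      # full-model maximum
L_r = stats.binom.pmf(k, n, 0.5)        # maximum under the restriction p = 0.5
Lambda = L_r / L_f                      # likelihood ratio, at most 1
```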

The likelihood ratio statistic is:

\begin{aligned} -2\ln(\Lambda) &= -2\ln(L_r/L_f) = -2(l_r - l_f) \\ -2\ln(\Lambda) &\xrightarrow{d} \chi^2_v \quad \text{as } n \to \infty \end{aligned}

where $v$ is the number of parameters in the full model minus the number of parameters in the reduced model.

If $L_r$ is much smaller than $L_f$ (i.e., the statistic $-2\ln(\Lambda)$ exceeds $\chi_{\alpha,v}^2$), then we reject the reduced model and accept the full model at the $\alpha \times 100 \%$ significance level.
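The full-versus-reduced comparison can be sketched end to end with a hypothetical two-sample Poisson example (group sizes, rates, and seed are illustrative assumptions): the reduced model fits one common rate, the full model fits a rate per group.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.poisson(3.0, size=50)            # group A counts
b = rng.poisson(4.0, size=50)            # group B counts

def loglik(data, lam):
    # Poisson log-likelihood of the data at rate lam
    return np.sum(stats.poisson.logpmf(data, lam))

# Reduced model (H0): a single common rate for both groups
lam_pooled = np.concatenate([a, b]).mean()
l_r = loglik(a, lam_pooled) + loglik(b, lam_pooled)

# Full model: each group gets its own MLE rate
l_f = loglik(a, a.mean()) + loglik(b, b.mean())

stat = -2 * (l_r - l_f)                  # -2 ln(Lambda)
v = 2 - 1                                # parameters in full minus reduced
crit = stats.chi2.ppf(0.95, df=v)        # critical value at alpha = 0.05
reject = stat > crit                     # reject the reduced model?
```

Because the pooled rate lies inside the full model's parameter space, $l_f \ge l_r$ always holds, so the statistic is nonnegative by construction.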