Chapter 4 (M4): Estimator behavior
Having established some ways to create estimators, in this module we dig into how those estimators actually behave. Do they…work? How well? What happens if we have a really big sample to base them on? These questions turn out to underlie the inference tools you’ve been working with your whole statistical life, so there’s a lot to say about them :)
Learning goals for this module include:
- Describe an estimator’s bias, variance, and MSE (mean squared error)
  - …which includes talking about what those quantities are (what do they say about the estimator and the estimates it gives you?), but also actually finding them for a specific estimator (the definitions, and a small simulation sketch, appear after this list)
- Describe the Fisher information associated with an estimator
  - Again, this involves both explaining what Fisher information is/does (what is it for?) and actually finding it.
  - As always, Assessments won’t involve super-complicated algebra (or you might just have to set things up and not calculate it all out).
  - This includes both regular (expected) and observed Fisher information! (both are written out in the reference formulas after this list)
- Find large-sample distributions for estimators
  - Particularly for MLEs! (the usual asymptotic-normality result is spelled out after this list)
- Explain and assess consistency, efficiency, and the Cramér-Rao Lower Bound (CRLB)
  - This includes defining/explaining each one separately (what does it say? why is that useful?), but also explaining how they relate to each other (a compact summary of all three appears after this list).
  - The “and assess” part means you can actually determine whether a given estimator displays these properties!
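For quick reference as you work through these goals, here are the standard definitions they point at, in generic notation (one parameter, one estimator of it; this is just a summary, not tied to any particular example in the course). First, bias, variance, and MSE:

```latex
% Bias, variance, and MSE of an estimator \hat{\theta} of \theta
\mathrm{Bias}(\hat{\theta}) = E[\hat{\theta}] - \theta,
\qquad
\mathrm{MSE}(\hat{\theta}) = E\big[(\hat{\theta} - \theta)^2\big]
  = \mathrm{Var}(\hat{\theta}) + \mathrm{Bias}(\hat{\theta})^2
```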
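Next, Fisher information in both flavors mentioned above, for an i.i.d. sample from a density f(x; θ), with the usual regularity conditions assumed:

```latex
% "Regular" (expected) Fisher information from a single observation:
I(\theta)
  = E\!\left[\left(\frac{\partial}{\partial\theta}\log f(X;\theta)\right)^{2}\right]
  = -\,E\!\left[\frac{\partial^{2}}{\partial\theta^{2}}\log f(X;\theta)\right]

% Observed Fisher information: minus the curvature of the actual log-likelihood
% \ell(\theta) = \sum_{i=1}^{n} \log f(x_i;\theta), evaluated at the MLE:
J(\hat{\theta})
  = -\left.\frac{\partial^{2}}{\partial\theta^{2}}\,\ell(\theta)\right|_{\theta = \hat{\theta}}
```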
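The large-sample result behind the third goal: under the same regularity conditions, the MLE from an i.i.d. sample of size n is approximately normal, centered at the true value, with variance driven by the Fisher information:

```latex
% Approximate (large-n) sampling distribution of the MLE:
\hat{\theta}_{\mathrm{MLE}} \;\overset{\cdot}{\sim}\;
  N\!\left(\theta,\ \frac{1}{n\,I(\theta)}\right)
\quad \text{for large } n
```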
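And the three properties from the last goal, stated compactly (again for one parameter and an i.i.d. sample):

```latex
% Consistency: the estimator converges in probability to the true value as n grows
\hat{\theta}_n \;\xrightarrow{\;p\;}\; \theta \quad \text{as } n \to \infty

% Cramér-Rao Lower Bound: no unbiased estimator can have smaller variance than this
\mathrm{Var}(\hat{\theta}) \;\ge\; \frac{1}{n\,I(\theta)}
\quad \text{for any unbiased estimator } \hat{\theta}

% Efficiency: an unbiased estimator is efficient if it attains the CRLB,
% i.e. \mathrm{Var}(\hat{\theta}) = 1/(n\,I(\theta))
```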
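Finally, “actually finding” bias, variance, and MSE doesn’t have to mean algebra; you can also simulate. Here’s a small Monte Carlo sketch (my own illustration with made-up settings, not an assigned example) for the MLE of an exponential rate, which happens to be slightly biased, so it’s a nice test case to compare against the CRLB:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up settings for the illustration: true rate, sample size, and number of
# simulated samples (none of these come from the course materials).
lam_true = 2.0
n = 50
reps = 100_000

# Draw `reps` samples of size n from Exponential(rate = lam_true);
# numpy parameterizes the exponential by its scale, which is 1/rate.
samples = rng.exponential(scale=1 / lam_true, size=(reps, n))

# MLE of the rate for each simulated sample: lambda_hat = 1 / (sample mean)
lam_hat = 1 / samples.mean(axis=1)

bias = lam_hat.mean() - lam_true
var = lam_hat.var()
mse = ((lam_hat - lam_true) ** 2).mean()   # should match var + bias**2 up to MC error

# CRLB for unbiased estimators of lambda: 1 / (n * I(lambda)), with I(lambda) = 1/lambda^2
crlb = lam_true**2 / n

print(f"bias ~ {bias:.4f}   (exact theory: lambda/(n-1) = {lam_true / (n - 1):.4f})")
print(f"var  ~ {var:.4f}")
print(f"mse  ~ {mse:.4f}   (check: var + bias^2 = {var + bias**2:.4f})")
print(f"CRLB = {crlb:.4f}   (benchmark; the bound as stated is for unbiased estimators)")
```

With these settings the simulated bias should land near 2/49 ≈ 0.041 and the variance a bit above the CRLB of 0.08, which is exactly the kind of check the “assess” goals are about.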