Chapter 15 Reflect, Review, and Relax

Motivating scenarios: We are taking stock of where we are in the term, thinking about stats and science, and making sure we understand the material to date.

Required reading / Viewing:

The Science of Doubt. link. By Michael Whitlock.

15.1 Review / Setup

  • So much of stats aims to learn the TRUTH.

  • We focus so much on our data and how to measure uncertainty around estimates and (in)compatibility of data with a null model. We will review and solidify this, but

  • Recognize that so much beyond sampling error can mislead us.

15.2 How science goes wrong

Watch the video below. When you do, consider these types of errors that accompany science. You should be able to think about these and ask good questions about them.

  • Fraud.
  • Wrong models.
  • Experimental design error.
  • Communication error.
  • Statistician error.
  • HARKing (hypothesizing after the results are known).
  • Coding error.
  • Technical error.
  • Publication bias.

You should have something to say about

  • The “replication crisis”, and
  • If/why preregistration of studies is a good idea.

FIGURE 15.1: Watch this hour-long video on The Science of Doubt by Michael Whitlock.

A brief word on publication bias: Scientists are overworked and have too much to do. They get more rewards for publishing statistically significant results, so those are usually higher on the to-do list. This results in the file drawer effect, in which non-significant results are less likely to be submitted for publication.
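To see how the file drawer effect distorts the literature, here is a quick simulation (a sketch in Python, just for illustration; the true effect of 0.2 and the sample size of 30 per group are made-up numbers). If only "significant" results get published, the published effect sizes systematically overestimate the truth.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2   # hypothetical small true effect (made up for illustration)
N = 30              # per-group sample size in each simulated study
N_STUDIES = 2000

published, all_estimates = [], []
for _ in range(N_STUDIES):
    # simulate a two-group study: control mean 0, treatment mean TRUE_EFFECT, sd 1
    control = [random.gauss(0, 1) for _ in range(N)]
    treatment = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    diff = statistics.mean(treatment) - statistics.mean(control)
    se = (statistics.stdev(control) ** 2 / N + statistics.stdev(treatment) ** 2 / N) ** 0.5
    all_estimates.append(diff)
    # crude "significance" filter: |t| > 2 roughly corresponds to p < 0.05
    if abs(diff / se) > 2:
        published.append(diff)

print(f"mean effect, all studies:      {statistics.mean(all_estimates):.3f}")
print(f"mean effect, 'published' only: {statistics.mean(published):.3f}")
```

Because this design is underpowered, only the studies that overestimate the effect by luck clear the significance bar, so the mean of the "published" estimates is well above the true value of 0.2.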

I find this stuff fascinating. If you want more, here are some good resources.
- A response to Whitlock from Butch Brodie link
- Videos from Calling Bullshit (largely redundant with the video above): 7.2 Science is amazing, but…, 7.3 Reproducibility, 7.4 A Replication Crisis, 7.5 Publication Bias, and 7.6 Science is not Bullshit.
- The replication crisis
  - Estimating the reproducibility of psychological science (Open Science Collaboration 2015) link
  - A glass half full interpretation of the replicability of psychological science (Leek, Patil, and Peng 2015) link
  - The Persistence of Underpowered Studies in Psychological Research: Causes, Consequences, and Remedies (Maxwell 2004) link
- P-hacking: The Extent and Consequences of P-Hacking in Science (Head et al. 2015) link
- The garden of forking paths: Why multiple comparisons can be a problem, even when there is no "fishing expedition" or "p-hacking" and the research hypothesis was posited ahead of time link

15.3 Review

You should be pretty comfortable with the ideas of

  • Parameters vs Estimates
  • Sampling and what can go wrong
  • Uncertainty
  • The sampling distribution
  • Null hypothesis significance testing
    • False positives
    • False negatives
    • What a p-value is and is not
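To tie the false positive and p-value ideas together, here is a small simulation (in Python, as an illustration, not course code). When the null hypothesis is true, a test run at α = 0.05 still declares a "significant" result about 5% of the time; that is the false positive rate we sign up for.

```python
import random
import statistics

random.seed(2)

N, N_SIMS = 20, 5000
false_positives = 0
for _ in range(N_SIMS):
    # both groups are drawn from the SAME distribution: the null is true
    a = [random.gauss(0, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = (statistics.stdev(a) ** 2 / N + statistics.stdev(b) ** 2 / N) ** 0.5
    # crude two-sided test: |t| > ~2.02 approximates p < 0.05 with ~38 df
    if abs(diff / se) > 2.02:
        false_positives += 1

print(f"false positive rate: {false_positives / N_SIMS:.3f}")
```

Note what this does and does not say: a small p-value means the data are improbable under the null, not that the null is improbable, and even with nothing going on we expect roughly one in twenty tests to cross the 0.05 line.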

15.4 Homework on Canvas

References

Open Science Collaboration. 2015. “Estimating the Reproducibility of Psychological Science.” Science 349 (6251). https://doi.org/10.1126/science.aac4716.
Head, Megan L., Luke Holman, and Rob Lanfear. 2015. “The Extent and Consequences of p-Hacking in Science.” PLOS Biology 13 (3): 1–15. https://doi.org/10.1371/journal.pbio.1002106.
Leek, Jeffrey T., Prasad Patil, and Roger D. Peng. 2015. “A Glass Half Full Interpretation of the Replicability of Psychological Science.” https://arxiv.org/abs/1509.08968.
Maxwell, Scott E. 2004. “The Persistence of Underpowered Studies in Psychological Research: Causes, Consequences, and Remedies.” Psychological Methods 9 (2): 147–63. https://doi.org/10.1037/1082-989x.9.2.147.