7 Day 7 (February 11)

7.1 Announcements

  • Please read (and re-read) Ch. 4 and 6 in BBM2L book.

  • Final project is posted

    • Short lecture with donuts/meet people on Thursday
  • Activity 3 is posted

  • Selected questions/clarifications from journals

    • Two guiding principles in statistics
    • Loss functions and decision theory ([good paper](https://mhooten.github.io/publications/Williams_Hooten_EcolApps_2016.pdf))
    • “This class (Day 6) felt exactly like the previous class (Day 5). The only difference was two sentences regarding Metropolis-Hastings and rejection sampling at the end of the class…”
    • “One question I have is about the usefulness of Bayesian methods when sample size is large….Does that then mean that in most cases it doesn’t really make a different which method you use?”
    • “How to get from the assumptions to the model, and (2) how to translate the model into code.”
    • “However, in my discipline, we mostly compare the effect of intervention between two or more groups, and so in as much as the magnitude and uncertainty around the magnitude is important, how large or small the difference is between the two groups, that is not due to chance alone, is equally significant as well. Could it be that in such scenarios the magnitude, uncertainty, and P-values are equally significant.”
    • “In Activity 2, Problem 8, you seemed to suggest that there’s NO POINT in solving the integral analytically, but I respectfully disagree. When dealing with very complex problems or models, simplifying or adjusting the integrals, even slightly, can significantly reduce the computational power required to make progress.”

7.2 Building our first statistical model

  • The backstory
  • Building a statistical model using a Bayesian approach
    • Specify (write out) the likelihood/data model
    • Specify the parameter model (or prior) including hyper-parameters
    • Select an approach to obtain the posterior distribution
      • Analytically (i.e., pencil and paper)
      • Deterministic numerical algorithm
      • Simulation-based or stochastic algorithm (e.g., Metropolis-Hastings, MCMC, importance sampling, ABC, etc.)
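As a concrete instance of the three steps above, here is a minimal sketch in Python (the course uses R, but the logic is identical) for a conjugate coin-flip model, where the posterior is available analytically. The counts `y` and `n` and the hyper-parameters `a` and `b` are made-up illustration values, not the in-class bat data.

```python
# Step 1: likelihood/data model -- y successes in n trials: y ~ Binomial(n, p)
y, n = 7, 20

# Step 2: parameter model (prior) with hyper-parameters a, b: p ~ Beta(a, b)
a, b = 1, 1  # Beta(1, 1) is uniform on [0, 1]

# Step 3: obtain the posterior analytically (pencil and paper):
# conjugacy gives p | y ~ Beta(a + y, b + n - y)
post_a, post_b = a + y, b + n - y

# Posterior mean of a Beta(post_a, post_b) distribution
post_mean = post_a / (post_a + post_b)
print(post_mean)  # 8/22, about 0.364
```

The simulation-based algorithms in the next sections target exactly this kind of posterior, but without requiring the closed form.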

7.3 Rejection sampling

  • Note this material is not in the book

  • What is rejection sampling?

  • Live example using bat data/model

  • Live example using bat and coin data/model

7.4 Introduction to Metropolis-Hastings algorithm

  • Note this material is in Ch. 4

  • What is a Metropolis-Hastings algorithm?

  • Why use a Metropolis-Hastings algorithm?

    • Original work (see link and link)
    • Wikipedia page
    • Only need to know a function that is proportional to the PDF/PMF
    • Why is this such a big deal for Bayesian statistics?
    • What else do we need to unlock the power of Bayesian methods?
  • What we lose by using a Metropolis-Hastings algorithm

    • Requires a bit more programming and supervision/checking
    • Correlated samples vs. independent samples
    • Burn-in interval
  • Live example using bat and coin data/model

  • Automated software

    • All of this in R (WinBUGS, OpenBUGS, JAGS, NIMBLE, Stan, etc.)
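Before handing the work to automated software, it helps to see the algorithm once by hand. Below is a minimal random-walk Metropolis sketch in Python (the course uses R; the coin counts `y`, `n`, the proposal standard deviation, and the burn-in length are hypothetical illustration choices, not the in-class values). The key point from above: the acceptance ratio uses only a function *proportional* to the posterior, so the normalizing constant cancels.

```python
import math
import random

random.seed(1)

y, n = 7, 20  # hypothetical coin data: y successes in n trials

def log_unnorm_post(p):
    """Log unnormalized posterior: Binomial likelihood times Uniform prior."""
    if p <= 0 or p >= 1:
        return float("-inf")
    return y * math.log(p) + (n - y) * math.log(1 - p)

p_cur = 0.5  # starting value
chain = []
for t in range(20000):
    p_prop = p_cur + random.gauss(0, 0.1)  # symmetric random-walk proposal
    # Acceptance ratio: the unknown normalizing constant cancels here
    log_r = log_unnorm_post(p_prop) - log_unnorm_post(p_cur)
    if math.log(random.random()) < log_r:
        p_cur = p_prop  # accept; otherwise keep the current value
    chain.append(p_cur)

burned = chain[2000:]  # discard a burn-in interval
print(sum(burned) / len(burned))  # close to the posterior mean 8/22
```

Unlike rejection sampling, no envelope bound is needed, but the draws are correlated and the burn-in and proposal scale require the supervision and checking noted above.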