## 4.2 Notes by chapter

### 4.2.1 Chapter 1

This chapter gives an introduction to the experimental process in general. It’s mostly philosophical, and super wordy. Skim it to get a sense of how BHH (and experimenters in general) approach the world, and to get used to their language; but don’t take a bunch of notes on the specifics.

Notes by section:

- 1.1: Very general philosophy and a couple of puns.
- I have my doubts about the “two-sided brain” thing, but okay, whatever, Bill.

- 1.2: Mostly useful if you are interested in tips for playing 20 Questions.
- 1.3: This introduces a few actual terms (complexity, experimental error, and correlation vs. causation). They are hopefully not new to you, but take a moment to refresh your acquaintance with them.
- Everybody loves that stork example. You can also visit <https://tylervigen.com> for more fun with spurious correlations.

- 1.4: This is sort of an annotated version of the table of contents: Rita and Peter’s fictional investigation is specifically designed to match BHH’s choice (and order) of topics, which we will not necessarily follow. You can skim or skip this now, but it may be more enlightening to revisit it toward the end of the course, to help you see one way that all the different tools can fit together into a single investigation.
- 1.5: This bit you should actually read, although it isn’t technical at all. Again, hopefully the ideas are not new to you, but they may be more important here than in your previous coursework, especially if you haven’t done applied stats research (or an applied Stat elective) before. Applied statistics relies heavily on things that you may not think of as “statistics” – like subject matter knowledge, goal setting, and communication.
- Sir Ronald Fisher (1890-1962) was a very problematic person in a lot of ways and I do not recommend him either as a role model or as a lunch buddy. He did come up with quite a lot of the foundational ideas in experimental design, though, so his name will crop up periodically in BHH. Like all fields, statistics continues to struggle with how to handle contributions from flawed historical figures. I’m no expert on this issue myself, but I’m happy to chat with you about it in office hours if you like. In the regular course of class, I will mostly not mention Fisher specifically, unless something is actually named after him; the origins of any given concept are more complicated than “this one dude invented it” anyway. Fisher gets enough air time, he doesn’t need more from me.

### 4.2.2 Chapter 2

Even BHH tells you to skip this chapter if you remember intro stats. Frankly, I might tell you to skip most of it even if you *don’t* remember intro stats. BHH takes an approach to terminology and methodology here that, while interesting, is not very standard and will probably be more confusing than it’s worth. For the most part, if you want to brush up on some stuff that’s gotten dusty since you took intro, I’d recommend going back to your previous courses’ notes instead of wading through this chapter.

- 2.1: Okay, this one is actually good to read – a short introduction of some fundamental vocabulary.
- 2.2: Some concepts you encountered in intro stat (visualizing distributions, probability, density, population mean) but with weird terminology.
- 2.3: This is also good stuff, but you probably know it already – population/sample, parameter/statistic, expected value.
- There’s a nice little rant about random sampling in here, which can also be encapsulated in this comic: https://xkcd.com/2357/

- 2.4: More on various measures/statistics you’ve seen before. We don’t really use the “coefficient of variation.”
- The subsection “Residuals and Degrees of Freedom” touches on an important concept. My notes on this topic approach it a bit differently, but you may find this useful as an alternate way of thinking about it.

- 2.5: Normal distributions and the Central Limit “Effect” (because BHH just can’t use the same wording as everyone else). Useful if you lost your old notes/textbook, I guess.
- 2.6: You may or may not have seen normal probability plots before. They’re a way of visually checking whether your data points match what you’d expect to draw from a normal distribution, used largely for checking the normality condition and spotting outliers. I don’t love BHH’s explanation, but maybe you will.
- 2.7: Joint and conditional distributions, independence. Again, I’d go with your old notes over this explanation.
- 2.8: This is a pretty good theoretical recap of covariance and correlation.
- It also mentions *autocorrelation*. We won’t really concern ourselves with formal autocorrelation in this course, but I guess you could read this if you’re interested in doing work with time series.

- 2.9: Not a bad review of Student’s *t*, but please do not spend your time on Table A or any other probability tables; we have R now, thank you.
- Goes on to talk about the sampling distribution of \(\bar{y}\), which we will indeed discuss. But, again, you may have better sources of notes on this.

- 2.10: It says this is about “sampling distributions” but the most useful thing in here is a reminder of the rules for adding/subtracting variance and expected value. That stuff is also in Appendix 2A.
- 2.11: Okay, now this is actually about sampling distributions, for normals.
- 2.12: We’ll talk about the chi-square and F distributions when we need them (mostly for ANOVA).
- “A First Look at Robustness” defines *robust*. This is a good word to have on hand. We often talk about the “assumptions and conditions” for a particular test/analysis/statistic, like normality or equal variance. Robustness means that the analysis pretty much works even if you *don’t* meet those conditions.

- 2.13: You don’t need to know all this stuff about the binomial distribution for our purposes.
- 2.14: Same for the Poisson. You might come back to this depending on your project topics later.
- Appendix 2A: These rules for computing things with variances are useful, but again, this notation is a little clunky, and you probably have better notes on them elsewhere.
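
Those Appendix 2A rules are easy to sanity-check numerically. Here’s a minimal sketch (in Python for concreteness; the data are made up) of the two rules we lean on most: expected values add, and for *independent* variables, variances add even when you subtract.

```python
from itertools import product

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    # Population variance over equally likely outcomes: E[(X - E[X])^2]
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Two independent "random variables", each uniform over its listed outcomes
# (values are made up for illustration).
X = [1, 2, 3, 4]
Y = [10, 20, 30]

# Independence means every (x, y) pair is equally likely, so we can just
# enumerate the joint distribution.
sums  = [x + y for x, y in product(X, Y)]
diffs = [x - y for x, y in product(X, Y)]

# E[X + Y] = E[X] + E[Y] (this one holds even without independence)
assert abs(mean(sums) - (mean(X) + mean(Y))) < 1e-9

# For independent X and Y: Var(X + Y) = Var(X - Y) = Var(X) + Var(Y)
assert abs(var(sums)  - (var(X) + var(Y))) < 1e-9
assert abs(var(diffs) - (var(X) + var(Y))) < 1e-9
print("expected-value and variance rules check out")
```

The subtraction case is the one people forget: variances still *add*, they never subtract.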

### 4.2.3 Chapter 3

This chapter is more or less where BHH goes over the idea of inference on the difference between two groups. But it gets…weird. There are some sections that are useful, noted in **bold** below. Beyond that I really recommend that you consult other sources – my notes, class discussion, or notes from previous courses – for reviewing inference principles.

Notes by section:

- 3.1: Building up the idea of hypothesis testing, but using weird vocabulary you probably haven’t seen before. Not recommended, unless (and this may be true!) you never really clicked with the way people explained significance testing to you, in which case, who knows, maybe this is the one.
- The subsection “A Randomized Design Used in the Comparison of Standard and Modified Fertilizer Mixtures for Tomato Plants” (p. 78) talks about our old friend the tomato experiment (it’s a classic). But I still don’t recommend it.

- 3.2: A discussion of a *paired* analysis. You may recall paired *t*-tests from intro.
- The first subsection, **“An Experiment on Boys’ Shoes,”** is worth reading because (a) they’ll reference this experiment later, so it’s good to know what it was about, and (b) it shows another example of how randomization works. If you consider each boy to be a block, this is randomization within blocks!
- The next few subsections are weird and you should probably skip them.
- **“What Does This Tell Us?”** is actually a nice discussion of how to approach analysis in the real world (including the importance of communication with your client/whoever’s actually running the experiment).

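
The mechanics of a paired analysis like the boys’ shoes experiment are worth seeing once with numbers. A quick sketch with hypothetical measurements (these are *not* BHH’s actual shoe data): the whole point of pairing is to analyze the within-pair differences, so boy-to-boy variation drops out.

```python
import math

# Hypothetical paired data: wear for material A and material B on the
# same boy, one shoe each (numbers invented for illustration).
material_a = [10.5, 9.8, 11.2, 10.1, 9.6, 10.9]
material_b = [11.0, 10.1, 11.4, 10.6, 9.9, 11.5]

# Only the within-boy differences enter the analysis.
d = [b - a for a, b in zip(material_a, material_b)]
n = len(d)
dbar = sum(d) / n
s2 = sum((x - dbar) ** 2 for x in d) / (n - 1)  # sample variance of the differences
se = math.sqrt(s2 / n)                          # standard error of dbar
t = dbar / se                                   # paired t statistic, df = n - 1

print(f"mean difference = {dbar:.3f}, t = {t:.2f} on {n - 1} df")
```

Compare the resulting statistic to a *t* distribution with \(n-1\) degrees of freedom (in R, `pt()`); notice that the individual boys’ wear levels never appear, only the differences.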
- **3.3**: An accompaniment to the “Vocab, part 1” lecture, explaining blocking and randomization in different words.
- Skip “About the Nonparametric and Distribution Free Tests.” Even if you end up doing a project on these, there’s probably a better resource than BHH.

- **3.4**: A pretty good summary of the basics of experimentation.
- Point #2 here relates to the “Replication and pseudo-replication” notes.
- Don’t worry about the bits about “randomization test,” “distribution free tests,” and “exchangeability.”

- 3.5: A sort of grab bag of inference-related topics. **Potentially useful** (but again, no more useful than your old notes) subsections include: “One- and Two-Sided Significance Tests,” “Conventional Significance Levels,” “Confidence Intervals for a Difference in Means: Paired Comparison Design,” “A Confidence Distribution” (we don’t really do these, but it’s a cute idea), “Confidence Intervals Are More Useful Than Single Significance Tests” (true!), “Confidence Intervals for a Difference in Means: Unpaired Design,” and “Testing the Ratio of Two Variances” (but only later on in the course).

- 3.6: Skip, at least for now. This is of use only if you are interested in a discrete (binary) response.
- 3.7: Skip, at least for now. This is of use only if you are interested in a *count* response.
- 3.8: Skip, at least for now. This is of use only if you are interested in a *categorical* response.
- Appendix 3A: Skip, at least for now. I guess you could come back to this if you decide you are interested in robustness for your project.

### 4.2.4 Chapter 4

BHH considers this chapter to be where things start getting good, and I kind of agree. As in other chapters, you’ll find that BHH has an idiosyncratic way of saying things. But sometimes it can help to see a different approach to a topic, so who knows, maybe it’ll be great!

Notes by section:

- 4.1: This is BHH’s approach to what we call “one-way ANOVA” – examining the effect of a single treatment factor with several levels.
- There are several differences in notation here! For example, where the lecture uses the general \(ij\) indexing for group and individual, BHH uses \(t\) for the group index and \(i\) for the individual. Also, \(\overline{y}\) (with no subscript, as opposed to \(\overline{y}_t\)) is the grand mean, \(\overline{\overline{y}}\).
- BHH goes through this example in numerical detail, including showing the “deviations” for each data point – what the lecture refers to as \(y_{ij} - \overline{\overline{y}}\), \(y_{ij} - \overline{y}_i\), and \(\overline{y}_i - \overline{\overline{y}}\). If the visuals didn’t really work for you, this may help you understand the grand mean/group mean/individual thing more clearly.
- The subsection “Graphical ANOVA” is extra super optional. But it is sort of a cute way of visualizing what’s happening in ANOVA, and maybe you will find it helpful.
- The subsection “Geometry and the ANOVA Table” is triple extra optional and I honestly don’t know why it’s there. If you read it and derive some insight from it, definitely post to a Topic Conversation about it!
- Subsections “Assumptions” and “Graphical Checks” relate to the lecture on diagnostics – if you’re just dealing with the first couple of lectures on how ANOVA works, stop here and come back later.
- The next few subsections (“A Conclusion Instead of an Argument”; “Preparation”; “Practical Considerations”; “Extrapolation”) are not about math, but about the process of actually planning and running experiments in the real world. I don’t know why they are tucked in here with the tables of deviations, but that’s BHH for you. These are good sections to read for their pragmatic information, although they will not really help you understand the technicalities of ANOVA.
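
If the tables of deviations in 4.1 feel abstract, the decomposition is easy to verify yourself. A quick sketch in lecture notation, with made-up data: the squared deviations \(y_{ij} - \overline{\overline{y}}\) split exactly into the within-group pieces \(y_{ij} - \overline{y}_i\) and the between-group pieces \(\overline{y}_i - \overline{\overline{y}}\).

```python
# Hypothetical one-way layout: three treatment groups (values made up).
groups = {
    "A": [21.0, 23.5, 22.1, 24.4],
    "B": [18.2, 19.9, 17.6, 20.3],
    "C": [25.1, 26.8, 24.7, 27.0],
}

all_y = [y for ys in groups.values() for y in ys]
grand_mean = sum(all_y) / len(all_y)              # y-double-bar

ss_total = ss_between = ss_within = 0.0
for ys in groups.values():
    group_mean = sum(ys) / len(ys)                # y-bar_i
    for y in ys:
        ss_total   += (y - grand_mean) ** 2       # y_ij minus grand mean
        ss_within  += (y - group_mean) ** 2       # y_ij minus group mean
        ss_between += (group_mean - grand_mean) ** 2  # group mean minus grand mean

# The ANOVA identity: the deviations partition the total sum of squares.
assert abs(ss_total - (ss_between + ss_within)) < 1e-9
print(f"SS_total = {ss_total:.2f} = {ss_between:.2f} (between) + {ss_within:.2f} (within)")
```

This identity holds for any data, which is why the ANOVA table always balances; the interesting question is only how big the between piece is relative to the within piece.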

### 4.2.5 Chapter 6

Things continue to get more technical in this chapter. Still got a few weird ways of saying things, but I think there is less weirdness to wade through to get to the substantive content. So that’s nice.

Notes by section:

- 6.1: In vaguely Goos-Jones-esque style, BHH throws you immediately into an example. Don’t worry too much about the fact that there are multiple responses here; we’ll pretty much consider each response separately. It is interesting to look at which factors seem to matter for each response, though!
- This section very casually introduces the terminology “fraction of a \(2^k\) design”; don’t let them sneak that past you.

- 6.2: Another example. This contour-lines thing is interesting, though not a central technique for our current purposes. You may have met contour lines before on R’s built-in regression diagnostic plots (one of which shows contour lines for Cook’s distance), and/or on a hiking trip looking at elevation maps.
- 6.3: Yet another example. This one is good for seeing the basics of how a fractional factorial design works: deliberately confounding a factor with a high-order interaction that you think doesn’t matter.
- Look closely at Table 6.3: confirm that the columns A, B, C, and D are all orthogonal.
- Find the “product” of the A, B, and C columns. (For example, run 1 is \(-1 * -1 * -1 = -1\).) That product is the column for the interaction \(ABC\). But it’s identical to the settings for factor D!
- This is the key idea: because we are not attempting to distinguish between D and ABC, we don’t need as many runs. The cost is that if we see that “D” has an effect, we can’t be sure whether it’s due to D or to ABC, because they are completely confounded/aliased. But based on the principle of *hierarchy*, we’d bet that the effect was due to D, a main effect, not some weird three-way interaction.
- Some weird pictures in here (also the term “inert,” which just means “not practically significant”), but the takeaway from that part is that if you decide D doesn’t matter after all, you are back to a full \(2^3\) factorial involving A, B, and C.
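
You can replicate the Table 6.3 checks in a few lines of code. This sketch builds a \(2^{4-1}\) design in standard order (Table 6.3’s run order may differ) by setting the D column equal to the ABC product, then confirms both properties above:

```python
from itertools import product

# Full 2^3 factorial in A, B, C, with D defined as the ABC interaction
# column: a 2^(4-1) half fraction.
runs = [(a, b, c, a * b * c) for a, b, c in product((-1, 1), repeat=3)]
A, B, C, D = (tuple(r[i] for r in runs) for i in range(4))

# Orthogonality: every pair of factor columns has zero dot product.
cols = {"A": A, "B": B, "C": C, "D": D}
for n1, c1 in cols.items():
    for n2, c2 in cols.items():
        if n1 < n2:
            assert sum(x * y for x, y in zip(c1, c2)) == 0

# Complete aliasing: the ABC product column IS the D column.
ABC = tuple(a * b * c for a, b, c in zip(A, B, C))
assert ABC == D
print("A, B, C, D pairwise orthogonal; D identical to ABC")
```

So the design stays orthogonal in the four factors, yet D and ABC are literally the same column of ±1s; no analysis can ever pull them apart.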

- 6.4: In which BHH bothers to explain things a little bit.
- Note the notation we use for fractional factorials: \(2^{k-p}\). Don’t simplify \(2^{4-1}\) to \(2^3\) – these are quite different design concepts!
- BHH goes back and explains what’s going on with the complete confounding in Table 6.3. Note that when you choose to confound D with ABC, there are consequences – all the interaction effects that involve D are *also* confounded with other stuff! **But** note that *main effects* are only confounded with three-way interactions, which probably don’t matter. So we are comfortable using our main effect estimates as if they aren’t confounded with anything.
- The “explanation of the generation” is the key part here, so I don’t know why they put it in small print. Terms like *generating relation* are very important in describing fractional factorial designs.
- Subsection “Design Resolution” also introduces some important terminology (*resolution*).
- Subsection “High-Order Interactions Negligible” just shows the consequences of assuming that D doesn’t matter (and therefore no interactions involving D matter: the principle of *heredity* at work!).
- Subsection “Redundancy” seems to be restating what they already told you, but maybe that’s the joke?
- “Parsimony” is basically equivalent to the *sparsity* principle. Don’t worry about “projectivity.”
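
The “multiplication” behind a generating relation is just tracking which letters appear an odd number of times, since any squared column is the all-ones identity column I. A small sketch (the effect labels are generic, matching the \(2^{4-1}\) design with defining relation I = ABCD):

```python
def multiply(word1, word2):
    # "Multiply" two effect words; squared letters become the identity
    # and drop out, because every column entry is +/-1.
    letters = set(word1) ^ set(word2)  # symmetric difference = letters left over
    return "".join(sorted(letters)) or "I"

# 2^(4-1) design with generator D = ABC, i.e. defining relation I = ABCD.
defining_word = "ABCD"

# Each effect is aliased with (that effect times the defining word).
for effect in ["A", "B", "C", "D", "AB", "AC", "AD"]:
    print(f"{effect} is aliased with {multiply(effect, defining_word)}")

# Main effects alias three-way interactions...
assert multiply("A", defining_word) == "BCD"
# ...and two-way interactions alias other two-way interactions.
assert multiply("AB", defining_word) == "CD"
```

That last line is exactly why this design is resolution IV: main effects are clear of two-way interactions, but two-way interactions tangle with each other.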

- 6.5: Boy, I hope you like examples. Use this one as practice – see if you can follow the reasoning BHH describes. Things to consider:
- Why is this referred to as a \(2^{7-4}\) design?
- Look at the factor settings in table 6.4. Confirm the generators (D = AB, etc.) and defining relation.
- Note that any time you “square” a factor it turns into the identity I and drops out. Why does that make sense? (Hint: remember all the settings are \(\pm 1\)!)
- Follow along with “multiplying through the defining relation by A” to confirm your understanding of how this version of multiplication works.
- The abbreviated alias pattern for each main effect only mentions the two-way interactions. Why don’t we bother to write the other interactions that are also confounded with these?
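
To answer those last questions concretely, you can compute the whole alias structure from the generators. This sketch assumes the usual generators for this design (D = AB, E = AC, F = BC, G = ABC; check them against Table 6.4):

```python
from itertools import combinations

def multiply(*words):
    # Product of effect words, with squared letters dropping out as the identity I.
    odd = set()
    for w in words:
        if w != "I":
            odd ^= set(w)
    return "".join(sorted(odd)) or "I"

# Generators D = AB, E = AC, F = BC, G = ABC rewritten as defining words:
# I = ABD = ACE = BCF = ABCG.
generator_words = ["ABD", "ACE", "BCF", "ABCG"]

# The full defining relation contains every product of the generator words.
defining = {multiply(*ws) for r in range(1, 5)
            for ws in combinations(generator_words, r)}
assert len(defining) == 15  # a 1/16 fraction: 2^4 - 1 words besides I

# Aliases of main effect A: multiply A by each word in the defining relation.
aliases_of_A = sorted(multiply("A", w) for w in defining)
two_way = [a for a in aliases_of_A if len(a) == 2]
print("A is aliased with:", aliases_of_A)
print("abbreviated (two-way only):", two_way)
```

The abbreviated pattern keeps only the two-way aliases of A because, by hierarchy, the three-way-and-higher aliases in the full list are the ones we’re willing to assume negligible.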