Advanced Statistics I & II
Chapter 14 Multiple Regression
Figure 14.1: Page under construction.
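In the meantime, a minimal sketch of a multiple regression fit in base R with lm() is given below. The data frame dat and the variables y, x1, and x2 are hypothetical placeholders invented for illustration, not data from this book.

# Simulated placeholder data: one outcome (y) and two predictors (x1, x2)
set.seed(1)
dat <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
dat$y <- 2 + 0.5 * dat$x1 - 0.3 * dat$x2 + rnorm(100)

# Fit the multiple regression model y = b0 + b1*x1 + b2*x2 + error
fit <- lm(y ~ x1 + x2, data = dat)

summary(fit)   # slope estimates, their t-tests, and R-squared
confint(fit)   # 95% confidence intervals for the coefficients

As in the simple regression of Chapter 7, summary() reports each coefficient with its standard error and t-test along with R-squared, the proportionate reduction in error; the only change is that the formula now lists more than one predictor on the right-hand side.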