  • 1 Copyright Information
  • 2 Introduction To Statistics
    • 2.1 1. Statistics
    • 2.2 2. Individuals and Variables
    • 2.3 3. Random Variables and Variation
    • 2.4 4. Statistical Populations
    • 2.5 5. Uncertainty, Error and Variability
    • 2.6 6. Research Studies & Scientific Investigations
    • 2.7 7. Probability
    • 2.8 8. Statistical Inference
      • 2.8.1 8.1 Introduction to Statistical Inference
      • 2.8.2 8.2 Estimation and Hypothesis Testing
    • 2.9 9. Using The R Software - Week 1
      • 2.9.1 9.1 Accessing R in the common use computer labs
      • 2.9.2 9.2 Using R Within RStudio
      • 2.9.3 9.3 Getting Help
      • 2.9.4 9.4 Creating an R script/program
  • 3 Week 2 - Chi-Squared Tests
    • 3.1 1.1 Some Goodness of Fit Examples
      • 3.1.1 Example 1 – Gold Lotto: Is it fair or are players being ripped off?
      • 3.1.2 Example 2 – Occupational Health and Safety: How do gloves wear out?
      • 3.1.3 Example 3 - Credit Card Debt
      • 3.1.4 Example 4 – Mendel’s Peas
    • 3.2 1.2 Test Statistics & the Null Hypothesis
      • 3.2.1 1.2.1 The Null Hypothesis
      • 3.2.2 1.2.2 The Test Statistic
    • 3.3 1.3 Distribution of the Test Statistic
      • 3.3.1 1.3.1 The Null Distribution
      • 3.3.2 1.3.2 Degrees of Freedom
      • 3.3.3 1.3.3 The Statistical Table
      • 3.3.4 1.3.4 Using the χ2 Table
      • 3.3.5 1.3.5 Examples Using the χ2 Table
      • 3.3.6 1.3.6 The Significance Level, α, and The Type I Error
      • 3.3.7 1.3.7 The Goodness of Fit Examples Revisited
    • 3.4 1.4 The Formal Chi-Squared, χ2, Goodness of Fit Test
    • 3.5 1.5 The χ2 Test of Independence – The Two-Way Contingency Table
    • 3.6 2.1 Using rep and factor to Enter Data Manually in R
    • 3.7 2.2 Using R for Goodness of Fit Tests
    • 3.8 2.3 Using R for Tests of Independence – Two Way Contingency Table
  • 4 Week 3/4 - Probability Distributions and The Test of Proportion
    • 4.1 1.1 Revision and things for you to look up
    • 4.2 1.2 Types of Inference
    • 4.3 1.3 Notation
    • 4.4 1.4 Probability Distribution Functions: f(x)
    • 4.5 1.5 Areas Under a Probability Distribution Curve - Probabilities
    • 4.6 1.6 Tchebychef’s Rule – applies for any shaped distribution
    • 4.7 1.7 Probability of a Range of the Variable
    • 4.8 1.8 Cumulative Probability (CDF): F(x)=Pr(X≤x)
    • 4.9 1.9 Central Limit Theorem
    • 4.10 2.1 An Example
    • 4.11 2.2 One- and Two-Tailed Hypotheses
    • 4.12 2.3 The p-Value of a Test Statistic
    • 4.13 3.1 The Binomial Distribution – Discrete Variable
      • 4.13.1 3.1.1 Examples Using the Binomial Distribution
    • 4.14 3.2 The Normal Distribution – A Continuous Variable
    • 4.15 3.3 Normal Approximation to the Binomial
      • 4.15.1 3.3.1 Binomial Test of Proportion for Large Samples
    • 4.16 4.1 Calculating Binomial and Normal Probabilities in R
      • 4.16.1 4.1.1 Binomial Probabilities:
      • 4.16.2 4.1.2 Normal Probabilities:
    • 4.17 4.2 Sorting Data Sets and running Functions Separately for Different Categories:
  • 5 Week 5/6 - T-tests
    • 5.1 1.1 The Concept
    • 5.2 1.2 The Basic Steps for Hypothesis Testing – the HT 10 steps
    • 5.3 1.3 The Scientific Problem and Question
    • 5.4 1.4 The Research Hypothesis
    • 5.5 1.5 Resources, Required Detectable Differences, Significance Level Required
    • 5.6 1.6 The Statistical Hypotheses
    • 5.7 1.7 One and Two Tailed Hypotheses
    • 5.8 1.8 Theoretical Models used in Testing Hypotheses
    • 5.9 1.9 The Test Statistic, its Null Distribution, Significance Level and Critical Region
    • 5.10 1.10 Sample Collection and Calculation of Sample Test Statistic
    • 5.11 1.11 Comparison of Sample Test Statistic with Null Distribution
    • 5.12 1.12 The p-Value of a Test
    • 5.13 1.13 Conclusion and Interpretation
    • 5.14 1.14 Consider Possible Errors:
    • 5.15 1.15 Power of a Statistical Test
    • 5.16 2.1 Hypothesis Testing: The Proportion versus a Stated Value
    • 5.17 2.2 Hypothesis Testing: The Mean versus a Stated Value (The one-sample t-test)
    • 5.18 2.3 Hypothesis Testing: Difference Between Two Means I – Independent Samples (The two-sample t-test)
    • 5.19 2.4 Hypothesis Testing: Difference Between Two Means II – Paired Samples (The paired t-test)
    • 5.20 3.1 More Probability Functions:
    • 5.21 3.2 Testing a Mean – The One-Sample t- test
    • 5.22 3.3 Testing the Difference Between Two Means – The Two-Sample t-test
    • 5.23 3.4 Testing the Mean Difference Between Paired Data – The Paired t- test
  • 6 Week 7 - ANOVA
    • 6.1 1.1 The Concept
    • 6.2 1.2 Data Synthesis - What the Model Means
    • 6.3 3.1 Introduction
    • 6.4 3.2 Design and Model
    • 6.5 3.3 Statistical Hypotheses in the ANOVA
    • 6.6 3.4 Sums of Squares
    • 6.7 3.5 Degrees of Freedom
    • 6.8 3.6 Mean Squares
    • 6.9 3.7 The Analysis of Variance Table
    • 6.10 3.8 Test of Hypothesis – The F-Test
    • 6.11 3.9 Comparison of the F-test in the ANOVA with 1 df for Treatment Versus the Two Sample Independent t-test.
    • 6.12 3.10 Worked Example:
    • 6.13 3.11 Assumptions in the ANOVA Process
    • 6.14 2.1 Hangover Antidotes: ANOVA Using R
    • 6.15 2.2 Harvester Example - ANOVA Using R:
  • 7 Week 8 - Multiple Treatment Comparisons and LSD
    • 7.1 1.1 Introduction
    • 7.2 1.2 The Protected (Extended) t-test
    • 7.3 1.3 Least Significant Differences - LSD
  • 8 Week 9 - Factorial ANOVA
    • 8.1 1.1 The Treatment Design Concept
    • 8.2 1.2 The Factorial Alternative - How to Waste Resources?
    • 8.3 1.3 What is this ‘Interaction’?
    • 8.4 1.4 The Factorial Model
    • 8.5 1.5 Factorial Effects
    • 8.6 1.6 The Factorial ANOVA
    • 8.7 1.7 Partitioning Treatment Sums of Squares Into Factorial Components
    • 8.8 1.8 Factorial Effects and Hypotheses
    • 8.9 1.9 Testing and Interpreting Factorial Effects
  • 9 Week 10/11 - Correlation and Simple Linear Regression
    • 9.1 1.1 Introduction
      • 9.1.1 1.1.1 Covariance
      • 9.1.2 1.1.2 The Correlation Coefficient – Pearson’s Product Moment Coefficient
    • 9.2 2.1 Assumptions Underlying The Regression Model
    • 9.3 2.2 Simple Linear Regression
    • 9.4 2.3 Estimating A Simple Linear Regression Model
    • 9.5 2.4 Evaluating The Model
    • 9.6 2.5 The Coefficient of Determination (R2)
    • 9.7 2.6 The Standard Error (or Root Mean Squared Error) Of The Regression, σϵ
    • 9.8 2.7 Testing The Significance Of The Independent Variable
    • 9.9 2.8 Testing The Overall Significance Of The Model
    • 9.10 2.9 Predictions Using The Regression Model
    • 9.11 2.10 Functional Forms Of The Regression Model
    • 9.12 3.1 House prices and distance from an abattoir
    • 9.13 3.2 Bicycle lanes and rider safety
    • 9.14 3.3 Dissolved Oxygen and Temperature in freshwater rivers

1014SCG Statistics - Lecture Notes

James McBroom

2020-12-05

Chapter 1 Copyright Information

©Griffith University 2019. Subject to the provisions of the Copyright Act, no part of this publication may be reproduced in any form or by any means (whether mechanical, electronic, microcopying, photocopying, recording, or otherwise), stored in a retrieval system or transmitted without prior permission.