1 A Framework for Investigating Change over Time
It is possible to measure change, and to do it well, if you have longitudinal data (D. Rogosa et al., 1982; Willett, 1989). Cross-sectional data–so easy to collect and so widely available–will not suffice. In this chapter, we describe why longitudinal data are necessary for studying change. (Singer & Willett, 2003, p. 3, emphasis in the original)
1.2 Distinguishing between two types of questions about change
On page 8, Singer and Willett proposed there are two fundamental questions for longitudinal data analysis:
- “How does the outcome change over time?” and
- “Can we predict differences in these changes?”
Within the hierarchical framework, we often speak about two levels of change. We address within-individual change at level-1.
The goal of a level-1 analysis is to describe the shape of each person’s individual growth trajectory.
In the second stage of an analysis of change, known as level-2, we ask about interindividual differences in change… The goal of a level-2 analysis is to detect heterogeneity in change across individuals and to determine the relationship between predictors and the shape of each person’s individual growth trajectory. (p. 8, emphasis in the original)
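To give these two levels a concrete home, here is a minimal sketch of how they might map onto a multilevel growth model fit with the lme4 package. The data frame d and the columns id, time, outcome, and group are hypothetical stand-ins, and lme4 is only one of several ways you could fit a model like this.

library(lme4)

# level 1: each person's outcome is modeled as a function of time,
#          with person-specific intercepts and slopes
# level 2: those intercepts and slopes vary across people and may be
#          predicted by a person-level covariate like `group`
fit <- lmer(
  outcome ~ 1 + time + group + time:group +  # fixed effects speak to the level-2 questions
    (1 + time | id),                          # random effects describe the level-1 trajectories
  data = d
)

summary(fit)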
1.3 Three important features of a study of change
- Three or more waves of data
- An outcome whose values change systematically over time
- A sensible metric for clocking time
1.3.1 Multiple waves of data.
Singer and Willett criticized two-wave data on two grounds.
First, it cannot tell us about the shape of each person’s individual growth trajectory, the focus of our level-1 question. Did all the change occur immediately after the first assessment? Was progress steady or delayed? Second, it cannot distinguish true change from measurement error. If measurement error renders pretest scores too low and posttest scores too high, you might conclude erroneously that scores increase over time when a longer temporal view would suggest the opposite. In statistical terms, two-wave studies cannot describe individual trajectories of change and they confound true change with measurement error (see D. Rogosa et al., 1982). (p. 10, emphasis in the original)
I am not a fan of this ‘true change/measurement error’ way of speaking and would rather speak in terms of systematic and [seemingly] unsystematic changes in means and variances. Otherwise put, I’d rather speak in terms of traits and states. Two waves of data do not allow us to disentangle systematic mean differences from stable means accompanied by substantial variance around those means; that is, they do not allow us to disentangle change in traits from stable traits paired with important differences in states. For an introduction to this way of thinking, check out Nezlek’s (2007) A multilevel framework for understanding relationships among traits, states, situations and behaviors.
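Here is a quick simulation, with made-up parameter values, illustrating why two waves leave these stories confounded. In scenario A, people genuinely differ in how much they change and there is no noise at all; in scenario B, nobody changes and the only action is state/measurement noise around a stable trait. The two-wave difference scores come out looking the same.

set.seed(1)

n <- 1000
trait <- rnorm(n, mean = 0, sd = 1)  # stable person-level trait

# scenario A: genuine person-to-person differences in change, no noise
change_a <- rnorm(n, mean = 0, sd = 1)
wave1_a  <- trait
wave2_a  <- trait + change_a

# scenario B: no change at all, only state/measurement noise around the trait
noise_sd <- 1 / sqrt(2)
wave1_b  <- trait + rnorm(n, sd = noise_sd)
wave2_b  <- trait + rnorm(n, sd = noise_sd)

# the difference scores have (nearly) the same mean and variance in both
# scenarios, so two waves alone cannot tell the stories apart
c(mean(wave2_a - wave1_a), var(wave2_a - wave1_a))
c(mean(wave2_b - wave1_b), var(wave2_b - wave1_b))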
1.3.2 A sensible metric for time.
Choice of a time metric affects several interrelated decisions about the number and spacing of data collection waves….
Our overarching point is that there is no single answer to the seemingly simple question about the most sensible metric for time. You should adopt whatever scale makes most sense for your outcomes and your research question….
Our point is simple: choose a metric for time that reflects the cadence you expect to be most useful for your outcome. (p. 11)
Data collection waves can be evenly spaced or not. E.g., if you anticipate a period of rapid nonlinear change, it might be helpful to increase the density of assessments during that period. Not everyone has to follow the same assessment schedule, either. If all participants are assessed on the same schedule, we describe the data as time-structured; when assessment schedules vary across participants, the data are termed time-unstructured. The data are balanced if all participants have the same number of waves; issues like attrition lead to unbalanced data. Though they may have some pedagogical use, I have not found these terms useful in practice.
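To keep the jargon straight, here is a toy sketch of what the two layouts might look like in long format. The variable names and values are hypothetical.

set.seed(1)

# time-structured: every participant is measured at the same occasions
d_structured <- data.frame(
  id    = rep(1:3, each = 3),
  month = rep(c(0, 6, 12), times = 3),
  y     = rnorm(9)
)

# time-unstructured: the occasions (here, exact ages at assessment) vary across participants
d_unstructured <- data.frame(
  id  = rep(1:3, each = 3),
  age = c(5.1, 5.6, 6.2,
          5.0, 5.9, 6.4,
          5.3, 5.7, 6.0),
  y   = rnorm(9)
)

d_structured
d_unstructured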
1.3.3 A continuous outcome that changes systematically over time.
To my eye, the most interesting part of this section is the discussion of measurement validity over time. E.g.,
when we say the metric in which the outcome is measured must be preserved across time, we mean that the outcome scores must be equatable over time–a given value of the outcome on any occasion must represent the same “amount” of the outcome on every occasion. Outcome equatability is easiest to ensure when you use the identical instrument for measurement repeatedly over time. (p. 13)
This isn’t as simple as it sounds. Though it’s beyond the scope of this project, you might learn more about this by studying the longitudinal measurement invariance literature. To dive in, see the first couple chapters in Newsom’s (2015) text, Longitudinal structural equation modeling: A comprehensive introduction.
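Though we won’t pursue it here, a minimal lavaan sketch can show what the first step in that literature looks like: comparing a configural model (same factor structure at each wave, loadings free) against a metric model (loadings constrained equal across waves). The data frame d and the indicators y1_t1 through y3_t2 are hypothetical.

library(lavaan)

# configural: same factor structure at both waves, loadings freely estimated
configural <- '
  eta_t1 =~ y1_t1 + y2_t1 + y3_t1
  eta_t2 =~ y1_t2 + y2_t2 + y3_t2

  # allow residuals of the same indicator to correlate across waves
  y1_t1 ~~ y1_t2
  y2_t1 ~~ y2_t2
  y3_t1 ~~ y3_t2
'

# metric (weak) invariance: shared loading labels force equality across waves
metric <- '
  eta_t1 =~ l1*y1_t1 + l2*y2_t1 + l3*y3_t1
  eta_t2 =~ l1*y1_t2 + l2*y2_t2 + l3*y3_t2

  y1_t1 ~~ y1_t2
  y2_t1 ~~ y2_t2
  y3_t1 ~~ y3_t2
'

fit_configural <- cfa(configural, data = d)
fit_metric     <- cfa(metric,     data = d)

# little loss of fit in the metric model is consistent with loadings being stable over time
anova(fit_configural, fit_metric)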
Session info
sessionInfo()
## R version 4.3.0 (2023-04-21)
## Platform: x86_64-apple-darwin20 (64-bit)
## Running under: macOS Monterey 12.4
##
## Matrix products: default
## BLAS: /Library/Frameworks/R.framework/Versions/4.3-x86_64/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/4.3-x86_64/Resources/lib/libRlapack.dylib; LAPACK version 3.11.0
##
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
##
## time zone: America/Chicago
## tzcode source: internal
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## loaded via a namespace (and not attached):
## [1] digest_0.6.31 R6_2.5.1 bookdown_0.34 fastmap_1.1.1
## [5] xfun_0.39 cachem_1.0.8 knitr_1.42 htmltools_0.5.5
## [9] rmarkdown_2.21 cli_3.6.1 sass_0.4.6 jquerylib_0.1.4
## [13] compiler_4.3.0 rstudioapi_0.14 tools_4.3.0 evaluate_0.21
## [17] bslib_0.4.2 rlang_1.1.1 jsonlite_1.8.4