8.1 Dependent Data

In many real-world applications, observations are not independent but exhibit correlations due to shared characteristics. Below are common forms of dependent data:

  • Multivariate measurements on individuals: Multiple attributes measured on the same person may be correlated (e.g., blood pressure, cholesterol, and BMI).
  • Clustered measurements: Individuals within a shared environment (e.g., families, schools, hospitals) often exhibit correlated responses.
  • Repeated measurements: When the same individual is measured multiple times, correlations arise naturally.
    • Example: Tracking cholesterol levels of a person over time.
    • If these repeated measurements follow an experimental design where treatments were applied initially, they are referred to as repeated measures (Schabenberger and Pierce 2001).
  • Longitudinal data: Repeated measurements taken over time in an observational study are known as longitudinal data (Schabenberger and Pierce 2001).
  • Spatial data: Measurements taken from individuals in nearby locations (e.g., residents of the same neighborhood) often exhibit spatial correlation.

Since standard linear regression assumes independent observations, correlated data violate its assumptions. Linear Mixed Models (LMMs) provide a framework that accounts for these dependencies.

A Linear Mixed Model (also called a Mixed Linear Model) consists of two components:

  • Fixed effects: Parameters associated with variables that have the same effect across all observations (e.g., gender, age, diet, time).
  • Random effects: Individual-specific variations or correlation structures (e.g., subject-specific effects, spatial correlations), leading to dependent (correlated) errors.

A key advantage of LMMs is that they model subject-specific effects as random, rather than including a separate dummy variable for each individual. This provides:

  • A reduction in the number of parameters to estimate, avoiding overfitting.
  • A framework for inference on a population level, rather than restricting conclusions to observed individuals.

8.1.1 Motivation: A Repeated Measurements Example

Consider a scenario where we analyze repeated measurements of a response variable $Y_{ij}$ for the $i$-th subject at time $j$:

  • $i = 1, \ldots, N$ (subjects)
  • $j = 1, \ldots, n_i$ (measurements per subject)

We define the response vector for subject $i$ as:

$$
Y_i = \begin{bmatrix} Y_{i1} \\ Y_{i2} \\ \vdots \\ Y_{in_i} \end{bmatrix}
$$

To model this data, we use a two-stage hierarchical approach:

8.1.1.1 Stage 1: Regression Model (Within-Subject Variation) {#sec-stage-1-regression-model-within-subject-variation}

We first model how the response changes over time for each subject:

$$
Y_i = Z_i \beta_i + \epsilon_i
$$

where:

  • $Z_i$ is an $n_i \times q$ matrix of known covariates (e.g., time, treatment).
  • $\beta_i$ is a $q \times 1$ vector of subject-specific regression coefficients.
  • $\epsilon_i$ represents random errors, often assumed to follow $\epsilon_i \sim N(0, \sigma^2 I)$.

At this stage, each individual has their own unique regression coefficients $\beta_i$. However, estimating a separate $\beta_i$ for each subject is impractical when $N$ is large. Thus, we introduce a [second stage](#sec-stage-2-parameter-model-between-subject-variation) to impose structure on $\beta_i$.

8.1.1.2 Stage 2: Parameter Model (Between-Subject Variation) {#sec-stage-2-parameter-model-between-subject-variation}

We assume that the subject-specific coefficients $\beta_i$ arise from a common population distribution:

$$
\beta_i = K_i \beta + b_i
$$

where:

  • $K_i$ is a $q \times p$ matrix of known covariates.
  • $\beta$ is a $p \times 1$ vector of global parameters (fixed effects).
  • $b_i$ are random effects, assumed to follow $b_i \sim N(0, D)$.

Thus, each individual’s regression coefficients $\beta_i$ are modeled as deviations from a population-level mean $\beta$, with subject-specific variations $b_i$. This hierarchical structure enables:

  • Borrowing of strength: Individual estimates of $\beta_i$ are informed by the overall population distribution.
  • Improved generalization: The model captures both fixed and random variability efficiently.

The full Linear Mixed Model can then be written as:

$$
Y_i = Z_i K_i \beta + Z_i b_i + \epsilon_i
$$

where:

  • $Z_i K_i \beta$ represents the fixed effects component.
  • $Z_i b_i$ represents the random effects component.
  • $\epsilon_i$ represents the residual errors.

This formulation captures population-level trends via the fixed effects and between-subject variability via the random effects, which in turn induce within-subject correlation, making the LMM a powerful tool for analyzing dependent data.
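To make the two-stage construction concrete, here is a minimal simulation sketch in Python/NumPy. The dimensions, time points, and parameter values are illustrative assumptions (they do not come from the text), and $K_i$ is taken to be the identity so that $\beta_i = \beta + b_i$:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative dimensions (assumptions): N subjects, n_i = 4 measurements each,
# q = 2 random-effect covariates (intercept, time), and K_i = I so that p = q.
N, n_i, q = 50, 4, 2

beta = np.array([10.0, 2.0])             # global fixed effects beta
D = np.array([[4.0, 0.5],
              [0.5, 1.0]])               # random-effects covariance D (q x q)
sigma2 = 1.0                             # residual variance sigma^2

t = np.arange(n_i, dtype=float)              # common measurement times 0,1,2,3
Z_i = np.column_stack([np.ones(n_i), t])     # n_i x q design matrix Z_i
K_i = np.eye(q)                              # simplest K_i: beta_i = beta + b_i

Y = np.empty((N, n_i))
for i in range(N):
    b_i = rng.multivariate_normal(np.zeros(q), D)    # Stage 2: b_i ~ N(0, D)
    beta_i = K_i @ beta + b_i                        # Stage 2: beta_i = K_i beta + b_i
    eps_i = rng.normal(0.0, np.sqrt(sigma2), n_i)    # Stage 1 errors
    Y[i] = Z_i @ beta_i + eps_i                      # Stage 1: Y_i = Z_i beta_i + eps_i
```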


8.1.2 Example: Linear Mixed Model for Repeated Measurements

8.1.2.1 Stage 1: Subject-Specific Model

The first stage models the response variable $Y_{ij}$ for subject $i$ at time $t_{ij}$:

$$
Y_{ij} = \beta_{1i} + \beta_{2i} t_{ij} + \epsilon_{ij}
$$

where:

  • $j = 1, \ldots, n_i$ indexes the time points for subject $i$.
  • $\beta_{1i}$ is the subject-specific intercept (baseline response for subject $i$).
  • $\beta_{2i}$ is the subject-specific slope (rate of change over time).
  • $\epsilon_{ij} \sim N(0, \sigma^2)$ are independent errors.

In matrix notation, the model is written as:

$$
Y_i = \begin{bmatrix} Y_{i1} \\ Y_{i2} \\ \vdots \\ Y_{in_i} \end{bmatrix}, \quad
Z_i = \begin{bmatrix} 1 & t_{i1} \\ 1 & t_{i2} \\ \vdots & \vdots \\ 1 & t_{in_i} \end{bmatrix},
$$

$$
\beta_i = \begin{bmatrix} \beta_{1i} \\ \beta_{2i} \end{bmatrix}, \quad
\epsilon_i = \begin{bmatrix} \epsilon_{i1} \\ \epsilon_{i2} \\ \vdots \\ \epsilon_{in_i} \end{bmatrix}.
$$

Thus, the model can be rewritten as:

$$
Y_i = Z_i \beta_i + \epsilon_i.
$$


8.1.2.2 Stage 2: Population-Level Model

Since estimating a separate $\beta_{1i}$ and $\beta_{2i}$ for each subject is impractical, we assume they follow a population distribution:

$$
\beta_{1i} = \beta_0 + b_{1i}, \qquad
\beta_{2i} = \beta_1 L_i + \beta_2 H_i + \beta_3 C_i + b_{2i},
$$

where:

  • $L_i, H_i, C_i$ are indicator variables for treatment groups:
    • $L_i = 1$ if the subject belongs to treatment group 1, else 0.
    • $H_i = 1$ if the subject belongs to treatment group 2, else 0.
    • $C_i = 1$ if the subject belongs to treatment group 3, else 0.
  • $\beta_0$ represents the average baseline response across all subjects.
  • $\beta_1, \beta_2, \beta_3$ are the average time effects for the respective treatment groups.
  • $b_{1i}, b_{2i}$ are random effects representing subject-specific deviations.

This structure implies that while the intercept $\beta_{1i}$ varies randomly across subjects, the slope $\beta_{2i}$ depends both on treatment and on random subject-specific deviations.

In matrix form, this is:

$$
K_i = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & L_i & H_i & C_i \end{bmatrix}, \quad
\beta = \begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \\ \beta_3 \end{bmatrix}, \quad
b_i = \begin{bmatrix} b_{1i} \\ b_{2i} \end{bmatrix}, \quad
\beta_i = K_i \beta + b_i.
$$


8.1.2.3 Final Mixed Model Formulation

Substituting Stage 2 into Stage 1, we obtain the full mixed model:

$$
Y_i = Z_i (K_i \beta + b_i) + \epsilon_i.
$$

Expanding:

$$
Y_i = Z_i K_i \beta + Z_i b_i + \epsilon_i.
$$

Interpretation:

  • $Z_i K_i \beta$ represents the fixed effects (population-level trends).
  • $Z_i b_i$ represents the random effects (subject-specific variations).
  • $\epsilon_i$ represents the residual errors.

Assumptions:

  • Random effects: $b_i \sim N(0, D)$, where $D$ is the variance-covariance matrix.
  • Residual errors: $\epsilon_i \sim N(0, \sigma^2 I)$.
  • Independence: $b_i$ and $\epsilon_i$ are independent.

To estimate $\hat{\beta}$, one might consider a sequential approach:

  1. Estimate $\hat{\beta}_i$ for each subject in the first stage.
  2. Estimate $\hat{\beta}$ in the second stage by replacing $\beta_i$ with $\hat{\beta}_i$.

However, this method introduces several problems:

  • Loss of information: Summarizing $Y_i$ solely by $\hat{\beta}_i$ discards valuable within-subject variability.
  • Ignoring uncertainty: Treating the estimates $\hat{\beta}_i$ as known values leads to underestimated variability.
  • Unequal sample sizes: Subjects may have different numbers of observations ($n_i$), which affects variance estimation.

To address these issues, we adopt the Linear Mixed Model (LMM) framework (Laird and Ware 1982).
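As a hedged illustration of this framework in practice, the sketch below fits a random-intercept, random-slope model with the `MixedLM` class from Python's statsmodels package. The simulated data frame and its column names (`subject`, `time`, `y`) are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per (subject, time) measurement.
rng = np.random.default_rng(1)
n_subj, n_obs = 30, 5
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_obs),
    "time": np.tile(np.arange(n_obs, dtype=float), n_subj),
})
b0 = rng.normal(0, 2, n_subj)    # random intercepts b_{1i}
b1 = rng.normal(0, 0.5, n_subj)  # random slopes b_{2i}
df["y"] = (10 + b0[df["subject"]]) + (2 + b1[df["subject"]]) * df["time"] \
          + rng.normal(0, 1, len(df))

# Random intercept and random slope for time, grouped by subject; this fits
# all subjects jointly instead of estimating each beta_i separately.
model = smf.mixedlm("y ~ time", df, groups=df["subject"], re_formula="~time")
result = model.fit()
print(result.summary())  # fixed effects, random-effect covariances, residual variance
```

The joint fit pools information across subjects (the borrowing of strength noted earlier) rather than treating each $\hat{\beta}_i$ as known.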


8.1.2.4 Reformulating the Linear Mixed Model

Substituting [Stage 2](#sec-stage-2-parameter-model-between-subject-variation) into [Stage 1](#sec-stage-1-regression-model-within-subject-variation), we obtain:

$$
Y_i = Z_i K_i \beta + Z_i b_i + \epsilon_i.
$$

Defining $X_i = Z_i K_i$ as an $n_i \times p$ matrix of covariates, we rewrite the model as:

$$
Y_i = X_i \beta + Z_i b_i + \epsilon_i,
$$

where:

  • $i = 1, \ldots, N$ (subjects).

  • $\beta$ are fixed effects, common to all subjects.

  • $b_i$ are subject-specific random effects, assumed to follow:

    $$
    b_i \sim N_q(0, D).
    $$

  • $\epsilon_i$ represents residual errors:

    $$
    \epsilon_i \sim N_{n_i}(0, \Sigma_i).
    $$

  • Independence assumption: $b_i$ and $\epsilon_i$ are independent.

  • Dimension notation:

    • $Z_i$ is an $n_i \times q$ matrix of known covariates for random effects.
    • $X_i$ is an $n_i \times p$ matrix of known covariates for fixed effects.

8.1.2.5 Hierarchical (Conditional) Formulation

Rewriting the LMM in hierarchical form:

$$
Y_i \mid b_i \sim N(X_i \beta + Z_i b_i, \Sigma_i), \qquad b_i \sim N(0, D).
$$

where:

  • The first equation states that, given the subject-specific random effects $b_i$, the response follows a normal distribution with mean $X_i \beta + Z_i b_i$ and covariance $\Sigma_i$.
  • The second equation states that the random effects $b_i$ follow a multivariate normal distribution with mean zero and covariance $D$.

We denote the respective probability density functions as:

$$
f(Y_i \mid b_i) \quad \text{and} \quad f(b_i).
$$


Using the general marginalization formula:

$$
f(A, B) = f(A \mid B)\, f(B)
\quad \Longrightarrow \quad
f(A) = \int f(A, B)\, dB = \int f(A \mid B)\, f(B)\, dB,
$$

we obtain the marginal density of $Y_i$:

$$
f(Y_i) = \int f(Y_i \mid b_i)\, f(b_i)\, db_i.
$$

Solving this integral, we obtain:

$$
Y_i \sim N(X_i \beta,\; Z_i D Z_i' + \Sigma_i).
$$

Interpretation:

  • Mean structure: The expectation remains $X_i \beta$, the fixed effects.

  • Variance structure: The covariance now incorporates random-effect variability:

    $$
    Z_i D Z_i' + \Sigma_i.
    $$

    This shows that the random effects contribute additional correlation between observations.

🔹 Key Takeaway:
The marginal formulation of the LMM no longer includes $Z_i b_i$ in the mean; instead, its contribution appears in the covariance structure, introducing marginal dependence in $Y_i$.

Technically, rather than “averaging out” the random effect $b_i$, we add its contribution to the variance-covariance matrix.
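One way to see this result without evaluating the integral is to take the mean and variance directly in the model equation:

$$
\begin{aligned}
E[Y_i] &= E[X_i \beta + Z_i b_i + \epsilon_i] = X_i \beta + Z_i\, E[b_i] + E[\epsilon_i] = X_i \beta, \\
\operatorname{Var}(Y_i) &= \operatorname{Var}(Z_i b_i) + \operatorname{Var}(\epsilon_i) = Z_i D Z_i' + \Sigma_i,
\end{aligned}
$$

with no cross term because $b_i$ and $\epsilon_i$ are independent.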


Continuing with the example:

$$
Y_{ij} = (\beta_0 + b_{1i}) + (\beta_1 L_i + \beta_2 H_i + \beta_3 C_i + b_{2i})\, t_{ij} + \epsilon_{ij}.
$$

For each treatment group:

$$
Y_{ij} =
\begin{cases}
\beta_0 + b_{1i} + (\beta_1 + b_{2i})\, t_{ij} + \epsilon_{ij}, & L_i = 1 \\
\beta_0 + b_{1i} + (\beta_2 + b_{2i})\, t_{ij} + \epsilon_{ij}, & H_i = 1 \\
\beta_0 + b_{1i} + (\beta_3 + b_{2i})\, t_{ij} + \epsilon_{ij}, & C_i = 1.
\end{cases}
$$

Interpretation:

  • Intercepts and slopes are subject-specific: Each subject has their own baseline response and rate of change.
  • Treatment groups affect slopes, but not intercepts:
    • $\beta_0$ is the common intercept across groups.
    • Slopes differ by treatment: $\beta_1$ for $L$, $\beta_2$ for $H$, $\beta_3$ for $C$.
  • Random effects introduce within-subject correlation:
    • $b_{1i}$ allows individual variation in baseline response.
    • $b_{2i}$ allows subject-specific deviations in slopes.

In the hierarchical model form, we again express the LMM as:

$$
Y_i \mid b_i \sim N(X_i \beta + Z_i b_i, \Sigma_i), \qquad b_i \sim N(0, D),
$$

where:

  • $X_i$ and $Z_i$ are design matrices for fixed and random effects, respectively.
  • $b_i$ represents subject-specific random effects.
  • $\Sigma_i$ is the residual error covariance matrix.
  • $D$ is the random effects covariance matrix.

The fixed-effects parameter vector is:

$$
\beta = (\beta_0, \beta_1, \beta_2, \beta_3)'.
$$

From the model structure, we define:

$$
X_i = Z_i K_i.
$$

Expanding,

$$
Z_i = \begin{bmatrix} 1 & t_{i1} \\ 1 & t_{i2} \\ \vdots & \vdots \\ 1 & t_{in_i} \end{bmatrix}, \quad
K_i = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & L_i & H_i & C_i \end{bmatrix}.
$$

Multiplying:

$$
X_i = \begin{bmatrix}
1 & L_i t_{i1} & H_i t_{i1} & C_i t_{i1} \\
1 & L_i t_{i2} & H_i t_{i2} & C_i t_{i2} \\
\vdots & \vdots & \vdots & \vdots \\
1 & L_i t_{in_i} & H_i t_{in_i} & C_i t_{in_i}
\end{bmatrix}.
$$
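As a quick numerical check of this product, the sketch below (illustrative time points; the subject is assumed to be in group 1, so $L_i = 1$) verifies that $Z_i K_i$ reproduces the expanded $X_i$:

```python
import numpy as np

# Assumed illustrative values: n_i = 3 time points, subject in group L.
t = np.array([0.0, 1.0, 2.0])   # t_{i1}, t_{i2}, t_{i3}
L, H, C = 1, 0, 0               # treatment-group indicators

Z = np.column_stack([np.ones_like(t), t])   # n_i x 2 matrix Z_i
K = np.array([[1, 0, 0, 0],
              [0, L, H, C]], dtype=float)   # 2 x 4 matrix K_i

X = Z @ K
print(X)  # each row is [1, L*t_ij, H*t_ij, C*t_ij], matching the expansion
```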

The random effects vector is:

$$
b_i = \begin{bmatrix} b_{1i} \\ b_{2i} \end{bmatrix}.
$$

The covariance matrix $D$ for the random effects is:

$$
D = \begin{bmatrix} d_{11} & d_{12} \\ d_{12} & d_{22} \end{bmatrix}.
$$

We assume conditional independence:

$$
\Sigma_i = \sigma^2 I_{n_i}.
$$

This means that, given the random effects $b_i$ and the parameters $\beta$, the responses for subject $i$ are independent.


To derive the marginal model, we integrate out the random effects $b_i$. This leads to:

$$
Y_{ij} = \beta_0 + \beta_1 L_i t_{ij} + \beta_2 H_i t_{ij} + \beta_3 C_i t_{ij} + \eta_{ij},
$$

where:

$$
\eta_i \sim N(0,\; Z_i D Z_i' + \Sigma_i).
$$

Thus, the full marginal model is:

$$
Y_i \sim N(X_i \beta,\; Z_i D Z_i' + \Sigma_i).
$$


Example: Case Where $n_i = 2$

For a subject with two observations, we compute:

$$
Z_i = \begin{bmatrix} 1 & t_{i1} \\ 1 & t_{i2} \end{bmatrix}.
$$

Then:

$$
Z_i D Z_i' =
\begin{bmatrix} 1 & t_{i1} \\ 1 & t_{i2} \end{bmatrix}
\begin{bmatrix} d_{11} & d_{12} \\ d_{12} & d_{22} \end{bmatrix}
\begin{bmatrix} 1 & 1 \\ t_{i1} & t_{i2} \end{bmatrix}.
$$

Expanding the multiplication:

$$
Z_i D Z_i' =
\begin{bmatrix}
d_{11} + 2 d_{12} t_{i1} + d_{22} t_{i1}^2 & d_{11} + d_{12}(t_{i1} + t_{i2}) + d_{22} t_{i1} t_{i2} \\
d_{11} + d_{12}(t_{i1} + t_{i2}) + d_{22} t_{i1} t_{i2} & d_{11} + 2 d_{12} t_{i2} + d_{22} t_{i2}^2
\end{bmatrix}.
$$

Finally, incorporating the residual variance:

$$
\operatorname{Var}(Y_{i1}) = d_{11} + 2 d_{12} t_{i1} + d_{22} t_{i1}^2 + \sigma^2.
$$
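A short numerical sketch (with assumed values for $d_{11}$, $d_{12}$, $d_{22}$, $\sigma^2$, and the two time points) confirms these entries:

```python
import numpy as np

# Assumed illustrative values.
d11, d12, d22, sigma2 = 2.0, 0.3, 0.5, 1.0
t1, t2 = 1.0, 2.0

Z = np.array([[1.0, t1],
              [1.0, t2]])
D = np.array([[d11, d12],
              [d12, d22]])

V = Z @ D @ Z.T + sigma2 * np.eye(2)   # marginal covariance Z D Z' + sigma^2 I

# Closed-form diagonal entry from the text: d11 + 2*d12*t1 + d22*t1^2 + sigma^2
print(V[0, 0], d11 + 2*d12*t1 + d22*t1**2 + sigma2)   # both equal 4.1
```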

Interpretation of the Marginal Model

  1. Correlation in the Errors:
    • The off-diagonal elements of $Z_i D Z_i'$ introduce correlation between observations within the same subject.
    • This accounts for the fact that repeated measurements on the same individual are not independent.
  2. Variance Structure:
    • The variance of $Y_{ij}$ is quadratic in time:

      $$
      \operatorname{Var}(Y_{ij}) = d_{11} + 2 d_{12} t_{ij} + d_{22} t_{ij}^2 + \sigma^2.
      $$

    • The curvature of this variance function is determined by $d_{22}$.

    • If $d_{22} > 0$, the quadratic opens upward, so the variance eventually increases over time.

🔹 Key Takeaway:
In the hierarchical model, random effects contribute to the mean structure.
In the marginal model, they affect the covariance structure, leading to heteroskedasticity (changing variance over time) and correlation between repeated measures.


8.1.3 Random-Intercepts Model

The Random-Intercepts Model is obtained by removing random slopes, meaning:

  • All subject-specific variability in slopes is attributed only to treatment differences.
  • The model allows each subject to have their own intercept, but within each treatment group, all subjects share the same slope.

The hierarchical (conditional) model is:

$$
Y_i \mid b_i \sim N(X_i \beta + \mathbf{1}_{n_i} b_i, \Sigma_i), \qquad b_i \sim N(0, d_{11}).
$$

where:

  • $b_i$ is the random intercept for subject $i$, assumed to follow $N(0, d_{11})$.
  • $X_i$ contains the fixed effects, which include treatment and time.
  • $\Sigma_i$ represents the residual covariance, typically assumed to be $\sigma^2 I$ (independent errors).

Since there are no random slopes, the only source of subject-specific variability is the intercept.

Integrating out $b_i$ and assuming conditional independence $\Sigma_i = \sigma^2 I_{n_i}$, the marginal distribution of $Y_i$ is:

$$
Y_i \sim N(X_i \beta,\; \mathbf{1}\mathbf{1}' d_{11} + \sigma^2 I).
$$

The marginal covariance matrix (illustrated for $n_i = 4$) is:

$$
\operatorname{Cov}(Y_i) = \mathbf{1}\mathbf{1}' d_{11} + \sigma^2 I =
\begin{bmatrix}
d_{11} + \sigma^2 & d_{11} & d_{11} & d_{11} \\
d_{11} & d_{11} + \sigma^2 & d_{11} & d_{11} \\
d_{11} & d_{11} & d_{11} + \sigma^2 & d_{11} \\
d_{11} & d_{11} & d_{11} & d_{11} + \sigma^2
\end{bmatrix}.
$$

The correlation matrix is obtained by standardizing the covariance matrix:

$$
\operatorname{Corr}(Y_i) =
\begin{bmatrix}
1 & \rho & \rho & \rho \\
\rho & 1 & \rho & \rho \\
\rho & \rho & 1 & \rho \\
\rho & \rho & \rho & 1
\end{bmatrix},
$$

where:

$$
\rho = \frac{d_{11}}{d_{11} + \sigma^2}.
$$

This correlation structure is known as compound symmetry, meaning:

  • Constant variance: $\operatorname{Var}(Y_{ij}) = d_{11} + \sigma^2$ for all $j$.
  • Equal correlation: $\operatorname{Corr}(Y_{ij}, Y_{ik}) = \rho$ for all $j \neq k$.
  • Positive correlation: Any two observations from the same subject are equally correlated.

Interpretation of $\rho$ (Intra-Class Correlation)

  • $\rho$ is called the intra-class correlation coefficient (ICC).

  • It measures the proportion of total variability that is due to between-subject variability (see the sketch after this list):

    $$
    \rho = \frac{\text{Between-Subject Variance}}{\text{Total Variance}} = \frac{d_{11}}{d_{11} + \sigma^2}.
    $$

  • If $\rho$ is large:

    • The inter-subject variability ($d_{11}$) is large relative to the intra-subject variability ($\sigma^2$).
    • This means subjects differ substantially in their intercepts.
    • Responses from the same subject are highly correlated.

  • If $\rho$ is small:

    • The residual error variance dominates, meaning individual differences in intercepts are small.
    • Measurements from the same subject are weakly correlated.
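Here is a small sketch (with assumed variance components $d_{11}$ and $\sigma^2$) that constructs the compound-symmetry covariance matrix and verifies that every off-diagonal correlation equals $\rho$:

```python
import numpy as np

d11, sigma2, n_i = 3.0, 1.0, 4   # assumed between-subject / residual variances

ones = np.ones((n_i, 1))
cov = d11 * (ones @ ones.T) + sigma2 * np.eye(n_i)   # 11' d11 + sigma^2 I

sd = np.sqrt(np.diag(cov))
corr = cov / np.outer(sd, sd)        # standardize covariance to correlation

rho = d11 / (d11 + sigma2)           # intra-class correlation
print(rho)         # 0.75
print(corr[0, 1])  # equals rho
```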

Summary of the Random-Intercepts Model

| Feature | Explanation |
|---|---|
| Intercepts | Random, subject-specific ($b_i$) |
| Slopes | Fixed within each treatment group |
| Covariance structure | Compound symmetry (constant variance, equal correlation) |
| Intra-class correlation ($\rho$) | Measures between-subject variability relative to total variability |
| Interpretation | Large $\rho$ → strong subject-level differences; small $\rho$ → mostly residual noise |


8.1.4 Covariance Models in Linear Mixed Models

Previously, we assumed that the within-subject errors are conditionally independent, meaning:

$$
\Sigma_i = \sigma^2 I_{n_i}.
$$

However, real-world data often exhibit correlated errors, particularly in longitudinal studies where observations over time are influenced by underlying stochastic processes.

To model this dependence, we decompose the error term into two components:

$$
\epsilon_i = \epsilon_i^{(1)} + \epsilon_i^{(2)},
$$

where:

  • $\epsilon_i^{(1)}$ (Serial Correlation Component):
    • Represents the subject-specific response to time-varying stochastic processes.
    • Captures dependency across observations for the same subject.
  • $\epsilon_i^{(2)}$ (Measurement Error Component):
    • Represents pure measurement error, assumed independent of $\epsilon_i^{(1)}$.

Thus, the full LMM formulation becomes:

$$
Y_i = X_i \beta + Z_i b_i + \epsilon_i^{(1)} + \epsilon_i^{(2)},
$$

where:

  • Random effects: $b_i \sim N(0, D)$.
  • Measurement errors: $\epsilon_i^{(2)} \sim N(0, \sigma^2 I_{n_i})$.
  • Serial correlation errors: $\epsilon_i^{(1)} \sim N(0, \tau^2 H_i)$.
  • Independence assumption: $b_i$, $\epsilon_i^{(1)}$, and $\epsilon_i^{(2)}$ are mutually independent.
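Because these three components are mutually independent, their variance contributions add, and the implied marginal covariance of $Y_i$ is:

$$
\operatorname{Var}(Y_i) = Z_i D Z_i' + \tau^2 H_i + \sigma^2 I_{n_i}.
$$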

To model the correlation structure of the serial correlation component $\epsilon_i^{(1)}$, we define the $n_i \times n_i$ correlation (or covariance) matrix $H_i$.

The $(j, k)$-th element of $H_i$ is denoted as:

$$
h_{ijk} = g(t_{ij}, t_{ik}),
$$

where:

  • $h_{ijk}$ represents the covariance (or correlation) between time points $t_{ij}$ and $t_{ik}$.

  • $g(t_{ij}, t_{ik})$ is a decreasing function of the separation between the time points $t_{ij}$ and $t_{ik}$.

  • Typically, we assume that this function depends only on the time difference (stationarity assumption):

    $$
    h_{ijk} = g(|t_{ij} - t_{ik}|),
    $$

    meaning that the correlation depends only on the absolute time lag.

  • For $H_i$ to be a valid correlation matrix, we require:

    $$
    g(0) = 1.
    $$

    This ensures that each observation has perfect correlation with itself.


Common Choices for $g(|t_{ij} - t_{ik}|)$

Several functions can be used to define the decay in correlation as time differences increase.

1. Exponential Correlation (Continuous-Time AR(1))

$$
g(|t_{ij} - t_{ik}|) = \exp(-\phi\, |t_{ij} - t_{ik}|)
$$

  • Decay rate: Controlled by $\phi > 0$.
  • Interpretation:
    • Observations closer in time are more correlated.
    • The correlation decreases exponentially as the time separation increases.
  • Use case: Often used in biological and economic time-series models.

2. Gaussian Correlation (Squared Exponential Kernel)

$$
g(|t_{ij} - t_{ik}|) = \exp(-\phi\, (t_{ij} - t_{ik})^2)
$$

  • Decay rate: Faster than exponential.
  • Interpretation:
    • If $\phi$ is large, the correlation drops sharply as the time separation increases.
    • Produces smooth correlation structures.
  • Use case: Used in spatial statistics and machine learning (Gaussian processes).

3. Autoregressive (AR(1)) Correlation

A First-Order Autoregressive Model (AR(1)) assumes that the value at time $t$ depends on its previous value:

$$
\alpha_t = \phi\, \alpha_{t-1} + \eta_t,
$$

where

  • $\eta_t \overset{\text{i.i.d.}}{\sim} N(0, \sigma_\eta^2)$ (a white noise process).

  • $\phi$ is the autocorrelation coefficient, with $|\phi| < 1$ for stationarity.

Then, the covariance between two observations at different times is:

$$
\operatorname{Cov}(\alpha_t, \alpha_{t+h}) = \frac{\sigma_\eta^2\, \phi^{|h|}}{1 - \phi^2}.
$$

Thus, the correlation between time points $t$ and $t+h$ is:

$$
\operatorname{Corr}(\alpha_t, \alpha_{t+h}) = \phi^{|h|}.
$$

For a sequence of $T$ time points, the resulting Toeplitz correlation matrix is:

$$
\operatorname{Corr}(\alpha) =
\begin{bmatrix}
1 & \phi & \phi^2 & \cdots & \phi^{T-1} \\
\phi & 1 & \phi & \cdots & \phi^{T-2} \\
\phi^2 & \phi & 1 & \cdots & \phi^{T-3} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\phi^{T-1} & \phi^{T-2} & \phi^{T-3} & \cdots & 1
\end{bmatrix}.
$$

  • Decay rate: $\phi$ controls how fast the correlation decreases.
  • Use case:
    • Common in time-series models.
    • Appropriate when observations are equally spaced in time.

Properties of the AR(1) Structure:

  • Correlation decreases with time lag:
    • Observations closer in time are more correlated.
    • The decay rate is controlled by $\phi$.
  • Toeplitz structure:
    • The covariance matrix exhibits a banded diagonal pattern.
    • Useful in longitudinal and time-series data.
  • Applicability:
    • Small $\phi$ ($\approx 0$) → weak autocorrelation (errors are mostly independent).
    • Large $\phi$ ($\approx 1$) → strong autocorrelation (highly dependent observations).
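As a sketch, the three correlation functions above can be constructed in Python/NumPy as follows; the time points and decay parameters are assumed for illustration:

```python
import numpy as np

def exponential_corr(t, phi):
    """Continuous-time AR(1): g(|dt|) = exp(-phi * |dt|)."""
    dt = np.abs(t[:, None] - t[None, :])
    return np.exp(-phi * dt)

def gaussian_corr(t, phi):
    """Squared-exponential kernel: g(|dt|) = exp(-phi * dt^2)."""
    dt = t[:, None] - t[None, :]
    return np.exp(-phi * dt**2)

def ar1_corr(T, phi):
    """Discrete AR(1) Toeplitz matrix: Corr(j, k) = phi^|j - k|."""
    lags = np.abs(np.arange(T)[:, None] - np.arange(T)[None, :])
    return phi**lags

# Assumed illustrative inputs.
t = np.array([0.0, 0.5, 1.5, 3.0])    # irregularly spaced times
print(exponential_corr(t, phi=0.8))
print(gaussian_corr(t, phi=0.8))
print(ar1_corr(4, phi=0.6))           # equally spaced, unit-lag case
```

All three matrices have a unit diagonal, consistent with the requirement $g(0) = 1$.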

8.1.5 Covariance Structures in Mixed Models

| Structure | Covariance Function $g(\lvert t_{ij} - t_{ik} \rvert)$ | Decay Behavior | Correlation Matrix Pattern | Key Properties | Common Use Cases |
|---|---|---|---|---|---|
| Compound Symmetry | Constant: $g = \rho$ | No decay | Constant off-diagonal | Equal correlation across all time points (homogeneous) | Random-intercepts model, repeated measures |
| AR(1) (Autoregressive) | $\phi^{\lvert t_{ij} - t_{ik} \rvert}$ | Exponential decay | Toeplitz (banded diagonal) | Strong correlation for nearby observations, weak for distant ones | Time series, longitudinal data |
| Exponential | $\exp(-\phi \lvert t_{ij} - t_{ik} \rvert)$ | Smooth decay | Spatially continuous | Models continuous correlation decline over time/space | Growth curves, ecological models |
| Gaussian (Squared Exponential) | $\exp(-\phi (t_{ij} - t_{ik})^2)$ | Rapid decay | Spatially continuous | Smooth decay with a stronger locality effect | Spatial processes, geostatistics |
| Toeplitz | Varies by lag: $g = c_h$ for lag $h$ | General pattern | General symmetric matrix | Arbitrary lag-dependent structure | Equally spaced time points needing a flexible lag structure |
| Unstructured | Fully parameterized, no constraints | No fixed pattern | General symmetric matrix | Allows any correlation pattern, but many parameters | Small datasets with unknown correlation |
| Banded | Zero correlation beyond a certain lag | Abrupt cutoff | Banded diagonal | Assumes only close observations are correlated | Large datasets, high-dimensional time series |
| Spatial Power | $\phi^{\lvert s_{ij} - s_{ik} \rvert}$ | Exponential decay | Distance-based structure | Used when correlation depends on spatial distance | Spatial statistics, environmental data |
| Matérn | Function of $\lvert t_{ij} - t_{ik} \rvert$ and smoothness $\nu$ | Flexible decay | Spatially continuous | Generalizes the Gaussian and exponential structures | Geostatistics, spatiotemporal models |

Choosing the Right Covariance Structure

| Scenario | Recommended Structure |
|---|---|
| Repeated measures with equal correlation | Compound Symmetry |
| Longitudinal data with time dependence | AR(1), Toeplitz |
| Continuous-time process | Exponential, Gaussian |
| Spatial correlation | Spatial Power, Matérn |
| Irregularly spaced time points | Exponential, Matérn, Unstructured |
| High-dimensional data | Banded, AR(1) |

References

Laird, Nan M., and James H. Ware. 1982. “Random-Effects Models for Longitudinal Data.” Biometrics 38 (4): 963–74.
Schabenberger, Oliver, and Francis J. Pierce. 2001. Contemporary Statistical Models for the Plant and Soil Sciences. Boca Raton, FL: CRC Press.