5.11 Bayesian approach for parameter estimation

In maximum likelihood estimation, we attempt to find the parameters $\{\mathbf{g}, \mathbf{s}\}$ that maximize the marginalized log-likelihood $\ell(\mathbf{Y} \mid \mathbf{g}, \mathbf{s})$.
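
Under the usual local independence assumption (item responses are independent given a person's latent class), this marginalized likelihood for a model with $C$ latent classes and class proportions $p_c$ takes the standard mixture form:

$$L(\mathbf{Y} \mid \mathbf{g}, \mathbf{s}, \mathbf{p}) = \prod_{i=1}^{N} \sum_{c=1}^{C} p_c \prod_{j=1}^{J} P(Y_{ij} = 1 \mid \boldsymbol{\alpha}_c)^{Y_{ij}} \bigl[1 - P(Y_{ij} = 1 \mid \boldsymbol{\alpha}_c)\bigr]^{1 - Y_{ij}},$$

and the log-likelihood is the sum of the logarithms of these per-person mixtures.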

In the Bayesian framework, we aim to determine the posterior distribution of all model parameters $\{\mathbf{g}, \mathbf{s}, \mathbf{p}\}$:

$$P(\mathbf{g}, \mathbf{s}, \mathbf{p} \mid \mathbf{Y}) = \frac{P(\mathbf{Y} \mid \mathbf{g}, \mathbf{s}, \mathbf{p})\, p(\mathbf{g}, \mathbf{s}, \mathbf{p})}{P(\mathbf{Y})}$$

Typically, in MCMC programs such as JAGS and nimble, one needs to specify two things:

  • priors and
  • model likelihood

All parameters are given prior distributions:

  • Item parameters $g_j$ and $s_j$ follow $\mathrm{Beta}(1, 1)$ priors, i.e., a uniform distribution ranging from 0 to 1:
Code
# Beta(1, 1) density is flat on [0, 1], i.e., the uniform distribution
curve(dbeta(x, 1, 1), from = 0, to = 1)

  • A person's latent class membership latent.group.index[n] follows a categorical distribution $\mathrm{cat}(\mathbf{p})$

  • The class proportion vector $\mathbf{p}$ in the categorical distribution $\mathrm{cat}(\mathbf{p})$ follows a Dirichlet distribution $\mathrm{dirich}(\boldsymbol{\delta})$; a model-code sketch follows this list
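
As a minimal sketch of how these priors could be declared, here is a nimble model block (the BUGS-style syntax in JAGS is nearly identical); the constants J (items), N (persons), C (latent classes), and the hyperparameter vector delta are hypothetical placeholders:

Code
library(nimble)

# Prior block only; J, N, C, and delta[1:C] are assumed to be supplied
# as constants/data when the model is built.
priors_sketch <- nimbleCode({
  for (j in 1:J) {
    g[j] ~ dbeta(1, 1)                    # guessing parameter, uniform on [0, 1]
    s[j] ~ dbeta(1, 1)                    # slipping parameter, uniform on [0, 1]
  }
  for (n in 1:N) {
    latent.group.index[n] ~ dcat(p[1:C])  # person n's latent class membership
  }
  p[1:C] ~ ddirch(delta[1:C])             # class proportion vector
})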

For the likelihood, we need to specify the probability of observing each item response:

$$P(Y_{ij} = 1 \mid \boldsymbol{\alpha}_c) = g_j + (1 - s_j - g_j)\, I\!\left(\boldsymbol{\alpha}_c^{T} \mathbf{q}_j \geq \mathbf{q}_j^{T} \mathbf{q}_j\right)$$
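
Continuing the sketch above, this response probability could be written in the same style of model block; alpha[1:C, 1:K] (the attribute profile of each latent class) and Q[1:J, 1:K] (the Q-matrix) are assumed names for constants supplied with the data:

Code
# The indicator eta[n, j] equals 1 exactly when person n's latent class
# possesses every attribute that item j requires (alpha_c' q_j >= q_j' q_j).
likelihood_sketch <- nimbleCode({
  for (n in 1:N) {
    for (j in 1:J) {
      eta[n, j] <- step(inprod(alpha[latent.group.index[n], 1:K], Q[j, 1:K]) -
                          inprod(Q[j, 1:K], Q[j, 1:K]))
      prob[n, j] <- g[j] + (1 - s[j] - g[j]) * eta[n, j]
      Y[n, j] ~ dbern(prob[n, j])
    }
  }
})

(In nimble, indexing alpha by the latent class node relies on dynamic indexing; JAGS handles this construction directly.)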

Although MCMC is guaranteed to converge to the target posterior distribution of the model parameters, one may need a very long MCMC chain to get there in practice.
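
One practical way to judge whether a chain is long enough is to run several chains from dispersed starting values and inspect convergence diagnostics; here is a minimal sketch with the coda package, where samples is a hypothetical mcmc.list of parallel chains:

Code
library(coda)

# `samples` is a hypothetical mcmc.list holding several parallel chains
gelman.diag(samples)    # potential scale reduction factor; values near 1 suggest convergence
effectiveSize(samples)  # effective number of independent draws per parameter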