6.8 Identifiability for MCMC

So far we have shown that the EM algorithm cannot recover the true parameters when the model is not identifiable.

In MCMC, we place a prior distribution on every parameter. How do these priors affect estimation when the model is not identifiable?

To ensure identifiability in MCMC estimation, we may need to impose additional constraints. The Bayesian formulation imposes these constraints directly through the priors and produces interpretable estimates (Gu, 2024).
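To see why such constraints matter, consider a minimal sketch (this toy model and its names are illustrative, not taken from Gu, 2024). In a latent trait model, the likelihood is unchanged if we flip the signs of both the loading and the trait, so the posterior has two mirror-image modes; truncating the loading's prior to positive values removes this reflection invariance:

jags.toy <- function() {
  for (n in 1:N) {
    # (xi, theta) and (-xi, -theta) yield the same likelihood here
    logit(p[n]) <- xi * theta[n] - lamda
    Y[n] ~ dbern(p[n])
    theta[n] ~ dnorm(0, 1)
  }
  lamda ~ dnorm(0, 0.25)
  # Constraining xi > 0 picks one of the two reflected modes
  xi ~ dnorm(0, 0.25) %_% T(0,)
}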

With that intuition in place, let us look at a complete example: a hierarchical GDINA model.

# Hierarchical GDINA model (written as an R function for R2jags)
# n = individual
# i = item
# k = attribute
# alpha[n, k] = individual n's mastery indicator for attribute k

jags.hgdina <- function() {
  # Measurement model: with K = 3 attributes, a GDINA item has at most
  # 2^3 = 8 parameters (intercept, 3 main effects, 3 two-way interactions,
  # and 1 three-way interaction)
  for (n in 1:N) {
    for (i in 1:I) {
      eta1[n, i] <- lamda1[i] * alpha[n, 1] + lamda2[i] * alpha[n, 2] +
        lamda3[i] * alpha[n, 3]
      eta2[n, i] <- lamda12[i] * alpha[n, 1] * alpha[n, 2] +
        lamda13[i] * alpha[n, 1] * alpha[n, 3] +
        lamda23[i] * alpha[n, 2] * alpha[n, 3]
      eta3[n, i] <- lamda123[i] * alpha[n, 1] * alpha[n, 2] * alpha[n, 3]
      logit(p[n, i]) <- lamda0[i] + eta1[n, i] + eta2[n, i] + eta3[n, i]
      Y[n, i] ~ dbern(p[n, i])
    }
  }
    
  # Higher-order structure: mastery of each attribute is driven by a
  # continuous general ability theta
  for (n in 1:N) {
    for (k in 1:K) {
      logit(prob.a[n, k]) <- xi[k] * theta[n] - lamda[k]
      alpha[n, k] ~ dbern(prob.a[n, k])
    }
    theta[n] ~ dnorm(0, 1)
  }
  
  # Priors for the higher-order model; truncating xi[k] above zero fixes
  # the sign of theta and removes the reflection indeterminacy
  for (k in 1:K) {
    lamda[k] ~ dnorm(0, 0.25)
    xi[k] ~ dnorm(0, 0.25) %_% T(0,)
  }

  for (i in 1:I) {
    # Intercept prior centered near logit(0.25) = -1.10, a low baseline
    # success probability
    lamda0[i] ~ dnorm(-1.096, 0.25)
    # Main effects are truncated to be positive (monotonicity constraint);
    # interaction terms are left unconstrained
    xlamda1[i] ~ dnorm(0, 0.25) %_% T(0,)
    xlamda2[i] ~ dnorm(0, 0.25) %_% T(0,)
    xlamda3[i] ~ dnorm(0, 0.25) %_% T(0,)
    xlamda12[i] ~ dnorm(0, 0.25)
    xlamda13[i] ~ dnorm(0, 0.25)
    xlamda23[i] ~ dnorm(0, 0.25)
    xlamda123[i] ~ dnorm(0, 0.25)
    # The Q-matrix zeroes out effects for attributes an item does not measure
    lamda1[i] <- xlamda1[i] * Q[i, 1]
    lamda2[i] <- xlamda2[i] * Q[i, 2]
    lamda3[i] <- xlamda3[i] * Q[i, 3]
    lamda12[i] <- xlamda12[i] * Q[i, 1] * Q[i, 2]
    lamda13[i] <- xlamda13[i] * Q[i, 1] * Q[i, 3]
    lamda23[i] <- xlamda23[i] * Q[i, 2] * Q[i, 3]
    lamda123[i] <- xlamda123[i] * Q[i, 1] * Q[i, 2] * Q[i, 3]
  }
  
}
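A hedged usage sketch for fitting this model with R2jags follows; the data objects Y (an N × I binary response matrix) and Q (an I × K Q-matrix) and the MCMC settings are illustrative assumptions, not values from this chapter.

library(R2jags)

# Illustrative data objects: Y and Q are assumed to exist in the workspace
jags.data <- list(Y = Y, Q = Q, N = nrow(Y), I = ncol(Y), K = ncol(Q))
jags.params <- c("lamda0", "lamda1", "lamda2", "lamda3",
                 "lamda12", "lamda13", "lamda23", "lamda123",
                 "xi", "lamda")

# model.file accepts an R function; R2jags writes it out as a JAGS model
fit <- jags(data = jags.data,
            parameters.to.save = jags.params,
            model.file = jags.hgdina,
            n.chains = 3, n.iter = 10000, n.burnin = 5000)

print(fit)  # inspect Rhat and n.eff to check that the constrained model mixes well

Because the constraints are built into the priors (the T(0,) truncations and the Q-matrix masking), no post-hoc relabeling of the chains should be needed.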

References

Gu, Y. (2024). Going deep in diagnostic modeling: Deep cognitive diagnostic models (DeepCDMs). Psychometrika, 89(1), 118–150.