 
SMABS 2004
Abstract: Label switching in mixture models
A well-known identifiability problem in estimating mixture models is that the observed-data likelihood is invariant to permutations of the group labels. In ML estimation these permutations are easily spotted and shown to be equivalent. In a Bayesian setting, however, the m! equivalent modes can distort inference if the MCMC algorithm samples from more than one of them: label switching inflates posterior variances and biases posterior means. Proposed solutions to this problem (Richardson & Green, 1997) mostly center on imposing identifiability constraints and on rejecting or relabeling inadmissible draws. A new proposal to address mode switching is offered here. If one or more cases are preclassified, that is, if their group membership is assumed to be known, the effects of label switching can be reduced dramatically. In a mixture of two groups, assuming that a single case belongs to group 1 can be regarded as simply part of the definition of the model, and thus requires no prior information. The modification is easy to implement in EM, and also in MCMC simulation of the posterior distribution. It can be shown that preclassifying modifies the likelihood by eliminating the nuisance mode while leaving the mode of interest almost perfectly intact. Simulations with a latent class model, and with a mixture of two exponentials, will be presented to show that the technique works well and compares favorably with other strategies. The extension of the technique to mixtures with three or more components will also be discussed.
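The two ingredients of the abstract can be illustrated with a small sketch: first, the two-component likelihood is unchanged when the labels (and the mixing weight) are swapped, which is the source of the two equivalent modes; second, fixing one case's responsibility for component 1 at 1 in every E-step anchors one of the two modes. This is a minimal illustration under assumed settings (function names, rates 5 and 0.5, sample sizes, and the symmetric starting values are all illustrative, not taken from the paper), using EM rather than MCMC for brevity:

```python
import numpy as np

def loglik(x, lam, pi1):
    """Observed-data log-likelihood of a two-component exponential mixture."""
    return np.sum(np.log(pi1 * lam[0] * np.exp(-lam[0] * x)
                         + (1 - pi1) * lam[1] * np.exp(-lam[1] * x)))

def em_mix2exp(x, preclassified=None, iters=500):
    """EM for a two-component exponential mixture (illustrative sketch).

    If `preclassified` is a case index, that case's responsibility for
    component 1 is fixed at 1 in every E-step -- the preclassification
    idea from the abstract -- which anchors one of the two
    label-permuted modes.
    """
    x = np.asarray(x, dtype=float)
    lam = np.full(2, 1.0 / x.mean())  # deliberately symmetric start
    pi1 = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each case
        d1 = pi1 * lam[0] * np.exp(-lam[0] * x)
        d2 = (1 - pi1) * lam[1] * np.exp(-lam[1] * x)
        r1 = d1 / (d1 + d2)
        if preclassified is not None:
            r1[preclassified] = 1.0  # group membership assumed known
        # M-step: weighted ML updates for the weight and both rates
        pi1 = r1.mean()
        lam[0] = r1.sum() / (r1 * x).sum()
        lam[1] = (1 - r1).sum() / ((1 - r1) * x).sum()
    return lam, pi1

rng = np.random.default_rng(0)
x = np.concatenate([rng.exponential(1 / 5.0, 100),   # rate 5: short durations
                    rng.exponential(1 / 0.5, 100)])  # rate 0.5: long durations

# Label-switching symmetry: swapping (lam1, lam2) and (pi, 1 - pi)
# leaves the likelihood unchanged.
ll_a = loglik(x, np.array([5.0, 0.5]), 0.3)
ll_b = loglik(x, np.array([0.5, 5.0]), 0.7)

# Preclassify the smallest observation into component 1; this breaks the
# tie of the symmetric start and anchors component 1 as the high-rate group.
lam, pi1 = em_mix2exp(x, preclassified=int(np.argmin(x)))
```

Without the preclassified case, the exactly symmetric starting values leave every responsibility at 0.5 and EM cannot move off the symmetric saddle point; fixing a single case's membership is enough to select one mode.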
