Learning Mixture Models with the Regularized Latent Maximum Entropy Principle (2004)
| Content Provider | CiteSeerX |
|---|---|
| Author | Wang, Shaojun; Schuurmans, Dale; Peng, Fuchun; Zhao, Yunxin |
| Abstract | This paper presents a new approach to estimating mixture models based on a recent inference principle we have proposed: the latent maximum entropy principle (LME). LME is different from Jaynes' maximum entropy principle, standard maximum likelihood, and maximum a posteriori probability estimation. We demonstrate the LME principle by deriving new algorithms for mixture model estimation, and show how robust new variants of the expectation maximization (EM) algorithm can be developed. We show that a regularized version of LME (RLME) is effective at estimating mixture models. It generally yields better results than plain LME, which in turn is often better than maximum likelihood and maximum a posteriori estimation, particularly when inferring latent variable models from small amounts of data. Index Terms: Expectation maximization (EM), iterative scaling, latent variables, maximum entropy, mixture models, regularization. |
| File Format | |
| Journal | IEEE Transactions on Neural Networks |
| Language | English |
| Publisher Date | 2004-01-01 |
| Access Restriction | Open |
| Subject Keyword | Mixture Model; Regularized Latent Maximum Entropy Principle; LME Principle; Small Amount; New Approach; Posteriori Probability Estimation; Maximum Likelihood; Latent Variable Model; Posterior Estimation; Plain LME; Maximum Entropy; Index Term; Expectation Maximization; Jaynes Maximum Entropy Principle; Standard Maximum Likelihood; Latent Variable; Recent Inference Principle; Iterative Scaling; Robust New Variant; Latent Maximum Entropy Principle; Mixture Model Estimation; New Algorithm; Regularized Version |
| Content Type | Text |
| Resource Type | Article |
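
The abstract contrasts the RLME estimator with standard maximum-likelihood estimation of mixture models via the expectation maximization (EM) algorithm. As a point of reference only, the sketch below shows plain maximum-likelihood EM for a one-dimensional Gaussian mixture; it is not the paper's RLME algorithm, and all function names and parameters here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def em_gaussian_mixture(x, k, n_iter=100, seed=0):
    """Plain maximum-likelihood EM for a 1-D Gaussian mixture with k components.

    Standard EM baseline that the abstract compares against; not the RLME method.
    """
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    # Initialize mixing weights, component means, and variances.
    w = np.full(k, 1.0 / k)
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, x.var())

    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each data point.
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: re-estimate parameters from the responsibilities.
        nk = resp.sum(axis=0)
        w = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

    return w, mu, var

# Example usage on synthetic data drawn from two Gaussians.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 0.5, 200)])
weights, means, variances = em_gaussian_mixture(data, k=2)
print(weights, means, variances)
```

With small samples this maximum-likelihood fit can overfit or collapse a component onto a single point, which is the regime where the paper argues regularized LME yields better estimates.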