We discussed "Latent Variable Models and Learning with the EM Algorithm" today, mostly using slides from Sam Roweis' talk. The view of EM as lower-bound optimization [1] is particularly interesting and is perhaps the most illuminating way to understand the algorithm. The discussion in [2] is also very useful for understanding extensions of EM. Of course, the canonical reference [3] is always cited and is perhaps worth a read if you have a lot of patience.
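For concreteness, here is a minimal sketch of the decomposition that the lower-bound view in [1] and [2] builds on. The notation is mine: x is the observed data, z the latent variables, and q any distribution over z.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Free-energy view of EM (in the spirit of Neal & Hinton [2]):
% for any distribution q(z) over the latent variables,
\begin{align*}
\log p(x \mid \theta)
  &= \underbrace{\mathbb{E}_{q(z)}\!\left[\log \frac{p(x, z \mid \theta)}{q(z)}\right]}_{F(q,\,\theta)}
   + \underbrace{\mathrm{KL}\!\left(q(z) \,\big\|\, p(z \mid x, \theta)\right)}_{\ge\, 0}
   \;\ge\; F(q, \theta).
\end{align*}
% E-step: maximize F over q with theta fixed, giving q(z) = p(z | x, theta),
%         which drives the KL term to zero and makes the bound tight.
% M-step: maximize F over theta with q fixed.
% Each step can only increase F, so the log-likelihood never decreases.
\end{document}
```

The incremental and sparse variants in [2] drop out of this view almost for free: any update to q or theta that increases F is a legitimate step.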
We will continue the discussion next week, when we will cover the incremental version of EM [2] and revisit Kilian Pohl's work.
[1] Minka, T. (1998). "Expectation-Maximization as lower bound maximization." Tutorial, http://www-white.media.mit.edu/tpminka/papers/em.html
[2] Neal, R. M. and Hinton, G. E. (1999). "A view of the EM algorithm that justifies incremental, sparse, and other variants." In Learning in Graphical Models, M. I. Jordan, Ed., MIT Press, Cambridge, MA, 355-368.
[3] Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). "Maximum likelihood from incomplete data via the EM algorithm." Journal of the Royal Statistical Society, Series B, 39(1):1-38.
Friday, February 8, 2008
2 comments:
How about posting links to all the papers?
http://www.citeulike.org/user/singhsh/tag/em