Abstract
The Latent Dirichlet Allocation (LDA) model is biased toward drawing high-frequency words to describe topics, which reduces the accuracy of the topic representations. To address this issue, we use pointwise mutual information (PMI) to estimate the internal correlation between words and documents, and we propose an LDA model based on PMI. The proposed model draws the words of a topic according to their mutual information. We also propose three measures to evaluate topic quality: readability, topic consistency, and topic similarity. Experimental results show that the topics generated by the proposed model are of higher quality than those produced by the standard LDA model.
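The word–document correlation at the heart of the abstract can be illustrated with the standard PMI estimate, PMI(w, d) = log( P(w, d) / (P(w) P(d)) ), computed from raw counts. The toy corpus and function below are a minimal sketch for illustration only; the paper's actual preprocessing, smoothing, and integration into LDA sampling are not shown here.

```python
import math
from collections import Counter

# Hypothetical toy corpus: each document is a list of tokens.
docs = [
    ["topic", "model", "word", "topic"],
    ["word", "frequency", "bias", "model"],
    ["mutual", "information", "word", "topic"],
]

total = sum(len(d) for d in docs)                 # total token count
word_counts = Counter(w for d in docs for w in d) # marginal word counts

def pmi(word, doc_idx):
    """Estimate PMI(w, d) = log( P(w, d) / (P(w) * P(d)) ) from counts."""
    joint = docs[doc_idx].count(word) / total     # P(w, d)
    p_w = word_counts[word] / total               # P(w)
    p_d = len(docs[doc_idx]) / total              # P(d)
    if joint == 0:
        return float("-inf")                      # undefined for unseen pairs
    return math.log(joint / (p_w * p_d))
```

A word that is concentrated in one document (e.g. "topic" in the first document above) receives a higher PMI there than a word spread evenly across the corpus, which is the property the proposed model exploits when drawing topic words.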
Acknowledgments
This work was partially supported by Scientific Research Foundation in Shenzhen (Grant No. JCYJ20140627163809422), Scientific Research Innovation Foundation in Harbin Institute of Technology (Project No. HIT.NSRIF2010123), State Key Laboratory of Computer Architecture, Chinese Academy of Sciences and Key Laboratory of Network Oriented Intelligent Computation (Shenzhen).
Copyright information
© 2015 Springer International Publishing Switzerland
Cite this paper
Ding, Y., Yan, S. (2015). Topic Optimization Method Based on Pointwise Mutual Information. In: Arik, S., Huang, T., Lai, W., Liu, Q. (eds.) Neural Information Processing. ICONIP 2015. Lecture Notes in Computer Science, vol. 9491. Springer, Cham. https://doi.org/10.1007/978-3-319-26555-1_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-26554-4
Online ISBN: 978-3-319-26555-1