
Increasing Topic Coherence by Aggregating Topic Models

  • Conference paper

Knowledge Science, Engineering and Management (KSEM 2016)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9983)

Abstract

In this paper, we introduce a novel method for aggregating multiple topic models to produce an aggregate model whose topics are more coherent than those of the individual models. Generating a topic model requires a number of parameters to be specified, and depending on the values chosen the resulting topics can be very general or very specific. We investigate the process of aggregating multiple topic models generated with different parameters, the hypothesis being that combining general and specific topics can increase topic coherence. The aggregate model is created by using cosine similarity and Jensen-Shannon divergence to combine topics that are above a similarity threshold. We evaluate the model by comparing the coherence of the topics in the base models against that of the topics in the aggregated model. The results show that, when evaluated against an external corpus, the aggregated model outperforms standard topic models in terms of topic coherence at a statistically significant level.
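The paper itself specifies the exact aggregation procedure; since only the abstract is shown here, the following is a minimal sketch of the general idea it describes, combining topic pairs whose cosine similarity exceeds a threshold and whose Jensen-Shannon divergence is small. Topics are represented as word-probability vectors over a shared vocabulary, and the threshold values are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # 0 * log(0) is taken as 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def cosine_similarity(p, q):
    """Cosine of the angle between two topic-word vectors."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))

def aggregate_topics(topics_a, topics_b, cos_threshold=0.7, jsd_threshold=0.5):
    """Merge sufficiently similar topic pairs across two models;
    unmatched topics from either model are kept unchanged."""
    merged, used_b = [], set()
    for ta in topics_a:
        best_j, best_sim = None, cos_threshold
        for j, tb in enumerate(topics_b):
            if j in used_b:
                continue
            sim = cosine_similarity(ta, tb)
            if sim >= best_sim and js_divergence(ta, tb) <= jsd_threshold:
                best_j, best_sim = j, sim
        if best_j is not None:
            # Average the two word distributions and renormalise.
            combined = 0.5 * (np.asarray(ta, dtype=float)
                              + np.asarray(topics_b[best_j], dtype=float))
            merged.append(combined / combined.sum())
            used_b.add(best_j)
        else:
            merged.append(np.asarray(ta, dtype=float))
    merged.extend(np.asarray(topics_b[j], dtype=float)
                  for j in range(len(topics_b)) if j not in used_b)
    return merged
```

Using both measures together means topics must agree in orientation (cosine) and in distributional shape (divergence) before they are merged, which is one plausible reading of the thresholding described in the abstract.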


Notes

  1. Available at: http://www.cs.princeton.edu/~blei/lda-c/.



Author information

Correspondence to Stuart J. Blair.


Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Blair, S.J., Bi, Y., Mulvenna, M.D. (2016). Increasing Topic Coherence by Aggregating Topic Models. In: Lehner, F., Fteimi, N. (eds) Knowledge Science, Engineering and Management. KSEM 2016. Lecture Notes in Computer Science, vol 9983. Springer, Cham. https://doi.org/10.1007/978-3-319-47650-6_6


  • DOI: https://doi.org/10.1007/978-3-319-47650-6_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-47649-0

  • Online ISBN: 978-3-319-47650-6

