
Adversarial Learning for Topic Models

  • Conference paper
  • First Online:
Advanced Data Mining and Applications (ADMA 2018)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11323)


Abstract

This paper proposes adversarial learning for topic models. The adversarial learning we consider here is a method of density ratio estimation using a neural network called a discriminator. In generative adversarial networks (GANs), the discriminator is trained to estimate the density ratio between the true data distribution and the generator distribution. Likewise, in variational inference (VI) for Bayesian probabilistic models, a discriminator can be trained to estimate the density ratio between the approximate posterior distribution and the prior distribution. With adversarial learning in VI, an implicit distribution can be adopted as the approximate posterior. This paper proposes adversarial learning for latent Dirichlet allocation (LDA) to improve the expressiveness of the approximate posterior. Our experimental results showed that the quality of the extracted topics improved in terms of test perplexity.
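The density-ratio trick described in the abstract can be illustrated with a small self-contained sketch. This is not the authors' implementation; the one-dimensional Gaussians, the quadratic feature map, and the training loop below are illustrative assumptions. A logistic-regression "discriminator" is trained to distinguish samples from an approximate posterior q from samples from a prior p; its optimal logit equals log q(z)/p(z), which is exactly the quantity adversarial VI needs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: prior p(z) = N(0, 1), "approximate posterior" q(z) = N(1, 0.5^2).
n = 20000
z_p = rng.normal(0.0, 1.0, n)
z_q = rng.normal(1.0, 0.5, n)

# Discriminator: logistic regression on features (1, z, z^2).
# Since log q(z)/p(z) is quadratic in z here, this model can represent it exactly.
def feats(z):
    return np.stack([np.ones_like(z), z, z * z], axis=1)

X = np.concatenate([feats(z_q), feats(z_p)])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = sample from q, 0 = sample from p

w = np.zeros(3)
lr = 0.1
for _ in range(5000):
    logits = X @ w
    p_hat = 1.0 / (1.0 + np.exp(-logits))
    grad = X.T @ (p_hat - y) / len(y)  # gradient of the binary cross-entropy loss
    w -= lr * grad

def log_ratio(z):
    """Discriminator logit: the learned estimate of log q(z)/p(z)."""
    return feats(np.asarray([z], dtype=float))[0] @ w

def log_ratio_true(z):
    """Analytic log q(z)/p(z) for the two Gaussians above."""
    return np.log(2.0) + 0.5 * z ** 2 - 2.0 * (z - 1.0) ** 2

print(log_ratio(1.0), log_ratio_true(1.0))
```

The learned logit closely tracks the analytic log-ratio, without ever evaluating q or p directly; this is why an implicit (sample-only) distribution can serve as the approximate posterior.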


Notes

  1. We do not consider the joint contrastive form of ELBO [10] in this paper.

  2. https://archive.ics.uci.edu/ml/datasets/bag+of+words

  3. https://www.kaggle.com/stackoverflow/rquestions

  4. https://pytorch.org/

  5. https://github.com/tomonari-masada/adversarial-learning-for-topic-models

References

  1. Asuncion, A., Welling, M., Smyth, P., Teh, Y.W.: On smoothing and inference for topic models. In: Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI 2009, pp. 27–34 (2009)


  2. Blei, D.M.: Probabilistic topic models. Commun. ACM 55(4), 77–84 (2012)


  3. Blei, D.M., Ng, A.Y., Jordan, M.I.: Latent Dirichlet allocation. J. Mach. Learn. Res. 3, 993–1022 (2003)


  4. Chen, S.F., Goodman, J.: An empirical study of smoothing techniques for language modeling. In: Proceedings of the 34th Annual Meeting on Association for Computational Linguistics, ACL 1996, pp. 310–318 (1996)


  5. Dieng, A.B., Wang, C., Gao, J., Paisley, J.W.: TopicRNN: a recurrent neural network with long-range semantic dependency. CoRR abs/1611.01702 (2016). http://arxiv.org/abs/1611.01702

  6. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2010, pp. 249–256 (2010)


  7. Goodfellow, I.J., et al.: Generative adversarial nets. In: Proceedings of the 27th International Conference on Neural Information Processing Systems, NIPS 2014, vol. 2, pp. 2672–2680 (2014)


  8. Griffiths, T.L., Steyvers, M.: Finding scientific topics. Proc. Natl. Acad. Sci. 101(Suppl. 1), 5228–5235 (2004)


  9. He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In: Proceedings of the 2015 IEEE International Conference on Computer Vision, ICCV 2015, pp. 1026–1034 (2015)


  10. Huszár, F.: Variational inference using implicit distributions. CoRR abs/1702.08235 (2017). http://arxiv.org/abs/1702.08235

  11. Kingma, D.P., Welling, M.: Auto-encoding variational Bayes. CoRR abs/1312.6114 (2013). http://arxiv.org/abs/1312.6114

  12. Lea, C., Vidal, R., Reiter, A., Hager, G.D.: Temporal convolutional networks: a unified approach to action segmentation. In: Hua, G., Jégou, H. (eds.) ECCV 2016. LNCS, vol. 9915, pp. 47–54. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-49409-8_7


  13. Mescheder, L.M., Nowozin, S., Geiger, A.: Adversarial variational Bayes: unifying variational autoencoders and generative adversarial networks. In: Proceedings of the 34th International Conference on Machine Learning, ICML 2017, pp. 2391–2400 (2017)


  14. Miao, Y., Yu, L., Blunsom, P.: Neural variational inference for text processing. In: Proceedings of the 33rd International Conference on International Conference on Machine Learning, ICML 2016, vol. 48, pp. 1727–1736 (2016)


  15. Mohamed, S., Lakshminarayanan, B.: Learning in implicit generative models. CoRR abs/1610.03483 (2016). http://arxiv.org/abs/1610.03483

  16. Rezende, D.J., Mohamed, S., Wierstra, D.: Stochastic backpropagation and approximate inference in deep generative models. In: Proceedings of the 31st International Conference on International Conference on Machine Learning, ICML 2014, vol. 32, pp. II-1278–II-1286 (2014)


  17. Shu, R., Bui, H.H., Zhao, S., Kochenderfer, M.J., Ermon, S.: Amortized inference regularization. CoRR abs/1805.08913 (2018). http://arxiv.org/abs/1805.08913

  18. Srivastava, A., Sutton, C.: Autoencoding variational inference for topic models. CoRR abs/1703.01488 (2017). http://arxiv.org/abs/1703.01488

  19. Titsias, M.K., Lázaro-Gredilla, M.: Doubly stochastic variational Bayes for non-conjugate inference. In: Proceedings of the 31st International Conference on International Conference on Machine Learning, ICML 2014, vol. 32, pp. II-1971–II-1980 (2014)


  20. Uehara, M., Sato, I., Suzuki, M., Nakayama, K., Matsuo, Y.: Generative adversarial nets from a density ratio estimation perspective. CoRR abs/1610.02920 (2016). http://arxiv.org/abs/1610.02920


Author information


Corresponding author

Correspondence to Tomonari Masada.



Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Masada, T., Takasu, A. (2018). Adversarial Learning for Topic Models. In: Gan, G., Li, B., Li, X., Wang, S. (eds) Advanced Data Mining and Applications. ADMA 2018. Lecture Notes in Computer Science, vol 11323. Springer, Cham. https://doi.org/10.1007/978-3-030-05090-0_25


  • DOI: https://doi.org/10.1007/978-3-030-05090-0_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-05089-4

  • Online ISBN: 978-3-030-05090-0

  • eBook Packages: Computer Science, Computer Science (R0)
