DOI: 10.1145/3486622.3493968
Research article

A Text Generation Model that Maintains the Order of Words, Topics, and Parts of Speech via Their Embedding Representations and Neural Language Models

Published: 13 April 2022

ABSTRACT

Our goal is to generate coherent text that is accurate in terms of both its semantic information and its syntactic structure. Embedding methods and neural language models are indispensable for this task, as they learn semantic information and syntactic structure, respectively. We focus here on parts of speech (POS) (e.g., noun, verb, preposition) to enhance these models, allowing us to generate truly coherent text more efficiently than is possible with either class of model in isolation. This leads us to derive Words and Topics and POS 2 Vec (WTP2Vec) as an embedding method, and the Structure Aware Unified Language Model (SAUL) as a neural language model. Experiments show that our approach enhances previous models and generates coherent, semantically valid text with natural syntactic structure.
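The abstract describes the approach only at a high level, so below is a minimal illustrative sketch, in PyTorch, of the general idea of a unified language model over words, topics, and POS tags: each time step combines separate word, topic, and POS embeddings into one input representation, and the model predicts both the next word and the next POS tag, so that generation respects lexical, topical, and syntactic order together. All names, dimensions, and the sum-based combination are hypothetical assumptions; this is not the paper's actual WTP2Vec or SAUL formulation.

import torch
import torch.nn as nn

class UnifiedLM(nn.Module):
    # Hypothetical sketch of a structure-aware unified LM; not the
    # authors' SAUL. Words, topics, and POS tags each get their own
    # embedding table, summed into one shared input representation.
    def __init__(self, n_words, n_topics, n_pos, dim=128, hidden=256):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, dim)
        self.topic_emb = nn.Embedding(n_topics, dim)
        self.pos_emb = nn.Embedding(n_pos, dim)
        self.lstm = nn.LSTM(dim, hidden, batch_first=True)
        # Two heads: next-word and next-POS prediction, so training
        # supervises both lexical and syntactic ordering.
        self.word_out = nn.Linear(hidden, n_words)
        self.pos_out = nn.Linear(hidden, n_pos)

    def forward(self, words, topics, pos):
        # words, topics, pos: (batch, seq_len) index tensors.
        x = self.word_emb(words) + self.topic_emb(topics) + self.pos_emb(pos)
        h, _ = self.lstm(x)
        return self.word_out(h), self.pos_out(h)

# Toy usage: two random 5-token sequences; inputs shifted by one
# position serve as next-word / next-POS targets.
model = UnifiedLM(n_words=1000, n_topics=50, n_pos=17)
w = torch.randint(0, 1000, (2, 5))
t = torch.randint(0, 50, (2, 5))
p = torch.randint(0, 17, (2, 5))
word_logits, pos_logits = model(w, t, p)
loss = nn.functional.cross_entropy(
    word_logits[:, :-1].reshape(-1, 1000), w[:, 1:].reshape(-1)
) + nn.functional.cross_entropy(
    pos_logits[:, :-1].reshape(-1, 17), p[:, 1:].reshape(-1)
)
loss.backward()

Jointly supervising the POS head is one plausible reading of "maintains the order of ... parts of speech"; the published model may condition on or predict these signals differently.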


Published in
WI-IAT '21: IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology
December 2021, 698 pages
Copyright © 2021 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


