ABSTRACT
Our goal is to generate coherent text that is accurate in both its semantic content and its syntactic structure. Embedding methods and neural language models are indispensable for this task, as they learn semantic information and syntactic structure, respectively. We focus here on parts of speech (POS) (e.g., noun, verb, preposition) to enhance these models and to generate truly coherent text more efficiently than is possible by using either of them in isolation. This leads us to derive Words, Topics, and POS to Vec (WTP2Vec) as an embedding method, and the Structure Aware Unified Language Model (SAUL) as a neural language model. Experiments show that our approach enhances previous models and generates coherent, semantically valid text with natural syntactic structure.
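To make the idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual WTP2Vec/SAUL architecture) of how word, topic, and POS signals could be combined as input to a recurrent language model: each token is represented by the concatenation of a word embedding, a topic embedding, and a POS embedding, and the model predicts the next word from this joint representation. All module names, dimensions, and the simple concatenation scheme are assumptions for illustration.

```python
# Hypothetical sketch (not the paper's implementation): a language model whose
# input at each step concatenates word, topic, and POS embeddings, illustrating
# how POS structure could complement semantic signals. All sizes are assumptions.
import torch
import torch.nn as nn

class PosAwareLM(nn.Module):
    def __init__(self, vocab_size, n_topics, n_pos,
                 d_word=128, d_topic=32, d_pos=16, d_hidden=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_word)
        self.topic_emb = nn.Embedding(n_topics, d_topic)
        self.pos_emb = nn.Embedding(n_pos, d_pos)
        self.rnn = nn.LSTM(d_word + d_topic + d_pos, d_hidden, batch_first=True)
        self.out = nn.Linear(d_hidden, vocab_size)  # next-word logits

    def forward(self, words, topics, pos_tags):
        # words, topics, pos_tags: LongTensors of shape (batch, seq_len)
        x = torch.cat([self.word_emb(words),
                       self.topic_emb(topics),
                       self.pos_emb(pos_tags)], dim=-1)
        h, _ = self.rnn(x)
        return self.out(h)  # (batch, seq_len, vocab_size)

# Toy usage: next-word logits for a batch of 2 sequences of length 5.
model = PosAwareLM(vocab_size=1000, n_topics=20, n_pos=17)
words = torch.randint(0, 1000, (2, 5))
topics = torch.randint(0, 20, (2, 5))
pos_tags = torch.randint(0, 17, (2, 5))
print(model(words, topics, pos_tags).shape)  # torch.Size([2, 5, 1000])
```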