Automatic Approval of Online Comments with Multiple-Encoder Networks

Conference paper

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1215)

Abstract

In modern online publishing, user comments are an integral part of any media platform. Given the high volume of generated comments and the need to moderate inappropriate content, human approval becomes a serious bottleneck, with negative consequences for both operating cost and user experience. To alleviate this problem, we present a text classification model for the automatic approval of user comments on text articles. The model takes multiple textual inputs, the comment in question and its host article, and processes them with a neural network containing multiple encoders. Different choices of encoder network and of methods for combining the encoder outputs are investigated. The system is evaluated on news articles from a leading Vietnamese online media provider and is currently on a test run with that newspaper.
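To make the multiple-encoder idea concrete, below is a minimal sketch of such a model in PyTorch. It assumes LSTM encoders and concatenation of the encoder outputs, which are only two of the possible choices the abstract says are investigated; the class name DualEncoderClassifier and all hyperparameters are hypothetical, not taken from the paper.

    import torch
    import torch.nn as nn

    class DualEncoderClassifier(nn.Module):
        """Two-encoder comment-approval sketch: one encoder reads the comment,
        another reads the host article; their outputs are concatenated and fed
        to a binary approve/reject classifier. The encoder type (LSTM) and the
        combination method (concatenation) are assumptions for illustration."""

        def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            # Separate encoder networks for the two textual inputs.
            self.comment_enc = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.article_enc = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.classifier = nn.Linear(2 * hidden_dim, 2)

        def forward(self, comment_ids, article_ids):
            # Encode each input; keep the final hidden state of each LSTM.
            _, (h_comment, _) = self.comment_enc(self.embed(comment_ids))
            _, (h_article, _) = self.article_enc(self.embed(article_ids))
            combined = torch.cat([h_comment[-1], h_article[-1]], dim=-1)
            return self.classifier(combined)  # logits over {reject, approve}

    # Toy usage: a batch of 4 comments (30 tokens) with their host articles (200 tokens).
    model = DualEncoderClassifier(vocab_size=10_000)
    logits = model(torch.randint(1, 10_000, (4, 30)),
                   torch.randint(1, 10_000, (4, 200)))
    print(logits.shape)  # torch.Size([4, 2])

In this sketch the LSTMs could be swapped for convolutional or self-attention encoders, and the concatenation replaced by an attention-based combination, mirroring the kinds of alternatives the abstract says the paper compares.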

Author information

Corresponding author: Vu Dang.

Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Dang, V. (2020). Automatic Approval of Online Comments with Multiple-Encoder Networks. In: Nguyen, LM., Phan, XH., Hasida, K., Tojo, S. (eds) Computational Linguistics. PACLING 2019. Communications in Computer and Information Science, vol 1215. Springer, Singapore. https://doi.org/10.1007/978-981-15-6168-9_20

  • DOI: https://doi.org/10.1007/978-981-15-6168-9_20

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-6167-2

  • Online ISBN: 978-981-15-6168-9

  • eBook Packages: Computer Science, Computer Science (R0)
