
A Comparative Analysis of Machine Comprehension Using Deep Learning Models in Code-Mixed Hindi Language

Chapter in: Recent Advances in Computational Intelligence

Part of the book series: Studies in Computational Intelligence (SCI, volume 823)

Abstract

The domain of artificial intelligence is revolutionizing the way humans interact with machines. Machine comprehension is one of the newest fields in natural language processing and holds the potential for major advances in artificial intelligence. It gives a system the ability to understand a passage supplied by a user and answer questions asked about it, an evolution of traditional question answering. As a core natural language understanding task, machine comprehension exposes how much understanding a model needs in order to locate the region of interest in a passage. The scope for applying this technique in India is broad, given the country's many regional languages. This work focuses on applying machine comprehension to code-mixed Hindi. A detailed comparative study evaluates the performance of the dataset under several deep learning approaches: End-to-End Memory Networks, Dynamic Memory Networks, Recurrent Neural Networks, Long Short-Term Memory networks, and Gated Recurrent Units. The comparison identifies the model best suited to the dataset, and a new architecture is proposed by combining the two best-performing networks. To improve the model's handling of the various ways questions can be answered from a passage, the natural language processing technique of distributed word representation was applied to the best model identified: the model was improved with pre-trained fastText embeddings for word representation. This is the first implementation of machine comprehension models in code-mixed Hindi using deep neural networks. The work analyses the performance of all five models implemented, which will be helpful for future research on machine comprehension in code-mixed Indian languages.
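The fastText improvement the abstract mentions rests on subword information: fastText represents a word as the sum of vectors for its character n-grams (length 3 to 6 by default), so Romanized code-mixed Hindi spellings that share subwords (e.g. "kaise"/"kaisa") receive related vectors even when a token was never seen in training. The following is a minimal sketch of that idea with toy, randomly initialized n-gram vectors; the function names and dimensions here are illustrative, not taken from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams with fastText-style boundary markers < and >."""
    w = f"<{word}>"
    return [w[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def word_vector(word, ngram_vectors, dim=8):
    """Toy word vector: sum of the word's n-gram vectors.

    New n-grams get a random vector on first sight; an unseen
    code-mixed token still gets a vector from shared subwords.
    """
    vecs = [ngram_vectors.setdefault(g, rng.standard_normal(dim))
            for g in char_ngrams(word)]
    return np.sum(vecs, axis=0)
```

Because "kaise" and "kaisa" share n-grams such as "<kai" and "kais", their vectors overlap in this scheme, which is one reason subword embeddings suit noisy, code-mixed text.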




Author information


Correspondence to Sujith Viswanathan.


Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Viswanathan, S., Anand Kumar, M., Soman, K.P. (2019). A Comparative Analysis of Machine Comprehension Using Deep Learning Models in Code-Mixed Hindi Language. In: Kumar, R., Wiil, U. (eds) Recent Advances in Computational Intelligence. Studies in Computational Intelligence, vol 823. Springer, Cham. https://doi.org/10.1007/978-3-030-12500-4_19
