
Natural Language Inference Based on the LIC Architecture with DCAE Feature

  • Conference paper
Chinese Computational Linguistics (CCL 2019)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11856)


Abstract

Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), aims to identify the logical relationship between a premise and a hypothesis. In this paper, a DCAE (Directly-Conditional-Attention-Encoding) feature based on Bi-LSTM and a new architecture named LIC (LSTM-Interaction-CNN) are proposed for the NLI task. In the proposed algorithm, Bi-LSTM layers model the sentences to obtain the DCAE feature; an interaction layer then reconstructs the DCAE feature into image-like maps, which enriches the relevant information and makes it amenable to convolutional processing; finally, CNN layers extract high-level relevant features and relation patterns, and the classification result is obtained through an MLP (Multi-Layer Perceptron). The algorithm thus combines the strengths of LSTM layers in sequence modeling with those of CNN layers in feature extraction. Experiments show that the model achieves state-of-the-art results on the SNLI and Multi-NLI datasets.
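The paper body is not reproduced on this page, so the following is only a minimal PyTorch sketch of the pipeline as the abstract describes it: shared Bi-LSTM encoders, a conditional-attention feature standing in for DCAE (whose exact form is not given here), an interaction layer that builds an image-like similarity map, CNN feature extraction, and an MLP classifier. The attention/fusion step, all layer sizes, and kernel choices are illustrative assumptions, not the authors' implementation.

# Minimal sketch of an LIC-style (LSTM-Interaction-CNN) model.
# Hyperparameters and the DCAE stand-in below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LICSketch(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=300, hidden=128, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bi-LSTM sentence encoder, shared by premise and hypothesis.
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True,
                               bidirectional=True)
        # CNN layers over the interaction "image".
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool2d((4, 4)),
        )
        # MLP producing entailment / contradiction / neutral scores.
        self.mlp = nn.Sequential(nn.Linear(32 * 4 * 4, 128), nn.ReLU(),
                                 nn.Linear(128, n_classes))

    def dcae(self, p, h):
        # Stand-in for the DCAE feature (assumption): condition each premise
        # state on the hypothesis via soft attention, then fuse additively.
        scores = torch.bmm(p, h.transpose(1, 2))            # (B, Lp, Lh)
        attended = torch.bmm(F.softmax(scores, dim=-1), h)  # (B, Lp, 2*hidden)
        return p + attended

    def forward(self, premise, hypothesis):
        p, _ = self.encoder(self.embed(premise))      # (B, Lp, 2*hidden)
        h, _ = self.encoder(self.embed(hypothesis))   # (B, Lh, 2*hidden)
        feat = self.dcae(p, h)
        # Interaction layer: a 2-D similarity map treated as a 1-channel image.
        image = torch.bmm(feat, h.transpose(1, 2)).unsqueeze(1)  # (B,1,Lp,Lh)
        z = self.conv(image).flatten(1)
        return self.mlp(z)

# Shape check with random token ids.
model = LICSketch()
logits = model(torch.randint(0, 10000, (2, 20)), torch.randint(0, 10000, (2, 18)))
print(logits.shape)  # torch.Size([2, 3])

The adaptive pooling stage keeps the classifier input size fixed regardless of sentence lengths, which is one common way to let a CNN consume variable-sized interaction maps; whether the paper uses this device is not stated in the abstract.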




Acknowledgements

This work is funded by the National Key Research and Development Projects of China (2018YFC0830703). It is also supported by the National Natural Science Foundation of China (Grant Nos. 61572320 and 61572321).

Author information


Corresponding author

Correspondence to Tanfeng Sun.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Hu, J., Sun, T., Jiang, X., Yao, L., Xu, K. (2019). Natural Language Inference Based on the LIC Architecture with DCAE Feature. In: Sun, M., Huang, X., Ji, H., Liu, Z., Liu, Y. (eds) Chinese Computational Linguistics. CCL 2019. Lecture Notes in Computer Science, vol 11856. Springer, Cham. https://doi.org/10.1007/978-3-030-32381-3_47


  • DOI: https://doi.org/10.1007/978-3-030-32381-3_47

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-32380-6

  • Online ISBN: 978-3-030-32381-3

  • eBook Packages: Computer Science (R0)
