
A Framework for Classifying Temporal Relations with Question Encoder

  • Conference paper

Digital Libraries at Times of Massive Societal Transition (ICADL 2020)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12504)

Abstract

Temporal-relation classification plays an important role in natural language processing. Various deep learning-based classifiers, which can build better models using sentence embeddings, have been proposed for this challenging task. These approaches, however, often fall short because the embeddings lack task-related information. To overcome this problem, we propose a novel framework that incorporates prior information by employing awareness of events and time expressions (time–event entities) as a filter; we name this module the "question encoder." In our approach, this prior information extracts task-related information from sentence embeddings. Experimental results on the publicly available TimeBank-Dense corpus demonstrate that our approach outperforms several state-of-the-art techniques.
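The abstract describes using awareness of time–event entities as a filter that extracts task-related information from sentence embeddings. The sketch below illustrates one plausible reading of that idea; the function name, the entity-averaging step, and the dot-product attention are all assumptions for illustration, since the paper's actual architecture is not given on this page:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - x.max())
    return e / e.sum()

def question_encoder(sentence_emb, entity_mask):
    """Hypothetical sketch of an entity-aware filter.

    sentence_emb: (T, d) token embeddings of the sentence
    entity_mask:  (T,) 1 for event/time-expression tokens, 0 otherwise
    Returns a (d,) sentence representation weighted toward
    tokens relevant to the time-event "question."
    """
    # Build a question vector from the time-event entity tokens.
    q = sentence_emb[entity_mask.astype(bool)].mean(axis=0)
    # Score each token by its similarity to the question vector.
    weights = softmax(sentence_emb @ q)
    # Filtered representation: attention-weighted sum of token embeddings.
    return weights @ sentence_emb

# Toy usage with random embeddings.
rng = np.random.default_rng(0)
T, d = 6, 4
emb = rng.normal(size=(T, d))
mask = np.array([0, 1, 0, 0, 1, 0])  # tokens 1 and 4 are time-event entities
rep = question_encoder(emb, mask)
print(rep.shape)  # (4,)
```

In this reading, the entity-derived question vector plays the role of prior information: the attention weights suppress tokens unrelated to the time-event pair before classification.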



Acknowledgements

This work was partially supported by a JSPS Grant-in-Aid for Scientific Research (B) (#19H04420).

Author information

Corresponding author: Yohei Seki



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Seki, Y., Zhao, K., Oguni, M., Sugiyama, K. (2020). A Framework for Classifying Temporal Relations with Question Encoder. In: Ishita, E., Pang, N.L.S., Zhou, L. (eds) Digital Libraries at Times of Massive Societal Transition. ICADL 2020. Lecture Notes in Computer Science, vol 12504. Springer, Cham. https://doi.org/10.1007/978-3-030-64452-9_2


  • DOI: https://doi.org/10.1007/978-3-030-64452-9_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-64451-2

  • Online ISBN: 978-3-030-64452-9

  • eBook Packages: Computer Science (R0)
