
Exploit Vague Relation: An Augmented Temporal Relation Corpus and Evaluation

  • Conference paper
  • First Online:
Natural Language Processing and Chinese Computing (NLPCC 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13029)


Abstract

Temporal relation classification is a challenging Natural Language Processing (NLP) task that faces many difficulties, such as the imbalanced distribution of instances and the ambiguity of vague instances. To address these issues, this paper proposes a novel data augmentation method for the TimeBank-Dense (TBD) corpus that distinguishes among vague instances and thus provides more evidence for identifying temporal relations clearly. Specifically, we additionally annotate VAGUE instances with multiple labels, to further distinguish the varied semantic phenomena covered by the VAGUE class. Experimental results show that models trained on our augmented corpus, VATBD, significantly outperform those trained on the original TBD corpus, verifying the effectiveness of the augmented corpus for temporal relation classification.
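The core idea in the abstract — re-annotating VAGUE instances with multiple non-vague labels — can be sketched as constructing multi-hot training targets over the six TimeBank-Dense relations. The label names follow TBD, but the encoding below is a minimal illustrative assumption, not the authors' exact annotation format:

```python
# Hypothetical sketch of the multi-label re-annotation idea: a VAGUE
# instance is assigned a multi-hot target over the six TimeBank-Dense
# relations, recording which non-vague readings the event pair admits.
RELATIONS = ["BEFORE", "AFTER", "INCLUDES", "IS_INCLUDED", "SIMULTANEOUS", "VAGUE"]

def to_multi_hot(labels):
    """Encode a set of relation labels as a multi-hot vector over RELATIONS."""
    vec = [0.0] * len(RELATIONS)
    for lab in labels:
        vec[RELATIONS.index(lab)] = 1.0
    return vec

# A clearly ordered pair keeps its single label...
single = to_multi_hot(["BEFORE"])
# ...while a re-annotated VAGUE pair may carry several plausible relations,
# giving the classifier evidence instead of a single catch-all VAGUE tag.
vague = to_multi_hot(["BEFORE", "INCLUDES"])
```

A model trained on such targets can use a per-relation sigmoid loss instead of a single softmax, which is one common way to consume multi-label annotations.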


Notes

  1. E.g., a plane taking off at 9 o'clock every day.

  2. 1–5 correspond to the five non-vague relations, respectively.

  3. http://www.usna.edu/Users/cs/nchamber/c.

  4. TBD is a 6-class classification task; here, N = 6.


Acknowledgments

The authors would like to thank the three anonymous reviewers for their comments on this paper. This research was supported by the National Natural Science Foundation of China (Nos. 61772354, 61836007 and 61773276) and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).

Author information


Corresponding author

Correspondence to Peifeng Li.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, L., Xu, S., Li, P., Zhu, Q. (2021). Exploit Vague Relation: An Augmented Temporal Relation Corpus and Evaluation. In: Wang, L., Feng, Y., Hong, Y., He, R. (eds) Natural Language Processing and Chinese Computing. NLPCC 2021. Lecture Notes in Computer Science, vol 13029. Springer, Cham. https://doi.org/10.1007/978-3-030-88483-3_6

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-88483-3_6

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88482-6

  • Online ISBN: 978-3-030-88483-3

  • eBook Packages: Computer Science (R0)
