Abstract
Temporal relation classification is a challenging task in Natural Language Processing (NLP), hampered by the imbalanced distribution of instances and the ambiguity of VAGUE instances. To address these issues, this paper proposes a novel data augmentation method for the TimeBank-Dense (TBD) corpus that distinguishes among vague instances, providing more evidence for identifying temporal relations clearly. Specifically, we additionally annotate VAGUE instances with multiple labels, to further distinguish the varied semantic phenomena that the VAGUE class covers. Experimental results show that models trained on our augmented corpus, VATBD, significantly outperform those trained on the original TBD corpus, verifying the effectiveness of the augmented corpus for temporal relation classification.
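The core idea of the abstract — re-annotating each VAGUE instance with the set of non-vague relations it could plausibly express — can be illustrated with a small encoding sketch. This is not the authors' implementation; the relation inventory follows the standard six TBD labels, and `to_multi_hot` is a hypothetical helper showing how multi-label annotations might feed a classifier:

```python
# Illustrative sketch: encoding TimeBank-Dense labels as multi-label targets,
# assuming a VATBD-style re-annotation maps each VAGUE instance to a set of
# plausible non-vague relations. Names here are hypothetical, not the paper's.

RELATIONS = ["BEFORE", "AFTER", "INCLUDES", "IS_INCLUDED", "SIMULTANEOUS", "VAGUE"]

def to_multi_hot(labels):
    """Encode a set of relation labels as a multi-hot vector over the 6 TBD classes."""
    vec = [0.0] * len(RELATIONS)
    for label in labels:
        vec[RELATIONS.index(label)] = 1.0
    return vec

# A non-vague instance keeps its single label...
print(to_multi_hot(["BEFORE"]))               # [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
# ...while a re-annotated VAGUE instance may carry several candidate relations.
print(to_multi_hot(["BEFORE", "INCLUDES"]))   # [1.0, 0.0, 1.0, 0.0, 0.0, 0.0]
```

Such multi-hot targets would typically be trained with a per-class binary loss rather than a softmax over mutually exclusive classes, which is what lets a single VAGUE instance contribute evidence to more than one relation.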
Notes
- 1. E.g., a plane taking off at 9 o'clock every day.
- 2. 1–5 correspond to the five non-vague relations, respectively.
- 3.
- 4. TBD is a six-class classification task; here, N = 6.
Acknowledgments
The authors would like to thank the three anonymous reviewers for their comments on this paper. This research was supported by the National Natural Science Foundation of China (No. 61772354, 61836007 and 61773276), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Wang, L., Xu, S., Li, P., Zhu, Q. (2021). Exploit Vague Relation: An Augmented Temporal Relation Corpus and Evaluation. In: Wang, L., Feng, Y., Hong, Y., He, R. (eds) Natural Language Processing and Chinese Computing. NLPCC 2021. Lecture Notes in Computer Science(), vol 13029. Springer, Cham. https://doi.org/10.1007/978-3-030-88483-3_6
DOI: https://doi.org/10.1007/978-3-030-88483-3_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-88482-6
Online ISBN: 978-3-030-88483-3
eBook Packages: Computer Science (R0)