A Three-Stage Framework for Event-Event Relation Extraction with Large Language Model

  • Conference paper
Neural Information Processing (ICONIP 2023)

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 1968))

Abstract

Merely expanding the parameter count of a large language model (LLM) is insufficient to achieve satisfactory results on natural language processing tasks such as event extraction (EE), event temporal relation extraction (ETRE), and event causal relation extraction (ECRE). To tackle these challenges, we propose a novel three-stage extraction framework (ThreeEERE) that integrates an improved automatic chain-of-thought prompting method (Auto-CoT) with an LLM and is tailored by a golden rule to maximize the precision of event and relation extraction. The three stages are: constructing examples for each category, federating local knowledge to extract relations between events, and selecting the best answer. Although supervised models dominate these tasks, our experiments on three types of extraction tasks demonstrate that this three-stage approach yields significant gains in event extraction and event relation extraction, even surpassing some supervised methods.
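The three stages described in the abstract can be wired up as a small pipeline. The sketch below is a minimal illustration only: the function names, the demonstration-prompt format, and the majority-vote selection in stage 3 are assumptions for exposition, not the paper's actual method (its improved Auto-CoT sampling and golden-rule tailoring are not reproduced here). It assumes a generic `llm(prompt) -> str` callable standing in for any LLM backend.

```python
from collections import Counter

def stage1_build_demos(examples, n_per_category):
    """Stage 1 (sketch): pick a few demonstration examples per relation
    category, in the spirit of Auto-CoT's diverse demonstration sampling."""
    by_cat = {}
    for ex in examples:
        by_cat.setdefault(ex["relation"], []).append(ex)
    demos = []
    for exs in by_cat.values():
        demos.extend(exs[:n_per_category])
    return demos

def stage2_extract(llm, demos, sentence, pair):
    """Stage 2 (sketch): prompt the LLM with the demonstrations plus the
    local context, asking for the relation between the event pair."""
    prompt = "\n".join(
        f"Sentence: {d['sentence']}\nEvents: {d['pair']}\nRelation: {d['relation']}"
        for d in demos
    )
    prompt += f"\nSentence: {sentence}\nEvents: {pair}\nRelation:"
    return llm(prompt).strip()

def stage3_select(answers):
    """Stage 3 (sketch): select the best answer by majority vote
    over several sampled extractions."""
    return Counter(answers).most_common(1)[0][0]

def three_eere(llm, examples, sentence, pair, n_samples=5):
    """End-to-end pipeline: build demos, sample extractions, vote."""
    demos = stage1_build_demos(examples, n_per_category=1)
    answers = [stage2_extract(llm, demos, sentence, pair)
               for _ in range(n_samples)]
    return stage3_select(answers)
```

With a deterministic LLM stub, `three_eere(llm, examples, "He fell and then stood up.", ("fell", "stood"))` simply returns whatever relation label the model emits for every sample; with a sampling model, the vote in stage 3 filters out occasional inconsistent answers.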


Notes

  1. https://catalog.ldc.upenn.edu/LDC2006T06.


Acknowledgements

This work was supported in part by the Important Science and Technology Project of Hainan Province under Grant ZDKJ2020010.

Author information

Correspondence to BingKun Wang or YongFeng Huang.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Huang, F. et al. (2024). A Three-Stage Framework for Event-Event Relation Extraction with Large Language Model. In: Luo, B., Cheng, L., Wu, ZG., Li, H., Li, C. (eds) Neural Information Processing. ICONIP 2023. Communications in Computer and Information Science, vol 1968. Springer, Singapore. https://doi.org/10.1007/978-981-99-8181-6_33

  • DOI: https://doi.org/10.1007/978-981-99-8181-6_33

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8180-9

  • Online ISBN: 978-981-99-8181-6

  • eBook Packages: Computer Science, Computer Science (R0)
