DOI: 10.1145/3508230.3508250

CBCP: A Method of Causality Extraction from Unstructured Financial Text

Published: 08 March 2022

ABSTRACT

Extracting causality information from unstructured natural language text is a challenging problem in natural language processing, and there are no mature, dedicated causality extraction systems. Most work applies basic sequence labeling methods, such as the BERT-CRF model, to extract causal elements from unstructured text, and the results are usually unsatisfactory. At the same time, the finance domain contains a large number of causal event relations; if this financial causality can be extracted at scale, it will help us better understand the relationships between financial events and build event evolutionary graphs in the future. In this paper, we propose a causality extraction method for this problem, named CBCP (Center word-based BERT-CRF with Pattern extraction), which directly extracts cause elements and effect elements from unstructured text. Compared to the BERT-CRF model, our model incorporates center-word information as a prior condition and achieves better entity extraction performance. Moreover, combining our method with pattern-based extraction further improves causality extraction. We evaluate our method against basic sequence labeling baselines and show that it outperforms them on causality extraction tasks in the finance domain. Finally, we summarize our work and outline directions for future work.
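The paper's implementation is not reproduced on this page, but the architecture the abstract describes (a BERT-CRF sequence labeler conditioned on a center word, later combined with pattern-based rules) can be illustrated with a minimal sketch. The class name CenterWordBertCrf, the use of the `transformers` and `pytorch-crf` packages, and the choice to inject the center word as an indicator embedding added to the BERT token representations are assumptions made for illustration, not the authors' exact mechanism.

```python
# Hypothetical sketch of a center-word-conditioned BERT-CRF tagger.
# Assumes the `transformers` and `pytorch-crf` packages; how the center word
# is injected (an indicator embedding added to BERT outputs) is an assumption.
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF


class CenterWordBertCrf(nn.Module):
    def __init__(self, num_tags: int, bert_name: str = "bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        # 0 = token outside the center word, 1 = token inside the center word
        self.center_emb = nn.Embedding(2, hidden)
        self.emissions = nn.Linear(hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, center_mask, tags=None):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # Inject the center-word prior by adding its embedding to every token.
        h = out.last_hidden_state + self.center_emb(center_mask)
        scores = self.emissions(h)
        mask = attention_mask.bool()
        if tags is not None:
            # Training: negative log-likelihood of the gold tag sequence.
            return -self.crf(scores, tags, mask=mask, reduction="mean")
        # Inference: Viterbi decoding of the most likely tag sequence.
        return self.crf.decode(scores, mask=mask)
```

In this reading, `center_mask` marks which tokens belong to the center word and the CRF is trained over BIO-style cause/effect tags; the pattern-based component described in the abstract would operate separately, as a complementary extraction step rather than part of the neural model.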


Published in

NLPIR '21: Proceedings of the 2021 5th International Conference on Natural Language Processing and Information Retrieval
December 2021, 175 pages
ISBN: 9781450387354
DOI: 10.1145/3508230

Copyright © 2021 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States

