
RERG: Reinforced evidence reasoning with graph neural network for table-based fact verification


Abstract

Table-based fact verification aims to check whether a statement is entailed by the content of a relevant table. Existing works either parse a statement into a logical form or design a table-aware neural network to represent the statement-table pair. However, they fail to directly exploit guidance signals to capture sufficient evidence from the table, which leads to performance degradation. To investigate how to select potentially key words from the table for fact verification, we propose a Reinforced Evidence Reasoning framework with a Graph neural network (RERG), which simulates the human inference process of focusing on a few words at each step. Specifically, we employ a Transformer-based graph neural network to represent multi-granularity features. We then introduce a monitor node and connect it to potential key nodes at each graph layer via reinforcement learning, guided by the feedback of a reward. Having aggregated key information across multiple graph layers, the monitor node is then used to predict the label. In addition, we apply a secondary update after the attention mechanism to enhance information aggregation within each graph layer. Experimental results on two benchmark datasets, TABFACT and INFOTABS, show performance improvements over state-of-the-art baselines and demonstrate the feasibility of selecting meaningful evidence during graph reasoning.
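
To make the description above concrete, the following PyTorch sketch shows one plausible reading of a single RERG graph layer: a Bernoulli policy decides which nodes the monitor node connects to, the monitor node aggregates the selected evidence via attention, and a secondary update follows. This is an illustration under assumed module names, dimensions, and policy form, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torch.distributions import Bernoulli


class MonitorGraphLayer(nn.Module):
    """One graph layer (illustrative): a policy samples which nodes connect to the
    monitor node, and the monitor node aggregates the selected evidence via attention."""

    def __init__(self, d_model: int = 128, n_heads: int = 4):
        super().__init__()
        self.policy = nn.Linear(2 * d_model, 1)                  # scores (node, monitor) pairs
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.secondary = nn.Linear(2 * d_model, d_model)         # "secondary updating" step

    def forward(self, nodes: torch.Tensor, monitor: torch.Tensor):
        # nodes: (batch, n_nodes, d_model); monitor: (batch, 1, d_model)
        pair = torch.cat([nodes, monitor.expand_as(nodes)], dim=-1)
        probs = torch.sigmoid(self.policy(pair)).squeeze(-1)     # (batch, n_nodes)
        dist = Bernoulli(probs)
        selected = dist.sample()                                 # 1 = connect node to monitor
        selected[:, 0] = 1.0                                     # keep at least one connection
        log_prob = dist.log_prob(selected).sum(dim=-1)           # needed for REINFORCE

        # The monitor node attends only to the selected (key) nodes.
        ignore = selected == 0                                   # True = masked out
        evidence, _ = self.attn(monitor, nodes, nodes, key_padding_mask=ignore)

        # Secondary update after attention: fuse the old monitor state with new evidence.
        monitor = torch.tanh(self.secondary(torch.cat([monitor, evidence], dim=-1)))
        return monitor, log_prob


# Toy usage. A reward from the final prediction (e.g., +1 if the verdict is correct,
# -1 otherwise) would weight the policy loss, rl_loss = -(reward * log_prob).mean(),
# added to the usual cross-entropy loss on the label.
layer = MonitorGraphLayer()
nodes = torch.randn(2, 10, 128)      # statement/table nodes (random toy values)
monitor = torch.zeros(2, 1, 128)     # e.g., initialized from a [CLS]/<s> vector
monitor, log_prob = layer(nodes, monitor)
```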




Notes

  1. Neutral samples are removed for fine-tuning TaPaS++.

  2. https://spacy.io/

  3. Initialized with the [CLS] or <s> representation; a hedged sketch follows these notes.
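
As a short illustration of note 3, one way to obtain a [CLS] or <s> vector and use it as the monitor node's initial state is sketched below. The use of Hugging Face Transformers and the roberta-base checkpoint are assumptions made here for the example, not details from the paper.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")

# Hypothetical statement; in practice the serialized statement-table pair is encoded.
text = "the devils were the home team on february 27"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state   # (1, seq_len, hidden_size)

monitor_init = hidden[:, :1, :]                    # <s> vector -> monitor node initialization
```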


Acknowledgements

The authors thank the anonymous reviewers for their valuable comments. This work was supported in part by the Consulting Project of Chinese Academy of Engineering under Grant 2020-XY-5, and in part by the National Natural Science Foundation of China under Grant 62272100, and in part by the Fundamental Research Funds for the Central Universities and the Academy Locality Cooperation Project of Chinese Academy of Engineering under Grant JS2021ZT05. The authors have no relevant financial or non-financial interests to disclose.

Author information

Corresponding author

Correspondence to Peng Yang.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhao, G., Yang, P. & Yao, Y. RERG: Reinforced evidence reasoning with graph neural network for table-based fact verification. Appl Intell 53, 12308–12323 (2023). https://doi.org/10.1007/s10489-022-04130-x

