DOI: 10.1145/3539618.3591708

Large Language Models are Versatile Decomposers: Decomposing Evidence and Questions for Table-based Reasoning

Published: 18 July 2023

Abstract

Table-based reasoning has shown remarkable progress in a wide range of table-based tasks. It is a challenging task that requires reasoning over both free-form natural language (NL) questions and (semi-)structured tabular data. However, previous table-based reasoning solutions usually suffer from significant performance degradation on "huge" evidence (tables). In addition, most existing methods struggle to reason over complex questions since the essential information is scattered across different places. To alleviate these challenges, we exploit large language models (LLMs) as decomposers for effective table-based reasoning, which (i) decompose huge evidence (a huge table) into sub-evidence (a small table) to mitigate the interference of useless information for table reasoning, and (ii) decompose a complex question into simpler sub-questions for text reasoning. First, we use a powerful LLM to decompose the evidence involved in the current question into sub-evidence that retains the relevant information and excludes the remaining irrelevant information from the "huge" evidence. Second, we propose a novel "parsing-execution-filling" strategy to decompose a complex question into simpler step-by-step sub-questions, generating intermediate SQL queries as a bridge to produce numerical and logical sub-questions with a powerful LLM. Finally, we leverage the decomposed sub-evidence and sub-questions to get the final answer with a few in-context prompting examples. Extensive experiments on three benchmark datasets (TabFact, WikiTableQuestions, and FeTaQA) demonstrate that our method achieves significantly better results than competitive baselines for table-based reasoning. Notably, our method outperforms human performance for the first time on the TabFact dataset. Beyond its strong overall performance, our method also has the advantage of interpretability: the returned results are to some extent traceable through the generated sub-evidence and sub-questions. For reproducibility, we release our source code and data at: https://github.com/AlibabaResearch/DAMO-ConvAI.
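To make the three-stage procedure described above concrete, the sketch below outlines it in Python. It assumes a generic llm(prompt) -> str completion function and a SQLite connection whose table is named t; the prompt wording, function names, and table name are illustrative assumptions, not the authors' released implementation (see the linked repository for the actual code).

```python
# Minimal sketch of the decompose-then-reason pipeline, assuming a hypothetical
# `llm(prompt) -> str` completion function and a SQLite table named `t`.
import sqlite3
from typing import Callable, Sequence


def decompose_evidence(llm: Callable[[str], str], question: str, table_md: str) -> str:
    """Step 1: ask the LLM to keep only the rows/columns relevant to the question."""
    prompt = (
        "Given the table below, keep only the rows and columns needed to answer "
        f"the question and return the reduced table.\n\nTable:\n{table_md}\n\n"
        f"Question: {question}\nSub-table:"
    )
    return llm(prompt)


def parse_execute_fill(llm: Callable[[str], str], question: str,
                       conn: sqlite3.Connection) -> str:
    """Step 2 ("parsing-execution-filling"): generate an intermediate SQL query,
    execute it on the table, and fill its results into simpler sub-questions."""
    sql = llm(
        "Write one SQLite query over table `t` that computes the values needed "
        f"to answer: {question}\nSQL:"
    )
    values = conn.execute(sql).fetchall()  # execution step
    return llm(
        f"Using the values {values}, rewrite the question '{question}' "
        "as simpler step-by-step sub-questions."
    )


def answer(llm: Callable[[str], str], question: str, table_md: str,
           conn: sqlite3.Connection, demos: Sequence[str]) -> str:
    """Step 3: answer with a few in-context examples plus the decomposed inputs."""
    sub_table = decompose_evidence(llm, question, table_md)
    sub_questions = parse_execute_fill(llm, question, conn)
    prompt = "\n\n".join([
        *demos,
        f"Sub-table:\n{sub_table}",
        f"Sub-questions:\n{sub_questions}",
        f"Question: {question}\nAnswer:",
    ])
    return llm(prompt)
```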


    Information & Contributors

    Information

    Published In

    SIGIR '23: Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval
    July 2023
    3567 pages
    ISBN:9781450394086
    DOI:10.1145/3539618
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 18 July 2023

    Author Tags

    1. large language models
    2. pre-trained language models
    3. table-based reasoning

    Qualifiers

    • Research-article

    Conference

    SIGIR '23

    Acceptance Rates

    Overall Acceptance Rate 792 of 3,983 submissions, 20%

    Bibliometrics & Citations

    Bibliometrics

    Article Metrics

    • Downloads (Last 12 months): 568
    • Downloads (Last 6 weeks): 57
    Reflects downloads up to 05 Mar 2025

    Citations

    Cited By

    • (2025) Large language model for table processing: a survey. Frontiers of Computer Science: Selected Publications from Chinese Universities 19:2. https://doi.org/10.1007/s11704-024-40763-6. Online publication date: 1-Feb-2025.
    • (2025) A survey of table reasoning with large language models. Frontiers of Computer Science: Selected Publications from Chinese Universities 19:9. https://doi.org/10.1007/s11704-024-40330-z. Online publication date: 1-Sep-2025.
    • (2024) MMVQA. Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, 6243-6251. https://doi.org/10.24963/ijcai.2024/690. Online publication date: 3-Aug-2024.
    • (2024) TabVer: Tabular Fact Verification with Natural Logic. Transactions of the Association for Computational Linguistics 12, 1648-1671. https://doi.org/10.1162/tacl_a_00722. Online publication date: 18-Dec-2024.
    • (2024) DataDive: Supporting Readers' Contextualization of Statistical Statements with Data Exploration. Proceedings of the 29th International Conference on Intelligent User Interfaces, 623-639. https://doi.org/10.1145/3640543.3645155. Online publication date: 18-Mar-2024.
    • (2024) Mitigating Cold-Start Problems in Knowledge Tracing with Large Language Models: An Attribute-aware Approach. Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, 727-736. https://doi.org/10.1145/3627673.3679664. Online publication date: 21-Oct-2024.
    • (2024) Self-Improving Teacher Cultivates Better Student: Distillation Calibration for Multimodal Large Language Models. Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, 882-892. https://doi.org/10.1145/3626772.3657692. Online publication date: 10-Jul-2024.
    • (2024) Efficient Prompting for LLM-Based Generative Internet of Things. IEEE Internet of Things Journal. https://doi.org/10.1109/JIOT.2024.3470210. Online publication date: 2024.
    • (2024) Mitigating Knowledge Conflicts in Data-to-Text Generation via the Internalization of Fact Extraction. 2024 International Joint Conference on Neural Networks (IJCNN), 1-9. https://doi.org/10.1109/IJCNN60899.2024.10651167. Online publication date: 30-Jun-2024.
    • (2024) Are Large Language Models Table-based Fact-Checkers? 2024 27th International Conference on Computer Supported Cooperative Work in Design (CSCWD), 3086-3091. https://doi.org/10.1109/CSCWD61410.2024.10580146. Online publication date: 8-May-2024.
