DOI: 10.1145/3488560.3502190
WSDM Conference Proceedings · Short paper

iLFQA: A Platform for Efficient and Accurate Long-Form Question Answering

Published: 15 February 2022

Abstract

We present iLFQA (intelligent Long-Form Question Answering), an efficient and accurate long-form question-answering platform. iLFQA accepts unscripted questions and efficiently produces semantically meaningful, explanatory, and accurate long-form responses. It consists of modules for zero-shot classification, text retrieval, and text generation, which together produce answers to questions over an open-domain knowledge base. Question-answering systems exist in many forms, but long-form question answering remains relatively unexplored, and to the best of our knowledge none of the existing long-form question-answering systems has been shown to be efficient enough to deploy; iLFQA is unique in this space as an example of a deployable and efficient long-form question-answering system. We have made the source code and implementation details of iLFQA available for the benefit of researchers and practitioners in this field. With this demonstration, we present iLFQA as an open-domain, deployable, accurate, and open-source long-form question-answering platform.
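The abstract describes a three-stage design: a zero-shot classifier routes the question, a retriever pulls relevant passages from the knowledge base, and a generator composes the long-form answer. As a rough illustration of how such a pipeline can be wired together, the sketch below composes off-the-shelf Hugging Face and Sentence-Transformers components; the specific models, the toy two-topic corpus, and the prompt format are our assumptions for illustration, not details of the iLFQA implementation.

```python
# Minimal three-stage long-form QA sketch (classify -> retrieve -> generate).
# Model choices and the toy corpus are illustrative assumptions, not the
# authors' actual configuration.
from transformers import pipeline
from sentence_transformers import SentenceTransformer, util

# Stage 1: zero-shot classification to route the question to a topic shard.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Stage 2: dense retrieval over a tiny stand-in corpus (iLFQA retrieves from
# Wikipedia articles and textbooks; two hard-coded passages stand in here).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
corpus = {
    "biology": ["Mitochondria produce most of a cell's ATP via cellular respiration."],
    "history": ["The Roman Empire reached its greatest territorial extent under Trajan."],
}

# Stage 3: abstractive generation conditioned on the retrieved passage.
generator = pipeline("text2text-generation", model="google/flan-t5-base")

def answer(question: str) -> str:
    # Route the question to the most likely topic shard.
    topic = classifier(question, candidate_labels=list(corpus))["labels"][0]
    # Rank that shard's passages by cosine similarity to the question.
    passages = corpus[topic]
    scores = util.cos_sim(encoder.encode(question), encoder.encode(passages))[0]
    context = passages[int(scores.argmax())]
    # Generate a paragraph-length answer grounded in the retrieved context.
    prompt = f"Answer in detail.\nContext: {context}\nQuestion: {question}"
    return generator(prompt, max_new_tokens=128)[0]["generated_text"]

print(answer("How do cells generate energy?"))
```

In a real deployment the passages would live in a pre-encoded index over the full corpus, so only the question needs to be embedded at query time; that is what makes this style of pipeline efficient enough to serve interactively.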

Supplementary Material

MP4 File (10.1145/3488560.3502190.mp4)
We present a long-form question-answering platform named iLFQA (intelligent Long-Form Question Answering). iLFQA is an open-source, open-domain question-answering platform whose dataset is a collection of Wikipedia articles and textbooks. It accepts user-generated questions and produces paragraph-length, semantically meaningful responses, and it serves as an example of a deployable and efficient long-form question-answering platform. The first part of the presentation details the motivation and architecture behind iLFQA; the second shows iLFQA operating in real time with a simple interface, along with a discussion of some of the output.


Cited By

(2023) Open-Domain Long-Form Question–Answering Using Transformer-Based Pipeline. SN Computer Science 4:5. https://doi.org/10.1007/s42979-023-02039-x. Online publication date: 3 August 2023.


Published In

WSDM '22: Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining
February 2022, 1690 pages
ISBN: 9781450391320
DOI: 10.1145/3488560

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Author Tags

      1. generalized language models
      2. long-form question answering
      3. natural language processing
      4. text generation
      5. text retrieval

      Qualifiers

      • Short-paper

      Conference

      WSDM '22

      Acceptance Rates

      Overall Acceptance Rate 498 of 2,863 submissions, 17%

