
Application of large language model in intelligent Q&A of digital government

Published: 9 August 2023

ABSTRACT

LLMs (Large Language Models) are developing rapidly, and it is increasingly important to use them to improve existing question answering systems. Government question answering systems are of great value in improving administrative efficiency. Existing intelligent Q&A systems mostly combine keyword matching with machine learning. Keyword matching, however, struggles to understand complex, multi-round dialogue questions, which limits the quality of the replies. Machine learning methods further improve semantic understanding, but government-domain terminology is highly specialized and difficult to interpret, and the semantic recognition of colloquial questions is hard, so their performance can fall short even of keyword-matching systems. This paper uses a large language model as a tool to help understand user questions and integrates it into an existing government question answering system, exploiting the advantages of the large language model while avoiding its defects. Experiments demonstrate that the resulting system is a substantial improvement over the previous method.
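The abstract's architecture, an LLM that rewrites the user's (possibly colloquial, multi-turn) question into a self-contained query, which the existing keyword-matching QA system then answers, could be sketched roughly as follows. This is an illustrative sketch only, not the paper's implementation: the LLM call is stubbed (a real system would call a chat-completion API with the dialogue history), and the FAQ entries and function names are hypothetical.

```python
from typing import Dict, List

def llm_rewrite(history: List[str], question: str) -> str:
    """Stand-in for an LLM call that condenses dialogue history plus the
    current question into one self-contained query. Stub: prepend the
    most recent topic from the history."""
    topic = history[-1] if history else ""
    return f"{topic} {question}".strip()

# Hypothetical existing keyword-matching QA backend: choose the FAQ
# entry sharing the most keywords with the (rewritten) query.
FAQ: Dict[str, str] = {
    "passport renewal documents": "Bring your old passport and ID card.",
    "business license application": "Apply online via the portal.",
}

def keyword_answer(query: str) -> str:
    words = set(query.lower().split())
    return FAQ[max(FAQ, key=lambda key: len(words & set(key.split())))]

def answer(history: List[str], question: str) -> str:
    """Hybrid pipeline: LLM for question understanding, keyword
    matching for retrieval, as described in the abstract."""
    return keyword_answer(llm_rewrite(history, question))
```

With a real LLM rewrite step, a follow-up like "what documents do I need" after a "passport renewal" turn resolves to the passport entry even though the question itself shares no keywords with it, which is exactly the multi-round weakness of pure keyword matching that the paper targets.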


Published in

CNCIT '23: Proceedings of the 2023 2nd International Conference on Networks, Communications and Information Technology
June 2023, 253 pages
ISBN: 9798400700620
DOI: 10.1145/3605801

      Copyright © 2023 ACM


      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Qualifiers

      • research-article
      • Research
      • Refereed limited