DOI: 10.1145/3477495.3531682
Fairness-Aware Question Answering for Intelligent Assistants

Published: 07 July 2022

ABSTRACT

Conversational intelligent assistants, such as Amazon Alexa, Google Assistant, and Apple Siri, are a form of voice-only Question Answering (QA) system and have the potential to address complex information needs. However, at the moment they are mostly limited to answering with facts expressed in a few words. For example, when a user asks Google Assistant if coffee is good for their health, it responds by justifying why it is good for their health without shedding any light on the side effects coffee consumption might have [1]. Such limited exposure to multiple perspectives can change users' perceptions, preferences, and attitudes, and can create and reinforce undesired cognitive biases. Getting such QA systems to provide fair exposure to complex answers -- including those with opposing perspectives -- is an open research problem. In this research, I aim to address the problem of fairly exposing multiple perspectives and relevant answers to users in a multi-turn conversation without negatively impacting user satisfaction.
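A common way to operationalize "fair exposure" in this line of work (e.g., the attention-weighted group representation of [4] and the ranked-output fairness measures of [5]) is to sum a position-discounted attention weight per perspective group and compare each group's share against a target distribution. The sketch below is illustrative only: the function names, the logarithmic discount, and the uniform target are assumptions for demonstration, not details taken from this abstract.

```python
import math
from collections import defaultdict

def group_exposure(ranking, discount=lambda r: 1.0 / math.log2(r + 1)):
    """Normalized, position-discounted exposure per perspective group.

    ranking: list of group labels, one per ranked answer (rank 1 first).
    The logarithmic discount models the common assumption that user
    attention decays with rank position.
    """
    exposure = defaultdict(float)
    for rank, group in enumerate(ranking, start=1):
        exposure[group] += discount(rank)
    total = sum(exposure.values())
    return {g: e / total for g, e in exposure.items()}

def exposure_disparity(ranking):
    """Largest absolute gap between a group's exposure share and a
    uniform target share; 0 means perfectly even exposure."""
    shares = group_exposure(ranking)
    target = 1.0 / len(shares)
    return max(abs(s - target) for s in shares.values())
```

Under this measure, an interleaved ranking such as `["pro", "con", "pro", "con"]` yields a smaller disparity than a blocked one like `["pro", "pro", "con", "con"]`, because the second "con" answer receives more discounted attention when it appears at rank 2 than at rank 4.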

References

  1. Ruoyuan Gao and Chirag Shah. 2020. Toward Creating a Fairer Ranking in Search Engine Results. Information Processing & Management, Vol. 57, 1 (2020), 102138. https://doi.org/10.1016/j.ipm.2019.102138
  2. Sachin Pathiyan Cherumanal, Damiano Spina, Falk Scholer, and W. Bruce Croft. 2021. Evaluating Fairness in Argument Retrieval. In Proc. CIKM. 3363--3367. https://doi.org/10.1145/3459637.3482099
  3. Sachin Pathiyan Cherumanal, Damiano Spina, Falk Scholer, and W. Bruce Croft. 2022. RMIT at TREC 2021 Fair Ranking Track. In Proc. TREC. https://trec.nist.gov/pubs/trec30/papers/RMIT-IR-F.pdf
  4. Piotr Sapiezynski, Wesley Zeng, Ronald E Robertson, Alan Mislove, and Christo Wilson. 2019. Quantifying the Impact of User Attention on Fair Group Representation in Ranked Lists. In Proc. WWW. 553--562. https://doi.org/10.1145/3308560.3317595
  5. Ke Yang and Julia Stoyanovich. 2017. Measuring Fairness in Ranked Outputs. In Proc. SSDBM. Article 22. https://doi.org/10.1145/3085504.3085526

Published in

SIGIR '22: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, July 2022, 3569 pages
ISBN: 9781450387323
DOI: 10.1145/3477495

Copyright © 2022 Owner/Author

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher: Association for Computing Machinery, New York, NY, United States
Qualifiers: abstract
Overall Acceptance Rate: 792 of 3,983 submissions, 20%
