ABSTRACT
Conversational intelligent assistants, such as Amazon Alexa, Google Assistant, and Apple Siri, are a form of voice-only Question Answering (QA) system and have the potential to address complex information needs. However, at the moment they are mostly limited to answering with facts that can be expressed in a few words. For example, when a user asks Google Assistant whether coffee is good for their health, it responds only with reasons why coffee is good, without shedding any light on the possible side effects of coffee consumption (Gao and Shah, 2020). Such limited exposure to multiple perspectives can change users' perceptions, preferences, and attitudes, and can create or reinforce undesired cognitive biases. Getting such QA systems to provide fair exposure to complex answers -- including those with opposing perspectives -- is an open research problem. In this research, I aim to address the problem of fairly exposing multiple perspectives and relevant answers to users in a multi-turn conversation without negatively impacting user satisfaction.
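The notion of "fair exposure" above can be made concrete with a position-based user model, in which answers ranked higher receive more attention. The sketch below is purely illustrative and is not the method proposed in this work: it aggregates, per perspective, a DCG-style logarithmic attention weight over a ranked answer list, so that an imbalance between perspectives becomes measurable. The function names, the PRO/CON labels, and the choice of discount are assumptions for illustration.

```python
import math
from collections import defaultdict

def position_exposure(rank):
    """Attention weight for a rank position (1-indexed).
    Uses a logarithmic discount, as in DCG-style user models;
    other discounts (e.g. geometric) are equally plausible."""
    return 1.0 / math.log2(rank + 1)

def exposure_per_perspective(ranking):
    """Total exposure received by each perspective in a ranked
    list of (answer_id, perspective) pairs."""
    exposure = defaultdict(float)
    for rank, (_, perspective) in enumerate(ranking, start=1):
        exposure[perspective] += position_exposure(rank)
    return dict(exposure)

# Hypothetical ranking: two "pro" answers placed above two "con" answers.
ranking = [("a1", "PRO"), ("a2", "PRO"), ("a3", "CON"), ("a4", "CON")]
print(exposure_per_perspective(ranking))
# The PRO perspective accumulates more exposure than CON, even though
# both perspectives contribute the same number of answers.
```

Under this kind of model, a fairness-aware QA system would aim to keep the exposure gap between perspectives small across the turns of a conversation, rather than only within a single ranked list.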
- Ruoyuan Gao and Chirag Shah. 2020. Toward Creating a Fairer Ranking in Search Engine Results. Information Processing & Management, Vol. 57, 1 (2020), 102138. https://doi.org/10.1016/j.ipm.2019.102138
- Sachin Pathiyan Cherumanal, Damiano Spina, Falk Scholer, and W. Bruce Croft. 2021. Evaluating Fairness in Argument Retrieval. In Proc. CIKM. 3363--3367. https://doi.org/10.1145/3459637.3482099
- Sachin Pathiyan Cherumanal, Damiano Spina, Falk Scholer, and W. Bruce Croft. 2022. RMIT at TREC 2021 Fair Ranking Track. In Proc. TREC. https://trec.nist.gov/pubs/trec30/papers/RMIT-IR-F.pdf
- Piotr Sapiezynski, Wesley Zeng, Ronald E Robertson, Alan Mislove, and Christo Wilson. 2019. Quantifying the Impact of User Attention on Fair Group Representation in Ranked Lists. In Proc. WWW. 553--562. https://doi.org/10.1145/3308560.3317595
- Ke Yang and Julia Stoyanovich. 2017. Measuring Fairness in Ranked Outputs. In Proc. SSDBM. Article 22. https://doi.org/10.1145/3085504.3085526
Index Terms
- Fairness-Aware Question Answering for Intelligent Assistants