ImpactBot: Chatbot Leveraging Language Models to Automate Feedback and Promote Critical Thinking Around Impact Statements

ABSTRACT
Impact statements articulate the impacts of a research project through concise, unambiguous statements about the problems addressed, the actions taken to resolve them, and the resulting impacts. Researchers and technologists often rely on impact statements to provoke introspective, critical thinking about the impacts of the technology being developed. However, due to factors such as technocentrism, positivity bias, marketization, and the hyperinflation of impact statements, the claims presented in these statements often omit important aspects of creating technology, such as negative and delayed impacts. This work contributes ImpactBot, a chatbot designed to promote critical thinking while researchers write impact statements for research projects or scientific papers. The proposed chatbot leverages two fine-tuned state-of-the-art RoBERTa models for sequence classification and was assessed in a case study with 5 researchers from a large information technology company and 7 university engineering research scientists and students. This approach may be reused as part of a content management or paper submission system, for instance, to engage researchers in dialogue and promote critical thinking about potential negative impacts and how to mitigate them while they create impact statements for their projects or scientific papers.
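The feedback loop described above can be sketched in a few lines. This is a hypothetical illustration only: the actual system fine-tunes two RoBERTa sequence-classification models, whereas here the classifier is stubbed with a simple keyword heuristic, and the function names (`mentions_negative_impacts`, `impactbot_reply`) and prompt wording are invented for this sketch.

```python
# Hypothetical sketch of ImpactBot's feedback loop: a classifier flags
# whether an impact statement discusses negative impacts, and the bot
# replies with a critical-thinking prompt when it does not.

# Keyword heuristic standing in for a fine-tuned RoBERTa classifier.
NEGATIVE_CUES = ("negative", "risk", "harm", "misuse", "limitation")


def mentions_negative_impacts(statement: str) -> bool:
    """Stand-in for a RoBERTa sequence classifier (illustrative only)."""
    text = statement.lower()
    return any(cue in text for cue in NEGATIVE_CUES)


def impactbot_reply(statement: str) -> str:
    """Return a prompt that nudges the author toward critical thinking."""
    if mentions_negative_impacts(statement):
        return ("You address potential negative impacts. "
                "How would you mitigate them?")
    return ("Your statement covers positive impacts only. "
            "Could the technology cause harm, be misused, "
            "or have delayed effects?")


print(impactbot_reply("Our system improves productivity for all users."))
```

In the deployed system, the heuristic would be replaced by model inference, and the reply would be one turn in a multi-turn dialogue rather than a single message.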