Research Article | Open Access
DOI: 10.1145/3686038.3686060

Is Your Prompt Detailed Enough? Exploring the Effects of Prompt Coaching on Users' Perceptions, Engagement, and Trust in Text-to-Image Generative AI Tools

Published: 16 September 2024

Abstract

Prompts are the primary medium for interacting with generative AI tools. However, users often lack sufficient prompt literacy and motivation to fully benefit from these tools. To address this, we explore whether introducing prompt coaching into a chatbot-based generative AI interface can influence users' perceptions of and engagement with prompting, and in turn affect their trust in the system. In a user study (N = 132), we found that prompt coaching encouraged users to specify more details in their prompts, even though over half initially believed their prompts were sufficient. Furthermore, the coach increased users' cognitive elaboration, which was associated with higher perceived trust calibration. However, prompt coaching did not significantly enhance user experience (UX), although users in the coaching-absent condition expressed a strong need for prompt assistance to improve their experience. These findings have practical implications for the design of trustworthy and responsible generative AI interfaces.

Published In

TAS '24: Proceedings of the Second International Symposium on Trustworthy Autonomous Systems
September 2024, 335 pages
ISBN: 9798400709890
DOI: 10.1145/3686038

This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. Cognitive elaboration
  2. Generative AI
  3. Perceived trust calibration
  4. Prompt coaching
  5. User engagement
  6. User experience
  7. User interface

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

TAS '24
