DOI: 10.1145/3591196.3593515
Research article · Open access

The Prompt Artists

Published: 19 June 2023

Abstract

This paper examines the art practices, artwork, and motivations of prolific users of the latest generation of text-to-image models. Through interviews, observations, and a user survey, we present a sampling of the artistic styles and describe the community of practice that has developed around generative AI. We find that: 1) artists hold that the text prompt and the resulting image, taken together, constitute a form of artistic expression (prompts as art), and 2) artists develop prompt templates (prompts with “slots” for others to fill in with their own words) to create generative art styles. We discover that the value this community places on unique outputs leads artists to seek out specialized vocabulary to produce distinctive pieces (e.g., by reading architectural blogs to find phrases that describe images). We also find that some artists exploit “glitches” in the model, turning them into artistic styles in their own right. From these findings, we outline specific implications for the design of future prompting and image-editing options.
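To make the “prompt template” finding concrete, here is an illustrative sketch (not taken from the paper) of a template with slots that other community members fill in with their own words. The template text and slot names are hypothetical examples, not prompts documented in the study.

```python
# Illustrative sketch of a community prompt template with "slots",
# as described in the abstract. All template text and slot names
# here are hypothetical, invented for illustration.
from string import Template

# A template an artist might share; $subject, $material, and
# $lighting are the "slots" others fill in with their own words.
style_template = Template(
    "$subject in the style of a detailed architectural rendering, "
    "$material facade, $lighting lighting, ultra-wide shot"
)

# Another artist fills the slots to produce a prompt in that style.
prompt = style_template.substitute(
    subject="a floating library",
    material="weathered copper",
    lighting="golden-hour",
)
print(prompt)
```

Sharing the template rather than a finished prompt lets the originating artist define a reusable “style” while others supply the subject matter, which matches the community practice the paper describes.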




Published In

C&C '23: Proceedings of the 15th Conference on Creativity and Cognition
June 2023
564 pages
ISBN:9798400701801
DOI:10.1145/3591196
This work is licensed under a Creative Commons Attribution International 4.0 License.


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. AI art
  2. Artists using AI
  3. Text-to-Image models

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

C&C '23: Creativity and Cognition
June 19 - 21, 2023
Virtual Event, USA

Acceptance Rates

Overall Acceptance Rate 108 of 371 submissions, 29%


Article Metrics

  • Downloads (Last 12 months)2,969
  • Downloads (Last 6 weeks)318
Reflects downloads up to 03 Mar 2025


Cited By

  • (2025) From Heritage Building Information Modelling Towards an ‘Echo-Based’ Heritage Digital Twin. Heritage 8:1 (33). DOI: 10.3390/heritage8010033. Online: 17 Jan 2025.
  • (2025) Optimizing video game level atmosphere with Stable Diffusion-assisted set dressing. European Public & Social Innovation Review 10 (1–18). DOI: 10.31637/epsir-2025-1354. Online: 27 Jan 2025.
  • (2025) Summon a demon and bind it: A grounded theory of LLM red teaming. PLOS ONE 20:1 (e0314658). DOI: 10.1371/journal.pone.0314658. Online: 15 Jan 2025.
  • (2025) How do people experience the images created by generative artificial intelligence? An exploration of people's perceptions, appraisals, and emotions related to a Gen-AI text-to-image model and its creations. International Journal of Human-Computer Studies 193:C. DOI: 10.1016/j.ijhcs.2024.103375. Online: 1 Jan 2025.
  • (2024) Advancing Innovation in Medical Presentations: A Guide for Medical Educators to Use Images Generated With Artificial Intelligence. Cureus. DOI: 10.7759/cureus.74978. Online: 2 Dec 2024.
  • (2024) Design and Generative Artificial Intelligence: A Systematic Literature Review. Blucher Design Proceedings (231–245). DOI: 10.5151/cidiconcic2023-15_649304. Online: Jun 2024.
  • (2024) Communicating AI for Architectural and Interior Design: Reinterpreting Traditional Iznik Tile Compositions through AI Software for Contemporary Spaces. Buildings 14:9 (2916). DOI: 10.3390/buildings14092916. Online: 15 Sep 2024.
  • (2024) The Dearth of the Author in AI-Supported Writing. Proceedings of the Third Workshop on Intelligent and Interactive Writing Assistants (48–50). DOI: 10.1145/3690712.3690725. Online: 11 May 2024.
  • (2024) Computational Poetry is Lost Poetry. Proceedings of the Halfway to the Future Symposium (1–4). DOI: 10.1145/3686169.3686179. Online: 21 Oct 2024.
  • (2024) Can Machines Tell What People Want? Bringing Situated Intelligence to Generative AI. Proceedings of the Halfway to the Future Symposium (1–6). DOI: 10.1145/3686169.3686172. Online: 21 Oct 2024.
