Research Article · Open Access
DOI: 10.1145/3591196.3593515

The Prompt Artists

Published: 19 June 2023

ABSTRACT

This paper examines the art practices, artwork, and motivations of prolific users of the latest generation of text-to-image models. Through interviews, observations, and a user survey, we present a sampling of the artistic styles and describe the community of practice that has developed around generative AI. We find that: 1) artists consider the text prompt and the resulting image collectively as a form of artistic expression (prompts as art), and 2) artists develop prompt templates (prompts with “slots” for others to fill in with their own words) to create generative art styles. We discover that the value this community places on unique outputs leads artists to seek out specialized vocabulary to produce distinctive art pieces (e.g., by reading architectural blogs to find phrases to describe images). We also find that some artists exploit “glitches” in the model, turning them into artistic styles in their own right. From these findings, we outline specific implications for the design of future prompting and image editing options.
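The "prompt template" idea described above can be illustrated with a minimal sketch. This is hypothetical code, not an implementation from the paper: the template text and slot names (`subject`, `medium`) are invented for illustration. The point is that an artist shares a style as fixed wording plus named slots, and another user fills the slots with their own words.

```python
# Minimal sketch of a prompt template with fill-in "slots".
# The style wording and slot names below are illustrative only.
from string import Template

# A style template an artist might share; $subject and $medium are
# the slots another user fills in with their own words.
style_template = Template(
    "$subject rendered as $medium, volumetric lighting, intricate detail"
)

def fill_prompt(template: Template, **slots: str) -> str:
    """Fill every slot in the template; raises KeyError if a slot is missing."""
    return template.substitute(**slots)

prompt = fill_prompt(style_template, subject="a lighthouse", medium="stained glass")
print(prompt)
# a lighthouse rendered as stained glass, volumetric lighting, intricate detail
```

The template text stays constant while the slot values vary, which is what lets a shared template function as a reproducible generative art style.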


Published in

C&C '23: Proceedings of the 15th Conference on Creativity and Cognition
June 2023, 564 pages
ISBN: 9798400701801
DOI: 10.1145/3591196

Copyright © 2023 Owner/Author. This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher: Association for Computing Machinery, New York, NY, United States

Published: 19 June 2023

Qualifiers: research article, refereed limited

Overall acceptance rate: 108 of 371 submissions, 29%
