DOI: 10.1145/3580305.3599557
Generative AI meets Responsible AI: Practical Challenges and Opportunities

Published: 04 August 2023

Abstract

Generative AI models and applications are being rapidly developed and deployed across a wide range of industries, in applications spanning writing and email assistance, graphic design and art generation, educational assistance, coding, and drug discovery. However, these models and applications raise several ethical and social concerns, including lack of interpretability, bias and discrimination, privacy risks, lack of model robustness, fake and misleading content, copyright implications, plagiarism, and the environmental impact of training and inference.
In this tutorial, we first motivate the need to adopt responsible AI principles when developing and deploying large language models (LLMs) and other generative AI models, as part of a broader AI model governance and responsible AI framework, from societal, legal, user, and model-developer perspectives, and provide a roadmap for thinking about responsible AI for generative AI in practice. We give a brief technical overview of text and image generation models and highlight the key responsible AI desiderata associated with them. We then describe the technical considerations and challenges of realizing these desiderata in practice. We focus on real-world generative AI use cases spanning domains such as media generation, writing assistants, copywriting, code generation, and conversational assistants; present practical solution approaches and guidelines for applying responsible AI techniques effectively; discuss lessons learned from deploying responsible AI approaches for generative AI applications in practice; and highlight key open research problems. We hope this tutorial will inform both researchers and practitioners, stimulate further research on responsible AI in the context of generative AI, and pave the way for building more reliable and trustworthy generative AI applications in the future.
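To make the idea of applying responsible AI techniques in a deployment pipeline concrete, here is a minimal, purely illustrative sketch of a pre-release output screen for generated text. The function name, regex patterns, and blocklist below are hypothetical placeholders, not techniques from the tutorial itself; real systems rely on trained safety classifiers, human review, and red-teaming rather than regexes alone.

```python
import re

# Illustrative only: a toy pre-release guardrail for generated text.
# The patterns and blocklist are hypothetical examples; production
# guardrails use trained safety classifiers and human review.

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
BLOCKLIST = {"example blocked phrase"}  # hypothetical placeholder terms


def screen_output(text: str) -> dict:
    """Flag PII and blocklisted content in model output before release."""
    flags = [name for name, pat in PII_PATTERNS.items() if pat.search(text)]
    blocked = any(term in text.lower() for term in BLOCKLIST)
    return {"pii": flags, "blocked": blocked, "release": not flags and not blocked}


print(screen_output("Contact me at jane.doe@example.com"))
```

Such a check would sit between model inference and the user-facing response; a flagged output can be redacted, regenerated, or escalated for human review, which is the kind of practical trade-off the tutorial's use cases discuss.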


Published In

KDD '23: Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
August 2023, 5996 pages
ISBN: 9798400701030
DOI: 10.1145/3580305
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. case studies from industry
      2. ethics in ai
      3. generative ai models and applications
      4. large language models
      5. responsible ai


Cited By

• (2025) Beyond the Hype: A Comprehensive Review of Current Trends in Generative AI Research, Teaching Practices, and Tools. 2024 Working Group Reports on Innovation and Technology in Computer Science Education, pp. 300-338. DOI: 10.1145/3689187.3709614. Online publication date: 22 Jan 2025.
• (2025) Generative Artificial Intelligence Adoption: An Exploration of Challenges and Perceptions. The Economic Impact of Small and Medium-Sized Enterprises, pp. 213-231. DOI: 10.1007/978-3-031-74554-6_10. Online publication date: 18 Feb 2025.
• (2024) Generative Insights Unveiling AI's Evolution and Algorithms. Responsible Implementations of Generative AI for Multidisciplinary Use, pp. 1-28. DOI: 10.4018/979-8-3693-9173-0.ch001. Online publication date: 20 Sep 2024.
• (2024) Introduction to Generative AI in Cybersecurity. AI Techniques for Securing Medical and Business Practices, pp. 1-44. DOI: 10.4018/979-8-3693-8939-3.ch001. Online publication date: 27 Sep 2024.
• (2024) Exploring Security Challenges in Generative AI for Web Engineering. Generative AI for Web Engineering Models, pp. 331-360. DOI: 10.4018/979-8-3693-3703-5.ch016. Online publication date: 27 Sep 2024.
• (2024) Introduction to Generative AI in Web Engineering. Generative AI for Web Engineering Models, pp. 297-330. DOI: 10.4018/979-8-3693-3703-5.ch015. Online publication date: 27 Sep 2024.
• (2024) Exploring the Ethical Implications of Generative AI in Healthcare. The Ethical Frontier of AI and Data Analysis, pp. 180-195. DOI: 10.4018/979-8-3693-2964-1.ch011. Online publication date: 12 Apr 2024.
• (2024) A Joint Survey in Decentralized Federated Learning and TinyML: A Brief Introduction to Swarm Learning. Future Internet, Vol. 16, No. 11, Article 413. DOI: 10.3390/fi16110413. Online publication date: 8 Nov 2024.
• (2024) Emergent AI-assisted discourse: a case study of a second language writer authoring with ChatGPT. Journal of China Computer-Assisted Language Learning. DOI: 10.1515/jccall-2024-0011. Online publication date: 1 Nov 2024.
• (2024) AI and Actors: Ethical Challenges, Cultural Narratives and Industry Pathways in Synthetic Media Performance. Emerging Media, Vol. 2, No. 3, pp. 523-546. DOI: 10.1177/27523543241289108. Online publication date: 8 Oct 2024.
