1 Introduction
Games and gaming present a very particular type of ethical complexity: they create an additional plane where action and interaction occur, with harms and benefits manifesting both within the game world and in reality. They place players in relation to, and sometimes in direct interaction with, many other stakeholders, such as game designers, gaming companies, and other third parties. This creates a multifaceted and multiagent ethical landscape, where the boundaries between virtual actions and real-life consequences can be blurred. Introducing artificial intelligence (AI) into this already complex system amplifies existing ethical risks and benefits.
As an area of practice, the ethics of games and gaming is not well established. There are no standard procedures or a common framework to identify and mitigate ethical concerns that arise from gaming. Instead, these issues are discussed piecemeal through business ethics, research ethics and bioethics, and technology ethics. While these domains of applied ethics jointly cover the relevant concerns, they do not provide a well-structured framework or clear standards for practice. We argue that the emerging field of responsible AI, with its tools, practices, organizational governance, and approach to user agency, could fill this gap and provide the gaming industry with guidance as it navigates big data and AI.
2 AI's Impact on Ethics in Gaming
Games allow individuals to access a wide range of experiences, engaging them (in varying degrees) emotionally and physically. They can help improve cognitive abilities, strengthen problem-solving and strategic planning skills, and create social connections and a feeling of belonging for players. They also allow designers to build narratives that unfold interactively, expanding storytelling into new artistic genres.
Even without incorporating AI, games present a number of well-established ethical concerns: the line between the flow of gaming and addiction can be thin; players' privacy can be reduced to improve the gaming experience; games can be biased; it can be difficult to balance protecting freedom of speech in games against allowing toxic behavior; and in-game violence can influence players' real-life attitudes toward violence, to name but a few.
Introducing AI into this already complex ethical landscape only raises the stakes. While AI can be used to detect and reduce bias and toxic behavior, it also poses added risks, as these technologies have repeatedly been shown to reproduce bias and toxicity. AI-driven personalization exposes players to manipulation, as it increases the ability to covertly steer each player's behavior. Combined with other AI-accelerated technologies such as extended reality and biotracking, games provide a setting where a player's most intimate information can be obtained and used, exposing them to massive privacy violations and harm.
While AI, and in particular generative AI (genAI), may open doors for new content creators, it also gives rise to ethical issues around art and creativity. Multiple lawsuits already demonstrate how genAI can result in intellectual property infringement and violations of artistic ownership. By using genAI systems, game designers further limit their own control over the game and its safety, creating risks of unforeseen harm to players.
Integrating AI into the gaming industry has the potential to create a new generation of great games, but it also increases the ethical risks present in the conceptualization, design, development, dissemination, and monetization of games. With more personalized and unpredictable in-game interactions between players and algorithms, game designers must be more cognizant of potential harms and disparities.
3 Responsible AI for Gaming
Some of these risks fall squarely within the purview of existing organizational teams, such as legal and security. However, these risks are not isolated from one another; in fact, they often need to be balanced against other risks and benefits. For example, while one could reduce toxicity by surveilling in-game conversations, the surveillance itself could then constitute a privacy violation. A comprehensive understanding and navigation of ethical risks therefore requires risk assessments and mitigations to happen not in silos but within a broader ethics framework and governance structure.
A comprehensive ethics risk and impact assessment would involve identifying the stakeholders, estimating the impact and externalities, laying out the technically feasible actions, and conducting a nuanced assessment of each ethical concern and the tradeoffs among them. Such an assessment can be conducted using AI ethics principles. A notable part of the responsible AI discourse and practice worldwide, AI ethics principles are derived from the traditional bioethics principles, which are in turn based on fundamental theories in moral and political philosophy. While these principles offer no resolution on how to deal with tradeoffs, they do provide a comprehensive starting point. Using an AI ethics risk and impact assessment tool could enable game designers and developers to assess both the game and the incorporated AI systems for their impact on individual autonomy and agency, their risks of harm and potential benefits, and the distribution of burdens and benefits within the target audience as well as the broader society [1].
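To make this concrete, the sketch below shows one hypothetical way such an assessment could be structured as data, using the bioethics-derived principles as assessment lenses. It is a minimal illustration, not an existing tool: all class names, field names, and the severity-times-likelihood scoring heuristic are our own assumptions.

```python
# A minimal, illustrative sketch of an ethics risk and impact assessment
# record. All class and field names are hypothetical; a real assessment
# would be far richer and tied to organizational governance.
from dataclasses import dataclass, field
from enum import Enum


class Principle(Enum):
    # Bioethics-derived AI ethics principles used as assessment lenses.
    AUTONOMY = "respect for autonomy"
    BENEFICENCE = "beneficence"
    NON_MALEFICENCE = "non-maleficence"
    JUSTICE = "justice"


@dataclass
class Concern:
    principle: Principle               # principle the concern falls under
    description: str                   # e.g., "covert behavioral steering"
    affected_stakeholders: list[str]   # players, designers, third parties...
    severity: int                      # 1 (minor) .. 5 (severe)
    likelihood: int                    # 1 (rare) .. 5 (near certain)
    mitigations: list[str] = field(default_factory=list)

    def risk_score(self) -> int:
        # Deliberately simple severity x likelihood heuristic; real
        # assessments also weigh tradeoffs qualitatively.
        return self.severity * self.likelihood


@dataclass
class Assessment:
    game: str
    ai_systems: list[str]              # e.g., ["matchmaking", "genAI dialogue"]
    concerns: list[Concern]

    def open_risks(self, threshold: int = 9) -> list[Concern]:
        # High-scoring concerns with no mitigation need escalation.
        return [c for c in self.concerns
                if c.risk_score() >= threshold and not c.mitigations]
```

Even a structure this simple forces the team to name stakeholders and mitigations explicitly, which is where the assessment's value lies; the scoring itself is secondary.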
Such ethics risk and impact assessments are only one part of an ethical design and development workflow. As AI systems are developed or integrated into game design, designers and developers make numerous ethically loaded decisions. These decisions range from which AI systems to integrate into a game environment, to which data to collect and use, which AI model to choose, which fairness or bias metrics to employ, and which privacy measures to implement. To make these ethical decisions well, multiple responsible AI tools and practices, such as error analysis and bias testing (one is sketched below), should be integrated into the game development workflow, and, following the ethics-by-design approach, each ethical decision should be converted into a design choice and implemented.
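As one example of how a bias test could slot into a development pipeline, the following sketch computes a standard demographic parity gap over the outputs of a hypothetical in-game toxicity flagger. The data shape, group names, and threshold are all illustrative assumptions; the appropriate fairness metric and threshold depend on the system and its context.

```python
# Illustrative bias test: demographic parity gap for a hypothetical
# in-game toxicity flagger. Names and threshold are assumptions.
from collections import defaultdict


def flag_rate_by_group(records):
    """records: iterable of (group, flagged) pairs, flagged in {0, 1}."""
    totals, flags = defaultdict(int), defaultdict(int)
    for group, flagged in records:
        totals[group] += 1
        flags[group] += flagged
    return {g: flags[g] / totals[g] for g in totals}


def demographic_parity_gap(records) -> float:
    # Difference between the highest and lowest per-group flag rates.
    rates = flag_rate_by_group(records)
    return max(rates.values()) - min(rates.values())


# Example: fail a build step if the gap exceeds a team-chosen threshold.
THRESHOLD = 0.6  # illustrative only; real thresholds are context-dependent
sample = [("group_a", 1), ("group_a", 0), ("group_b", 1), ("group_b", 1)]
assert demographic_parity_gap(sample) <= THRESHOLD, "bias gap too large"
```

Run as a gate in continuous integration, a check like this turns "test for bias" from an aspiration into a concrete, repeatable design decision.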
Well-structured and well-integrated responsible AI workflows already exist that can ensure that all relevant ethical concerns are taken into account while they are still actionable and that the necessary actions are indeed taken. Applying these workflows would minimize and mitigate the risks a game poses, enhance the agency of players, and improve fairness within the game. It would also provide game designers and developers with a comprehensive understanding of the full scope of risks and benefits their game poses.
4 From Age Labels to Risk Labels
As many measures of applied ethics and responsible AI frameworks rest on notions and practices of agency and informed consent, one as-yet unanswered question is when, how, and how much of this information to communicate to players to enable informed decision making without spoiling the thrill of the game. Consider the simple example of a dynamic difficulty adjustment system (sketched below): revealing its existence may lessen a player's sense of accomplishment, create a feeling of unfairness, or invite attempts to min-max and exploit the system to the detriment of other players' enjoyment. Currently, the gaming industry's approach to information disclosure is heavily modeled on other entertainment industries such as film and television, using an age-based rating system. Depending on the local legislative or self-regulatory framework, ratings may add details regarding the type of potentially disturbing content, such as violence, sexuality, or use of substances, or, more recently, elements of gambling or microtransactions.
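To ground the dynamic difficulty adjustment example, here is a minimal sketch of the kind of hidden loop at issue. The window size, target success rate, step size, and clamping bounds are hypothetical tuning parameters, not drawn from any shipped game.

```python
# Minimal sketch of a dynamic difficulty adjustment (DDA) loop of the
# kind discussed above. All parameters are hypothetical tuning choices.
from collections import deque


class DifficultyAdjuster:
    def __init__(self, target=0.5, window=20, step=0.05):
        self.target = target                   # desired player success rate
        self.step = step                       # size of each adjustment
        self.outcomes = deque(maxlen=window)   # 1 = success, 0 = failure
        self.difficulty = 1.0                  # multiplier on challenges

    def record(self, success: bool) -> None:
        self.outcomes.append(1 if success else 0)
        if len(self.outcomes) < self.outcomes.maxlen:
            return  # not enough data to adjust yet
        rate = sum(self.outcomes) / len(self.outcomes)
        if rate > self.target:
            self.difficulty += self.step       # player cruising: raise it
        elif rate < self.target:
            self.difficulty -= self.step       # player struggling: ease off
        self.difficulty = max(0.5, min(2.0, self.difficulty))
```

The ethical tension is plain in the code: the loop works precisely by covertly steering each player's experience, which is what makes its disclosure, and hence informed consent, so delicate.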
We argue that these approaches, often rooted in regulatory demands, are insufficient, as they overlook many types of risk that are specific to the gaming industry. Instead, the disclosure system should be replaced by one that conveys the types of risks a game poses, including its AI and data practices. Here we again return to ideas developed within the responsible AI field, such as model cards and AI and data labels [2]. Such labels would provide the consumer with an overview of a much broader set of risks, not limited to what happens inside the game but also covering those that take place behind the scenes, such as privacy risks. While we imagine a simplified overview label for easy decision making, it can and should be accompanied by access to detailed information, using efficient and effective user interface practices.
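As an illustration of what such a label might contain, the sketch below borrows the spirit of model cards and data labels. Every field name, category, and value is a hypothetical proposal on our part; no such standard currently exists in the gaming industry.

```python
# Hypothetical machine-readable game risk label, in the spirit of model
# cards and data labels. All fields and values are illustrative proposals.
import json

risk_label = {
    "game": "ExampleQuest",                # hypothetical title
    "age_rating": "16+",                   # retains the familiar age label
    "content_risks": ["violence", "microtransactions"],
    "ai_systems": [
        {
            "name": "dynamic difficulty adjustment",
            "risk": "covert personalization of challenge",
            "player_control": "can be disabled in settings",
        },
        {
            "name": "genAI dialogue",
            "risk": "unmoderated generated content",
            "player_control": "adjustable content filter",
        },
    ],
    "data_practices": {
        "collected": ["voice chat", "play patterns"],
        "shared_with_third_parties": True,
        "retention": "24 months",
    },
}

print(json.dumps(risk_label, indent=2))  # the detailed view behind the label
```

A simplified front-of-box summary could be derived from a record like this, while the full structure sits behind a details view for players who want it.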
5 Conclusion
There remains a significant difference between how we approach innovation, design, and technology, and how we approach ethics. The perception seems to be that ethics is paperwork, a slow and burdensome procedure. But ethics is an integral and inevitable part of innovation and design, as there is simply no decision that does not carry some ethical implications. We have to acknowledge that with the increasing integration of AI, the risk of harm that games pose increases immensely. As such, we cannot rely on simple ratings and terms and conditions. We have to approach ethics systematically and with the same innovative rigor, implementing ethics-by-design workflows, integrating responsible AI tools and practices, and allowing players to make informed decisions with relevant information.
Acknowledgments
We would like to thank Tomo Lazovich for their contribution to the earlier discussions and research on this topic.