
1 Introduction

Science fiction gives us visions of the future and shows us stories of people using technologies that do not yet exist or enjoy widespread use. Because science fiction depends on existing technological progress and human imagination, science fiction media are inextricably linked to contemporary culture. Science fiction media, including video games, are thus bound to the temporal context of their creation [1], with depictions of technology in science fiction works following contemporary, real-world trends in research and development and featuring technologies that parallel the prevalent attitudes and interests of the time [2]. Science fiction influences the design of technology by inspiring the development of real-world parallels, fulfilling predictions and expectations for the evolution of similar technologies, providing a social context in which novel technologies can be placed and observed, and introducing new form factor and user behavior paradigms in existing and future interfaces [3].

1.1 HCI Relevance in Generalized Science Fiction

Science fiction often involves interactions between human (or otherwise intelligent) beings and new forms of technology and thus affords itself to HCI speculation. Pop culture and academia have acknowledged this opportunity, with HCI research citing science fiction more than ever [4]. Researchers are increasingly cataloging examples of technologies in science fiction and real life, gleaning information from fictional case studies for application to real-world case studies of novel interfaces, such as gesture recognition [5]. Scientists affirm the close relationship between science fiction works and real-world technology research and development, with scientific communities drawing inspiration and learning from science fiction examples [6].

Much like science fiction draws from and contributes to real-world technologies, HCI theories of practice in science fiction works can influence and be influenced by the study of real use cases. Science fiction media challenges HCI researchers and practitioners to imagine possible interface schemes and interaction methods that may accompany future technologies, and to consider the ethical implications of such developments [4].

1.2 HCI Relevance in Video Game Science Fiction

Most science fiction research appears to focus on film, television, and literature examples, but the same principles used to evaluate these science fiction phenomena are equally, if not more, applicable to video games, as games offer interactive demonstrations within situated contexts and provide opportunities for active exploration of technologies and their interactions with people [7]. Design fiction, as described by Joseph Lindley and Paul Coulton, creates a fictitious yet plausible environment within which ideas can be built and tested, and thus opened for discussion and consideration [8]. This same definition can be applied to science fiction and technology representations in video games. Since their inception, video games have brought an interactive dimension to the science fiction genre, offering players the opportunity to experience future technologies in simulated worlds. Video games provide immersive spaces where players can experience technologies through a digital agent (such as a character, avatar, or camera view) and use these technologies to achieve game-defined goals in designed user contexts while exploring the game environment. Players experience “spatial presence” when interacting with a game world, suspending their disbelief and perceiving objects in the virtual space as “real” [9]; thus, the experience of the player as a user in a game world is comparable to a user experience outside the game in a similar context.

The player actions involved with fictitious interfaces and technologies in these game spaces are analogous to the user tasks of technologies in the real world. Video games can act as testbeds for novel interfaces and emergent technologies, and demonstrate models for effective user-centered design. Many games explore different designs for addressing the same problem and together provide a more comprehensive view of possible avenues for new technology and the design of human interactions with said technology. Video games can be held to the same HCI principles and expectations as other interactive systems, such as ease of use and learnability, and have had to solve HCI-related problems to become as successful and prolific as they are today [10]. Like all science fiction, video games both inform and are products of their cultural context, and thus apply themselves just as well to real-world technological study as other types of science fiction media [11, 12]. The types of fictional technologies that emerge in game worlds over time reflect the progression of technological capabilities, the proliferation of emerging technologies, and the cultural values people associate with certain technologies.

Based on our understanding that science fiction influences real-world developments and vice versa, and that science fiction in video games is demonstrative of this relationship, a pre-existing model for the influential relationships between science fiction film ideas and real-world technologies can be appropriated for use in the context of video games [13]. Video games can provide high-fidelity, interactive mockups of technological systems and artifacts; by recognizing science fiction media, including games, as a means to design, build, and test future technologies, such media can be incorporated into real-world design processes and examined using the same standards [14]. Video game design can act as a vanguard for the development of novel interactive technologies [15] (Fig. 1).

Fig. 1.

Model describing the influence of science fiction in video games on the development of real-world technology counterparts, and how collaboration between technologists and designers of fictional technologies can influence both domains. Based on the model described by Schmitz, Endres, and Butz [13].

Fictional, interactive environments like game worlds can be designed to share commonalities with the real world - so players can bring with them their mental model of the world and its technologies - while providing plausible contexts that currently do not exist but could in the future. To make these fictional aspects more plausible, media producers often consult with domain experts and technologists to make informed decisions when designing the technologies and associated interactions in a given context [6].

Although current research examines various science fiction technologies using HCI principles, a set of base criteria for applying HCI principles to science fiction that accounts for the reciprocity and real-world relevance of this relationship appears to be missing [6]. To more effectively describe and understand science fiction as it pertains to HCI and vice versa, specifically in the powerful medium of video games, this paper seeks to formally categorize the dimensions of technologies and human-computer interfaces presented in games over the past 40 years, and to formulate a taxonomy for classifying video game representations of future technologies that can be analyzed and applied to real-world designs. This set of frameworks refers to contemporary understandings of human-computer interaction and lends itself to applications in conceptual analysis and user testing of emergent technologies. The following is a sample of the proposed design frameworks, using augmented reality as an applied example.

2 Definition of the HCI Framework

In order to apply these principles, we must determine working definitions for whom we regard as the “human” and for what we consider the “computer” in Human-Computer Interaction in fictional, yet real-world applicable, contexts. A real-world, human player is always involved in an HCI relationship with a video game, but the role and manner of interaction between the user(s) and technology can vary based on factors such as degree of separation or level of abstraction. The following subset of our frameworks can provide a means by which real and virtual users are described and the technologies they interact with are defined (Fig. 2).

Fig. 2.

Visual representation of the proposed framework defining the “human” and “computer” aspects of HCI in the context of video game world technologies, and the bidirectional relationship between them.

2.1 “Human” or Potential User – The Player

The “human” aspect can refer to any potential user of the technology: the player (as a real user, character/avatar, or abstract representation in virtual space) or a diegetic entity that belongs to the game world of interest. Real, human players are an inherent component of video game interactions but can interact with games as abstract entities, which have a greater degree of separation from the fictional game world, or as embodied entities, which are immersed in the game world. An abstract player entity does not have a virtual “body” or inhabit a virtual, “physical” manifestation in the game world. Abstract player entities are typically represented by cursors or other non-diegetic GUI elements, and can adopt either or both of the following viewpoints:

Third-Person Omniscient Point of View

Referred to colloquially as “God mode”, this perspective provides the human player with a wide viewing angle and the ability to see the game world with the least visual restriction. This perspective is commonly used in simulation and strategy games, where the player’s view is not unlike that of a military strategy map.

First-Person Point of View

This “on the ground”, more limited view frame trades a bird’s eye view for a more immersive experience, allowing the player to gain a perspective that is up close and personal. This point of view is usually tied to a character or entity that is more situated in the game world.

The player can also interact with game-related elements by embodying a virtual form, adopting the perspective and capabilities of a diegetic being. The virtual manifestation of the player can vary based on the player entity’s relationship to the game narrative or integration with the game world. An embodied player may be able to choose between first- and third-person points of view, and often navigates the virtual world as one of two virtual physical forms:

Character

This type of player entity is predetermined by game designers (ex. Mario, Master Chief) or offers only limited customizability. It is usually tied more closely to the game’s narrative structure, where an actor fills a role. Games vary in how many characters a player can embody and in the level of customization a player can apply to the appearance or performance statistics of their character.

Avatar

An avatar is more customizable, thus having the potential to more closely reflect the appearance of the human player, or conversely to be more abstract than other in-game entities. This type of entity is typically less integral to the game’s narrative.

2.2 “Human” or Potential User – Non-Player Entities

The user of a fictional video game technology can also be an agent other than the player; NPCs, or non-player characters, are native inhabitants of the game world and are controlled by the game through artificial intelligence or other game system protocols. These entities cohabit game worlds with players and can vary in their level of interactivity and responsiveness:

Ambient

Ambient NPCs occupy the game world but do not interact with the player in any meaningful capacity, similar to “extras” in film (ex. background crowds in Persona 5). These entities usually serve as aesthetic elements that provide context or plausibility to the game world.

Interactable

These NPCs are able to interact with the representation of the player in a meaningful capacity (ex. team members in Mass Effect, “Sims” in The Sims, combat enemies). These types of entities can be narrative characters or otherwise notably responsive or intelligent. Interactable NPCs are more likely to act as users of technology than ambient NPCs because they can use said technology as a means to interact with the player or player character.
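As an illustrative sketch (not a formal specification), the user taxonomy above can be encoded as a set of enumerations; the identifiers below are shorthand we introduce for illustration and are not canonical terms from the framework:

```python
from enum import Enum, auto

class Viewpoint(Enum):
    """Viewpoints available to player entities."""
    THIRD_PERSON_OMNISCIENT = auto()  # "God mode"; strategy-map view
    FIRST_PERSON = auto()             # "on the ground", situated view

class PlayerEmbodiment(Enum):
    """How the real, human player manifests in the game world."""
    ABSTRACT = auto()    # cursor or non-diegetic GUI; no virtual "body"
    CHARACTER = auto()   # designer-defined persona (ex. Mario, Master Chief)
    AVATAR = auto()      # customizable, potentially player-reflecting form

class NPCInteractivity(Enum):
    """Levels of interactivity for non-player entities."""
    AMBIENT = auto()       # background "extras"; aesthetic context only
    INTERACTABLE = auto()  # can meaningfully interact with the player

def likely_technology_user(npc: NPCInteractivity) -> bool:
    """Per the framework, interactable NPCs are the more likely technology users."""
    return npc is NPCInteractivity.INTERACTABLE
```

Encoding the taxonomy this way makes the categories mutually exclusive and machine-checkable, which is convenient when cataloging many games against the framework.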

2.3 “Computer” – Technology of Interest

The “computer” can be any technology artifact or system that is related to the game world, whether it is diegetic (situated in the world of the game, where virtual entities like player characters or NPCs can perceive it) or non-diegetic (visible only to the player, such as from a third-person omniscient viewpoint). Artifacts and objects are usually diegetic and “physically” situated in the game world, whereas GUI components are often non-diegetic or separate from the game world. Diegetic elements lend themselves to more concrete representation, while non-diegetic features tend to be abstract. In-game technologies can afford players different capabilities, facilitate game mechanics, serve as narrative devices, convey information, or serve as the primary means through which players interact with the game system as a whole.

2.4 Interaction

Interaction can refer to the real, human player’s interplay with the hardware and software of a game system, but for the scope of these frameworks, “interaction” will pertain primarily to the relationships between players, player characters, and NPCs with fabricated manifestations of technologies in or relating to fictional, simulated game worlds. The means, method, and flavor of these interactions are heavily influenced by the aspects outlined in the following technology framework.

3 Definition of the Technology Framework

These categorized dimensions of design choices draw from extant HCI descriptors and can be populated with examples from a large variety of games. The following is the proposed framework through which in-game future technologies can be understood, but derivative frameworks could be developed for specific examples of technologies, such as wearables, human-robot interfaces, artificial intelligence and personal assistants, volumetric projection and telepresence, and body augmentation using prosthetics and cybernetics, all of which have unique affordances, real-world inspirations, and ethical considerations (Fig. 3).

Fig. 3.

Visual representation of the proposed framework for classifying technologies in the context of game worlds, derived from HCI paradigms.

Contextualization.

How is the presence of the technology (in the game world or more generally) explained? Is the technology part of the lore of the game world (are characters aware of it), or is it only known to the player?

Information Type.

What type of information is being conveyed and in what form, if any? What is the purpose of the technology to the player and/or character(s)?

Level of Detail.

How complex is the technology and/or the information it conveys? Are multiple levels of detail available on demand?

Level of Abstraction.

How literal or abstract is the technology generally? How literal or abstract is it compared to the information it presents? What is the level of realism or plausibility of the technology, in the game world or otherwise?

Visual Integration with Environment.

Is the technology situated or “tethered” in the environment, or is it overlaid regardless of the visual scene?

Thematic Integration with Environment.

Is the technology designed to be cohesive with an overarching contextual theme, or is it thematically separate? Is the technology diegetic (belonging to the game world) or non-diegetic (set apart from the game world)?

Temporal Conditions.

Are aspects of the technology persistent, or are they conditional based on criteria such as time elapsed, temporal relevance, etc.?

Spatial Conditions.

Are aspects of the technology dependent on criteria such as viewing distance, proximity to objects or other users/NPCs, etc.?

Exclusivity.

Is the technology only accessible to a given user, at a given time or circumstance, or can it be accessed by others in the space? Do different users have different permissions or affordances?

Personalization/Customization.

Is the technology specific to the user or generalized to a wider user base? Is the experience of interacting with the technology the same across users? Does the technology change depending on personal user data?

Automation and Agency.

Is the technology or its features automatically activated, or are they controlled by deliberate user actions? What aspects of the technology can the user control, if any, and how?
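The eleven dimensions above can be sketched as a simple record type for cataloging in-game technologies. This is a minimal, hypothetical encoding; the field names are our own shorthand for the dimensions, and the values are free-text analyst notes rather than a controlled vocabulary:

```python
from dataclasses import dataclass, fields

@dataclass
class TechnologyClassification:
    """One in-game technology, described along the proposed framework."""
    name: str
    contextualization: str       # how the technology's presence is explained
    information_type: str        # what information it conveys, and its purpose
    level_of_detail: str         # complexity; is more detail available on demand?
    level_of_abstraction: str    # literal vs. abstract; realism/plausibility
    visual_integration: str      # situated/"tethered" vs. overlaid
    thematic_integration: str    # cohesive with theme? diegetic or non-diegetic?
    temporal_conditions: str     # persistent vs. time-conditional aspects
    spatial_conditions: str      # distance/proximity-dependent aspects
    exclusivity: str             # who can access it, when, with what permissions
    personalization: str         # user-specific vs. generalized; custom content
    automation_and_agency: str   # automatic vs. deliberately user-controlled

def dimension_names() -> list[str]:
    """The framework's dimensions, excluding the identifying name field."""
    return [f.name for f in fields(TechnologyClassification)][1:]
```

A record type like this lets analysts compare classifications across games field by field, or tabulate which dimensions a given genre tends to exercise.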

4 Example Application of the HCI and Technology Frameworks

To demonstrate how these frameworks can be applied to understanding technologies in game worlds, we will examine the design, employment, and human-computer interaction of augmented reality (AR) in a popular video game. Augmented reality is a common mechanic in video games that manifests in a variety of forms and addresses a multitude of user tasks and needs. AR analogues are arguably some of the oldest examples of technology in game worlds, as they primarily serve as the GUI and informational overlays the player uses to understand and navigate the game world. Aspects of augmented reality in video games can be classified by the metrics used to categorize AR systems in the real world. The following figure and example utilize the aforementioned technology framework to classify, and provide a lens through which we can understand, real-world augmented reality systems and their in-game counterparts (Fig. 4).

Fig. 4.

The proposed technology framework adapted to specifically examine augmented reality as a future technology, as executed and experienced in a game world setting.

4.1 Augmented Reality and the “Sheikah Slate” in the Legend of Zelda: Breath of the Wild

The Legend of Zelda: Breath of the Wild (Nintendo [16]) is an open-world, action-adventure game that takes place in the fictional kingdom of Hyrule: a game world that resembles the real, physical world in both appearance and behavior. In this game, the player controls the character Link as he completes quests and solves puzzles, using his physical abilities and an arsenal of equipment to save the land from dark magic.

Applying the HCI Framework

In this example, the “human” or user(s) in the HCI relationship is the human player embodied as an in-game character (Link), with some degree of third-person perspective looking inwards on the game via the screen. The player-controlled character is narratively tied to the game world and uses the technology in this context. The “computer”, or technology example, is the “Sheikah slate,” a handheld tablet device that allows the player (through the character) to manipulate aspects of the physical world and perceive information by means of augmented reality features.

Applying the Technology Framework

Contextualization.

The technology exists as part of the lore of the game world. It was developed by the Sheikah people of Hyrule, as explained by the game’s narrative and by design clues incorporated into the technology: the name “Sheikah slate” harkens to its creators, and the device is emblazoned with the Sheikah people’s symbol.

Information Type.

The purpose of the technology is to provide information to Link (and, by extension, the player) and a means of interacting with items in the physical game world through abilities like magnetism, ice formation, and time stopping. Information is conveyed on the device screen (which doubles as the general game GUI for the human player) and through augmented reality artifacts (which are overlaid on and integrated with the environment).

Level of Detail.

Set levels of detail are available for most features (such as placement indicators for ice blocks and arrows showing force applied to time-stopped objects). Some features support more detail on demand and the ability for the player to add/remove detail (the map feature shows more detail as the zoom level is increased, and the player can set or remove waypoints that appear on the map).

Level of Abstraction.

The physical form of the Sheikah slate in the game world is fairly literal (a tablet object with a screen). The augmentations produced by the Sheikah slate vary in level of abstraction, from arrows depicting degree and direction of an applied physical force, to holographic representations of objects showing where they can be placed.

Visual Integration.

Some AR elements are situated in the environment (overlays, highlights in relation to objects being manipulated, and environmental effects) while others are overlaid as general GUI elements (a weather indicator, item shortcuts, an icon indicating the currently equipped feature). Screen elements, such as the map and photo album, are overlaid on the player’s screen but understood to be viewed by Link on the Sheikah slate in-game.

Thematic Integration.

The device and its features are designed to coincide thematically with the other Sheikah technology in the game world, primarily in aesthetics like color, shape, and iconography.

Temporal Conditions.

Passive information, such as the weather forecast and item shortcuts, is present continuously during gameplay unless actively disabled by the human player via the in-game menu system. These pervasive elements are primarily AR overlays. Contextually relevant information, such as a selected object of interest while using a feature (for example, a visual force field around a time-stopped object), appears as long as the current feature is actively in use. If the augmentation is not tied to the Sheikah slate but is an environmental indicator of state, it persists as long as the object is within close interaction range and viewing distance.

Spatial Conditions.

Passive information is shown on the sides of the screen regardless of spatial criteria. Feature-specific indicators are restricted to the operable space in which said feature can be used (for example, only objects that can be manipulated with the magnetism feature are highlighted while the magnetism tool is active).

Exclusivity.

The technology can be used by whoever is currently in possession of the device, but it is primarily used by Link, who is controlled by the player. Information and features appear only to be accessible to the immediate user unless deliberately shared (for example, showing the Sheikah slate’s screen to another person in the vicinity). All augmentations are accessible by the player, but not all are presumed to be perceived by Link.

Personalization/Customization.

The device and associated interface appear to be designed for a general audience, but some features allow for the user to create custom content, such as photos or map markings.

Automation and Agency.

Pervasive augmentations displaying general information such as the game world clock appear automatically but can be turned off by the player in the game’s settings menu. The user of the Sheikah slate can control when and how most features are used, but augmentations associated with the active feature appear and disappear automatically (Fig. 5).

Fig. 5.

Examples of AR elements in the game space are outlined. A, D, and F are perpetual, non-diegetic HUD elements, while B, C, and E are situated in the environment, temporary to context, and visible to the playable character. Situated augmentations vary in level of abstraction, from halos around objects (B) to sparkles reflecting sunlight that draw attention (E). A–D are presumed to originate from the Sheikah slate technology, while E is an example of a thematic environmental cue that can also be considered AR in the holistic scope of the game world.
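As a compact illustration, the classification in this section can be condensed into a keyed record, one entry per framework dimension. The keys and abridged values below are our own shorthand summarizing the analysis above, not canonical terms:

```python
# Abridged encoding of the Sheikah slate classification, keyed by the
# framework's dimensions (plus a "name" identifier).
sheikah_slate = {
    "name": "Sheikah slate (The Legend of Zelda: Breath of the Wild)",
    "contextualization": "diegetic; part of game lore, built by the Sheikah",
    "information_type": "map/GUI information plus world-manipulation abilities",
    "level_of_detail": "set per feature; map zoom and waypoints on demand",
    "level_of_abstraction": "literal tablet; augmentations vary in abstraction",
    "visual_integration": "mix of situated overlays and screen-fixed GUI",
    "thematic_integration": "cohesive with Sheikah aesthetics and iconography",
    "temporal_conditions": "passive info persistent; feature cues conditional",
    "spatial_conditions": "feature indicators limited to operable range",
    "exclusivity": "usable by the device holder; primarily Link/the player",
    "personalization": "general-audience design; custom photos and markers",
    "automation_and_agency": "automatic, toggleable overlays; manual features",
}

def summarize(record: dict) -> str:
    """Render a classification as one 'dimension: note' line per dimension."""
    return "\n".join(f"{k}: {v}" for k, v in record.items() if k != "name")
```

Tabulating classifications in this form makes it straightforward to place the Sheikah slate alongside other in-game AR systems, or alongside real-world AR products described with the same dimensions.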

5 Conclusion

These categorized dimensions of design choices can be populated with examples from a large variety of games. Frameworks could be developed for other types of technologies such as wearable technology, human-robot interaction, artificial intelligence and personal assistants, volumetric projection and telepresence, and body augmentation using prosthetics and cybernetics, all of which have unique affordances, real-world inspirations, and ethical considerations.

The nature of technologies in games is heavily influenced by gameplay and artistic choices that may not prioritize practicality or accurately reflect real-world constraints; however, these interfaces are designed to engage players, convey information, and provide players with the resources needed to meet an appropriate challenge. Game designers and technology developers identify the same needs in their respective users. Technologies modelled in games are designed within a given context and carry affordances and limitations analogous to their real-world counterparts. These parallels, along with a unique interactive quality, lend video games to the use of human-computer interaction research methods and frameworks - such as heuristic evaluation and contextual inquiry - that can be used to evaluate theoretical designs and apply findings based on this research to real-world analogues.

Video games are artifacts of their real-world context and reflect the values and current technologies of a given culture and time period; designers are pushed to imagine what future technologies will be like, how they will be used in given contexts, and what ethical considerations are involved based on contemporary knowledge and best practices. Video games serve as compelling test cases for new technologies and provide the means for interacting with them in environments and contexts beyond the present, empowering us to make informed design decisions and proceed with technological development in an effective, accessible, and ethical manner as advancements in human-computer interaction continue to progress.