Advances in Computers, Volume 77, 2009, Pages 79–115

Chapter 3: Playing with All Senses: Human–Computer Interface Devices for Games

https://doi.org/10.1016/S0065-2458(09)01203-0

Abstract

For a long time, computer games were limited to input and output devices such as the mouse, the joystick, the typewriter keyboard, and the TV screen. This has changed dramatically with the advent of inexpensive and versatile sensors, actuators, and visual and acoustic output devices. Modern games employ a wide variety of interface technology, a variety that is bound to broaden even further. This creates a new task for game designers: They have to choose the right option, possibly combining several technologies to let one compensate for the deficiencies of another or to achieve more immersion through new modes of interaction. To facilitate this endeavor, this chapter gives an overview of current and upcoming human–computer interface technologies, describes their inner workings, highlights applications in commercial games and game research, and points out promising new directions.

Introduction

The quick success of Nintendo's remote controller for the Wii gaming console (see Fig. 1) after its launch in 2006 did not lead to the demise of the mouse and the joystick as game input devices. It exposed, however, a dire need to explore other options for user interface devices. Even though its technology as such cannot be considered novel, the controller gained breakthrough popularity through its ingenious combination of different sensors in one wireless box, the availability of many games that employ the new options, its very affordable price point (about US $40)—and its “hackability.” Soon after its release, amateur software developers were able to reverse engineer the device's protocols to use it with standard desktop and notebook computers.

In the wake of this success, more and more novel game input devices have become available, turning technology that used to be fragile and expensive into robust and inexpensive packages. For instance, haptic input/output became affordable with the Novint Falcon. And at press time of this book, Emotiv is expected to sell the EPOC, a brain-to-computer interface based on electroencephalography (EEG), for about US $300.

Several factors are key to this development. First, mass production drives down the unit cost of the hardware, even though its development may be expensive; this effect also allowed consumer-level graphics cards to take the lead over highly specialized professional 3D rendering subsystems. Second, embedded processors within the devices as well as the computers' central processing units have gained considerably in power, which facilitates tasks such as image processing. Third, accompanying technologies such as highly integrated chips (custom or configurable) and wireless connections have matured along with the processors and sensors.

Another driver of the trend toward novel game interfaces is mobile gaming as offered on cellular phones and personal digital assistants (PDAs). Even though these devices comprise keyboards (possibly on-screen), displays, and joysticks, their small form factor severely hampers usability. Additional ways of input and output are sought to enhance interaction.

All of this has created a new playing field (pun intended) for tangible interaction [1] and multimodal interfaces [2], with a particular focus on pervasive games [3], that is, computer games that no longer detach the player from the physicality and the social relations of the real world. In the long run, the devices that are now becoming available for gaming will also influence other strands of computer applications, possibly starting with music creation (see Bierbaum et al. [4] for an example and Blaine [5] for a survey), audio and video playback (see, e.g., Tokuhisa et al. [6]), and pedestrian navigation (see, e.g., Stahl [7]). This effect, too, has been visible with 3D graphics accelerators. Now that they are a standard facility in almost every computer, they are increasingly employed for nongame purposes as well, ranging from visually sophisticated windowing interfaces such as Apple OS X's Aqua and Windows Vista's Aero Glass to complex scientific applications based on frameworks such as Nvidia CUDA.


Typology

There are a number of ways to structure the vast field of user interface devices. A fundamental characteristic is the physical quantity upon which a device operates; this quantity may be measured (input) and/or changed (output) by the device, see Table I. Note, however, that the assignment of a given device to a physical quantity may be ambiguous: Does an optical computer mouse not rather sense the motion of the light pattern reflected from the surface underneath than the position of the user's hand?

Buttons, Keys, and Keyboards

Even something as simple as a pushbutton is employed by game designers in a large number of ways. At one extreme, a user may hold single buttons in his or her hand; at another, he or she may have to step on a floor panel containing a dozen buttons, as in the game “Dance Dance Revolution” (Konami, 1998). Even the mockup musical instrument for “Guitar Hero” (Red Octane, 2005) can be considered a number of buttons mounted in a sophisticated body. The game's objective is to press the right buttons at the right moments, as prompted by notes scrolling across the screen.
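Both games build on the same core mechanic: a button press only counts if it falls within a tolerance window around a note's scheduled time. The following Python sketch illustrates such timing-window scoring; the window sizes, function names, and judgment labels are illustrative, not taken from either game.

```python
# Minimal sketch of timing-window scoring as used in rhythm games.
# Window sizes and judgment labels are illustrative placeholders.

PERFECT_WINDOW = 0.05   # seconds around the note's scheduled time
GOOD_WINDOW = 0.15

def judge_press(press_time, note_times, already_hit):
    """Return (judgment, note_index) for a button press at press_time."""
    best_index, best_error = None, None
    for i, t in enumerate(note_times):
        if i in already_hit:
            continue
        error = abs(press_time - t)
        if best_error is None or error < best_error:
            best_index, best_error = i, error
    if best_index is None or best_error > GOOD_WINDOW:
        return ("miss", None)           # press too far from any note
    already_hit.add(best_index)
    judgment = "perfect" if best_error <= PERFECT_WINDOW else "good"
    return (judgment, best_index)

# Example: notes at 1.0 s and 1.5 s; a press at 1.04 s is a "perfect".
hit = set()
print(judge_press(1.04, [1.0, 1.5], hit))   # ('perfect', 0)
```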

Mice, Joysticks, Faders, and Similar

Since its invention by Douglas Engelbart in the 1960s, the operating principle of the computer mouse has not changed much, with the biggest addition to the original design possibly being the scroll wheel. Computer mice sense a position set by the user's hand. Thus, they can be grouped together with the following devices (a sketch contrasting relative and absolute position sensing follows the list):

  • Trackballs (inverted mice; much easier to handle in mobile settings)

  • Joysticks (borrowed from an aircraft's cockpit and suited well for corresponding steering tasks)

  • Rotary knobs (as in ball-and-paddle games such as “Pong”)
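The devices in this family differ in whether they report relative motion (mouse, trackball) or an absolute deflection (joystick, fader, rotary knob). The following Python sketch contrasts the two in a single cursor update; the device-reading functions and the speed constant are hypothetical stand-ins.

```python
# Sketch: relative devices accumulate deltas into a position;
# absolute devices map a deflection to a rate of change.
# read_mouse_delta() and read_joystick_axis() are hypothetical stand-ins.

SCREEN_W, SCREEN_H = 1920, 1080
JOY_SPEED = 600.0  # pixels per second at full deflection (illustrative)

def update_cursor(pos, dt, read_mouse_delta, read_joystick_axis):
    x, y = pos
    dx, dy = read_mouse_delta()      # relative: counts since last poll
    jx, jy = read_joystick_axis()    # absolute: deflection in [-1, 1]
    x += dx + jx * JOY_SPEED * dt    # joystick deflection acts as velocity
    y += dy + jy * JOY_SPEED * dt
    # clamp to the screen, as window systems do for mice
    x = min(max(x, 0), SCREEN_W - 1)
    y = min(max(y, 0), SCREEN_H - 1)
    return (x, y)
```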

Pen and Touch Input

Pen-based interaction with a screen dates back to the 1950s, even further back than the invention of the computer mouse. Pen-based input never became popular with games on standard-sized consoles and PCs, even though graphics tablets are readily available as the standard computer peripheral for graphic design. Not even the advent of Tablet PCs, in which display screen and graphics tablet are united, changed this reluctance. “Crayon Physics Deluxe” by Petri Purho hints at the untapped potential: the player sketches objects that immediately turn into physical bodies in the game world.

Inertial Sensors

Inertial sensors respond to changes in motion. This category of input devices comprises two major classes: accelerometers, which sense linear acceleration, and gyroscopes, which sense angular velocity.

Accelerometers measure how quickly and in which direction the velocity changes at the current point in time. By the laws of physics, gravitation (the attraction of all objects toward the earth's center) is invariably mingled with mechanical force: by physical experiments alone, one cannot tell gravitational pull apart from acceleration; this is Einstein's equivalence principle.
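For tilt sensing, games therefore typically low-pass filter the accelerometer signal to recover the slowly varying gravity direction, discarding fast motion-induced components. A minimal Python sketch of this approach follows; the smoothing constant and axis conventions are illustrative assumptions.

```python
import math

# Sketch: estimate tilt (pitch) from a 3-axis accelerometer.
# A low-pass filter separates the slowly varying gravity vector
# from fast motion-induced acceleration. ALPHA is illustrative.

ALPHA = 0.1   # smoothing factor; smaller = smoother but laggier

gravity = [0.0, 0.0, 1.0]   # running estimate of the gravity direction

def update_tilt(accel):
    """accel: raw (x, y, z) reading in g. Returns pitch in degrees."""
    for i in range(3):
        gravity[i] = (1 - ALPHA) * gravity[i] + ALPHA * accel[i]
    gx, gy, gz = gravity
    # pitch: rotation about the x-axis, from the filtered gravity vector
    return math.degrees(math.atan2(gy, math.sqrt(gx * gx + gz * gz)))

# Example: device held so gravity splits evenly between y and z.
for _ in range(50):
    angle = update_tilt((0.0, 0.7, 0.7))
print(round(angle))   # approaches 45 degrees
```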

Cameras

Cheap webcams have become commonplace for video telephony and conferencing applications. More and more notebook computers are sold with integrated cameras that capture the user's face. Many mobile phones and Nintendo's mobile console DSi (see Fig. 5) even contain two cameras: one as a standard digital still camera, the other directed at the user when he or she looks at the display.

The most lightweight use of a camera in a game may be to photograph bar codes or more advanced graphical codes such as visual tags.
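Beyond reading codes, the classic camera-game technique, popularized by Sony's EyeToy, is motion detection by frame differencing: pixels that change markedly between consecutive frames indicate where the player moved. The following Python sketch assumes frames arrive as 2D arrays of grayscale values; the threshold is illustrative.

```python
# Sketch: motion detection by frame differencing, the technique behind
# EyeToy-style camera games. Frames are 2D lists of grayscale values
# in 0..255. THRESHOLD is an illustrative constant.

THRESHOLD = 30

def motion_in_region(prev_frame, cur_frame, region):
    """Fraction of changed pixels inside region = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = region
    changed = total = 0
    for y in range(y0, y1):
        for x in range(x0, x1):
            if abs(cur_frame[y][x] - prev_frame[y][x]) > THRESHOLD:
                changed += 1
            total += 1
    return changed / total if total else 0.0

# A game would place virtual buttons as screen regions and trigger one
# when, say, more than 20% of its pixels changed:
# if motion_in_region(prev, cur, button_region) > 0.2: hit_button()
```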

Specific Position and Orientation Sensors

Position and orientation or direction sensing is so vital to human–computer interfaces in general and to gaming interfaces in particular that many more approaches than mouse, joystick, and their ilk as well as camera-based solutions have been invented. One of the interfaces popular with video games in the 1980s no longer works with current screens: the light gun. It sensed a bright mark on the screen or—more precisely—the dot of the scanning beam inside a television's tube; this dot is too short-lived to be detected on modern LCD and plasma screens, which do not form the picture with a scanning beam.
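The light gun's trick is worth spelling out: because a CRT paints the picture with a single beam, the moment at which the gun's photodiode sees the flash reveals which pixel was being drawn, and hence where the gun points. The following Python sketch shows the timing arithmetic with illustrative NTSC-like constants.

```python
# Sketch: recover the aimed-at pixel from CRT beam timing, as light
# guns did. Timing constants are illustrative NTSC-like values.

LINE_TIME = 63.6e-6      # seconds per scanline
VISIBLE_LINES = 240
VISIBLE_TIME = 52.6e-6   # visible portion of one scanline
H_BLANK = LINE_TIME - VISIBLE_TIME
SCREEN_W, SCREEN_H = 320, 240

def gun_position(t_since_vsync):
    """Map the photodiode trigger time (s after vertical sync) to a pixel."""
    line = int(t_since_vsync / LINE_TIME)
    t_in_line = t_since_vsync - line * LINE_TIME - H_BLANK
    if not (0 <= line < VISIBLE_LINES) or t_in_line < 0:
        return None                      # beam was in a blanking interval
    x = int(t_in_line / VISIBLE_TIME * SCREEN_W)
    y = int(line / VISIBLE_LINES * SCREEN_H)
    return (x, y)
```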

Displays

The form factors of visual displays range from electronic goggles to cinema screens and room lighting. This section first looks at screens and projections and then at goggle‐type personal displays. Stereoscopy (colloquially often called “real 3D”) is treated in Section 9.3.

Audio Input

The primary use of audio input is player-to-player communication in games. In addition, games such as “SingStar” (Sony Computer Entertainment, 2004) have popularized a karaoke-style mixing of the user's singing with accompanying music, awarding scores based on an automatic analysis of rhythmic and tonal quality as well as expressivity. In principle, games could also easily make use of speech recognition for hands-free command input; the necessary recognition routines are part of today's standard operating systems.
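Karaoke-style scoring hinges on pitch detection: estimating the fundamental frequency of the singer's voice and comparing it to the target melody. The following Python sketch uses plain autocorrelation, one of several possible estimators and not necessarily the one used in “SingStar”; all constants are illustrative.

```python
import math

# Sketch: autocorrelation pitch detection for karaoke-style scoring.
# samples: a list of audio samples; rate: sampling rate in Hz.

def estimate_pitch(samples, rate, fmin=80.0, fmax=500.0):
    """Return the estimated fundamental frequency in Hz."""
    best_lag, best_corr = None, 0.0
    for lag in range(int(rate / fmax), int(rate / fmin) + 1):
        corr = sum(samples[i] * samples[i - lag]
                   for i in range(lag, len(samples)))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return rate / best_lag if best_lag else 0.0

# Example: a pure 220 Hz tone is recovered almost exactly.
rate = 44100
tone = [math.sin(2 * math.pi * 220 * t / rate) for t in range(2048)]
print(estimate_pitch(tone, rate))   # ~220 Hz
```

A scoring rule could then, for instance, award a point whenever the sung pitch stays within half a semitone of the target note.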

Audio Output

Audio output may be employed for better immersion, like the soundtrack of a movie. It may, however, also serve as the main medium of output. Audio-only games [47] can be used by visually impaired persons, or in places where watching a computer screen is distracting (e.g., when walking across the street) or too clumsy (e.g., when sitting in a bus).

Television sets and computers are used both to play games and to watch movies. Due to their support by DVD-based videos, multichannel surround-sound setups have become widespread, and games can exploit them as well.
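Whether used for immersion or as the main output medium, game audio gains much of its value from spatialization: panning a source between channels and attenuating it with distance so the player can localize it by ear. A minimal stereo Python sketch using the common constant-power pan law follows; the falloff model and constants are illustrative.

```python
import math

# Sketch: stereo spatialization of a sound source for an audio(-only)
# game: constant-power panning by azimuth plus distance attenuation.

def spatialize(sample, source_x, source_y, listener_x, listener_y):
    """Return (left, right) sample values for a mono input sample."""
    dx, dy = source_x - listener_x, source_y - listener_y
    distance = math.hypot(dx, dy)
    gain = 1.0 / max(distance, 1.0)            # inverse-distance falloff
    azimuth = math.atan2(dx, dy)               # 0 = straight ahead
    # map azimuth [-pi/2, pi/2] to a pan angle [0, pi/2]
    pan = (max(-math.pi / 2, min(math.pi / 2, azimuth)) + math.pi / 2) / 2
    left = sample * gain * math.cos(pan)       # constant-power pan law
    right = sample * gain * math.sin(pan)
    return (left, right)

# A source far to the right yields mostly right-channel output:
print(spatialize(1.0, 10.0, 0.0, 0.0, 0.0))   # ~ (0.0, 0.1)
```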

Tactile, Haptic, and Locomotion Interfaces

All interfaces that employ touch can be called “tactile,” which at its extreme includes devices as simple as a sensor operated by touching. Generally, the term “haptic” is reserved for more complex devices that also possess kinesthetic aspects such as position sensing. As with audio output devices, there are two major strands of applications of tactile and haptic devices: First, they can improve immersion (e.g., one can grasp and feel virtual objects); second, they provide a different output channel altogether, one that does not compete for the player's visual attention.
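The canonical example of haptic rendering for devices such as the Novint Falcon is the virtual wall: while the probe penetrates a surface, the device pushes back with a spring force proportional to the penetration depth. A one-dimensional Python sketch follows; the stiffness value and the device API in the comments are assumptions.

```python
# Sketch: 1-D "virtual wall" haptic rendering. The wall occupies x < 0;
# while the probe penetrates it, a spring force pushes it back out.
# STIFFNESS is illustrative; real devices are tuned per hardware.

STIFFNESS = 500.0   # newtons per meter

def wall_force(probe_x):
    """Force (N) to command to the device for probe position x (m)."""
    if probe_x >= 0.0:
        return 0.0                  # free space: no force
    penetration = -probe_x
    return STIFFNESS * penetration  # Hooke's law: F = k * depth

# The haptic loop runs at a high, fixed rate (often near 1 kHz),
# reading the position and commanding the force on each tick:
# while True:
#     x = device.read_position()       # hypothetical device API
#     device.set_force(wall_force(x))  # hypothetical device API
```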

Kinetic Devices and Robots

Currently, tangible user interfaces mostly address input. The output of physical motion [52], however, is only rarely used in common computer applications. It can be accomplished through simple actuators such as servo motors or through devices as complex as a humanoid robot.

Puppets equipped with sensors and actuators may, for instance, be used in a boxing game, even though the implementation put forward by Shimizu et al. [53] mostly concerns motion input, with output limited to vibration.
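Driving such actuators usually reduces to commanding an intensity envelope (vibration motor) or a target angle (servo). The following Python sketch plays a decaying vibration burst for a registered hit; set_vibration() is a hypothetical stand-in for whatever interface the device exposes.

```python
import time

# Sketch: play a decaying vibration burst when the puppet registers a
# hit. set_vibration() is a hypothetical stand-in for the device API.

def vibrate_hit(set_vibration, strength, duration=0.3, steps=10):
    """strength in [0, 1]; intensity decays linearly to zero."""
    for i in range(steps):
        level = strength * (1 - i / steps)
        set_vibration(level)             # command motor intensity
        time.sleep(duration / steps)
    set_vibration(0.0)                   # always switch the motor off

# Usage: vibrate_hit(my_device.set_vibration, strength=0.8)
```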

Biosignal Sensors

In the realm of Affective Computing, a number of biosignals have been put forward for automated emotion detection. These comprise the electrical conductivity of the skin (“galvanic skin response”) and the heart rate, as well as the rate and volume of breathing. For a study of short-term responses of a number of signals, see Ravaja et al. [56]. Such signals can be employed to adapt a game to the emotional state of the user [8] or to use that state to control the game [57]. In a biathlon-type game, for instance, the player may have to calm his or her pulse and breathing to steady the virtual rifle.
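Feeding such a signal into game logic usually means smoothing it and relating it to the player's baseline. The following Python sketch maps heart rate to rifle wobble for the biathlon example; the baseline, smoothing factor, and mapping are illustrative assumptions.

```python
import random

# Sketch: map heart rate to rifle wobble in a biathlon-style game.
# The resting baseline and mapping constants are illustrative.

BASELINE_HR = 65.0    # assumed resting heart rate, beats per minute
SMOOTH = 0.05         # exponential smoothing factor

smoothed_hr = BASELINE_HR

def aim_offset(raw_hr):
    """Return a random (dx, dy) aim perturbation that grows with arousal."""
    global smoothed_hr
    smoothed_hr = (1 - SMOOTH) * smoothed_hr + SMOOTH * raw_hr
    arousal = max(0.0, (smoothed_hr - BASELINE_HR) / BASELINE_HR)
    wobble = arousal * 50.0            # pixels of maximum wobble
    return (random.uniform(-wobble, wobble),
            random.uniform(-wobble, wobble))

# The player must literally calm down (lower the heart rate) to shoot
# accurately, turning the biosignal into the game's central mechanic.
```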

Conclusion

Games are not as strictly tied to well-known modes of interaction as standard applications are. They can more easily break with traditional interface devices. And if they do so, the huge market volume makes it possible to offer otherwise expensive technologies at affordable prices. In some cases, this can open up broader application domains outside of gaming, first due to availability, second due to the public's familiarity with the device. Thus, gaming applications can act both as an eye opener and as a door opener for novel interface technology.

References (66)

  • S. Beckhaus et al., Unconventional human computer interfaces.

  • N. Villar et al., The VoodooIO gaming kit: A real-time adaptable gaming controller, Comput. Entertain. (2007).

  • S. Jordà et al., The reacTable: Exploring the synergy between live music performance and tabletop tangible interfaces.

  • F. Mueller et al., Remote impact: Shadowboxing over a distance.

  • S. Bucolo et al., User experiences with mobile phone camera game interfaces.

  • T. Schlömer et al., Gesture recognition with a Wii controller.

  • E. Toye et al., Interacting with mobile services: An evaluation of camera-phones and visual tags, Pers. Ubiquitous Comput. (2007).

  • L. Loke et al., Labanotation for design of movement-based interaction.

  • D. Hunt et al., Puppet show: Intuitive puppet interfaces for expressive character control.

  • S. Laakso et al., Design of a body-driven multiplayer game system, Comput. Entertain. (2006).

  • F.F. Mueller et al., Sports over a distance, Pers. Ubiquitous Comput. (2007).

  • Y. Wang et al., Using human body gestures as inputs for gaming via depth analysis.

  • Z. Zeng et al., A survey of affect recognition methods: Audio, visual and spontaneous expressions.

  • E. Vendrovsky et al., Markerless facial motion capture using texture extraction and nonlinear optimization.

  • D. Wagner et al., The invisible train: Collaborative handheld augmented reality demonstrator.

  • D. Wagner et al., First steps towards handheld augmented reality.

  • A. Henrysson et al., Face to face collaborative AR on mobile phones.

  • O. Rath et al., Sight quest: A mobile game for paper maps.

  • V. Paelke et al., Foot-based mobile interaction with games.

  • E. Jönsson, If looks could kill—An evaluation of eye tracking in computer games, Master's thesis, School of Computer Science and Engineering (2005).

  • G. Yahav et al., 3D imaging camera for gaming application.

  • Y. Pekelny et al., Articulated object reconstruction and markerless motion capture from depth video, Comput. Graph. Forum (2008).