
Introduction to this Special Issue on Smart Glasses

  • Leif Oppermann
  • Wolfgang Prinz

From the journal i-com

Abstract

The idea of augmented or virtual reality in combination with head-mounted displays has been discussed since at least 1968. For a long time, however, the topic remained largely confined to academic research, with only limited effect or uptake in the workplace. The primary reasons for this were the lack of robust and affordable hardware and the limited mobile graphics capabilities of the time. This has changed recently with the availability of numerous affordable devices, in combination with applications from the entertainment and gaming area.

This Special Issue on Smart Glasses presents a mix of recent research papers and reports to provide an overview of ongoing research and developments in workplace environments. In the remainder of this introductory paper, we present an overview of the history of smart glasses and their applications over the last decades. We also clarify the term Augmented Reality in this historical context. We then present a typology of current products as well as their intended application areas. Finally, we introduce the papers of this issue within this context.

1 Science Fiction

Technological advances have fuelled the imagination of mankind since at least the first industrial revolution with its steam engines and factories. They provided the scientific backdrop for the fictional works of novelists like Jules Verne and film-makers like Charlie Chaplin. The second industrial revolution at the beginning of the 20th century brought electricity to the centre of public attention. At this point, electricity had not only entered the work domain, but also eased communication at large. The old and the new world had already been linked for about 50 years by the first functional transatlantic telegraph cable (laid in 1866 by the ship “Great Eastern”), and people could see the benefits of this technology. The cable sped up transatlantic communication by orders of magnitude – from the time it took a ship to cross the ocean to the speed of electricity – and along the way removed the requirement for close physical proximity between information source and destination for receiving an instant message, thus paving the way for remote collaboration.

It is in light of advances like these that domain experts from different fields were gathered by journalist Arthur Brehmer in 1910 to envision, in a collection of essays, the world 100 years hence. Amongst many oddities, the resulting book contains a few spot-on predictions, most notably of what we nowadays call the Internet, mobile phones, and video telephony [1]. Such notions also reached the mass media before long, as can be seen in the contemporary 1929 vision of video telephony depicted in figure 1.

Figure 1: Video telephony according to a 1929 vision (source: http://www.aec.at/oojanoh1/project/die-welt-100-jahren/).

Fast forward a few decades to the late 1950s / early 1960s, and television entered public life as the dominant mass medium, appearing in public houses and households in large quantities. Around this time, eccentric inventors envisioned what nowadays look like some of the earliest precursors to smart glasses. Notable virtual reality pioneer Morton Heilig not only built the famous “Sensorama Simulator” (later cited in Rheingold’s VR book [16]), but also filed a patent for a “stereoscopic-television apparatus for individual use” in 1957 (see figure 2, left). His design looks remarkably similar to the Virtual Reality Head-Mounted Displays of today, such as the Oculus Rift, the HTC Vive, or the Samsung Gear VR. It even details the lenses and adjustable vision parameters that are frequently seen in today’s designs. Similarly, figure 2 (right) shows an iconic shot of science fiction evangelist Hugo Gernsback wearing his television glasses, which does not look too far off from what we have seen recurring over the last decades. While Gernsback’s innovations were largely unsuccessful, he is still remembered for his magazine publications such as “Modern Electrics” (started 1908) and “Amazing Stories” (started 1926). To this day he is considered one of the fathers of science fiction, alongside Jules Verne and H. G. Wells, and the annual Hugo Awards for science fiction literature were named in his honour.

Figure 2: Morton Heilig’s design from 1957 (left), Hugo Gernsback’s TV glasses from 1963 (right).

2 From Science Fiction to Science

Shortly after, in 1968, Ivan Sutherland presented the “Sword of Damocles”, a head-mounted three-dimensional display that is largely considered to be the scientific ancestor of the devices we see today. As can be seen in figure 3, it featured a stereoscopic display (left) and had to be mounted from the ceiling due to its weight (middle). As the system already contained head-tracking components (a mechanical and an ultrasonic one), it allowed its user to change his view of the 3D world by rotating his head. The system would then present a simple wire-frame rendered 3D perspective image that changed as the user moved. Because at that time no available general-purpose computer was fast enough to provide a flicker-free dynamic image, Sutherland and his team built additional special-purpose digital matrix multiplier and clipping divider components to reach an interactive frame rate of 30 frames per second when showing 3000 hardware-accelerated lines [18].

Figure 3: Ivan Sutherland’s “Sword of Damocles”, a three-dimensional head-mounted display from 1968.

The application domains that Sutherland only briefly touched upon in his seminal engineering paper included visualising chemical molecules and virtual rooms, such as the one depicted in figure 3, right.

His work showcased the main components that can be found in all head-mounted display and smart-glass setups to date: a display, a processing unit (connected via cables), and position and orientation sensors. The electrical power in his design obviously came from a socket.

Later designs by researchers everywhere would subsequently strive to improve on every aspect of Sutherland’s seminal work, thereby making head-mounted displays and smart glasses more mobile. Power remains an issue to this day, with many designs still either requiring a power cable or suffering from battery drain.

The only component not present in Sutherland’s early work is networking, which would become far more important in the years that followed. But given that the Arpanet, the precursor of the Internet as we know it today, only became operational a year later in 1969 (and was itself influenced by Sutherland’s time at DARPA’s Information Processing Techniques Office), this was by no means a flaw in his work. Rather, those strands of work had simply not yet been combined.

3 Lessons from the Microcomputer Revolution

Throughout the 1970s, computing became generally more powerful and networked, especially at universities and research centres all over the world. This decade brought about landmark achievements in computing, such as Unix, the C programming language, and finally also home computing in 1975 with the arrival of the MITS Altair 8800 [29].

The Altair sparked the microcomputer revolution and brought Microsoft into business, as they provided a BASIC interpreter for that machine, thus making it easier to program. Nevertheless, much like the IMSAI 8080 and the Apple I, which followed about a year later, the Altair was delivered as a kit of parts and was thus only really feasible to build for people with an electronics background. This changed with the arrival of the first generation of pre-assembled home computers like the Apple II, the Tandy TRS-80, and the Commodore PET (see figure 4).

Figure 4: Commodore PET, TRS-80 Model I, and the Apple II (source: http://arstechnica.com/features/2005/12/total-share/3/).

At the turn of the decade, Atari introduced their successful 400 and 800 series. They were soon followed in the early 1980s by Commodore with their VIC 20 and C64 home computers. The latter would go on to become the best-selling computer model of all time, as acknowledged by the Guinness Book of World Records. Their success was arguably based on three main pillars: a desirable technology, an affordable price, and games. Despite their lead in the market, Commodore would later fail to position themselves appropriately in the business domain, made a series of bad management decisions, and finally went bankrupt in 1994 [5]. Competitor Apple was more fortunate, in that their platform received the first business “killer-application”, VisiCalc – the first spreadsheet and early ancestor of established office programs like Lotus 1-2-3 and Microsoft Excel.

Apple would later continue to succeed with business-oriented applications in graphics and design in general and in desktop publishing (DTP) in particular, thus laying the foundation for a creative image that they would foster and that carries them to this day. But together with all other competitors in the home-computer market, they would ultimately have to give way to the PC with its open design and its many clones, which together account for a market share in the region of 95 %, according to Gartner.

4 Virtual Reality, Cyberspace, and the World Wide Web

Within only ten years, from 1975 until the mid-1980s, computers penetrated both personal and office life. In conjunction with the aforementioned Internet, and especially in the 1990s with the World Wide Web [39], computing became so ubiquitous that historians would later speak of the microcomputer revolution, the digital revolution, and even the third industrial revolution.

As Moore’s law about the doubling of complexity in integrated circuits every one to two years held true, available computers became ever more powerful, and the programs they were running ever more demanding, especially with regard to graphics. When Apple released the Macintosh in 1984, they brought Graphical User Interfaces (GUIs) to mainstream computing – although it was still all black-and-white pixels. This changed in 1985 with the introduction of the high-end Silicon Graphics IRIS 1000 and the low-end Commodore Amiga, which both brought hardware-accelerated colour graphics with them – unlike Microsoft Windows 1, which was also released in 1985. Silicon Graphics workstations were very powerful (and expensive) and allowed for sophisticated real-time 3D graphics experiments for those who could afford them.

Among them was Jaron Lanier, who is frequently named in conjunction with Virtual Reality (VR), as he founded the first company to sell VR products and also worked on early multi-user virtual worlds with head-mounted displays and avatars to represent the user in the virtual world [16]. According to Lanier’s bio on his own website, their early application scenarios were in surgical simulation, vehicle interior prototyping, virtual sets for television production, and assorted other areas [21]. Requiring high computational power and very expensive machines, these applications were instrumental in the development of a line of research which led to immersive scientific visualization systems such as the CAVE [7] or Powerwalls, which have been widely adopted in the virtual engineering processes of cars, aircraft, and the like.

In the games domain, home computers and video games became a mass market. In the pre-Playstation and pre-3D-graphics-card age, interested players would also visit amusement arcades where, in exchange for a coin, they could play games with much better graphics than at home. It was this market that Jonathan Waldern and his pioneering company Virtuality targeted with the first VR game system [3]. Initially based on Amiga computers and dedicated graphics hardware, they provided for cooperative play from a first-person perspective in 3D virtual worlds in 1991, well before id Software brought their seminal multi-player 3D game Doom to the home PC in 1993 [9]. Games like Doom and Descent would subsequently play an important role in marketing early affordable VR helmets for home use, like the Forte VFX1 from 1994. Much like today’s Oculus & Co, it required a dedicated PC for rendering, to which it was connected via cables. The helmet featured two colour displays, a head-tracker, loudspeakers, a microphone, as well as an accelerometer-driven hand-held input device. In essence, systems like the Virtuality 1000 or the VFX1 provided a fairly high-fidelity 3D virtual experience, but still at a certain cost. Finally, backed by their success with the low-cost mobile Game Boy from 1989, Nintendo introduced their Virtual Boy console in 1995, which provided a low-cost, low-fidelity monochrome 3D view and was a big market failure.

In these early home-VR days around 1995, expectations of the technology had to anchor themselves somewhere, so that users could integrate it into their mindset through family resemblance [24]. Gaming had just stepped into the third dimension with the above-mentioned software-rendered PC games and the arrival of the Sony Playstation with its hardware-accelerated 3D graphics. PC hardware-accelerated 3D gaming did not really start before 1996 with the release of the 3dfx Voodoo, or even 1997 with the release of the OpenGL version of the popular 3D game Quake. Pre-rendered 3D graphics in film production also made a leap in the early to mid-nineties. Companies like SoftImage, Autodesk, and NewTek provided integrated 3D modelling, animation, and rendering packages that were used for visual effects in big-budget Hollywood movies like Terminator 2 (1991), The Lawnmower Man (1991), Jurassic Park (1993), or Johnny Mnemonic (1995). The latter was even based on a script by William Gibson, the author of the popular science-fiction novel Neuromancer and recipient of a Hugo Award, who had coined the terms Cyberspace and Matrix in the first place.

It is within this world of unleashed digital achievements that VR had to succeed. It is our hypothesis that people expected VR to be more advanced than the games of the time, and thus expected it to be closer to the visual effects of pre-rendered Hollywood movies. But the graphical fidelity of the Virtual Boy, or even of the much more expensive VFX1, did not come close to that. Why would you want to spend your money on VR gear? Thus, consumer VR in the nineties turned out to be an overhyped and ultimately broken promise. 3D gaming on consoles and the PC prevailed – and then came the World Wide Web! Cyberspace in its original VR sense was dead, and the World Wide Web became the new Cyberspace in public perception at the turn of the millennium.

Figure 5: Virtuality CS1000 (left), Forte VFX1 (middle), playing Doom in VR (right; https://youtu.be/J0n5B3fl-bU?t=7m33s).

5 Mixed and Augmented Reality

Where Virtual Reality tried to replace a user’s perception of the world around him with a purely virtual environment, Mixed Reality (MR) and Augmented Reality (AR) try to combine the real and the virtual. Thus, digital information becomes part of the world around us by means of computer-generated graphics and displays such as head-mounted displays and smart glasses.

Figure 6 shows the mix of the real and the virtual as described by Milgram and Kishino in their virtuality continuum [30]. The notion of mixing the real and the digital to various degrees embraces the popular form Augmented Reality [42], which is a mostly real environment with a bit of digital information fitted in, and the less-common Augmented Virtuality, which is basically tele-conferencing in virtual environments, i. e. linking videos of physical spaces via 3D virtual worlds [17].

Figure 6: Virtuality Continuum by Milgram and Kishino.

Augmented Reality deserves a closer look and a definition, as it is frequently associated with mobile smart-glass application scenarios, not least due to the press presence of Google Glass and similar devices from Epson, Vuzix, or Microsoft. Moreover, the marketing departments of some of these companies, as well as some individuals, are currently trying to redefine what some previously well-established terms should mean.

The Google Glass team and their technical lead Thad Starner frequently associated their product with the term Augmented Reality on the basis that it would allow for “a life augmented by technology” [43]. This notion is generally sound and useful. It can certainly be traced back to the pioneering works of Douglas Engelbart (“Augmenting the Human Intellect” [8]) and even Vannevar Bush’s 1945 Memex, “in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility” [41]. This is a very interesting field of study and certainly part of Mixed Reality. However, calling all of this Augmented Reality throws distinct concepts together. It is also somewhat careless, as there is a seminal definition of Augmented Reality by Ronald Azuma, which has been in use for two decades by now, and which is still being used by the latest books on the subject [10, 25, 33].

Azuma [31] defines Augmented Reality as a technology that satisfies three requirements:

  1. combines real and virtual

  2. interactive in real time

  3. registered in 3D.

Mixing up terms naturally complicates finding a common language, which is a core interest for design at work [28]. We would thus suggest sticking to Azuma’s definition when talking specifically about Augmented Reality, and embracing similar, but different technologies with more flexible terms like Mixed Reality or Human Augmentation.

Now, with the terms clarified, it can be clearly stated that Augmented Reality requires a tracking component in order to add virtual content in real time to the user’s perspective of the world. This can generally be done with an inside-out approach, i. e. tracking something from the perspective of the user’s device, or with an outside-in approach, i. e. tracking the user and his device(s) from the perspective of the environment. Typical examples of the former are marker-, feature-, or model-based tracking approaches, where the camera of the user’s device captures the environment, a computer vision algorithm extracts the perspective on the world (the “camera pose”) from the image, and a renderer finally draws the virtual overlay according to that camera pose. Typical examples of the latter include electromagnetic tracking (e. g. Flock of Birds), ultrasonic or radio-frequency tracking (e. g. Ubisense), depth cameras (e. g. Kinect), or other virtual studio technology (as used to produce the Microsoft HoloLens marketing videos, for example).
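To make the inside-out, marker-based pipeline more concrete, the following sketch outlines its three steps (capture, pose estimation, overlay rendering) in Python using OpenCV’s ArUco contrib module. It is an illustrative sketch only, not taken from any of the systems discussed here; the marker size, camera intrinsics, and the choice of drawing coordinate axes as the “virtual overlay” are placeholder assumptions, and the function-style ArUco API shown is the one found in many OpenCV 4.x releases.

    # Illustrative sketch of inside-out, marker-based tracking (assumed values throughout):
    # capture a frame, detect a square fiducial marker, recover the camera pose,
    # and draw a virtual overlay registered in 3D to that pose.
    import cv2
    import numpy as np

    MARKER_SIZE = 0.05  # marker edge length in metres (placeholder)
    # Camera intrinsics would normally come from calibration; placeholders here.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)  # assume no lens distortion

    # 3D corners of the marker in its own coordinate system (z = 0 plane).
    obj_points = np.array([[-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
                           [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
                           [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
                           [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0]], dtype=np.float32)

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    cap = cv2.VideoCapture(0)  # the user's device camera (the "inside-out" viewpoint)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
        if ids is not None:
            # Estimate the camera pose relative to the first detected marker.
            img_points = corners[0].reshape(-1, 2).astype(np.float32)
            found, rvec, tvec = cv2.solvePnP(obj_points, img_points, K, dist)
            if found:
                # "Registered in 3D": draw the marker's coordinate axes as the overlay.
                cv2.drawFrameAxes(frame, K, dist, rvec, tvec, MARKER_SIZE / 2)
        cv2.imshow("inside-out AR sketch", frame)
        if cv2.waitKey(1) == 27:  # Esc quits
            break

    cap.release()
    cv2.destroyAllWindows()

An outside-in system would invert this arrangement: fixed sensors in the environment estimate the pose of the user’s device, and the result is streamed to the renderer instead of being computed on the device itself.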

Excursus: The most commonly used toolkit for marker-based tracking was ARToolKit by Hirokazu Kato and Mark Billinghurst [15], which was improved upon by Mark Fiala with ARTag, presented at ISMAR 2004. The speed and stability improvements over ARToolKit were very impressive, but the source code was not available to interested researchers, including the author and people from the Studierstube team (who were all talking about it at the evening reception), due to a limitation imposed by Fiala’s funding body. This led to Studierstube integrating Fiala’s core ideas into their own tracking framework basically overnight [11], which eventually led to Vuforia becoming a successful commercial product a few years later [12]. The message here is that being too restrictive about sharing results can sometimes be harmful to success.

When AR research was popularized in the 1990s and 2000s, prototypes gradually went outside. They started in prepared indoor environments [32], like rooms or corridors, then moved to controlled outdoor sites [6, 35], and finally to the city [22]. This move outside is illustrated in figure 7. The left image, from a project called Arthur, shows two users sitting at a round table and discussing a virtual architectural model through smart glasses. The middle image shows the mobile AR system from “Epidemic Menace”, a showcase from the IPerG project on pervasive gaming [26]. The user is wearing a back-pack which contains the graphical processing power in the form of a Dell M90 mobile workstation, as well as monocular smart glasses, GPS, and orientation sensors [19] (admittedly, the image has been photoshopped for marketing purposes, but the real-time view used the same objects in motion). The right image shows an actual in-game view from the IPCity project showcase Time Warp [22]. The showcase was staged in two iterations, first with smart glasses and then with ultra-mobile PCs.

Figure 7: Augmented Reality in a prepared indoor environment (left), a prepared outdoor environment on campus (middle), and an unprepared city environment (right).

6 Into the Mobile

Regardless of whether you are planning for Augmented Reality in the sense of Azuma or of Starner, taking computers outside with wearable computing [40] was a logical consequence. Nowadays smart phones and tablets are “everyware” [2], i. e. they are so ubiquitous that they have become an integral part of our lives [27]. The processing power crammed into these devices is generally higher than what was available for any of the projects presented above. For example, mobile graphics processing units in smart phones can easily push millions of textured and lit polygons per second. Furthermore, the devices come with mobile data connections and a range of sensors, with GPS and orientation being the norm, thus making it relatively easy to develop mobile augmented reality or location-based experiences [34]. Nevertheless, research in this area, again, did not start with the iPhone or Android, but actually a few decades earlier.

Steve Mann [38] must be regarded as the pioneering figure when it comes to wearable computing. Shortly after Sony made music wearable with their Walkman, he came up with the idea that computers, too, should become portable and wearable before long. His initial design from 1980 was made to control photographic equipment. Mann has continued to build and wear his creations to this day [36] and, by doing so persistently, helped to bootstrap the discipline of wearable computing, as noted by MIT Media Lab director Nicholas Negroponte. At the MIT Media Lab, Mann also met Thad Starner, who would start constantly wearing his own computer in 1993 and later go on to lead the Google Glass team [37].

Mann’s work shows a certain thread that is related to the mobile capture, transmission, and permanent presentation of video images; a process which he called “glogging” – cyborg logging. As these tasks became more commonplace with today’s devices, mobile data networks, and social acceptance of cameras everywhere, he became increasingly interested in counter-surveillance topics, or “sousveillance”, as he calls it. Figure 8 shows the evolution of his devices over the years until they resembled the computerized eyewear that we now call smart glasses.

Figure 8: Evolution of wearable computing devices by Steve Mann.

7 Reflection

So what can be learned from the early history of these exhibits? We think the lessons are manifold. Technical innovations that were targeted at the public often came from an entertainment or leisure background and facilitated remote communication and collaboration. The idea of using, for example, mobile video telephony for personal and business purposes is not new at all, but can be traced back over the last century alongside the second and third industrial revolutions and their accompanying science fiction literature. With regard to personal computing and head-mounted displays, the basic technical problems were identified and addressed decades ago. It is through time and integration that computing reached home and office use and eventually became mobile and affordable.

The history of personal computers since the microcomputer revolution can teach us a few things as well. Being first to a market is helpful, but does not guarantee prolonged success (compare with MITS). Business studies teach us that being a follower, or “late mover”, can be a good position as well. You can build a dominating corporation and still lose its market share a few years later (compare with Commodore or Nokia). Games and entertainment experiences are a driving force, not only for children, but also for adults. They are desirable and help to establish the market – as software and content sell hardware. Still, marketing can backfire even if it helped sell a product in the first place. Overhyping a technology will disappoint customers and drive them to spend their time and money on something else (compare with the first wave of VR). Oddly enough, it can be mentioned as a side note that back-pack computers, like the ones depicted in figures 7 & 8, might see a small comeback: several hardware manufacturers recently announced back-pack designs that accommodate potent gaming-PC hardware and a battery for untethered VR experiences. Hardware does not only get ever smaller as the expectations rise again!

For making a successful business case, and for staying in the market, you need a killer-application that solves a real problem (compare with Apple and VisiCalc revolutionising office work through digitisation). This might come from research, and given the “long nose of innovation”, as Bill Buxton calls it, it is possible that the next killer-application has already been proposed, maybe even in this issue? But it is not the sole mission of research to provide just that. As human-computer interaction researchers, we have to delve into the topic of ever-changing hardware, software, content, and interfaces, and research how the communication between man and machine can best be designed. Let us stick with Fred Brooks’ humble definition of Intelligence Amplification, which is “using the computer as a tool that makes a task easier for a human to perform” [13].

8 Conclusion

In addition to the ubiquitous gaming and entertainment applications, the future is wide open for using smart glasses for applications in business and research. Today’s market offers various kinds of smart glasses and AR / VR head-mounted displays. Common to all types is the integration of the following components:

  1. A head-mounted display that is designed either as a video display or as a see-through display.

  2. A CPU that is either attached directly to the display (Google Glass, Vuzix M100) or connected via a cable (Epson Moverio). If a high level of graphics power is required, a wired connection to a computer with a powerful graphics card is typically used (Oculus Rift, HTC Vive).

  3. Various positioning and orientation sensors that are required to recognize the user’s movement and location.

  4. One or more cameras that support image or video recording and furthermore enable image or gesture recognition (not present in all designs).

  5. A microphone and loudspeakers or earphones to enable audio communication as well as speech recognition.

  6. Networking capabilities, e. g. WLAN or Bluetooth.

  7. A power supply.

Based on the realization of the head-mounted display, we can distinguish three different types of devices as well as corresponding application areas.

The first type is the classic smart glass, which can be characterized as a wearable computer with a head-mounted display. The display is mounted in front of one eye of the user without interfering too much with the field or focus of view (see figure 9, left). Typical representatives are Google Glass or the Vuzix M100. This type of smart glasses is very suitable for all applications where the user should be provided with situated information in a hands-free manner, such as navigational information in a routing or logistics scenario. The second and fourth papers of this issue provide two application examples from the medical service area. They nicely present how users can be assisted with situated information during their work process.

Figure 9: Three types of smart glasses: peripheral vision (left), central see-through (middle), and immersion (right).

The second type can be characterized as augmented reality glasses, of which the Epson Moverio series or the upcoming Microsoft HoloLens are typical examples (see figure 9, middle). Instead of a single display, these glasses provide two see-through displays in front of both eyes, which enables applications that virtually project 3D images in front of the user. These glasses are very suitable for applications in which a visual representation of objects can provide valuable information for the user. The third paper of this issue illustrates this with a serious game in the context of rehabilitation. Further application examples are education and training [14] or architecture visualizations [23].

The third type are VR headsets such as the Oculus Rift or HTC Vive: instead of see-through optics, these devices position displays in front of the user’s eyes. In combination with special lenses, the user is immersed in 3D scenery (see figure 9, right). The increasing graphics and computing capabilities of modern smart phones have enabled the development of VR headsets that no longer require separate displays or CPUs but instead utilize the display, CPU, and GPU of these devices. Examples are Samsung Gear VR, Google Cardboard, and Daydream. Current application examples primarily focus on games, entertainment, or the presentation of 360° images or videos. However, similar to augmented reality glasses, these devices are also very suitable for education or architecture applications, as demonstrated by the Auto AR system [4]. It should also be noted that a VR headset can be used as an AR headset by applying video see-through techniques. In this case, the video image of a camera mounted on the front of the device is displayed directly on the two displays of the VR headset, thus imitating a see-through glass. This video stream can then be augmented with additional information. However, the user experience with this technique is not as good as with see-through glasses, due to the inherent delay caused by the system latency as well as the missing stereoscopic effect caused by a single camera image [20].
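As a rough illustration of the video see-through principle described above, the following sketch (again an illustrative assumption, not the pipeline of any particular headset) augments a single camera stream and then duplicates it for the left and right eye display; the duplication is exactly why a single camera yields no stereoscopic effect, and every processing step adds to the latency noted in [20].

    # Illustrative sketch of video see-through AR on a VR headset:
    # one front-facing camera image is augmented with text and then shown
    # identically to both eyes in a side-by-side layout.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)  # camera assumed to be mounted on the headset front

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Augment the video image with additional information (placeholder label).
        cv2.putText(frame, "machine status: OK", (30, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
        # Duplicate the augmented image for the left and right eye displays;
        # a single camera image means both eyes see the same (non-stereo) view.
        side_by_side = np.hstack((frame, frame))
        cv2.imshow("video see-through sketch", side_by_side)
        if cv2.waitKey(1) == 27:  # Esc quits
            break

    cap.release()
    cv2.destroyAllWindows()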

This brief introduction to the different types of smart glasses and AR / VR head-mounted displays indicates that there is no one-size-fits-all solution for all application domains. Each technique has its own application areas. The first paper of this issue addresses this point by investigating application scenarios of smart glasses in the industrial sector, while the last paper investigates how people perceive and adopt this new technology. Finally, this special issue is supplemented by a case report and market prediction on the use of head-mounted displays in German companies.

About the authors

Leif Oppermann

Leif Oppermann is head of the Mixed and Augmented Reality Solutions group at Fraunhofer FIT in Sankt Augustin, which is a part of the Cooperation Systems research department. Prior to joining FIT, he was a research fellow at the Mixed Reality Lab of the University of Nottingham, UK, where he worked on pervasive gaming projects and also earned his PhD with a thesis about “Facilitating the Development of Location-Based Experiences”. Leif has a background in real-time graphics programming and finished his Mediainformatics studies in Wernigerode with a work on Augmented Reality. His main research interest is in location-based experiences, mobile HCI, web-based collaboration, and applying it all to the workplace.

Wolfgang Prinz

Prof. Wolfgang Prinz, PhD studied informatics at the University of Bonn and received his PhD in computer science from the University of Nottingham. He is vice chair of Fraunhofer FIT in Sankt Augustin, division manager of the Cooperation Systems research department in FIT, and Professor for Cooperation Systems at RWTH Aachen. His main research interest is in CSCW, web-based collaboration, and the application of AR/VR technologies in the workplace.

References

[1] A. Brehmer, Die Welt in 100 Jahren: Mit einem einführenden Essay “Zukunft von gestern” von Georg Ruppelt. Georg Olms, 2014.

[2] A. Greenfield, Everyware: The Dawning Age of Ubiquitous Computing. Peachpit Press, 2006.

[3] A. Schmenk, A. Wätjen, and R. Köthe, WAS IST WAS, Band 100: Multimedia und virtuelle Welten. Nürnberg: Tessloff, 1999.

[4] “Auto AR – In Situ Visualization for Building Information Modelling.” [Online]. Available: http://ercim-news.ercim.eu/en103/special/auto-ar-in-situ-visualization-for-building-information-modelling. [Accessed: 28-Jun-2016].

[5] B. Bagnall, On the Edge: The Spectacular Rise and Fall of Commodore: A Company on the Edge. Winnipeg: Variant Press, 2006.

[6] B. Thomas, B. Close, J. Donoghue, J. Squires, P. D. Bondi, and W. Piekarski, “First Person Indoor / Outdoor Augmented Reality Application: ARQuake,” Pers. Ubiquitous Comput., vol. 6, no. 1, pp. 75–86, 2002. doi: 10.1007/s007790200007.

[7] C. Cruz-Neira, D. J. Sandin, and T. A. DeFanti, “Surround-screen Projection-based Virtual Reality: The Design and Implementation of the CAVE,” in Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, New York, NY, USA, 1993, pp. 135–142. doi: 10.1145/166117.166134.

[8] D. Engelbart, “Augmenting Human Intellect: A Conceptual Framework,” 1962. [Online]. Available: http://www.dougengelbart.org/pubs/augment-3906.html. [Accessed: 19-Feb-2016]. doi: 10.21236/AD0289565.

[9] D. Kushner, Masters of Doom: How Two Guys Created an Empire and Transformed Pop Culture, New Ed. London: Piatkus, 2004.

[10] D. Schmalstieg and T. Hollerer, Augmented Reality: Theory and Practice. Boston, MA: Pearson Education, 2016. doi: 10.1145/2897826.2927365.

[11] D. Wagner and D. Schmalstieg, “ARToolKitPlus for Pose Tracking on Mobile Devices,” 2007.

[12] D. Wagner, I. Barakonyi, I. Siklossy, J. Wright, R. Ashok, S. Diaz, B. MacIntyre, and D. Schmalstieg, “Building your vision with Qualcomm’s Mobile Augmented Reality (AR) platform: AR on mobile devices,” 2011, pp. 1–1. doi: 10.1109/ISMAR-AMH.2011.6093640.

[13] F. P. Brooks, Jr., “The computer scientist as toolsmith II,” Commun. ACM, vol. 39, no. 3, pp. 61–68, March 1996. doi: 10.1145/227234.227243.

[14] H. Buchholz, C. Brosda, and R. Wetzel, “Science Center To Go: A Mixed Reality Learning Environment of Miniature Exhibits,” presented at the Learning with ATLAS@CERN Workshops Inspiring Science Learning, Rethymno, Greece, 2010, pp. 85–96.

[15] H. Kato and M. Billinghurst, “Marker Tracking and HMD Calibration for a Video-based Augmented Reality Conferencing System,” in International Workshop on Augmented Reality, San Francisco, California, 1999.

[16] H. Rheingold, Virtual Reality: Exploring the Brave New Technologies of Artificial Experience and Interactive Worlds from Cyberspace to Teledildonics, Book Club Edition. QPD, 1991.

[17] H. Schnädelbach, A. Penn, P. Steadman, S. Benford, B. Koleva, and T. Rodden, “Moving Office: Inhabiting a Dynamic Building,” in Proceedings of the 2006 20th Anniversary Conference on Computer Supported Cooperative Work, New York, NY, USA, 2006, pp. 313–322. doi: 10.1145/1180875.1180924.

[18] I. E. Sutherland, “A Head-mounted Three Dimensional Display,” in Proceedings of the December 9–11, 1968, Fall Joint Computer Conference, Part I, New York, NY, USA, 1968, pp. 757–764. doi: 10.1145/1476589.1476686.

[19] I. Lindt, J. Ohlenburg, U. Pankoke-Babatz, S. Ghellal, L. Oppermann, and M. Adams, “Designing Cross Media Games,” in PerGames Workshop, Munich, Germany, 2005.

[20] J. P. Rolland, R. L. Holloway, and H. Fuchs, “Comparison of optical and video see-through, head-mounted displays,” in Proceedings of SPIE – The International Society for Optical Engineering, 1995, pp. 293–307. doi: 10.1117/12.197322.

[21] “Jaron Lanier’s Bio.” [Online]. Available: http://www.jaronlanier.com/general.html. [Accessed: 27-Jun-2016].

[22] L. Blum, R. Wetzel, R. McCall, L. Oppermann, and W. Broll, “The final TimeWarp: Using Form and Content to Support Player Experience and Presence when Designing Location-Aware Mobile Augmented Reality Games,” in Designing Interactive Systems, Newcastle, 2012. doi: 10.1145/2317956.2318064.

[23] L. Oppermann, M. Shekow, and D. Bicer, “Mobile Cross-Media Visualisations made from Building Information Modelling Data,” in MobileHCI 2016 Adjunct Proceedings, Florence, Italy, 2016. doi: 10.1145/2957265.2961852.

[24] L. Wittgenstein, Werkausgabe, Band 1: Tractatus logico-philosophicus / Tagebücher 1914–1916 / Philosophische Untersuchungen, 1st ed. Frankfurt am Main: Suhrkamp Verlag, 1984.

[25] M. Billinghurst, A. Clark, and G. Lee, A Survey of Augmented Reality. now publishers Inc., 2015. doi: 10.1561/9781601989215.

[26] M. Montola, J. Stenros, and A. Waern, Pervasive Games: Theory and Design. Morgan Kaufmann, 2009. doi: 10.1201/9780080889795.

[27] M. Weiser, “The Computer for the 21st Century,” Sci. Am., vol. 265, no. 3, pp. 66–75, 1991. doi: 10.1038/scientificamerican0991-94.

[28] P. Ehn and M. Kyng, “Cardboard Computers: Mocking-it-up or Hands-on the Future,” in Design at Work: Cooperative Design of Computer Systems, J. Greenbaum and M. Kyng, Eds. Lawrence Erlbaum Associates, 1991, pp. 170–195.

[29] P. Freiberger and M. Swaine, Fire in the Valley: Making of the Personal Computer, Updated ed. New York: B&T, 1999.

[30] P. Milgram and F. Kishino, “A Taxonomy of Mixed Reality Visual Displays,” IEICE Trans. Inf. Syst., vol. E77-D, no. 12, pp. 1321–1329, 1994.

[31] R. Azuma, “A Survey of Augmented Reality,” Presence: Teleoperators and Virtual Environments, vol. 6, no. 4, pp. 355–385, 1997. doi: 10.1162/pres.1997.6.4.355.

[32] R. Azuma, Y. Baillot, R. Behringer, S. Feiner, S. Julier, and B. MacIntyre, “Recent Advances in Augmented Reality,” IEEE Comput. Graph. Appl., vol. 21, no. 6, pp. 34–47, 2001. doi: 10.1109/38.963459.

[33] R. Dörner, W. Broll, P. Grimm, and B. Jung, Virtual und Augmented Reality (VR / AR): Grundlagen und Methoden der Virtuellen und Augmentierten Realität. Berlin, Heidelberg: Springer Vieweg, 2013. doi: 10.1007/978-3-642-28903-3.

[34] S. Benford, “Future Location-Based Experiences,” 14-Aug-2009. [Online]. Available: http://www.jisc.ac.uk/media/documents/techwatch/jisctsw_05_01.pdf.

[35] S. Feiner, B. MacIntyre, T. Höllerer, and A. Webster, “A Touring Machine: Prototyping 3D Mobile Augmented Reality Systems for Exploring the Urban Environment,” in International Symposium on Wearable Computing, Cambridge, Massachusetts, 1997, pp. 74–81. doi: 10.1007/BF01682023.

[36] S. Mann, “Steve Mann: My ‘Augmediated’ Life,” IEEE Spectrum, 03-Jan-2013. [Online]. Available: http://spectrum.ieee.org/geek-life/profiles/steve-mann-my-augmediated-life. [Accessed: 28-Jun-2016].

[37] “Smart Clothes: Wearable Computing Intro Page.” [Online]. Available: http://www.wearcam.org/computing.html/. [Accessed: 28-Jun-2016].

[38] “Steve Mann; Personal Web Page.” [Online]. Available: http://wearcam.org/steve. [Accessed: 28-Jun-2016].

[39] T. Berners-Lee, Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web, 1st edition. San Francisco: HarperBusiness, 2000.

[40] T. Starner, S. Mann, B. Rhodes, J. Levine, J. Healey, D. Kirsch, R. W. Picard, and A. Pentland, Augmented Reality Through Wearable Computing, 1997. doi: 10.1162/pres.1997.6.4.386.

[41] V. Bush, “As We May Think,” The Atlantic, July 1945.

[42] W. Broll, “Augmentierte Realität,” in Virtual und Augmented Reality (VR / AR), Springer Vieweg, 2013, pp. 241–294. doi: 10.1007/978-3-642-28903-3_8.

[43] “Wearable-technology pioneer Thad Starner on how Google Glass could augment our realities and memories,” Engadget. [Online]. Available: https://www.engadget.com/2013/05/22/thad-starner-on-google-glass/. [Accessed: 28-Jun-2016].

Published Online: 2016-08-16
Published in Print: 2016-08-01

© 2016 Walter de Gruyter GmbH, Berlin/Boston
