
Photo-enriched Documentation during Surgeries with Google Glass: An Exploratory Usability Study in a Department of Paediatric Surgery

Tilo Mentler, Janosch Kappel, Lutz Wünsch and Michael Herczeg

From the journal i-com

Abstract

Due to hygienic regulations and mobility requirements, medical professionals show great interest in wearable devices allowing for hands-free interaction and ubiquitous information access. Smartglasses like the prototype “Google Glass” have already been evaluated in pre-hospital as well as clinical medical care. Based on laboratory studies on the reliability of voice and gesture recognition and field studies during four surgeries in a department of paediatric surgery, we discuss the usability and acceptance of smartglasses for photo-enriched documentation during surgeries. While technical limitations (e. g. poor camera quality) have to be overcome, usable solutions for human-smartglasses interaction by voice and gesture recognition seem possible in the medium term. Surgeons and other members of surgical teams are curious about smartglasses in their working environment. This can be a starting point for wider use, if the user interface and interaction design for smartglasses are further explored and developed in a user-centered process meeting their requirements. In this regard, transmodal consistency is recommended as a design principle for applications supporting multiple input and output modalities.

1 Introduction

For about five years, growing interest in head-mounted displays (HMDs) with augmented reality (AR) capabilities (“smartglasses”) has been observable in the healthcare domain (e. g. [10, 17]). On the one hand, it is fostered by high demands on physicians’ and nurses’ mobility. On the other hand, cheaper and more powerful products are commercially available which may support more pervasive flows of information and increased data quality. However, usability and acceptance have to be carefully considered before applications for smartglasses and other wearable devices are introduced into daily practice in this time- and safety-critical domain. After outlining the background and related work in section 2, the methods and results of laboratory and field studies on usability aspects of photo-based documentation during surgeries with Google’s smartglasses prototype (“Google Glass”) are explained in sections 3 and 4. Conclusions regarding effective and efficient usage of smartglasses in mission- or safety-critical contexts are drawn in section 5.

2 Background and Related Work

In the following, the state of the art of documenting surgeries in the department of paediatric surgery of a university medical centre is explained (see section 2.1). Healthcare-related use cases of smartglasses are summarized in section 2.2.

2.1 Documentation During Surgeries

Documenting relevant details of surgical procedures is of utmost importance for surgeons and surgical teams in general, for legal reasons, quality management and teaching [7, 16]. Written reports are state of the art, but “accuracy of description […] depends on the vocabulary and the descriptive prowess of the surgeon” [20]. Pictures taken with digital cameras are a valuable addition to written documentation during surgeries.

Currently, photo-enriched documentation is performed either by assistants taking photos with a portable digital camera or with a camera integrated in an operating light (see Figure 1). While the first approach requires an assistant to bend over the operating room table and hold the digital camera over the patient’s body, the latter might require additional and time-consuming coordination between surgeon and assistant in order to get the desired picture, as Figure 1 illustrates. Both approaches require a dedicated “part-time photographer” waiting for the next photo to be taken. As human resources in surgeries are limited, there seems to be some room for improvement.

Figure 1: Camera integrated in an operating light. Although the picture can be previewed, surgeons cannot select a specific image segment or zoom level. The camera control unit is on the other side of the operating room table and is operated by an assistant.

Furthermore, surgeons see the operating room table, and especially the patient, from a certain angle. Important details might look slightly different from other perspectives. Smartglasses enable taking pictures from a “first-person view”. Therefore, documentation is one of the major use cases associated with smartglasses in medical care.

2.2 Smartglasses in Healthcare

Taking pictures and recording video during or after medical procedures has been evaluated both outside operating rooms (e. g. in forensic settings [1]) and during surgeries (e. g. [6]) with promising results. However, “for deployment in clinical care, issues such as hygiene, data protection, and privacy need to be addressed and are currently limiting chances for professional use” [1]. Usability and acceptance by medical professionals as well as patients must be considered, too. Apart from documentation, further use cases for smartglasses in pre-hospital and clinical medical care are summarized in Table 1.

Table 1

Use Cases for smartglasses in pre-hospital and clinical medical care [12, 13].

Use Case | Description | References
---------|-------------|-----------
Communication | making and taking (phone) calls | rettungsdienst.de (2013), [14]
Identifying Hazardous Goods | getting information about characteristics and risks of materials | [3]
Scanning | identifying patients or resources | [2, 9]
Search | looking for site plans or maps | rettungsdienst.de (2013)
Streaming | broadcasting treatments live | [11, 14]
Telemedicine | bi-directional transmission of audio and video, consulting an expert | [11, 19]
Triage | determining the urgency of each patient’s treatment based on algorithms | [3, 5]
Visualization / Augmented Reality (AR) | displaying data in situ and supplementing real-world environments | [4, 18]

3 Laboratory Study

In the following sections, the preparation, procedure and results of tests with Google Glass in a media laboratory are described. The tests were designed to examine the reliability of voice and gesture recognition under consideration of clinical settings. Beforehand, the context of use was analysed with respect to users’ characteristics, tasks, workflows and organizational structures.

3.1 Preparation

Several preliminary measures (e. g. procuring sterile and unsterile gloves in different sizes, defining criteria for voice and gesture recognition) were necessary to examine the different input modalities of Google Glass. They are summarized for voice recognition in section 3.1.1 and for gesture recognition in section 3.1.2.

3.1.1 Voice Recognition

Test material was derived from a list of Google Glass’s predefined voice commands (e. g. “take a picture”), short statements made up of commonly used English words (e. g. “people from work”), similar-sounding words (e. g. “when then them there where”), medical terms (e. g. “pancreatic carcinoma”) and word sequences containing medical terms (e. g. “with a suspected pulmonary embolism”). In addition, different settings for voice input were defined (e. g. with or without noise at 75–78 dB, another user of smartglasses nearby, a group talking nearby). Depending on internet access, Google Glass supports speech recognition based on freely chosen words or word groups, or on predefined keywords. Both options had to be considered.

Criteria for correctness and accuracy of word recognition were derived from Euler [8]. Finally, an application for converting voice input into text shown on the optical head-mounted display was implemented.
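Euler’s measures correspond to the word-correctness and word-accuracy scores commonly used in speech recognition evaluation: correctness = (N − S − D) / N and accuracy = (N − S − D − I) / N, with N reference words, S substitutions, D deletions and I insertions. The following Java sketch, a minimal illustration rather than the test application itself, computes both scores by aligning the recognized word sequence with a reference:

```java
import java.util.Arrays;

/** Word correctness and accuracy (cf. Euler 2006):
 *  correctness = (N - S - D) / N, accuracy = (N - S - D - I) / N,
 *  with N reference words, S substitutions, D deletions, I insertions.
 *  Illustrative sketch, not the study's original test code. */
public class WordAccuracy {

    public static double[] score(String reference, String hypothesis) {
        String[] ref = reference.trim().split("\\s+");
        String[] hyp = hypothesis.trim().split("\\s+");
        int n = ref.length, m = hyp.length;
        // Levenshtein distance table over words.
        int[][] d = new int[n + 1][m + 1];
        for (int i = 0; i <= n; i++) d[i][0] = i;
        for (int j = 0; j <= m; j++) d[0][j] = j;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= m; j++) {
                int sub = d[i - 1][j - 1] + (ref[i - 1].equals(hyp[j - 1]) ? 0 : 1);
                d[i][j] = Math.min(sub, Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1));
            }
        // Backtrack to count substitutions, deletions and insertions.
        int s = 0, del = 0, ins = 0, i = n, j = m;
        while (i > 0 || j > 0) {
            if (i > 0 && j > 0 && d[i][j] == d[i - 1][j - 1]
                    && ref[i - 1].equals(hyp[j - 1])) { i--; j--; }            // match
            else if (i > 0 && j > 0 && d[i][j] == d[i - 1][j - 1] + 1) { s++; i--; j--; } // substitution
            else if (i > 0 && d[i][j] == d[i - 1][j] + 1) { del++; i--; }      // deletion
            else { ins++; j--; }                                               // insertion
        }
        double correctness = (double) (n - s - del) / n;
        double accuracy = (double) (n - s - del - ins) / n;
        return new double[] { correctness, accuracy };
    }

    public static void main(String[] args) {
        // e.g. reference "take a picture" vs. recognized "take the picture":
        // one substitution out of three words -> both scores 0.67
        System.out.println(Arrays.toString(score("take a picture", "take the picture")));
    }
}
```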

3.1.2 Gesture Recognition

Because users of smartglasses in healthcare will often wear gloves of different types (e. g. sterile, unsterile) and sizes (e. g. form-fitting, too loose), various test conditions were created and criteria for the accuracy of recognition were defined. Following Euler’s [8] criterion “word accuracy”, “gesture accuracy” (GA) was defined as the ratio

GA = (KG − KV − KA − KE) / KG,

where KG is the total number of performed gestures, KV the number of wrong recognitions, KA the number of missing recognitions and KE the number of additional recognitions.
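A minimal Java helper (our illustration, not the original evaluation code) makes the definition concrete:

```java
/** Gesture Accuracy following the definition above;
 *  an illustrative helper, not the study's evaluation code. */
public final class GestureAccuracy {
    /**
     * @param kg total number of performed gestures
     * @param kv number of wrongly recognized gestures
     * @param ka number of missed (unrecognized) gestures
     * @param ke number of additionally (spuriously) recognized gestures
     */
    public static double ga(int kg, int kv, int ka, int ke) {
        return (double) (kg - kv - ka - ke) / kg;
    }

    public static void main(String[] args) {
        // e.g. 24 performed swipes, 2 confused, 1 missed, 0 spurious:
        System.out.println(ga(24, 2, 1, 0)); // 0.875
    }
}
```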

Taps and swipes were differentiated. Fixed orders for performing them were defined (see Figure 3).

Figure 3: Order of swipe gestures to be performed by the user. Swipe direction (arrow) and number of fingers to use are shown.

Furthermore, an application for detecting gestures and showing the results on the optical head-mounted display was implemented (see Figure 4).

Figure 4: Structure of the test application “GestureDetector”. Gestures are counted and the gesture recognized last is named.
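The Glass GDK exposes touchpad events through its GestureDetector class. A sketch in the spirit of the test application could look as follows; class, field and method names are our own assumptions, only the GDK GestureDetector/Gesture API is taken as given:

```java
import android.app.Activity;
import android.content.Context;
import android.os.Bundle;
import android.view.MotionEvent;
import android.widget.TextView;

import com.google.android.glass.touchpad.Gesture;
import com.google.android.glass.touchpad.GestureDetector;

/** Minimal sketch of a Glass GDK activity in the spirit of the
 *  "GestureDetector" test application: it counts touchpad gestures
 *  and displays the gesture recognized last. */
public class GestureDetectorActivity extends Activity {

    private GestureDetector detector;
    private TextView status;      // shows counter and last gesture
    private int gestureCount = 0;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        status = new TextView(this);
        setContentView(status);
        detector = createGestureDetector(this);
    }

    private GestureDetector createGestureDetector(Context context) {
        GestureDetector gd = new GestureDetector(context);
        gd.setBaseListener(new GestureDetector.BaseListener() {
            @Override
            public boolean onGesture(Gesture gesture) {
                switch (gesture) {
                    case TAP:
                    case TWO_TAP:
                    case SWIPE_LEFT:
                    case SWIPE_RIGHT:
                    case SWIPE_DOWN:
                    case TWO_SWIPE_LEFT:
                    case TWO_SWIPE_RIGHT:
                        gestureCount++;
                        status.setText(gestureCount + ": " + gesture.name());
                        return true;  // gesture consumed
                    default:
                        return false;
                }
            }
        });
        return gd;
    }

    // Touchpad events arrive as generic motion events and have to be
    // forwarded to the detector.
    @Override
    public boolean onGenericMotionEvent(MotionEvent event) {
        return detector != null && detector.onMotionEvent(event);
    }
}
```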

3.2 Procedure

Voice recognition tests were performed with 6 participants (1 male speaker, 1 female speaker and 4 people providing background conversation when required) and with the aid of a smartphone application (“Sound Meter”) to measure noise levels. Reference values for correctness and accuracy of word recognition were determined for both speakers under quiet conditions.

Gesture recognition was tested by one person. The person had to repeat fixed procedures (see Figure 3) under varying conditions (e. g. with or without gloves). Davis and Rosenfield (2015, p. 919) suggest “a portion of a standard sterile plastic drape (e. g., 3M 1010 Steri-Drape; 3M, St. Paul, Minn.) can be used to cover the right temple / arm of Glass to allow aseptic touch access”. This approach was considered, too.

3.3 Results

With respect to speech input, there was no difference between the male and the female speaker. Online voice recognition worked better than offline voice recognition under all conditions. Because thresholds for predefined voice commands could not be changed, there were few opportunities for influencing recognition behaviour. A noise level of 75 dB led to strongly decreased word correctness and accuracy. Short word groups (e. g. “take a picture”) showed better results than single words. These results agree with previous findings (e. g. [8]). In any case, similar-sounding words should not be used as voice commands.

As Table 2 shows, Gesture Accuracy (GA) mainly depends on two factors: the fit of the gloves and the condition of fingers or gloves. Air bubbles, wrinkles and wetness lead to decreased GA. Using Google Glass with form-fitting and dry gloves shows promising results. However, gloves will hardly stay dry in surgical practice. Sterile drape has no major impact on recognition rates under dry conditions.

Table 2

Gesture Accuracy (GA) for taps and swipes depending on gloves, sterile drape and wetness of fingers or gloves. Values with GA < 0.75 are marked with an asterisk (*).

Condition | Wearing gloves | Sterile drape applied | Condition of fingers / gloves | GA for taps | GA for swipes
----------|----------------|-----------------------|-------------------------------|-------------|--------------
1 | no | no | dry | 1.00 | 0.96
2 | no | no | wet | 0.83 | 0.92
3 | no | yes | dry | 1.00 | 1.00
4 | no | yes | wet | 0.63* | 0.58*
5 | unsterile (fitting) | no | dry | 0.93 | 1.00
6 | unsterile (fitting) | no | wet | 0.70* | 0.75
7 | sterile (loose) | no | dry | 0.90 | 0.71*
8 | sterile (loose) | no | wet | 0.60* | 0.29*
9 | sterile (fitting) | yes | dry | 0.90 | 0.88
10 | sterile (fitting) | yes | wet | 0.70* | 0.21*

In summary, it can be stated that the reliability of both voice and gesture control with Google Glass strongly depends on environmental factors that can hardly be controlled in advance. However, the results were promising enough to conduct further studies in an at least partially well-defined environment like an operating room.

4 Field Study

In the following sections, the preparation, procedure and results of a field study in the Department of Paediatric Surgery at a university medical center are described. The study was designed to examine the usability and acceptance of photo-enriched documentation with Google Glass during surgeries.

4.1 Preparation

As part of a preliminary meeting, surgeons and other members of the department of paediatric surgery were introduced to Google Glass and had the opportunity to try it out by taking pictures and exploring interaction procedures. A guidance document provided further information and step-by-step instructions for taking and deleting photos. Although offline voice recognition had performed worse than online recognition in the laboratory studies, it was used for privacy and security reasons. In addition, relying on a wireless internet connection would have been a potential issue at this stage of an exploratory usability study.

Putting on Google Glass caused no observable or reported problems, even for participants wearing glasses. Only one test person was afraid of losing the device during surgeries and recommended additional fixation. While some reservations were expressed regarding privacy regulations and hygiene, general interest in smartglasses was high. After a few attempts, all participants were able to take pictures via voice command. However, minor interaction problems were noticed:

  1. After activating Google Glass by saying “ok glass”, some users hesitated and asked questions about the next steps. Afterwards, they were unsure about the current state of Google Glass’s operating system (still activated?).

  2. Statements by participants currently not wearing Google Glass were sometimes accepted as voice commands, especially in connection with the aforementioned pauses. Hence, applications were started by mistake.

Pictures were transferred to a desktop computer in order to assess them. Although they failed to reach the resolution of state-of-the-art digital cameras or smartphones, the image quality was still considered sufficient. However, the missing zoom functionality was mentioned as a major disadvantage.

4.2 Procedure

Photo-enriched documentation with the aid of Google Glass was tested during 4 operations conducted by 2 different surgeons. As recommended, one of them used an extra fastening for safety reasons (see Figure 5). In total, 52 photos were taken (3, 6, 15 and 28 per operation). The range results from the different durations (1–2 hours) and scales of the operations. With respect to wearing smartglasses, two different approaches could be observed:

  1. Surgeons put on the activated wearable device by themselves before hand disinfection and other measures for ensuring sterility were performed.

  2. Assistants working in the unsterile area of the operating room attached the wearable device to the user’s head after hand disinfection and other measures for ensuring sterility were accomplished.

Figure 5: (left) Additional fastening with tape, (right) surgeon wearing Google Glass, magnifying spectacles and ordinary glasses.

As shown in Figure 5, smartglasses were combined with magnifying spectacles and ordinary glasses. In order to take a photo, users had to (re-)activate the application by a nod of the head and a certain voice command (“ok glass, take a picture”). A brief preview was shown before the result was saved and the procedure could be repeated.
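In the study, the built-in voice trigger and camera were used. For illustration, an application replicating this flow could delegate to Glass’s built-in capture (which also shows the brief preview described above) via the GDK, roughly as follows; the request code, class name and method names are our assumptions:

```java
import android.app.Activity;
import android.content.Intent;
import android.provider.MediaStore;

import com.google.android.glass.media.CameraManager;

/** Sketch of how an application could replicate the study's photo flow
 *  by delegating to Glass's built-in camera. Not the setup actually
 *  used during the surgeries, which relied on the system voice trigger. */
public class PhotoDocumentationActivity extends Activity {

    private static final int TAKE_PICTURE_REQUEST = 1;

    private void takePicture() {
        // Launches the built-in capture experience, including the preview.
        startActivityForResult(
                new Intent(MediaStore.ACTION_IMAGE_CAPTURE), TAKE_PICTURE_REQUEST);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == TAKE_PICTURE_REQUEST && resultCode == RESULT_OK) {
            // Path of the captured photo; the file may not be fully written
            // yet, so production code should watch it (e.g. with a
            // FileObserver) before transferring it into the documentation.
            String picturePath =
                    data.getStringExtra(CameraManager.EXTRA_PICTURE_FILE_PATH);
            // ... attach picturePath to the surgical documentation record
        }
        super.onActivityResult(requestCode, resultCode, data);
    }
}
```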

4.3 Results

Due to its low weight and flexibility, the two surgeons felt comfortable wearing Google Glass and stated that they were not distracted by the additional glasses. In preparation for the surgical intervention, they followed two different approaches to putting the wearable device on – by themselves or with the aid of an assistant. Both approaches satisfied the hygiene-conscious users. During one operation, the surgeon nearly forgot the wearable device he was equipped with and said out loud “Oh, right, I should take photos” after more than 20 minutes.

Using Google Glass in the previously described manner (head nodding, voice recognition) did not pose major usability problems either. Only rarely did head gestures or speech input have to be repeated. Occasionally, users tried to take several photos in a row and repeated “ok glass, take a picture” while the preview was shown. Because this option was not available at this stage, the voice input was recognized as “ok glass, delete this”. However, active assistance of the two observers (a student of medical engineering and an HCI researcher) was required in only one case: due to a wrongly recognized voice command, another application was started and the surgeon did not know how to exit it. For hygienic reasons, the system-wide swipe gesture for ending applications was not available, and the application-specific speech input was unknown to the user. In contrast to gesture control, there is no system-wide voice command for ending applications.

Conversations of other members of the surgical teams or surrounding sounds (e. g. the alarm of a monitoring device) did not affect human-computer interaction at all. Sufficient distance between team members and their hushed voices prevented accidental speech input. The well-defined and professional environment of an operating room contributes to the reliability of human-computer interaction with smartglasses.

The vast majority of photos taken during the surgeries were overexposed. Picture quality was acceptable only if the operating light was switched off or the object of interest was located outside the central light field. Although the first-person view would still be a benefit for surgeons, switching off lights or moving body parts repeatedly would be no suitable solution for documentation tasks in daily work. Varying light intensity, colour temperature or the size of the central light field had no noticeable impact on picture quality (see Figure 6).

Figure 6: (left) Photos taken with the object of interest located outside the central light field, (middle and right) impact of different light intensities (highest and lowest) on picture quality.

These results contrast with the initial tests summarized in section 4.1 and demonstrate the importance of conducting field studies in real working environments. In accordance with preliminary judgements, zoom functionality was deemed necessary. In summary, it can be stated that these hardware-related aspects satisfied neither users nor observers.

5 Conclusions

As the described studies in a laboratory and in a department of paediatric surgery suggest, photo-based documentation with Google Glass during surgeries can be accomplished with respect to human-computer interaction. However, technical limitations (e. g. poor camera quality, short battery life, critical heat generation) have to be overcome for such devices to be usable and practical. During the two-day visit, many surgeons and other members of surgical teams showed great interest in smartglasses. Long-term studies have to clarify acceptance beyond curiosity.

With respect to the user interface and interaction design of applications for smartglasses and other wearable devices supporting different input and output modalities, general design principles, best practices and style guides have to be derived. During our study, active assistance of the observers was required in one case due to a wrongly recognized speech input and the inconsistent availability of a system-wide command for gesture versus speech control. Therefore, transmodal consistency is recommended as a general design principle and defined as follows: “If an interactive system employs different input and output modalities, it is transmodally consistent if it grants access to the same functionality and feedback via the different modalities with comparable interaction effort.” If there is a system-wide touch gesture for ending applications, there should be a comparable voice command (e. g. “ok glass, home”, according to the previously mentioned use case). Especially in mission- or safety-critical application domains, users must be enabled to deal with rare or even unforeseen circumstances (e. g. a high noise level hampering speech input, or wet or loose work gloves impeding touch gestures). A total breakdown of interaction or time-consuming workarounds could compromise safe actions and sustainable acceptance.
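As a sketch of what transmodal consistency could mean at the code level, the following hypothetical command registry enforces that every command is bound to both a touch gesture and a voice phrase, so that a failing modality (wet gloves, noise) never locks the user out. None of these names belong to an existing smartglasses API:

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative sketch of transmodal consistency as defined above:
 *  every command is reachable via a touch gesture AND a voice phrase
 *  with comparable effort. All names here are hypothetical. */
public class TransmodalCommandRegistry {

    private final Map<String, Runnable> byGesture = new HashMap<>();
    private final Map<String, Runnable> byVoice = new HashMap<>();

    /** Registering a command requires BOTH bindings by design. */
    public void register(String gesture, String voicePhrase, Runnable action) {
        byGesture.put(gesture, action);
        byVoice.put(voicePhrase.toLowerCase(), action);
    }

    public boolean onGesture(String gesture) {
        Runnable action = byGesture.get(gesture);
        if (action != null) action.run();
        return action != null;
    }

    public boolean onVoice(String phrase) {
        Runnable action = byVoice.get(phrase.toLowerCase());
        if (action != null) action.run();
        return action != null;
    }

    public static void main(String[] args) {
        TransmodalCommandRegistry commands = new TransmodalCommandRegistry();
        // The gap observed in the field study: ending an application had a
        // system-wide swipe but no voice equivalent. Enforcing a pair of
        // bindings per command avoids exactly that gap.
        commands.register("SWIPE_DOWN", "ok glass, home",
                () -> System.out.println("closing application"));
        commands.onVoice("ok glass, home"); // same effect as the swipe
    }
}
```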

Because hands-free interaction does not necessarily mean that less attention is required for using an interactive system, further studies have to be performed regarding different levels of skill-based, rule-based and knowledge-based behaviour and performance [15]. Nevertheless, smartglasses might help to improve certain work situations, e. g. the current practice in which an assistant takes pictures without a preview under the guidance of a surgeon who cannot control the camera.

About the authors

Tilo Mentler

Tilo Mentler is a research assistant at the Institute for Multimedia and Interactive Systems (IMIS) of the University of Luebeck. He holds a diploma in Informatics, specializing in Digital Media. Recently, he finished his dissertation about the usability of mobile interactive systems in regular and extraordinary missions of Emergency Medical Services. His main current research interests include human-computer interaction in safety-critical contexts (e.g. medicine), usability engineering and interaction design of mobile devices. He is a founding member and vice-chairman of the sub-group “Human-Computer Interaction in Safety-Critical Systems” within the special interest group “Human-Computer Interaction” of the German Informatics Society (GI).

Janosch Kappel

Janosch Kappel is a student of Medical Engineering Science at the University of Luebeck. Recently, he finished his Bachelor’s degree and is now proceeding with the Master programme Medical Engineering Science at the University of Luebeck.

Lutz Wünsch

Lutz Wünsch is professor of paediatric surgery at the University of Luebeck and chairman of the Department of Paediatric Surgery at the UKSH (University Medical Center Schleswig-Holstein). His areas of interest are paediatric urology and minimal invasive surgery and he has authored many articles on these topics. He is also interested in surgical education and new strategies to improve surgical skills.

Michael Herczeg

Prof. Dr. rer. nat. Michael Herczeg is professor of practical computer science and media informatics and director of the Institute for Multimedia and Interactive Systems (IMIS) of the University of Luebeck. His main areas of interest are human-computer interaction, software ergonomics, interaction design, multimedia and interactive systems, computer-aided teaching and learning as well as safety-critical human-machine systems. He is a co-founder and chair of the German ACM SIGCHI and Human-Computer-Interaction section of the German Informatics Society (GI). Prof. Herczeg is a member of ACM and GI and served as an organizer, reviewer, chair and keynote speaker for more than 100 conferences and workshops. He is an author and editor of more than 200 publications and is an editor for books and journals in interactive media. He works as a consultant for industry and government in the area of human-computer-interaction, human factors, software-ergonomics, usability engineering, eLearning and safety-critical human-machine systems.

References

[1] Albrecht, U.-V., Jan, U. von, Kuebler, J., Zoeller, C., Lacher, M., Muensterer, O. J., Ettinger, M., Klintschar, M. & Hagemeier, L. (2014). Google Glass for documentation of medical findings: evaluation in forensic medicine. Journal of Medical Internet Research, 16 (2), e53. doi: 10.2196/jmir.3225

[2] Aldaz, G., Shluzas, L. A., Pickham, D., Eris, O., Sadler, J., Joshi, S. & Leifer, L. (2015). Hands-free image capture, data tagging and transfer using Google Glass: a pilot study for improved wound care management. PLoS ONE, 10 (4), e0121179. doi: 10.1371/journal.pone.0121179

[3] Berndt, H., Mentler, T. & Herczeg, M. (2015). Optical Head-Mounted Displays in Mass Casualty Incidents. International Journal of Information Systems for Crisis Response and Management, 7 (3), 1–15. doi: 10.4018/IJISCRAM.2015070101

[4] Chimenti, P. C. & Mitten, D. J. (2015). Google Glass as an Alternative to Standard Fluoroscopic Visualization for Percutaneous Fixation of Hand Fractures: A Pilot Study. Plastic and Reconstructive Surgery, 136 (2), 328–330. doi: 10.1097/PRS.0000000000001453

[5] Cicero, M. X., Walsh, B., Solad, Y., Whitfill, T., Paesano, G., Kim, K., Baum, C. R. & Cone, D. C. (2015). Do you see what I see? Insights from using Google Glass for disaster telemedicine triage. Prehospital and Disaster Medicine, 30 (1), 4–8. doi: 10.1017/S1049023X1400140X

[6] Davis, C. R. & Rosenfield, L. K. (2015). Looking at plastic surgery through Google Glass: part 1. Systematic review of Google Glass evidence and the first plastic surgical procedures. Plastic and Reconstructive Surgery, 135 (3), 918–928. doi: 10.1097/PRS.0000000000001056

[7] Eickhoff, U. & Fenger, H. (2004). Chirurgie und Recht (Facharzt und Recht). Berlin, Heidelberg: Springer. doi: 10.1007/978-3-642-17050-8

[8] Euler, S. (2006). Grundkurs Spracherkennung. Wiesbaden: Vieweg & Sohn Verlag.

[9] Feng, S., Caire, R., Cortazar, B., Turan, M., Wong, A. & Ozcan, A. (2014). Immunochromatographic diagnostic test analysis using Google Glass. ACS Nano, 8 (3), 3069–3079. doi: 10.1021/nn500614k

[10] Glauser, W. (2013). Doctors among early adopters of Google Glass. CMAJ: Canadian Medical Association Journal, 185 (16), 1385. doi: 10.1503/cmaj.109-4607

[11] Knight, H. M., Gajendragadkar, P. R. & Bokhari, A. (2015). Wearable technology: using Google Glass as a teaching tool. BMJ Case Reports, 2015. doi: 10.1136/bcr-2014-208768

[12] Mentler, T. & Herczeg, M. (2016). Herausforderungen und Lösungsansätze für die Gebrauchstauglichkeit interaktiver Datenbrillen in der prä- und innerklinischen Versorgung. In: Arbeit in komplexen Systemen. Digital, vernetzt, human?! Bericht zum 62. Arbeitswissenschaftlichen Kongress vom 02.–04. März 2016. Hrsg.: Gesellschaft für Arbeitswissenschaft e. V. (GfA).

[13] Mentler, T., Wolters, C. & Herczeg, M. (2015). Use cases and usability challenges for head-mounted displays in healthcare. Current Directions in Biomedical Engineering, 1 (1), 534–537. doi: 10.1515/cdbme-2015-0127

[14] Moshtaghi, O., Kelley, K. S., Armstrong, W. B., Ghavami, Y., Gu, J. & Djalilian, H. R. (2015). Using Google Glass to solve communication and surgical education challenges in the operating room. The Laryngoscope, 125 (10), 2295–2297. doi: 10.1002/lary.25249

[15] Rasmussen, J. (1983). Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. IEEE Transactions on Systems, Man, and Cybernetics, 13 (3), 257–266. doi: 10.1109/TSMC.1983.6313160

[16] Severn Audit and Research Collaborative in Orthopaedics. (2016). Assessing the quality of operation notes: a review of 1092 operation notes in 9 UK hospitals. Patient Safety in Surgery, 10 (1), 203. doi: 10.1186/s13037-016-0093-x

[17] Udani, A. D., Harrison, T. K., Howard, S. K., Kim, T. E., Brock-Utne, J. G., Gaba, D. M. & Mariano, E. R. (2012). Preliminary study of ergonomic behavior during simulated ultrasound-guided regional anesthesia using a head-mounted display. Journal of Ultrasound in Medicine, 31 (8), 1277–1280. doi: 10.7863/jum.2012.31.8.1277

[18] Vorraber, W., Voessner, S., Stark, G., Neubacher, D., DeMello, S. & Bair, A. (2014). Medical applications of near-eye display devices: an exploratory study. International Journal of Surgery, 12 (12), 1266–1272. doi: 10.1016/j.ijsu.2014.09.014

[19] Widmer, A. & Müller, H. (2014). Using Google Glass to enhance pre-hospital care. Swiss Medical Informatics, 30, 1–4. doi: 10.4414/smi.30.00316

[20] Wurnig, P. N., Hollaus, P. H., Wurnig, C. H., Wolf, R. K., Ohtsuka, T. & Pridun, N. S. (2003). A new method for digital video documentation in surgical procedures and minimally invasive surgery. Surgical Endoscopy, 17 (2), 232–235. doi: 10.1007/s00464-002-9022-4

Published Online: 2016-08-16
Published in Print: 2016-08-01

© 2016 Walter de Gruyter GmbH, Berlin/Boston
