Publicly Available Published by Oldenbourg Wissenschaftsverlag July 19, 2022

Evaluation of Priority-Dependent Notifications for Smart Glasses Based on Peripheral Visual Cues

  • Anja K. Faulhaber

    Dr.-Ing. Anja K. Faulhaber, *1991, received her master’s degree in Cognitive Science from Osnabrueck University in 2017 and her PhD in 2021 from TU Braunschweig, where she conducted research on human factors in aviation at the Institute of Flight Guidance. She is currently a research associate at the Human-Machine Systems Engineering Group, University of Kassel. Her research interests include human factors in augmented and virtual reality with a focus on cognitive aspects.

    , Moritz Hoppe

    Moritz Hoppe, *1998, studied Industrial Engineering (B. Sc.) at the University of Kassel. During his studies, he focused on production engineering and ergonomics. Since 2021, he has been enrolled in a master’s program with the same focus areas. In 2021, he also worked as a student assistant for the Human-Machine Systems Engineering Group at the University of Kassel. He explored the use of new technologies, such as augmented reality, with a user-centered approach.

    and Ludger Schmidt

    Univ.-Prof. Dr.-Ing. Ludger Schmidt, *1969, studied Electrical Engineering at RWTH Aachen University. There he also worked as a research assistant, research team leader, and chief engineer at the Institute of Industrial Engineering and Ergonomics. Afterwards he was the head of the department “Ergonomics and Human-Machine Systems” at today’s Fraunhofer Institute for Communication, Information Processing and Ergonomics in Wachtberg near Bonn. In 2008, he became Professor of Human-Machine Systems Engineering in the Department of Mechanical Engineering at the University of Kassel. He is director of the Institute of Industrial Sciences and Process Management and director of the Research Center for Information System Design at the University of Kassel.

From the journal i-com

Abstract

Smart glasses are increasingly commercialized and may replace or at least complement smartphones someday. Common smartphone features, such as notifications, should then also be available for smart glasses. However, notifications are disruptive: even unimportant notifications frequently interrupt users performing a primary task, often leading to distraction and performance degradation. Thus, we propose a concept for displaying notifications in the peripheral field of view of smart glasses with different visualizations depending on the priority of the notification. We developed three icon-based notifications representing increasing priority: a transparent green icon continuously becoming more opaque (low priority), a yellow icon moving up and down (medium priority), and a red and yellow flashing icon (high priority). To evaluate the concept, we conducted a study with 24 participants who performed a primary task on the Nreal Light smart glasses while reacting to notifications. The results showed that reaction times for the low-priority notification were significantly higher and that it was ranked as the least distracting. The medium- and high-priority notifications did not show a clear difference in noticeability, distraction, or workload. We discuss implications of our results for the perception and visualization of notifications in the peripheral field of view of smart glasses and, more generally, for augmented reality applications.

1 Introduction

Smart glasses such as the Microsoft HoloLens are increasingly commercialized. Recently, more and more manufacturers have emerged, and novel models of smart glasses with advanced technological features are entering the market. Examples of new and soon-to-be-released models are Nreal Light and Air, Magic Leap 2, Snap Spectacles 3, and Tooz smart glasses, to name just a few. Smart glasses can be defined as wearable computer devices displaying virtual information superimposed on the real world. Moreover, they may register the user’s environment to allow for augmented reality (AR) experiences [40]. This way, they create connections between the real and the virtual world, which can be expected to gain increasing relevance in the future. In this context, researchers have even suggested that smart glasses could replace or at least complement smartphones someday [2], [50]. Common features of mobile devices should then also be available for smart glasses.

One major feature of mobile devices is notifications. They continuously supply users with more or less relevant information. A study showed that users receive around 100 notifications per day, interrupting and distracting them during primary tasks [31]. Some of these notifications contain important or urgent information while others are less urgent and can even be ignored [42]. Notifications can lead to negative effects due to interruptions and information overload. On the other hand, users want to stay informed, and not receiving any notifications has also been associated with negative effects such as anxiety, loneliness, and fear of missing out [36]. Thus, it would be desirable to achieve a balance between the disruptive character of notifications and their informational benefit.

For this purpose, it may even be beneficial to display notifications on smart glasses. Users do not need to take out their smartphones but can quickly glance at the information on the smart glasses. This leaves the hands free to perform other tasks and may reduce the disruptive character of notifications. Users can continue to perform their primary task while perceiving the notification with less abrupt attentional shifts [41]. Moreover, notifications can be displayed in the periphery at the edges of the field of view of smart glasses keeping the central field of view free of occluding virtual information [25], [26]. This has been suggested as particularly relevant to avoid hazardous situations in mobile contexts, e. g., while riding a bike [16]. Peripherally displayed notifications can even be perceived without requiring direct visual attention and users can keep their visual focus on the street or on other relevant areas in their environment. Despite all these advantages, research regarding notifications for smart glasses is still limited and further studies are required [24].

With the present study, we aim to contribute to this research area and fill a gap in the literature by proposing a novel notification concept for smart glasses aiming at a balance between the disruptive character of notifications and their informational benefit. We, therefore, developed a priority-dependent notification concept including three different icon-based visualizations to display notifications of low, medium, and high priority. We combined findings from previous research regarding notifications, interruptions, and peripheral vision in the context of smart glasses to create notifications of varying intensity. To evaluate the concept, we conducted an empirical study with a dual-task approach and analyzed the noticeability of the notifications as well as the distraction and workload they evoke. All further details regarding the notification concept and the empirical study will be explained in the following sections.

2 Related Work

To provide further background information, we first describe relevant related work in this section. We offer a more thorough understanding of notifications and their purposes and we describe research investigating the effects of notifications as interruptions. Moreover, we give a short overview of prior research regarding notifications and peripheral cues displayed on smart glasses.

2.1 Notifications and Interruptions

Iqbal and Bailey [15] describe a notification as “a visual cue, auditory signal, or haptic alert generated by an application or service that relays information to a user outside her current focus of attention”. The main objective of notifications is, hence, to provide the user with specific information. This may be, for example, a new message, an appointment reminder from a calendar app, or the latest breaking news. Frequently, these notifications are referred to as push notifications meaning that the notifications appear instantly triggered by a certain event [47]. Thus, the user receives the information automatically and does not need to monitor an app constantly to retrieve new or changed information.

According to McCrickard et al. [30], notifications serve three main purposes based on user goals: interruption, reaction, and comprehension. Regarding interruption, notifications are frequently used with the objective of interrupting the user performing a primary task. Such notifications attract attention to convey important information. Interruption commonly has a negative connotation but may be vital, e. g., in safety-critical domains with warning or alarm notifications triggered to evoke an urgent action. This leads to the reaction purpose, given that certain notifications require immediate action by the user. Other notifications are less urgent, and their main purpose is comprehension. This means that they mainly serve to remind users of something or to support understanding. Comprehension is a prerequisite for all notifications, but McCrickard et al. suggest that balancing these three purposes should be the objective in the design process of notifications.

While some notifications are specifically designed to interrupt and provide useful or even essential interruptions, others are less urgent and lead to undesirable distractions. As a consequence, notifications may entail negative effects. Studies have shown that users’ performance in a primary task was negatively affected by notifications: users performed the primary task more slowly due to the interruptions [3], [7] and their performance deteriorated significantly [20], [43]. Moreover, a study by Adamczyk and Bailey [1] indicated that interruptions and distractions due to notifications increased the users’ perceived frustration. Further studies showed that notifications can lead to inattention and hyperactivity, which are related to reduced productivity [23]. Kim et al. [20] additionally detected physiological effects of notifications via changes in brain waves and interpreted those in terms of reduced concentration and cognitive ability.

However, the disruptiveness of a notification is not always at the same level. It depends upon several factors such as individual differences between users [37], notification timing [1], [32], and characteristics of the primary task, e. g., type and complexity [31]. Moreover, the positioning and visual features of the notification affect the extent to which it interferes with a primary task [29]. This suggests that the way notifications are presented matters and influences how users are affected by the interruptions.

2.2 Notifications on Smart Glasses

While there exists a large body of research regarding desktop, smartphone, and even smartwatch notifications, as referred to in the previous section, the literature focusing on notifications for smart glasses is rather scarce [24]. An early study by Ishiguro and Rekimoto [16] examined the display of notification-like information in the peripheral field of view of smart glasses. The information was first displayed as a simple icon, switching to detailed information once the user gazed at it. Via this peripheral display, they aimed to avoid distractions and disturbances of the central field of view, which is particularly relevant for the mobile use of smart glasses.

Several other studies focused on aspects related to social interactions and acceptance of receiving notifications on smart glasses in public. Lucero and Vetek [27] investigated the use of smart glasses for notification delivery while walking in public. They mainly considered pragmatic aspects and social acceptability. Similarly, Rzayev et al. [41] explored how notifications should be displayed on smart glasses during social interactions such as face-to-face communication. More precisely, they compared several positions and alignments of notifications. The positioning of information on smart glasses was also investigated by Orlosky et al. [35] who focused on the display of multiple information elements in text form. This was again studied in the context of social interactions by analyzing the extent to which information elements interfered with conversations. While the previously described studies examined only the visual display of notifications, Lazaro et al. [24] reported results regarding the multimodal presentation of notifications. They compared visual, auditory, and multimodal notifications and found that multimodal notifications – consisting of both visual and auditory stimuli – were preferred.

In summary, most studies in the context of notifications on smart glasses so far have focused on positioning aspects and social acceptance. Empirical studies investigating the noticeability of notifications on smart glasses and distraction from primary tasks are still rather limited. Additionally, the difference in importance or urgency of a notification has not been considered in the context of the design of notifications on smart glasses to our knowledge. The present study, therefore, aims to fill this gap.

2.3 Peripheral Cues on Smart Glasses

Even though the literature on notifications for smart glasses is limited, there exists a wide body of research regarding related fields such as the display of peripheral cues. In this context, the interaction paradigm Glanceable AR emerged more recently [25], [26]. According to this paradigm, information in the periphery of vision can be accessed and monitored quickly while staying unobtrusive. The peripheral visual field is often divided into near- (8° to 30°), mid- (30° to 60°), and far-peripheral vision (60° to the boundaries) [12]. However, the field of view of smart glasses is still limited so that we will be referring here mainly to the near-peripheral vision.

Studies on peripheral vision in general have already shown that certain strengths and limitations need to be taken into account when designing peripheral cues. Aspects that need to be considered refer mainly to the use of size, eccentricity, color, motion, and animation. The combination of size and eccentricity affects how noticeable a peripheral cue is, with higher eccentricities requiring stimuli of larger sizes [45]. Colors are more difficult to distinguish with peripheral vision, but studies showed that certain colors are more peripheral-vision-friendly than others; blue and yellow, for example, are more easily noticeable via peripheral vision than red and green [33]. Motion and animation are also frequently mentioned in the literature on peripheral vision. Both are mostly described as highly noticeable in the periphery, but animations were shown to be more distracting than moving cues [6], [48].

Peripheral cues on smart glasses have mostly been investigated in the context of navigational tasks [5], [9], [10], [11], [38]. One of these studies showed that using peripheral cues on smart glasses is more efficient and less demanding compared to the same information presented on a smartphone screen [5]. Another study investigated different light stimuli and found that particularly moving light stimuli were perceived quickly in the periphery [11]. Similarly, Kruijff et al. [22] found that peripheral motion cues are particularly noticeable on optical-see-through smart glasses. They additionally investigated different colors and their results showed that blue is the most noticeable color. In conclusion, all these findings need to be considered when designing notifications to be displayed in the peripheral field of view of smart glasses.

3 Notification Concept and Hypotheses

The objective of the present study was to develop and evaluate a concept for notifications displayed on smart glasses. The focus was exclusively on the visual display of the notifications; other modalities were not taken into account here. In this section, we describe the notification concept in detail as well as the hypotheses used to evaluate the concept in the empirical study.

Figure 1: Notification icons used for the three categories low priority (a), medium priority (b), and high priority (c).

Following Ishiguro and Rekimoto [16] and the Glanceable AR paradigm [25], [26], the concept entailed displaying notifications as simple icons in the peripheral field of view of smart glasses. This was chosen to avoid information occluding the central field of view. Consequently, notifications can be perceived by the user via peripheral vision without having to reallocate the gaze [28]. This is useful for several potential primary tasks performed while using smart glasses, such as walking, driving, or riding a bike. We, therefore, considered limitations and capabilities of peripheral vision in the design of the notifications. Moreover, as mentioned previously, notifications differ in priority given that some require immediate action by the user, while others may be less urgent and should not distract the user [42]. Thus, several studies suggested and investigated possibilities to present notifications differently based on their content and priority [31], [44]. We wanted to take this into account by designing different notification cues aiming to evoke different levels of interruption and distraction from a primary task.

The notification concept was developed in the context of a project aiming to improve the information available for public transport passengers. Thus, the notifications used represent exemplary content from a public transport app. We adopted a content-driven approach to specify the different priority levels. Potential notifications from the public transport app were categorized according to their urgency which resulted in three priority levels – low, medium, and high priority. Low-priority notifications only aim to inform the user and do not require a reaction. Medium-priority notifications aim to inform but may also require the user to react based on the information received even though this reaction is not (time-)critical. High-priority notifications aim to inform and require immediate actions. Accordingly, we developed three icon-based visualizations for notifications representing these three categories of increasing priority. The icons were displayed in different ways to evoke varying levels of noticeability, distraction, and workload as explained in the following.

The icon for the low-priority notification is shown in Figure 1a and aims to inform passengers about medium occupancy levels. This notification serves only to inform and does not require a reaction. Due to the low priority, the notification should not distract the user. We, therefore, used a green circle for the icon, as peripheral vision is less sensitive to green [33]. Moreover, studies suggested that the sudden onset of a stimulus attracts attention involuntarily [18], [49]. To provide a less sudden presentation, the icon first appeared transparently and became gradually more opaque, and thus visible, until it was completely opaque after five seconds. This interval was chosen based on exploratory testing. With the green color and the change in transparency, we aimed to create a stimulus of low intensity.
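The gradual onset can be sketched as a time-based opacity ramp. A minimal sketch in Python, assuming linear easing over the five-second interval (the easing curve itself is not specified in the text):

```python
def low_priority_alpha(t_seconds, fade_duration=5.0):
    """Opacity of the low-priority icon t seconds after onset.

    Linear ramp from fully transparent (0.0) to fully opaque (1.0);
    the 5 s duration matches the concept, the linear easing is assumed.
    """
    return min(max(t_seconds / fade_duration, 0.0), 1.0)
```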

For the medium-priority notification, the icon informs about a high occupancy level which may prompt the user to find a less busy alternative (Figure 1b). For this purpose, we changed the color to yellow and blue which are colors easily perceived in the periphery [33]. Yellow is also a color typically used for low-level warnings [4]. Moreover, the icon was moving up and down continuously at a speed of 0.4 m/s. Such vertical orientation and motion have been suggested to be perceived quickly in the periphery [6], [34] while being less distracting than animations [48]. This visualization was, hence, chosen to provide a stimulus of medium intensity by combining vertical motion with peripheral-vision-friendly colors.
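The continuous up-and-down motion at 0.4 m/s can be modeled as a triangle wave whose slope magnitude equals that speed. A sketch, with the motion amplitude as a hypothetical parameter, since the paper does not report the motion range:

```python
def medium_priority_offset(t_seconds, speed=0.4, amplitude=0.05):
    """Vertical offset (in meters) of the medium-priority icon at time t.

    Triangle wave moving between -amplitude and +amplitude with a
    constant speed of 0.4 m/s as stated in the concept; the amplitude
    value is a hypothetical parameter.
    """
    period = 4 * amplitude / speed          # time for one full up-down cycle
    phase = (t_seconds % period) / period   # normalized position in cycle
    if phase < 0.5:                         # rising half of the cycle
        return amplitude * (4 * phase - 1)
    return amplitude * (3 - 4 * phase)      # falling half of the cycle
```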

The high-priority notification represents a more urgent matter such as a cancellation requiring the user to find an alternative connection. The notification was, therefore, designed to attract the user’s attention. The icon for this notification category was a red triangle including a white exclamation mark. The icon disappeared after 450 ms and appeared again 150 ms later with the colors changed to a blue exclamation mark on a yellow triangle (Figure 1c). This process was repeated continuously leading to a flashing animation with color changes to attract attention [29], [48]. The colors were chosen because red is a signal color conveying the meaning of high-priority warnings [4]. Blue and yellow were additionally chosen again because they are peripheral-vision-friendly. Moreover, the shape was different and the size was slightly larger than in the previous categories to make the notification even more noticeable [45]. In consequence, this stimulus was intended to be of high intensity by means of shape, size, animation, and color choice.
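The flashing pattern can be expressed as a small state machine over a 1200 ms cycle (450 ms red variant, 150 ms blank, 450 ms yellow variant, 150 ms blank), directly following the timings above:

```python
def high_priority_state(t_ms):
    """Which variant of the high-priority icon is visible t ms after onset.

    Returns 'red' (white exclamation mark on red triangle), 'yellow'
    (blue exclamation mark on yellow triangle), or 'blank', using the
    450 ms display and 150 ms gap intervals from the concept.
    """
    cycle = t_ms % 1200      # full cycle: 450 + 150 + 450 + 150 ms
    if cycle < 450:
        return "red"
    if cycle < 600:
        return "blank"
    if cycle < 1050:
        return "yellow"
    return "blank"
```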

To evaluate the outlined notification concept, we conducted an empirical study in which participants performed a primary task and were asked to react to the notifications via button presses at the same time. With this dual-task perspective, we investigated whether we achieved a balance of noticeability, distraction, and workload. On the one hand, we expected that the low-priority notification would lead to the slowest noticeability while causing the least distraction and workload for the user. For the high-priority notification, on the other hand, we expected that it would be perceived the fastest while causing the highest distraction and workload. The medium-priority notification was expected to reach medium levels regarding noticeability, distraction, and workload as compared to the other two notification cues. The objective was only to evaluate the concept as a whole and not to investigate how specific aspects such as color or animation of the icons affect the results. For the evaluation of the concept, we used the following non-directional hypotheses:

  1. The three notification cues differ with respect to their noticeability.

  2. The three notification cues differ in the distraction they evoke.

  3. The three notification cues differ in the subjectively perceived workload.

4 Method

4.1 Participants

Data from 24 participants (9 female, 15 male) aged between 19 and 55 years (M = 26.75 years, SD = 7.08 years) were analyzed. All participants had normal or corrected-to-normal vision. Three of them wore glasses during the experiment and two used contact lenses. Four of the participants reported prior experience with AR, e. g., with the Microsoft HoloLens or with smartphone-based AR. More than half of the participants (n = 16) described themselves as technophile. Moreover, we administered an additional questionnaire assessing technology affinity [19]. The results showed a mean score of 3.50 (SD = 0.55) on a scale from one (low) to five (high). Thus, the sample can be described as rather technophile. Participants did not receive payment and participated voluntarily.

4.2 Study Design

To evaluate our notification concept, we designed a study in which participants performed a primary task in the central field of view of the smart glasses and reacted to notifications displayed in the peripheral field of view. They had to react to the notifications as fast as possible by pressing a button on a controller. All notification cues were presented in a display-fixed manner at an eccentricity of approximately 20°. With this constant peripheral display, the central field of view remained free for the primary task.
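Under a simple symmetric pinhole-projection assumption, the 20° eccentricity can be mapped to a normalized horizontal position on the 52° display; a sketch, not the authors' implementation:

```python
import math

def eccentricity_to_screen_x(eccentricity_deg, fov_deg=52.0):
    """Normalized horizontal screen position (0 = center, 1 = display edge)
    of a display-fixed cue at a given visual eccentricity.

    Assumes a symmetric pinhole projection over the 52-degree field of
    view reported for the Nreal Light; a sketch under that assumption.
    """
    return math.tan(math.radians(eccentricity_deg)) / math.tan(math.radians(fov_deg / 2))
```

At 20° eccentricity the cue thus sits roughly three quarters of the way from the display center to its edge.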

We used the Nreal Light Developer Kit (Figure 2) as smart glasses. These glasses are equipped with an optical-see-through display with a resolution of 1080p and a field of view of 52°. They are connected to a computing unit and a controller serves as input device. The notifications as well as the primary task were implemented for the Nreal Light in Unity and all further details regarding the specific implementation will be described in the following sections.

Figure 2: Nreal Light Developer Kit consisting of glasses, computing unit, and controller.

The study used a within-subject design so that all participants were exposed to all three notification cues. We used three experiment blocks with only one of the three notification cues displayed per block. The order of the three blocks was counterbalanced across participants to avoid order or learning effects. The respective notification was displayed ten times throughout the block, five times each on the left and the right side in randomized order. The interval between the notifications was also randomized, ranging from 12 to 24 s. The notifications disappeared once the participant pressed the app button on the Nreal Light controller (Figure 2). If the button was not pressed, the notifications disappeared automatically after 7 s. All display times were tested and chosen in an exploratory manner.
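The block structure described above (ten notifications, five per side in randomized order, randomized 12–24 s gaps) can be sketched as a schedule generator; the uniform distribution of the inter-notification interval is an assumption:

```python
import random

def notification_schedule(n=10, min_gap_s=12.0, max_gap_s=24.0, seed=None):
    """Generate onset times and sides for one experiment block.

    Matches the design in the text: n notifications per block, half on
    the left and half on the right in randomized order, with a random
    gap of 12-24 s between consecutive onsets (uniform distribution
    assumed). Returns a list of (onset_time_s, side) tuples.
    """
    rng = random.Random(seed)
    sides = ["left"] * (n // 2) + ["right"] * (n - n // 2)
    rng.shuffle(sides)
    times, t = [], 0.0
    for side in sides:
        t += rng.uniform(min_gap_s, max_gap_s)
        times.append((t, side))
    return times
```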

4.2.1 Primary Task

For the primary task, we chose an artificial task on the smart glasses to be able to keep the eccentricity of the peripherally displayed notifications constant. Moreover, this allowed us to synchronize both tasks, as will be explained in more detail in the following. We chose the n-back task [21] in the two-back version as the primary task in our study. In this task, various stimuli are presented successively and the participant has to indicate for each stimulus whether it matches the stimulus presented two items earlier. This task requires visual attention, concentration, and working memory capacity. Moreover, it requires participants to keep their visual attention continuously at a central location, thus ensuring a constant eccentricity of the displayed notifications.

We implemented the two-back task for the smart glasses based on Jaeggi et al. [17] with the letters C, G, H, K, P, Q, T, and W as stimuli. They were displayed as white letters on a blue background (Figure 3), as generally suggested by Debernardis et al. [8] for text presentation in AR. We used the sans-serif font DejaVu Sans Mono, which is monospaced, allowing each letter to appear at the exact same location and size. Each letter was presented for 500 ms followed by an interval of 2500 ms before the next letter appeared for 500 ms [17]. The participants had to indicate whether a letter matched the two-back letter by pressing the track pad of the Nreal Light controller (Figure 2). Afterwards, they received feedback regarding the correctness of their choice via red or green bars above and below the blue billboard (Figure 3).
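The two-back target rule itself reduces to a comparison with the stimulus two positions earlier:

```python
def two_back_targets(sequence):
    """Indices of target stimuli in a two-back stream.

    A stimulus is a target when it matches the stimulus presented two
    positions earlier; the first two stimuli can never be targets.
    """
    return [i for i in range(2, len(sequence)) if sequence[i] == sequence[i - 2]]
```

For example, in the stream C-K-G-K only the second K (index 3) is a target, because it matches the K two positions before it.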

Figure 3: Display of the two-back task including the transparency-changing (top row), moving (middle row), and flashing notification (bottom row) as well as feedback after a false (top right) or correct (bottom right) button press.

Each block consisted of the two-back task with a total of 60 letters presented to the participants. Of these 60 letters, 18 were matches requiring the participant to press the button; thus, the ratio of targets to non-targets was 30 % to 70 %. We developed three versions of the two-back task, one for each block. Comparability was ensured by using an equal number of letters and the same ratio of targets and non-targets. We additionally controlled when the notifications appeared relative to targets. More precisely, three notifications were displayed while a target was presented in the two-back task. Three notifications were presented when a target was first indicated, meaning when the letter appeared that participants had to remember. The remaining four notifications appeared while non-targets were presented.
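A sequence with exactly this target rate can be generated by first sampling target positions and then forcing or avoiding two-back matches; the placement strategy here is an assumption, not the authors' generator:

```python
import random

def make_two_back_sequence(length=60, n_targets=18,
                           letters="CGHKPQTW", seed=None):
    """Build a letter sequence with a fixed number of two-back targets.

    Mirrors the block design in the text (60 letters, 18 targets, the
    eight consonants used as stimuli). Targets are forced by copying the
    letter from two positions back; non-targets explicitly avoid it.
    """
    rng = random.Random(seed)
    target_positions = set(rng.sample(range(2, length), n_targets))
    seq = [rng.choice(letters), rng.choice(letters)]
    for i in range(2, length):
        if i in target_positions:
            seq.append(seq[i - 2])          # force a two-back match
        else:
            seq.append(rng.choice([c for c in letters if c != seq[i - 2]]))
    return "".join(seq)
```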

4.2.2 Dependent Variables

To assess the hypotheses, we collected objective data and complemented them with subjective data from questionnaires. To analyze the noticeability of the notifications, we mainly collected reaction times. Additionally, we prepared a short concept-driven questionnaire, which participants completed after each block. This questionnaire was composed of items such as I noticed the notifications quickly or I had difficulties perceiving the notifications. Participants were asked to rate their experience regarding these items on a Likert scale from one (completely disagree) to five (completely agree). Participants were also asked to rank the notifications with respect to noticeability at the end of the experiment.

To assess the distraction evoked by the notifications, we analyzed the participants’ performance in the two-back task. We, therefore, analyzed hits, false alarms, and missed targets and calculated the general performance as the percentage of correct responses. We additionally prepared a questionnaire regarding distraction, which participants completed after each block. It contained items such as It was easy for me to focus on the n-back task despite the notifications or The notifications distracted me a lot, rated on a Likert scale from one (completely disagree) to five (completely agree). Participants also ranked the notification cues with respect to distraction at the end of the experiment.
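The performance measures named above can be computed from the sets of target and pressed stimulus indices:

```python
def two_back_performance(target_indices, pressed_indices, n_stimuli=60):
    """Score one block of the two-back task.

    Counts hits (press on a target), false alarms (press on a
    non-target), and misses, and returns the overall percentage of
    correct responses over all stimuli, analogous to the measures in
    the text.
    """
    targets = set(target_indices)
    pressed = set(pressed_indices)
    hits = len(targets & pressed)
    false_alarms = len(pressed - targets)
    misses = len(targets - pressed)
    correct = n_stimuli - false_alarms - misses
    return {"hits": hits, "false_alarms": false_alarms,
            "misses": misses, "percent_correct": 100.0 * correct / n_stimuli}
```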

Finally, we assessed subjectively perceived workload via the NASA-Task Load Index (TLX) [14]. It is a questionnaire consisting of the six subscales mental demand, physical demand, temporal demand, performance, effort, and frustration. Participants rated these on a scale from 0 (low) to 100 (high) after each block.
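If the unweighted (raw) TLX variant is assumed, since the paper does not state whether subscale weighting was applied, the overall workload score is simply the mean of the six subscale ratings:

```python
def raw_tlx(ratings):
    """Overall workload as the unweighted mean of the six NASA-TLX
    subscales (mental, physical, and temporal demand, performance,
    effort, frustration), each rated 0-100. The unweighted (raw) TLX
    variant is assumed here.
    """
    assert len(ratings) == 6, "expected six subscale ratings"
    return sum(ratings) / 6.0
```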

4.2.3 Procedure

The experiment sessions were conducted in a lab where lighting conditions were kept constant. Each session took approximately 50 minutes and started with oral and written instructions for the participants. Moreover, they completed a demographic questionnaire and the technology affinity questionnaire [19]. Subsequently, participants familiarized themselves with the Nreal Light and the two-back task by performing a short training trial. Then, the three experiment blocks began with participants reacting to one of the three notification cues while performing the two-back task as described previously. Each experiment block lasted about three minutes and was followed by the questionnaires for the assessment of noticeability, distraction, and workload. After the participants had completed the last block, they filled in the concluding questionnaire including the ranking of the three notification cues in terms of noticeability and distraction.

5 Results

We analyzed the collected data to evaluate the notification concept in terms of the three hypotheses regarding noticeability, distraction, and workload. We conducted repeated-measures ANOVAs and post-hoc tests with Bonferroni correction. If assumptions for parametric tests were not met, we conducted a Friedman test instead. All analyses were conducted with R 4.1.1 and will be presented in the following.

Figure 4: Reaction times for the three notification cues.

5.1 Noticeability

With respect to the noticeability of the notifications, we had collected reaction time data as well as subjective ratings and rankings of the three notification cues. The results regarding these measures will be reported next.

5.1.1 Reaction Times

We first analyzed the reaction times, which were considerably higher for low-priority notifications than for medium- and high-priority notifications (Figure 4). Reactions to low-priority notifications took on average 2.44 s (SD=0.65s), while the means for medium- (M=1.18s, SD=0.28s) and high-priority notifications (M=1.13s, SD=0.28s) were similar and markedly lower. We conducted a repeated-measures ANOVA with Greenhouse-Geisser correction because Mauchly’s test for sphericity was significant, indicating a violation of the sphericity assumption. Reaction times were significantly affected by the notification cues, F(2,46)=88.23, p<0.0001, ηG2=0.67. Post-hoc tests with Bonferroni correction revealed that the pairwise comparisons between notifications of low and medium priority (p<0.0001, d=2.16) as well as low and high priority (p<0.0001, d=1.97) were statistically significant with large effects. There was no significant difference in reaction times between medium- and high-priority notifications (p=0.43).
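As a hedged illustration of the reported effect sizes: the paper does not state which variant of Cohen's d was used, and repeated-measures designs often compute d from the standard deviation of the paired differences, which requires the raw data. The pooled-SD convention below is therefore only one plausible reading, and it yields a value different from the reported d = 2.16.

```python
# Hypothetical sketch: Cohen's d from condition means and SDs using the
# pooled-SD convention. The reported d = 2.16 was presumably based on the
# paired differences of the raw data, which are not available here.
import math

def cohens_d(m1, sd1, m2, sd2):
    """Cohen's d with the pooled standard deviation of the two conditions."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (m1 - m2) / pooled_sd

# Descriptives from Section 5.1.1 (reaction times in seconds).
d = cohens_d(2.44, 0.65, 1.18, 0.28)
print(f"d = {d:.2f}")
```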

Figure 5: Ranking results of the three notification cues according to the best noticeability. Rank one means that the participants rated the noticeability of the respective notification cue best.

5.1.2 Subjective Ratings

For the subjective ratings regarding the noticeability of notifications, we analyzed the respective questionnaire items. The results indicated the lowest ratings for the low-priority notification (M=3.52, SD=1.06) followed by the medium- (M=4.54, SD=0.56) and high-priority notifications (M=4.55, SD=0.48). The results of the Friedman test revealed that the subjective ratings of noticeability were significantly affected by the different notifications, χ2(2)=13.83, p=0.001. We conducted post-hoc tests with Bonferroni correction and again the difference was significant for the comparison of low- and medium-priority (p=0.001, r=0.70) as well as low- and high-priority notifications (p=0.002, r=0.69) with large effects. There was no significant difference in noticeability ratings for medium- and high-priority notifications (p=1).

We additionally analyzed the participants’ rankings of the three notification cues by noticeability (Figure 5). The results showed a clear trend, with median rankings of three for the low-, two for the medium-, and one for the high-priority notification. In particular, the low-priority notification was ranked third by 83.33 % of participants. The results were statistically significant (χ2(2)=22.58, p<0.0001), with the post-hoc tests showing significant differences and large effects for the comparisons between low- and medium-priority (p<0.0001, r=0.86) as well as low- and high-priority notifications (p=0.002, r=0.71). The comparison between medium- and high-priority notifications did not reach significance (p=1).

5.2 Distraction

To analyze distraction, we had collected data regarding the performance in the two-back task. Moreover, the participants had also rated and ranked the three notification cues with respect to the evoked distraction. The results are reported in the following subsections.

5.2.1 Two-Back Task Performance

To analyze the performance in the two-back task, we first examined the numbers of hits, false alarms, and missed targets. The results are presented in Table 1. The mean number of hits trended higher for low- followed by medium- and high-priority notifications. Participants also tended to miss fewer targets in the condition with the low-priority notification. The number of false alarms – a button press for a non-target – was lowest for the high-priority notification, followed by the low- and medium-priority notifications.

Table 1: Results for the performance in the two-back task.

                   Hits            False Alarms    Missed
                   M      SD       M      SD       M      SD
  Low Priority     15.21  2.08     3.29   2.14     2.79   2.08
  Medium Priority  14.42  3.03     3.71   2.03     3.58   3.03
  High Priority    14.00  2.89     2.42   1.93     4.00   2.89

Based on these results, we calculated the participants’ general performance in the two-back task as the percentage of correct responses. The results are shown in Figure 6 and indicate that there were no clear differences in performance between the three notification cues. The results were at similar levels with means of 90.69 % (SD=4.86%) for the low-priority, 87.85 % (SD=6.03%) for the medium-priority, and 89.31 % (SD=6.06%) for the high-priority notification. The results of the Friedman test showed that there was no statistically significant difference in performance for the three notification cues, χ2(2)=1.65, p=0.44.
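The performance measure can be sketched as follows; note that the number of stimuli per block is an assumption for illustration, as it is not reported in this section.

```python
# Hypothetical sketch of the performance measure: percentage of correct
# responses in the two-back task. A response counts as correct if it is
# neither a false alarm nor a missed target.
def percent_correct(n_stimuli: int, false_alarms: int, missed: int) -> float:
    """Percentage of correct responses out of all presented stimuli."""
    return (n_stimuli - false_alarms - missed) / n_stimuli * 100

# Example with assumed counts (n_stimuli = 65 is illustrative only).
print(f"{percent_correct(65, 3, 3):.2f} %")
```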

Figure 6: Two-back task performance for the three notification cues.

Figure 7: Ranking results of the three notification cues according to the distraction they evoked. Rank one means that the participants rated the respective notification as least distracting.

5.2.2 Subjective Ratings

Concerning the subjective ratings of distraction evoked by the notifications, we first analyzed the questionnaire data. The low-priority notification was rated as least distracting with a mean of 3.12 (SD=0.73). The medium-priority notification was rated with an average of 3.43 (SD=0.96) and the high-priority notification with 3.52 (SD=0.93). We conducted a Friedman test and the results showed that the difference in distraction ratings between the three notification cues was not significant, χ2(2)=2.63, p=0.27.

We then analyzed the rankings by the participants with respect to the distraction evoked by the notifications. The results are shown in Figure 7 and indicate an inverse trend as compared to the noticeability rankings. The median ranks were one for the low-, two for the medium-, and three for the high-priority notifications. A Friedman test showed that the ranking results were significant, χ2(2)=12.25, p=0.002. Post-hoc tests with Bonferroni correction showed a significant and large effect for the comparison of the low- and high-priority notifications (p=0.003, r=0.67). The results were not significant for the comparisons between low- and medium-priority (p=0.08) as well as between medium- and high-priority notifications (p=0.62).

5.3 Workload

Finally, we analyzed the workload ratings from the TLX questionnaire. We used the raw TLX (RTLX) scores without the weighting procedure [13]. The results for all subscales are displayed in Table 2 and show that the low-priority notification yielded the lowest RTLX scores in all six subscales.

Table 2: RTLX scores for each TLX subscale for the three notification cues.

                     Low Priority    Medium Priority   High Priority
                     M      SD       M      SD         M      SD
  Mental demand      65.83  18.75    70.63  18.26      69.79  19.81
  Physical demand    15.42  11.79    16.88  12.05      17.92  15.87
  Temporal demand    48.33  19.60    50.42  22.45      51.04  26.50
  Performance        43.33  18.75    50.00  18.89      48.96  18.88
  Effort             59.79  20.67    61.88  24.08      65.21  20.51
  Frustration        38.75  20.01    46.04  23.31      47.29  23.86

We then analyzed the composite RTLX scores as shown in Figure 8. The scores trended highest for the high-priority notification with a mean of 50.03 (SD=12.00). It was followed by the medium- (M=49.31, SD=13.12) and the low-priority notifications (M=45.24, SD=10.50). To analyze the statistical significance of these results, we conducted a Friedman test. The test results showed that the subjectively perceived workload was not significantly affected by the notification cues, χ2(2)=0.84, p=0.66.
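Because the RTLX drops the TLX weighting procedure, the composite is simply the mean of the six subscale ratings; averaging the low-priority subscale means from Table 2 reproduces the reported composite of 45.24. A minimal sketch:

```python
# Sketch of the raw (unweighted) TLX composite: the mean of the six
# subscale ratings. Subscale means for the low-priority notification
# are taken from Table 2.
subscales_low = {
    "mental demand": 65.83,
    "physical demand": 15.42,
    "temporal demand": 48.33,
    "performance": 43.33,
    "effort": 59.79,
    "frustration": 38.75,
}

rtlx_low = sum(subscales_low.values()) / len(subscales_low)
print(f"RTLX (low priority) = {rtlx_low:.2f}")  # → 45.24
```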

Figure 8: RTLX scores for the three notification cues.

6 Discussion

In summary, the results of the present study showed that reaction times were significantly higher for the low-priority notification as compared to the medium- and high-priority notifications. This was additionally reflected in the subjective ratings regarding noticeability: the medium- and high-priority notifications were rated and ranked significantly better in noticeability than the low-priority notification. These results were inversed in the context of the distraction evoked by the notifications, with low-priority notifications ranked and rated as least distracting. The objective measure of distraction – the performance in the two-back task – did not yield clear differences between the three notification cues. Finally, the low-priority notification tended to evoke the lowest workload levels, followed by the medium- and high-priority notifications with slightly increased workload ratings. However, this difference was not statistically significant.

These results indicate that, contrary to our expectations, clear differences emerged only between the low-priority notification and the medium- and high-priority notifications. The latter two did not differ as clearly as our notification concept intended. We had chosen the vertical motion because it has been found to be less irritating than flashing cues [48]. A possible explanation for why we did not find such a clear difference here is the pop-out effect. This effect refers to the fact that stimuli stand out from other elements in the visual field because they are different and thus salient [46]. We had designed the notification concept for everyday contexts, in which moving stimuli are more common than flashing ones, so that the latter can be assumed to pop out more. In our study, however, the two-back task was implemented in a way that results in a flashing visualization of the presented letters. Thus, in the task context, the stimulus with the vertical motion popped out more than the flashing stimulus. Testing our notification concept with a different primary task would shed more light on the plausibility of this explanation.

A further limitation concerning the noticeability of the high-priority notification is that the flashing frequency was probably inappropriate. Several participants mentioned that they had already pressed the button before the first color change appeared, so they did not even perceive the flashing animation, which was supposed to produce a stimulus of high intensity [29]. This issue had not been detected in our pretests but can be remedied in future studies by increasing the flashing frequency. The design and content of the notification icons could also be improved: we chose the icons based on the public transportation app, but the low- and medium-priority icons looked very similar, which may have affected the results. Additionally, the two-back task may have been too easy for the participants, so that the interruptions due to the notifications did not lead to considerable performance deterioration. Task performance was high, at around 90 % correct responses; a more difficult task might have resulted in clearer differences regarding the distracting effects of the notifications. In general, our study only serves to evaluate the notification concept and does not allow conclusions regarding the effect of specific aspects of the notification design, such as color, animation, or size of the icons. Further studies would be required to investigate the influence of each factor separately.

Nevertheless, despite these limitations, the study showed promising results, particularly with respect to the low-priority notification. As intended, this notification cue yielded higher reaction times and lower noticeability ratings while being ranked as the least distracting. The results regarding distraction and workload were, however, not fully conclusive due to a lack of significance. It would be interesting to investigate whether the objective distraction measures would yield clearer differences with a more difficult primary task. Nevertheless, the results indicate that it is possible to represent notifications in AR with at least two priority levels. The low-priority notification could be used for less urgent notifications serving only the purpose of comprehension according to McCrickard et al. [30]. The medium- or high-priority notifications could represent more urgent notifications serving the purpose of interruption and requiring the users’ reaction. It remains open whether two priority levels would suffice. We chose three levels to suit the specific public transport app; other applications may need more or fewer priority levels. A further option would be to evaluate the feasibility and usability of continuous rather than discrete priority levels. More research is required to shed light on these open issues.

We have presented and examined a specific use case with our notification concept, but the study results can also provide implications for the general design of AR applications. This may not only be relevant for displaying notifications for users while they are using an AR app. Other relevant application areas in the AR context are Glanceable AR [25], [26] in general and off-screen cues used to guide the user’s attention to an object that is currently off-screen [39]. These cues are commonly displayed in the peripheral field of view of AR glasses and may be more or less urgent depending on the context. Thus, our results regarding the design of the peripherally displayed notifications can also be applied to the design and visualization of such off-screen cues.

There are, of course, further topics for future research. First, future studies should investigate whether our results also apply to different types of primary tasks. These may be other virtual tasks performed with smart glasses; it would, however, be particularly interesting to find out whether the results also hold for primary tasks performed in the real world. One assumption would be that, depending on the characteristics of the primary task, different visual features of notifications could pop out. Taking the task context into account could then allow for more appropriate context-aware notifications [44]. Moreover, the notifications could be complemented by haptic or auditory cues. Lazaro et al. [24] showed that multimodal notifications achieved higher recognition rates and were preferred by users. Further research could focus on how different modalities could be included in a notification concept with priority levels.

In conclusion, our results provide insights regarding the noticeability and design of priority-dependent notifications presented in the peripheral field of view of smart glasses. This research thus complements prior studies investigating notification presentation with smart glasses. Additionally, implications can be derived for presenting notifications or other information peripherally during the use of AR applications. These contexts of use may particularly benefit from our dual-task perspective and design considerations regarding the perception of notifications via the peripheral vision.


Funding statement: This research was funded by the German Federal Ministry of Education and Research (funding code 16SV8241).

About the authors

Anja K. Faulhaber

Dr.-Ing. Anja K. Faulhaber, *1991, received her master’s degree in Cognitive Science from Osnabrueck University in 2017 and her PhD in 2021 from TU Braunschweig, where she conducted research on human factors in aviation at the Institute of Flight Guidance. She is currently a research associate at the Human-Machine Systems Engineering Group, University of Kassel. Her research interests include human factors in augmented and virtual reality with a focus on cognitive aspects.

Moritz Hoppe

Moritz Hoppe, *1998, studied Industrial Engineering (B. Sc.) at the University of Kassel. During his studies, he focused on production engineering and ergonomics. Since 2021, he has been enrolled in a master’s program with the same focus areas. In 2021, he also worked as a student assistant for the Human-Machine Systems Engineering Group at the University of Kassel. He explored the use of new technologies, such as augmented reality, with a user-centered approach.

Ludger Schmidt

Univ.-Prof. Dr.-Ing. Ludger Schmidt, *1969, studied Electrical Engineering at RWTH Aachen University. There he also worked as a research assistant, research team leader, and chief engineer at the Institute of Industrial Engineering and Ergonomics. Afterwards, he was the head of the department “Ergonomics and Human-Machine Systems” at today’s Fraunhofer Institute for Communication, Information Processing and Ergonomics in Wachtberg near Bonn. In 2008, he became Professor of Human-Machine Systems Engineering in the Department of Mechanical Engineering at the University of Kassel. He is director of the Institute of Industrial Sciences and Process Management and director of the Research Center for Information System Design at the University of Kassel.

References

[1] Piotr D. Adamczyk and Brian P. Bailey. 2004. If not now, when? The effects of interruption at different moments within task execution. In CHI’04: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, 271–278. https://doi.org/10.1145/985692.985727.

[2] Ronald T. Azuma. 2016. The most important challenge facing augmented reality. Presence: Teleoperators and Virtual Environments 25, 3, 234–238. https://doi.org/10.1162/PRES_a_00264.

[3] Brian P. Bailey, Joseph A. Konstan, and John V. Carlis. 2000. Measuring the effects of interruptions on task performance in the user interface. In SMC 2000 Conference Proceedings: 2000 IEEE International Conference on Systems, Man and Cybernetics ‘Cybernetics Evolving to Systems, Humans, Organizations, and their Complex Interactions’, 757–762. https://doi.org/10.1109/ICSMC.2000.885940.

[4] Alphonse Chapanis. 1994. Hazards associated with three signal words and four colours on warning signs. Ergonomics 37, 2, 265–275. https://doi.org/10.1080/00140139408963644.

[5] Isha Chaturvedi, Farshid H. Bijarbooneh, Tristan Braud, and Pan Hui. 2019. Peripheral vision: A new killer app for smart glasses. In IUI’19: Proceedings of the 24th International Conference on Intelligent User Interfaces. ACM, New York, NY, 625–636. https://doi.org/10.1145/3301275.3302263.

[6] Mon-Chu Chen and Roberta L. Klatzky. 2007. Displays attentive to unattended regions: Presenting information in a peripheral-vision-friendly way. In Human-Computer Interaction. Interaction Platforms and Techniques. HCI 2007, Julie A. Jacko (Ed.). Lecture Notes in Computer Science, 4551. Springer, Berlin, Heidelberg, 23–31. https://doi.org/10.1007/978-3-540-73107-8_3.

[7] Edward Cutrell, Mary Czerwinski, and Eric Horvitz. 2001. Notification, disruption, and memory: Effects of messaging interruptions on memory and performance. Human-Computer Interaction: INTERACT 1, 263–269.

[8] Saverio Debernardis, Michele Fiorentino, Michele Gattullo, Giuseppe Monno, and Antonio E. Uva. 2014. Text readability in head-worn displays: Color and style optimization in video versus optical see-through devices. IEEE Transactions on Visualization and Computer Graphics 20, 1, 125–139. https://doi.org/10.1109/TVCG.2013.86.

[9] Anja K. Faulhaber and Ludger Schmidt. 2021. Perception of peripheral visual cues in augmented reality during walking: A pilot study. In Arbeit HUMAINE gestalten: 67. Kongress der Gesellschaft für Arbeitswissenschaft. GfA-Press, Dortmund, 1–6.

[10] Aryan Firouzian, Yukitoshi Kashimoto, Zeeshan Asghar, Niina Keranen, Goshiro Yamamoto, and Petri Pulli. 2017. Twinkle Megane: Near-eye LED indicators on glasses for simple and smart navigation in daily life. In eHealth 360°, Kostas Giokas, Laszlo Bokor and Frank Hopfgartner (Eds.). Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 181. Springer, Cham, 17–22. https://doi.org/10.1007/978-3-319-49655-9_3.

[11] Uwe Gruenefeld, Tim C. Stratmann, Jinki Jung, Hyeopwoo Lee, Jeehye Choi, Abhilasha Nanda, and Wilko Heuten. 2018. Guiding smombies: Augmenting peripheral vision with low-cost glasses to shift the attention of smartphone users. In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 127–131. https://doi.org/10.1109/ISMAR-Adjunct.2018.00050.

[12] Carl Gutwin, Andy Cockburn, and Ashley Coveney. 2017. Peripheral popout: The influence of visual angle and stimulus intensity on popout effects. In CHI’17: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, 208–219. https://doi.org/10.1145/3025453.3025984.

[13] Sandra G. Hart. 2006. Nasa-task load index (NASA-TLX); 20 years later. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 50, 9, 904–908. https://doi.org/10.1177/154193120605000909.

[14] Sandra G. Hart and Lowell E. Staveland. 1988. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Human Mental Workload, Peter A. Hancock and Najmedin Meshkati (Eds.). Advances in Psychology, 52. North Holland, Amsterdam, 139–183. https://doi.org/10.1016/S0166-4115(08)62386-9.

[15] Shamsi T. Iqbal and Brian P. Bailey. 2010. Oasis: A framework for linking notification delivery to the perceptual structure of goal-directed tasks. ACM Transactions on Computer-Human Interaction 17, 4, 1–28. https://doi.org/10.1145/1879831.1879833.

[16] Yoshio Ishiguro and Jun Rekimoto. 2011. Peripheral vision annotation: Noninterference information presentation method for mobile augmented reality. In AH’11: Proceedings of the 2nd Augmented Human International Conference. ACM, New York, NY, 1–5. https://doi.org/10.1145/1959826.1959834.

[17] Susanne M. Jaeggi, Martin Buschkuehl, Walter J. Perrig, and Beat Meier. 2010. The concurrent validity of the N-back task as a working memory measure. Memory 18, 4, 394–412. https://doi.org/10.1080/09658211003702171.

[18] John Jonides and Steven Yantis. 1988. Uniqueness of abrupt visual onset in capturing attention. Perception & Psychophysics 43, 4, 346–354. https://doi.org/10.3758/BF03208805.

[19] Katja Karrer, Charlotte Glaser, Caroline Clemens, and Carmen Bruder. 2009. Technikaffinität erfassen – der Fragebogen TA-EG. In Der Mensch im Mittelpunkt technischer Systeme. 8. Berliner Werkstatt Mensch-Maschine-Systeme. ZMMS Spektrum, Reihe 22, 29. VDI, Düsseldorf, 196–201.

[20] Seul-Kee Kim, So-Yeong Kim, and Hang-Bong Kang. 2016. An analysis of the effects of smartphone push notifications on task performance with regard to smartphone overuse using ERP. Computational Intelligence and Neuroscience 2016. https://doi.org/10.1155/2016/5718580.

[21] Wayne K. Kirchner. 1958. Age differences in short-term retention of rapidly changing information. Journal of Experimental Psychology 55, 4, 352–358. https://doi.org/10.1037/h0043688.

[22] Ernst Kruijff, Jason Orlosky, Naohiro Kishishita, Christina Trepkowski, and Kiyoshi Kiyokawa. 2019. The influence of label design on search performance and noticeability in wide field of view augmented reality displays. IEEE Transactions on Visualization and Computer Graphics 25, 9, 2821–2837. https://doi.org/10.1109/TVCG.2018.2854737.

[23] Kostadin Kushlev, Jason Proulx, and Elizabeth W. Dunn. 2016. “Silence your phones”: Smartphone notifications increase inattention and hyperactivity symptoms. In CHI’16: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, 1011–1020. https://doi.org/10.1145/2858036.2858359.

[24] May J. Lazaro, Sungho Kim, Jaeyong Lee, Jaemin Chun, and Myung-Hwan Yun. 2021. Interaction modalities for notification signals in augmented reality. In ICMI’21: Proceedings of the 2021 International Conference on Multimodal Interaction. ACM, New York, NY, 470–477. https://doi.org/10.1145/3462244.3479898.

[25] Feiyu Lu and Doug A. Bowman. 2021. Evaluating the potential of glanceable AR interfaces for authentic everyday uses. In 2021 IEEE Virtual Reality and 3D User Interfaces (VR), 768–777. https://doi.org/10.1109/VR50410.2021.00104.

[26] Feiyu Lu, Shakiba Davari, Lee Lisle, Yuan Li, and Doug A. Bowman. 2020. Glanceable AR: Evaluating information access methods for head-worn augmented reality. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 930–939. https://doi.org/10.1109/VR46266.2020.00113.

[27] Andrés Lucero and Akos Vetek. 2014. NotifEye: Using interactive glasses to deal with notifications while walking in public. In ACE’14: Proceedings of the 11th Conference on Advances in Computer Entertainment Technology. ACM, New York, NY. https://doi.org/10.1145/2663806.2663824.

[28] Kris Luyten, Donald Degraen, Gustavo Rovelo Ruiz, Sven Coppers, and Davy Vanacken. 2016. Hidden in plain sight: An exploration of a visual language for near-eye out-of-focus displays in the peripheral view. In CHI’16: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, 487–497. https://doi.org/10.1145/2858036.2858339.

[29] Aristides Mairena, Carl Gutwin, and Andy Cockburn. 2019. Peripheral notifications in large displays: Effects of feature combination and task interference. In CHI’19: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, 1–12. https://doi.org/10.1145/3290605.3300870.

[30] D. S. McCrickard, C. M. Chewar, Jacob P. Somervell, and Ali Ndiwalana. 2003. A model for notification systems evaluation — Assessing user goals for multitasking activity. ACM Transactions on Computer-Human Interaction 10, 4, 312–338. https://doi.org/10.1145/966930.966933.

[31] Abhinav Mehrotra, Mirco Musolesi, Robert Hendley, and Veljko Pejovic. 2015. Designing content-driven intelligent notification mechanisms for mobile applications. In UbiComp’15: Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM, New York, NY, 813–824. https://doi.org/10.1145/2750858.2807544.

[32] Leanne G. Morrison, Charlie Hargood, Veljko Pejovic, Adam W. A. Geraghty, Scott Lloyd, Natalie Goodman, Danius T. Michaelides, Anna Weston, Mirco Musolesi, Mark J. Weal, and Lucy Yardley. 2017. The effect of timing and frequency of push notifications on usage of a smartphone-based stress management intervention: An exploratory trial. PLoS ONE 12, 1, e0169162. https://doi.org/10.1371/journal.pone.0169162.

[33] Gerald M. Murch. 1984. Physiological principles for the effective use of color. IEEE Computer Graphics and Applications 4, 11, 48–55. https://doi.org/10.1109/MCG.1984.6429356.

[34] Takuro Nakuo and Kai Kunze. 2016. Smart glasses with a peripheral vision display. In UbiComp’16: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct. ACM, New York, NY, 341–344. https://doi.org/10.1145/2968219.2971393.

[35] Jason Orlosky, Kiyoshi Kiyokawa, Takumi Toyama, and Daniel Sonntag. 2015. Halo content: Context-aware view management for non-invasive augmented reality. In IUI’15: Proceedings of the 20th International Conference on Intelligent User Interfaces. ACM, New York, NY, 369–373. https://doi.org/10.1145/2678025.2701375.

[36] Martin Pielot and Luz Rello. 2017. Productive, anxious, lonely: 24 hours without push notifications. In MobileHCI’17: Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services. ACM, New York, NY. https://doi.org/10.1145/3098279.3098526.

[37] Martin Pielot, Amalia Vradi, and Souneil Park. 2018. Dismissed! A detailed exploration of how mobile phone users handle push notifications. In MobileHCI’18: Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services. ACM, New York, NY. https://doi.org/10.1145/3229434.3229445.

[38] Benjamin Poppinga, Niels Henze, Jutta Fortmann, Wilko Heuten, and Susanne Boll. 2012. AmbiGlasses – Information in the Periphery of the Visual Field. In Mensch & Computer 2012: Interaktiv informiert – allgegenwärtig und allumfassend!? 12. fachübergreifende Konferenz für interaktive und kooperative Medien, Oliver Deussen and Harald Reiterer (Eds.). Oldenbourg, München, 153–162.

[39] Patrick Renner and Thies Pfeiffer. 2017. Attention guiding techniques using peripheral vision and eye tracking for feedback in augmented-reality-based assistance systems. In 2017 IEEE Symposium on 3D User Interfaces (3DUI), 186–194. https://doi.org/10.1109/3DUI.2017.7893338.

[40] Young K. Ro, Alexander Brem, and Philipp A. Rauschnabel. 2018. Augmented reality smart glasses: Definition, concepts and impact on firm value creation. In Augmented Reality and Virtual Reality, Timothy Jung and M. C. Tom Dieck (Eds.). Progress in IS. Springer, Cham, 169–181. https://doi.org/10.1007/978-3-319-64027-3_12.

[41] Rufat Rzayev, Susanne Korbely, Milena Maul, Alina Schark, Valentin Schwind, and Niels Henze. 2020. Effects of position and alignment of notifications on AR glasses during social interaction. In NordiCHI’20: Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society. ACM, New York, NY, 1–11. https://doi.org/10.1145/3419249.3420095.

[42] Alireza Sahami Shirazi, Niels Henze, Tilman Dingler, Martin Pielot, Dominik Weber, and Albrecht Schmidt. 2014. Large-scale assessment of mobile notifications. In CHI’14: Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems. ACM, New York, NY, 3055–3064. https://doi.org/10.1145/2556288.2557189.

[43] Cary Stothart, Ainsley Mitchum, and Courtney Yehnert. 2015. The attentional cost of receiving a cell phone notification. Journal of Experimental Psychology: Human Perception and Performance 41, 4, 893–897. https://doi.org/10.1037/xhp0000100.

[44] Jan W. Streefkerk, D. S. McCrickard, Myra P. van Esch-Bussemakers, and Mark A. Neerincx. 2012. Balancing awareness and interruption in mobile patrol using context-aware notification. International Journal of Mobile Human-Computer Interaction 4, 3, 1–27. https://doi.org/10.4018/jmhci.2012070101.

[45] Xuetong Sun and Amitabh Varshney. 2018. Investigating perception time in the far peripheral vision for virtual and augmented reality. In SAP’18: Proceedings of the 15th ACM Symposium on Applied Perception. ACM, New York, NY, 1–8. https://doi.org/10.1145/3225153.3225160.

[46] Anne Treisman. 1985. Preattentive processing in vision. Computer Vision, Graphics, and Image Processing 31, 2, 156–177. https://doi.org/10.1016/S0734-189X(85)80004-9.

[47] Nabilah Z. Viderisa, Harry B. Santoso, and R. Y. K. Isal. 2019. Designing the prototype of personalized push notifications on e-commerce application with the user-centered design method. In 2019 International Conference on Advanced Computer Science and Information Systems (ICACSIS), 41–48. https://doi.org/10.1109/ICACSIS47736.2019.8979756.

[48] Colin Ware, Joseph Bonner, William Knight, and Rod Cater. 1992. Moving icons as a human interrupt. International Journal of Human-Computer Interaction 4, 4, 341–348. https://doi.org/10.1080/10447319209526047.

[49] S. Yantis and J. Jonides. 1984. Abrupt visual onsets and selective attention: Evidence from visual search. Journal of Experimental Psychology: Human Perception and Performance 10, 5, 601–621. https://doi.org/10.1037/0096-1523.10.5.601.

[50] Fengyuan Zhu and Tovi Grossman. 2020. BISHARE: Exploring bidirectional interactions between smartphones and head-mounted augmented reality. In CHI’20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, 1–14. https://doi.org/10.1145/3313831.3376233.

Published Online: 2022-07-19
Published in Print: 2022-08-26

© 2022 Walter de Gruyter GmbH, Berlin/Boston
