Abstract
The goal of this paper is to investigate the usability and user experience (UX) of touch-based control of user interfaces that the user cannot directly interact with. Examples include a user controlling the television screen via touch interaction on the remote control, or the driver of a car using a touch element on the steering wheel to control the middle-console screen.
Based on a controlled within-subjects experiment investigating touch-based interaction with and without a haptic mark, we replicate the usability finding that task completion is significantly faster with haptic marks on the touch area than without them. For user experience, the dimensions of pragmatic quality and attractiveness were rated higher for touch input with a haptic landmark. Varying the user interface animations for target selection did not have a significant impact on user experience, showing that tactile feedback is the most prominent factor determining user experience.
The contribution concludes with a discussion of how the replication of studies must become part of user-centered design and development processes to handle the threat of outdated research findings due to technological change.
1 Introduction
Figure 1 shows an example of absolute indirect touch interaction: when changing the TV channel on the remote control, the area the user touches corresponds to the area on the user interface. If the finger is in the upper left corner, the upper left corner on the screen is selected. Once the finger moves one step to the right, the corresponding element in the grid is selected. A haptic mark on the touch area allows the user to feel the position without the need to look at the touch area. The finger position in this interaction is absolute, as moving one step to the right on the touch area moves the selection one step to the right on the screen. The interaction itself is indirect because the user does not touch the user interface directly, but touches the touch area of the control grid to have an effect in the user interface.
The problem is that as touch technologies change and improve, it is important to verify whether enhancements in the technology and the material still lead to the same impact on usability, and how such novel touch concepts are perceived in terms of user experience (UX).
The main research question was to understand whether usability effects, such as the improvement in task completion time, still hold, and what type of user experience is associated with such an interaction. In terms of UX enhancement, a set of animations was investigated, looking for possible improvements of UX. A controlled experiment was performed to answer these questions.
In the following, the article presents the current state of the art on touch interaction (that is indirect and/or absolute) in terms of usability. Then the method is described, followed by the results and a discussion of the results. The article concludes with a more general discussion of how to change user-centered design and development processes to take into account the replication of findings in response to technology change.
2 Related Work
2.1 Using Touch for Controlling User Interfaces via a Distance
Direct touch interaction on a screen is becoming a de-facto standard for interactive systems like mobile phones or tablets. While touch interaction with direct and immediate feedback on the touched area is adopted quickly, the use of touch to control elements on a distant screen is perceived as less usable [17]. While usability seems to be lower, industry [32] claims that using touch elements as control elements for interaction on a distant screen enhances user experience. Such elements of technologically novel touch input for distant screens are currently considered for in-car systems for secondary driving tasks, for in-home applications like interactive TV, and even for aircraft cockpits.
The traditional solutions for tasks such as navigation, selection, or, in general, interaction with a visual display from a distance are buttons, knobs, or sliders [32]. A recent trend is the incorporation of touch as a means of interaction, ranging from using tablets to interact with large displays [1] to incorporating touch elements in cars, especially for secondary tasks while driving [34]. Touch interaction for distant screens or displays differs from the direct touch interaction known from mobile phones, tablets, or touch screens, as the touch input is performed on an area dissociated from the output area, typically a distant screen.
For touch interaction on a distant screen there are in general two ways to map the user input to the movement on the distant screen: absolute and relative. Absolute mapping is defined as a homothetic correspondence between the position of a contact on the input surface and the position of an object on the output display [12]. In other words, it is a position-to-position and velocity-to-velocity mapping between the input and output device [18]. In contrast, relative mapping/pointing is the correspondence of the displacement on the input surface and the displacement of an object on the output display. It generally involves a non-linear transfer function to support fast movements over large distances and precise interactions with small objects [12]. Absolute pointing is claimed to be easier to learn [18], and to be more natural and convenient; however, absolute pointing also has disadvantages.
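To make the distinction between the two mappings concrete, they can be sketched as follows. This is a minimal illustration, not code from any of the cited systems; the function names, size parameters, and the shape of the non-linear transfer function are our own assumptions:

```python
def absolute_map(touch_xy, touch_size, screen_size):
    """Absolute mapping: a homothetic position-to-position correspondence
    between the touch surface and the output display."""
    tx, ty = touch_xy
    tw, th = touch_size
    sw, sh = screen_size
    return (tx / tw * sw, ty / th * sh)

def relative_map(cursor_xy, delta_xy, gain=2.5):
    """Relative mapping: finger displacement moves the cursor, scaled by a
    (here, illustrative) non-linear transfer function so that faster
    movements cover larger distances."""
    dx, dy = delta_xy
    speed = (dx ** 2 + dy ** 2) ** 0.5
    g = gain * (1 + speed / 100)  # assumed transfer function, for illustration
    return (cursor_xy[0] + dx * g, cursor_xy[1] + dy * g)
```

With an absolute mapping, touching the center of a 196 × 147 mm pad selects the center of the screen regardless of where the finger was before; with a relative mapping, only the displacement matters.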
Absolute pointing can lead to parallax error [4], or perception error due to the apparent shift of the interaction area against a background when the observer position is not aligned with the device. It can also lead to occlusion effects: when interacting, the finger, hand and/or arm can hide a part of the output device and can even totally occlude small targets [33]. Forlines et al. [11] report that absolute touch interaction is uncomfortable on large displays and/or during long use. Relative pointing is said to be less natural than absolute pointing and more difficult to learn, but on the other hand it allows speeding up the interaction, as different transfer functions can support movement over longer distances [12].
Touch as interaction technique to interact with user interfaces on distant screens has been discussed controversially for different application domains, including the television [32], automotive [30] or aeronautic [15] domains.
Compared to other types of interactions, e.g. input elements like knobs, sliders or other forms with haptic feedback, a set of disadvantages is reported for touch interactions:
- Touch misses the dimension of immediate haptic feedback [37].
- Touch was reported to be less efficient [32] for selection and navigation tasks.
- Touch was reported to be less effective [32] for selection and navigation tasks.
On the other hand, touch interaction increases user experience, especially the overall hedonic quality and the user's need for novelty in a product or novelty of an interaction technique, compared to standard interactions like buttons [32].
2.2 Absolute Indirect Touch
Absolute indirect touch is the use of one-to-one mapping between a separated touch input device and a distant display.
Norman & Norman [25] compared the use of a Nintendo Wii Remote for a selection task in three different conditions. The first condition was absolute pointing using an infrared camera to detect the movement, the second was stabilized absolute pointing using the camera coupled with a 6-axis accelerometer, and the last was relative pointing using only the gyroscope. They conclude that the advantage of absolute pointing compared to relative pointing is its intuitiveness, which stems from the direct mapping users learn throughout their lives. However, in their study relative pointing showed better performance, and users preferred relative pointing to absolute pointing. König et al. [18], who proposed a precision-enhancing technique for absolute pointing devices, confirm the hypothesis that absolute pointing is a more natural and more convenient pointing experience, as it provides easier hand-eye coordination compared to relative pointing. However, König et al. pointed out the common problem shared by all absolute indirect pointing approaches: a lack of precision, especially when using high-resolution displays.
Gilliot et al. [12] investigated the influence of form factors on absolute indirect-touch pointing performance in two studies. In the first, they compared two different screen sizes (196 × 147 mm, 66 × 50 mm) and two visual conditions (looking at the input device, not looking at the input device). They found that users achieve better performance when they can look at the input surface, and that scale does not affect user performance. In the second experiment, they compared several aspect ratios between the input and the output device and concluded that matching aspect ratios lead to better performance.
Pietrosek and Lank [31] investigated spatial correspondence between a smartphone screen and a projection screen for selecting targets. They investigated two different conditions: in the first, the desired target was displayed on both the projection screen and the smartphone screen, while in the other condition the desired target was only displayed on the projection screen. They found that the error rate was 3.5% (of screen width) when the target was mirrored on the smartphone screen, while it doubled to around 6% when the target was only displayed on the distant screen.
Palleis and Hussmann [28] explored the effect of touch indirectness on spatial memory and navigation performance in a 2D panning task. Comparing direct absolute touch to indirect absolute touch, they found that spatial memory performance is not decreased by a spatial separation of touch input gesture and visual display, and also that decreasing the size of the input surface increases navigation efficiency.
For the automotive domain, Sheik-Nainar et al. [34] compared three different touch interaction techniques for target selection for drivers in cars: direct absolute pointing, indirect absolute pointing and indirect relative pointing. Their study revealed comparable performance for absolute indirect touch and absolute direct touch in terms of efficiency, effectiveness, distraction, and user preferences. Compared to relative indirect input, absolute indirect input showed better performances, lower distraction and higher user ratings.
For interaction with large displays, the ARC-Pad [22] is an indirect interaction technique that uses a mobile phone's touchscreen. It combines absolute and relative pointer positioning. Tapping with ARC-Pad roughly positions the cursor at the corresponding location on the distant screen, using an absolute mapping. The user can then adjust the cursor location by sliding their finger on the touchscreen, using a relative mapping. This technique reduces clutching by half compared to a cursor acceleration technique.
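The hybrid absolute/relative behavior of ARC-Pad can be sketched as follows. This is a simplified illustration under our own assumptions (the event representation and the constant gain are not taken from the ARC-Pad implementation):

```python
def arc_pad_cursor(cursor, event, pad_size, screen_size, gain=1.0):
    """ARC-Pad-style hybrid pointing: a tap jumps the cursor absolutely to the
    corresponding screen location; a slide adjusts it relatively."""
    kind, data = event
    if kind == "tap":
        # Absolute phase: coarse positioning via 1:1 pad-to-screen mapping
        x, y = data
        return (x / pad_size[0] * screen_size[0],
                y / pad_size[1] * screen_size[1])
    else:
        # Relative phase: fine adjustment by finger displacement
        dx, dy = data
        return (cursor[0] + dx * gain, cursor[1] + dy * gain)
```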
2.3 Tactile and Visual Feedback in Touch Interaction
Burke et al. [6] compared the effect of visual-auditory and visual-tactile feedback on user performance in a meta-analysis of 43 studies. They selected studies that reported at least one comparison between single modes and multimodal combinations, and that reported a measure of error rate, reaction time, and/or performance score outcome. They found that visual-tactile feedback provides a significant advantage over a visual-only feedback system, and that visual-tactile feedback is particularly effective when multiple tasks are being performed and under normal workload conditions.
Another finding of this meta-analysis is that while multimodal feedback seems to enhance performance, improving performance scores and reducing reaction times, it has little or no effect on error rate.
Pasquero and Hayward [29] investigated the use of tactile feedback in the task of scrolling through a long list of items. They conducted a study with two different conditions: in the control condition, no tactile feedback was provided, while in the experimental condition, a short tactile feedback was provided when the user moved from one item to another, and a longer tactile feedback was provided every 10 items. They measured the frequency at which users needed to look at the screen, and observed an average reduction of 28% in the number of glances users required to complete a task with tactile feedback compared to without it.
Treskunov et al. [36] investigated how haptic feedback affects the user experience of a touchpad-based television remote. They conducted two user studies with two haptic prototypes. A pilot study with eight users, employing smartphones to simulate a directional touchpad, revealed that users preferred enabled haptic feedback. Encouraged by these results, they conducted a second study using a touch remote control coupled with a Linear Resonant Actuator on the back of the remote. They compared three haptic conditions (5 ms, 25 ms, No Haptic) and did not find significant effects on time, error, or ratings. However, at the end of the study users were asked which haptic condition they preferred, and although some participants did not make any distinction between the 5 ms and the 25 ms conditions, eight of the nine participants preferred haptic feedback over no haptic feedback.
HaptiCase [9] is an interaction technique for smartphones that provides back-of-device tactile marks that users sense to estimate the position of their finger in relation to the touchscreen. By pinching the thumb to a finger at the back, the finger location is transferred to the front as the thumb touches the touch screen. The study revealed that users were more accurate for eyes-free indirect typing with HaptiCase compared to having no tactile marks. A second study investigated the impact of tactile targeting on visual targeting when both targeting strategies are combined. Users were both faster and had a lower offset to the target when they could look at the input device compared to when they could not. Guerreiro et al. [13] attached tactile marks on mobile devices' touch screens to guide blind people's interactions. This showed positive effects on the acquisition of targets on screen, and it was perceived as helpful by users.
2.4 Animations
Early work by Disney [35] shows that animations affect user experience in general. Chevalier et al. [8] revisited the pioneering work of Baecker and Small [2] on the place of animation in interfaces. They concluded that user experience is the most important aspect of using animations. Merz et al. [23] investigated how different animation principles for animated transitions in mobile applications influence the perceived user experience. They conducted a pilot study comparing three different animation styles: slow in and slow out, exaggeration, and linear. The results of this pilot study showed a tendency that animation style could affect the perception of UX.
2.5 Research Overview
Table 1 gives an overview on current literature related to the dimensions absolute and relative mapping, direct and indirect mapping and is complemented by the categories visual feedback, tactile feedback and animation. As highlighted with checkmarks, the contribution of this article is to understand how a combination of absolute indirect touch input with visual feedback and/or haptic feedback influences usability, and especially the overall user experience, as this dimension is not explored in the current literature.
3 The Problem of Touch Interaction with Haptic Marks
To support absolute touch interaction in situations where the screen is out of reach for the user, we developed a touch interaction element with haptic marks that can be applied in various contexts and domains, e.g. as an interaction element in the car to control secondary tasks while driving, in a cockpit for tasks where the pilot cannot reach the screen, or for standard applications like TV to be included in a remote control. Figure 1 shows some possible usages of such an absolute touch interaction element with haptic feedback.
Contrary to absolute touch elements that map the touch input to the user interface one-to-one (1:1), this touch interaction element has a number of fixed areas that can be varied depending on the constraints and necessities of the different domains, mapping each area of the touch input field absolutely to the corresponding fixed area on the user interface. The number of elements on the touch input depends on the application area. For tasks with high cognitive load and risk, like in cars [34], there are only 3 × 3 fields, while for areas with less cognitive load, or more entertainment-oriented applications like TV or interactions on large screens, there are more fields (e.g., 4 × 3, 3 × 4, or larger), see also Fig. 1.
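The mapping from an absolute touch position to one of the fixed grid areas can be sketched as follows. This is a minimal illustration of the concept; the function name and the pad dimensions are assumptions, not part of the actual driver software:

```python
def touch_to_cell(x, y, pad_w, pad_h, cols=4, rows=3):
    """Map an absolute touch position on the pad (in pad units) to one of the
    fixed grid cells, returned as a (row, col) index."""
    col = min(int(x / pad_w * cols), cols - 1)  # clamp the right/bottom edge
    row = min(int(y / pad_h * rows), rows - 1)
    return row, col
```

For a 3 × 3 in-car layout, the same function would simply be called with `cols=3, rows=3`.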
The haptic marks support the user in achieving their goals by offering the opportunity to use them without having to look at them, as the haptic marks can easily be felt with the fingertips. As opposed to a flat touch area, where the user constantly has to evaluate the position of the pointer on the distant screen, the haptic marks support ease of use, efficiency, and effectiveness by providing unobtrusive haptic feedback on the touchpad, and they simplify target acquisition on the distant screen.
The general goal of this research is to focus on touch interaction as an input for distant displays, such as television screens, car displays, or aircraft displays. Put simply, we aimed to investigate whether haptic feedback ("to feel"), visual feedback ("to see"), or a combination of both is more important to the user, and how this affects usability and UX.
3.1 Research Question and Hypothesis
The research questions were the following: (1) How does the presence or the absence of haptic marks influence usability of the system and affect user experience? And (2) How does the presence or absence of animated visual feedback influence the usability and the overall user experience?
Hypothesis 1 (flat vs haptic marks):
There is a significant difference in terms of usability (efficiency, effectiveness, satisfaction) and user experience (naturalness, aesthetics, hedonic and pragmatic qualities) when using the flat touch interaction input element compared to using the touch interaction element with haptic marks.
Hypothesis 2a (visual feedback/no feedback): There is a significant difference in terms of usability (efficiency, effectiveness, satisfaction and naturalness) and user experience (naturalness, aesthetics, hedonic and pragmatic qualities) when using a system with animated visual feedback compared to using a system without visual feedback.
Hypothesis 2b (visual feedback with three different curves):
There are significant differences in terms of user experience (naturalness, aesthetics, hedonic and pragmatic qualities) when using a system with animated visual feedback that uses ease in combined with ease out, a linear curve, or only ease out.
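For illustration, the three animation curves referenced in Hypothesis 2b can be sketched as simple timing functions. This is a minimal sketch under the assumption of quadratic easing, which is a common default; the exact curves used in the study's UI framework are not specified here:

```python
def linear(t):
    """Constant-velocity animation."""
    return t

def ease_out(t):
    """Decelerating curve: fast start, slow finish."""
    return 1 - (1 - t) ** 2

def ease_in_out(t):
    """Slow start and slow end ('ease in combined with ease out')."""
    return 2 * t * t if t < 0.5 else 1 - 2 * (1 - t) ** 2

def animate(start, end, t, curve=linear):
    """Interpolate an animated property (e.g. a tile's scale or position)
    at normalized time t in [0, 1] using the given timing curve."""
    return start + (end - start) * curve(t)
```

At the halfway point (t = 0.5), an ease-out transition has already covered 75% of the distance, while a linear one has covered 50%, which is what makes the conditions visually distinguishable.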
3.2 Method, Participants and Procedure
A within-subjects design was used with 16 participants. The experiment consisted of two parts: in the first part, the independent variables were the remote control and the feedback condition, while in the second part, the independent variable was the type of animation used (cf. Tables 2 and 3). Both parts of the experiment collected usability and user experience data through observation and logging, standard questionnaires, short semi-structured interviews upon completion of conditions, as well as short interviews at the end of the experiment.
Sixteen participants (14 male and 2 female), aged from 21 to 25 years (mean = 23, SD = 1.41), took part in the study. The sample was a convenience sample recruited via Facebook, mailing lists, and personal contacts. In order to avoid biases caused by missing familiarity with touch interaction, we recruited young people, as they are more likely to be familiar with touch interaction. All participants use a touch device at least several times a week, and own either a smartphone or use a tablet at home. All participants were right-handed, with normal or corrected-to-normal vision, and no participant indicated being color blind. Daily TV consumption ranged from no TV usage up to 4 h: 2 participants indicated that they never watch TV (12.5%), 6 participants watch less than 30 min a day (37.5%), one participant up to an hour (6.3%), 4 participants up to two hours (25%), one participant up to 3 h (6.3%), and 2 participants up to 4 h (12.5%).
The daily smartphone usage ranged from no usage to more than 4 h a day: one participant stated that they do not use a smartphone (6.3%), 2 participants use it up to an hour (12.5%), 4 persons use it up to 1.5 h a day (25%), another four persons use it up to 3 h (25%), while one person uses it up to 4 h (6.3%). Finally, 4 persons indicated that they use their smartphone for more than 4 h every day (25%).
3.3 System Information
In order to evaluate the touch element with haptic marks, two remote controls were produced: one included a standard touch interaction pad, while the other included haptic marks. Figure 2 shows the two remote controls used in the experiment. The driver software for both remote controls is identical, and both touch areas, regardless of the haptic marks, send information in a 12-byte array (for a 3 × 4 grid). Each byte indicates how close the user's finger is to the sensor of the given area on a scale from 0 to 255, which allows interpolating the position of the user's finger on the touch grid. The only difference between the two remote controls is that the sensors of the flat touch area are slightly more sensitive, to account for the difference in height of the touch area without the recessed haptic marks; this compensation results in the same effective sensor sensitivity for both remote controls.
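The interpolation of the finger position from the 12 proximity readings can be sketched as a proximity-weighted centroid over the cell centers. This is an illustrative reconstruction, not the actual driver code; the row-major byte order and the centroid approach are assumptions:

```python
def interpolate_finger(readings, cols=4, rows=3):
    """Estimate the finger position on the grid from 12 proximity readings
    (0-255), assumed row-major (byte 0 = top-left cell).

    Returns (x, y) in cell units, or None if no touch is detected."""
    total = sum(readings)
    if total == 0:
        return None
    # Weight each cell center (col + 0.5, row + 0.5) by its proximity reading
    x = sum(r * (i % cols + 0.5) for i, r in enumerate(readings)) / total
    y = sum(r * (i // cols + 0.5) for i, r in enumerate(readings)) / total
    return x, y
```

A finger hovering exactly between two horizontally adjacent cells produces equal readings for both and is interpolated to the shared edge.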
The user interface prototype consists of a page with 12 tiles (4 columns, 3 rows). During the experiment, dots appear pseudo-randomized on the tiles. Users have to press the corresponding area of the remote to select the indicated tile. Correct selections are indicated with a green checkmark, incorrect selections with a red cross on the item. Figure 3 shows the user interface for the different conditions.
For the second part of the experiment, the user interface (UI) consisted simply of twelve areas with a set of TV channels (see Fig. 4 below) and images simulating a TV channel displayed in the background.
3.4 Material
In the first part of the experiment, two versions of the prototypical UI were tested in order to provide two different feedback types for the condition that offers visual feedback within the user interface. The condition with feedback offered two visual clues for the interaction, which were a highlight of the corresponding tile in the UI when an area of the remote control was touched, as well as a temporary downscaling of the corresponding tile in the UI when an area of the remote control was pressed. The condition without feedback did not offer this visual feedback.
The tiles in the UI have a square shape and occupy the maximum space on the screen, taking into account the gaps at the border and between two tiles (see Fig. 3).
The background of the prototype is medium gray, the tiles are black with a different opacity depending on whether they are highlighted or not, and the dots are white. This choice of colors was made to avoid any biases related to any type of color blindness. The contrast between the dot and a tile is high (above 50%) even if the tile is selected.
For the second part of the study, the UI only changed in terms of animations used (see Table 3). A variation of the remote control with haptic marks was used, enhanced by two buttons (left/right) that allowed changing pages within the grid.
The experiment was conducted in a usability lab that resembles a living room. The room is equipped with a 40 inch TV with 4 k resolution, two sofas, and a coffee table. Two cameras recorded each session, one behind the user to have an ‘over-the-shoulder’ view of the interaction with the remote control and capture the use of the remote control, and the second one below the TV in front of the user to capture the facial expressions and posture of the user. The prototypical user interfaces used in the study were running on a small form factor computer behind the television to give the participants the impression that they are using a normal TV with a set-top box.
The experiment started with an introduction about the general goal of the study, followed by a demographic questionnaire that investigated the media consumption habits of the participants and a short pre-interview. Subsequently, participants were introduced to the user interface and were asked to perform tasks: the selection of dots on the tiles of the UI for the four experimental conditions in the first part of the experiment, and the selection of specific channels in the UI for the three experimental conditions in the second part. The experiment used a within-subjects design, where each participant evaluated all four conditions for remote control and feedback in the first part of the study, and the three different animation conditions in the second part. Condition order was randomized and counterbalanced within the sample, and each evaluation session lasted about 45 min.
For each task in each condition, task completion rate, task completion time, and number of errors were collected. After each condition, participants were asked for ratings regarding the ease of use of the system, how comfortable it was to use, how natural the use of the system was perceived, how accurate the remote control was perceived, how smooth the interaction with the system was, how responsive the system was, and how pertinent and suitable the animations were for the given tasks. Additionally, participants filled in the SUS [5] and AttrakDiff [16] questionnaires after having completed the tasks for each condition.
After the two parts of the experiment, participants were asked which remote controls they preferred in terms of usage and in terms of design, as well as which one they perceived as more accurate in a closing interview. Test subjects did not receive any compensation for their participation.
Tasks
The selection task in the first part of the experiment consisted of a sequence of 24 dots that appeared randomly on one of the 12 tiles of the user interface (two dots per tile per condition), which the participants needed to select as quickly and as precisely as possible. The procedure was repeated for each of the four experimental conditions (without visual feedback and without haptic feedback (1); without visual feedback and with haptic feedback (2); with visual feedback and without haptic feedback (3); and with visual feedback and with haptic feedback (4)).
The selection task in the second part of the experiment consisted of a sequence of eight channels that the user needed to select one after the other, again as quickly and as precisely as possible. The procedure was repeated for each of the three experimental conditions of the second part of the experiment (Ease In and Ease Out animation; Linear animation; Ease Out animation only).
4 Results
The data of the two parts of the experiment was analyzed with respect to the experimental conditions and the underlying research hypotheses.
4.1 Impact of Haptic Marks and Visual Feedback on Usability
Usability: Task Completion Time
Haptic marks on the remote control had a significant influence on users' performance: when using the remote control with haptic marks, users were faster in terms of task completion time (flat: mean = 00:02.29; haptic marks: mean = 00:01.30). A Mann-Whitney test indicated that task time was significantly faster for the remote control with haptic marks (Mdn = 00:01.00) than for the flat remote control (Mdn = 00:01.30; U = 194823.5, p < .001).
Providing visual feedback increased the task completion time from 00:01.55 to 00:02.03. A Mann-Whitney test indicated that task time was significantly faster in the No-Feedback condition (Mdn = 00:01.00) than in the Feedback condition (Mdn = 00:01.30; U = 206449, p < .001).
This is in line with previous findings that people tend to wait until the visual feedback is over, but feedback is important for such types of tasks in case of interruptions [6].
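The Mann-Whitney U statistic used in these comparisons can be sketched in a few lines. This is a didactic illustration of the statistic only (a full test, including the p-value, is provided by standard statistics packages such as scipy.stats.mannwhitneyu):

```python
def mann_whitney_u(a, b):
    """Compute the Mann-Whitney U statistic for sample a against sample b:
    the number of pairs (x, y) with x > y, counting ties as 0.5.

    Note: this returns only U, not a p-value; use a statistics library
    for the full significance test."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1
            elif x == y:
                u += 0.5
    return u
```

With completely separated samples, U reaches its extreme values of 0 or len(a) * len(b), which is what drives a small p-value.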
Usability: User Ratings of the Interaction
Haptic marks do furthermore significantly influence users’ ratings and perceptions on the following dimensions: perceived speed, perceived likelihood for errors, perceived difficulty, comfort, naturalness, accuracy, smoothness and responsiveness. Table 4 gives an overview on these results.
Usability: Impact of Visual Feedback
The Feedback/No Feedback condition yielded significant results for perceived comfort (Mdn: 3 for no-feedback; Mdn: 2 for feedback), perceived naturalness (Mdn: 2 for no-feedback; Mdn: 1.5 for feedback), as well as pertinence of the animation (Mdn: 2.5 for no-feedback; Mdn: 1.0 for feedback), where the scale ranged from 1 (best) to 5 (worst). Additionally, attractiveness also scored significant results (Mdn: .71 for no-feedback; Mdn: 1.0 for feedback) on a scale from -3 (worst) to +3 (best).
The Feedback/No Feedback condition did not yield significant results for speed feeling, error rate feeling, perceived difficulty, perceived accuracy, perceived smoothness, perceived responsiveness, success percentage, and both hedonic quality identification as well as stimulation.
Interaction Effects
There was no statistically significant interaction effect between the feedback condition and the type of remote control used on the combined dependent variables, F(14, 47) = 1.390, p = .196; Wilks’ Λ = .707.
4.2 Impact of Haptic Marks/Visual Feedback on User Experience
User Experience
A Kruskal-Wallis H test showed that there was a statistically significant difference in pragmatic quality as well as attractiveness between the different study conditions.
No difference has been observed in the variables for perceived smoothness, perceived responsiveness, hedonic quality – identification, as well as hedonic quality - stimulation between the different study conditions (see Fig. 5 for AttrakDiff metrics for the different study conditions).
4.3 Impact of Animation Type
Statistical analysis compared the usability and user experience metrics of the different animation conditions of the second part of the study. There were no significant differences in the scores for the usability and user experience metrics between the conditions 'Ease In/Ease Out', 'Linear', and 'Ease Out', except for error count between the 'Linear' and the 'Ease Out' condition, which was significantly higher for the 'Linear' condition (Mann-Whitney U = 88, N = 32, Z = -2.104, p = .035, r = .37). These results might also be biased by channel logos that were unfamiliar to the users, as most errors were related to confusing channel logos.
4.4 Final Interview
In the closing final interview, twelve of the sixteen participants stated they preferred the remote control with the haptic marks (75%), while four participants stated to prefer the flat remote control (25%).
Users were also asked whether they had perceived a difference between the test sessions of the second part of the experiment, where only the animation types changed between the tasks. The majority of participants (10 persons) indicated that they did not recognize differences, while 6 persons stated that they perceived differences. The differences the users observed were related to the speed (4 persons), the fluidity (1 person), and the change of the page in the UI (1 person).
Finally, participants were asked if they preferred one session over the others in the second part of the study. Seven of the 16 participants did not state any preference; four preferred the Ease In/Ease Out session, three the Ease Out session, and two the Linear session.
5 Summary
Our study shows that using haptic marks significantly improves usability and some aspects of user experience. Usability indicators showed significantly better scores for the haptic landmark remote control (faster task completion; higher perceived speed; lower perceived likelihood of error; lower perceived difficulty; higher perceived accuracy and responsiveness). These findings were supported by the significantly better ratings for pragmatic quality in the AttrakDiff questionnaire.
In terms of user experience, the haptic marks influenced the users’ perceptions of comfort, naturalness, and attractiveness. The hedonic quality dimension of the AttrakDiff questionnaire, with its sub-dimensions stimulation and identification, did not yield significant results. A possible explanation is that the type of task was too narrow and the prototype user interface too limited in terms of functionality to allow a full investigation of UX. In similar studies of interactive TV systems using a broader range of tasks, we were able to show influences on UX by manipulating the type of interaction technique [32].
Our second hypothesis, regarding the influence of visual feedback on usability and user experience, was only partially verified. For the usability metrics, we did not observe significant differences between the feedback/no-feedback conditions except for task time, which was significantly slower with feedback than without it; this is in line with previous findings that people tend to wait for animations to finish. Pragmatic quality in the AttrakDiff came close to significance (p = .052) but did not reach the .05 threshold.
Concerning user experience, the visual feedback conditions did not yield significant differences on the hedonic quality dimension, which could again be explained by the narrow task type and the limited functionality of the prototype user interface, but we observed significant results for perceived comfort, perceived naturalness, and attractiveness. This indicates that visual feedback has a positive impact on some aspects of UX.
Variations of the animation speed curves during the second part of the experiment did not show significant differences in terms of usability or user experience. This could be explained by a selective perception bias: users were asked to find channels without any information about changes to the animation, and the majority of them were likely so focused on finding the right channels that they did not notice the change of animation. This assumption is consistent with the final interviews, where the majority of participants stated that they did not observe differences between the three sessions, and no clear favorite emerged when the participants were asked for their preferred session.
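The three animation conditions correspond to standard easing curves. The study does not publish its exact curve definitions, so the cubic and smoothstep forms below are common illustrative choices, not the prototype's implementation:

```python
def linear(t):
    """Constant-speed interpolation; t runs from 0 to 1."""
    return t

def ease_out(t):
    """Fast start, decelerating finish (cubic)."""
    return 1 - (1 - t) ** 3

def ease_in_out(t):
    """Slow start and finish, fast middle (smoothstep)."""
    return t * t * (3 - 2 * t)

def animate_highlight(x_from, x_to, t, easing=ease_in_out):
    """Position of a moving selection highlight at normalized time t."""
    return x_from + (x_to - x_from) * easing(t)

print(animate_highlight(0, 100, 0.5))                      # 50.0 (smoothstep midpoint)
print(round(animate_highlight(0, 100, 0.5, ease_out), 1))  # 87.5 (already decelerating)
```

All three curves agree at t = 0 and t = 1; they differ only in the velocity profile in between, which may help explain why participants focused on the target barely noticed the change.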
The results of our experiment suggest that touchpads enhanced with haptic marks are a possible solution to overcome current limitations of touch input. They also indicate that even though visual feedback was not as influential as tactile feedback, it still has an impact on UX and should be taken into account in absolute indirect touch interaction design.
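The absolute mapping at the core of this design can be sketched as a direct projection of normalized touchpad coordinates onto the on-screen grid. The grid size and function names below are hypothetical, not the prototype's actual layout:

```python
def touch_to_cell(x, y, rows=3, cols=4):
    """Map a normalized touchpad position (x, y in [0, 1]) to a grid cell.

    Absolute mapping: each region of the touch surface always selects the
    same on-screen cell, which is what lets haptic marks support eyes-free use.
    """
    col = min(int(x * cols), cols - 1)   # clamp so x == 1.0 stays in range
    row = min(int(y * rows), rows - 1)
    return row, col

print(touch_to_cell(0.05, 0.05))  # top-left corner  -> (0, 0)
print(touch_to_cell(0.95, 0.95))  # bottom-right     -> (2, 3)
```

In a relative mapping, by contrast, the same touch position could select different cells depending on the cursor's history, which is why the absolute variant pairs naturally with fixed haptic marks.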
We acknowledge that the mean age of our sample is quite low. This was chosen on purpose, as we were aiming for high familiarity with touch devices and smartphones.
6 Discussion and Future Work
In software engineering, processes have been tuned and adapted to take into account specific software qualities such as safety [3], reliability [21], learnability [20], or usability [26]. One issue with these contributions is that focusing on improving a specific property might damage others, as these properties are usually conflicting [10]. Beyond that, what is missing in all of these processes are clear indications of when and how to re-evaluate scientific findings in the face of technology change. For instance, while the process presented in [24] allows integrating evaluation results (through scenarios) into task models, the integration of a pure repetition of evaluations is not considered. This missing re-confirmation and replication of knowledge can be a threat to the scientific community. As user-centered design and development (UCD) approaches require iterative design and detailed evaluation at each iteration step [14], evaluators’ work is not well supported: it is difficult to compare evaluation results from different UCD stages.
In this particular case it became clear that potentially conflicting software properties – the users’ judgments of usability and of user experience – were aligned, but that a traditional approach for enhancing user experience (animations) did not affect the users’ judgment. Thus, advancement in this field should focus more on haptic feedback than on visual feedback, which lies outside the mainstream approaches currently applied in the field.
For the indirect absolute touch interaction itself, future work will focus on investigating the technology with a larger set of users and a broader variety of tasks to better address the UX dimension. In terms of technology, the haptic touch remote control will be enhanced with additional elements, such as a relative touch element, to enrich the interaction technique by combining the advantages of both mappings.
In the longer term we intend to investigate an automotive application of the haptic touch element. We will adapt our solution to this different context, e.g. by using fewer fields, and conduct a secondary-task experiment to investigate the effect of the haptic marks on user attention, cognitive load, and usability. Based on the current results, we expect that haptic marks will reduce distraction and provide a more eyes-free experience than other touch-based user interfaces currently on the market.
Concerning the integration of replication studies and the continuous evaluation of upcoming technologies, a series of investigations in industrial (design-oriented) contexts is under way to develop enhanced user-centered design and development processes able to integrate a set of software qualities, including usability, user experience, reliability, safety, and security. This work will conclude efforts that have been under way for more than 10 years [27].
References
Avellino, I., Fleury, C., Beaudouin-Lafon, M.: Accuracy of deictic gestures to support telepresence on wall-sized displays. In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI 2015), pp. 2393–2396. ACM, New York (2015)
Baecker, R., Small, I.: Animation at the interface. In: The Art of Human-Computer Interface Design, pp. 251–267. Addison-Wesley Longman Publishing Co., Inc., Boston (1990)
Basnyat, S., Palanque, P., Schupp, B., Wright, P.: Formal socio-technical barrier modelling for safety-critical interactive systems design. Saf. Sci. 45(5), 545–565 (2007)
Bieber, G., Abd Al Rahman, E., Urban, B.: Screen coverage: a pen-interaction problem for PDAs and touch screen computers. In: Proceedings of the Third International Conference on Wireless and Mobile Communications (ICWMC 2007), pp. 87–87. IEEE (2007)
Brooke, J.: SUS: a ‘quick and dirty’ usability scale. In: Jordan, P.W., Thomas, B., Weerdmeester, B.A., McClelland, I.L. (eds.) Usability Evaluation in Industry. Taylor and Francis, London (1996)
Burke, J.L., et al.: Comparing the effects of visual-auditory and visual-tactile feedback on user performance: a meta-analysis. In: Proceedings of the 8th International Conference on Multimodal Interfaces (ICMI 2006), pp. 108–117. ACM, New York (2006)
Chang, B.W., Ungar, D.: Animation: from cartoons to the user interface. In: Proceedings of the 6th Annual ACM Symposium on User Interface Software and Technology (UIST 1993), pp. 45–55. ACM, New York (1993)
Chevalier, F., Henry Riche, N., Plaisant, C., Chalbi, A., Hurter, C.: Animations 25 years later: new roles and opportunities. In: Proceedings of the International Working Conference on Advanced Visual Interfaces (AVI 2016), pp. 280–287. ACM, New York (2016)
Corsten, C., Cherek, C., Karrer, T., Borchers, J.: HaptiCase: back-of-device tactile marks for eyes-free absolute indirect touch. In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI 2015), pp. 2171–2180. ACM, New York (2015)
Fayollas, C., Martinie, C., Palanque, P., Ait-Ameur, Y.: QBP notation for explicit representation of properties, their refinement and their potential conflicts: application to interactive systems. In: Clemmensen, T., Rajamanickam, V., Dannenmann, P., Petrie, H., Winckler, M. (eds.) INTERACT 2017. LNCS, vol. 10774, pp. 91–105. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-92081-8_9
Forlines, C., Vogel, D., Balakrishnan, R.: HybridPointing: fluid switching between absolute and relative pointing with a direct input device. In: Proceedings of the 19th Annual ACM Symposium on User Interface Software and Technology (UIST 2006), pp. 211–220. ACM, New York (2006)
Gilliot, J., Casiez, G., Roussel, N.: Impact of form factors and input conditions on absolute indirect-touch pointing tasks. In: Proceedings of the 2014 CHI Conference on Human Factors in Computing Systems (CHI 2014). ACM, New York (2014)
Guerreiro, T., Jorge, J., Gonçalves, D.: Exploring the non-visual acquisition of targets on touch phones and tablets. In: 2nd Workshop on Mobile Accessibility (MobileHCI 2012). ACM, New York (2012)
Gulliksen, J., Göransson, B.: Usability design: integrating user centered system design in the software development process. In: IFIP TC13 INTERACT Conference (2003)
Hamon, A., Palanque, P., André, R., Barboni, E., Cronel, M., Navarre, D.: Multi-Touch interactions for control and display in interactive cockpits: issues and a proposal. In: Proceedings of the International Conference on Human-Computer Interaction in Aerospace (HCI-Aero 2014), Article 7, 10 p. ACM, New York (2014)
Hassenzahl, M.: The interplay of beauty, goodness, and usability in interactive products. J. HCI 19(4), 319–349 (2004)
Kim, J., Koren, I.: Comparing relative and absolute touch input for remote controls. In: Proceedings of the Conference on Human Factors in Computing Systems (CHI 2015), pp. 97–108. ACM, New York (2015)
König, W.A., Gerken, J., Dierdorf, S., Reiterer, H.: Adaptive pointing – design and evaluation of a precision enhancing technique for absolute pointing devices. In: Gross, T., Gulliksen, J., Kotzé, P., Oestreicher, L., Palanque, P., Prates, R.O., Winckler, M. (eds.) INTERACT 2009. LNCS, vol. 5726, pp. 658–671. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-03655-2_73
Liddle, D.: Emerging guidelines for communicating with animation in mobile user interfaces. In: Proceedings of the 34th ACM International Conference on the Design of Communication (SIGDOC 2016), Article 16, 9 p. ACM, New York (2016)
Martinie, C., Palanque, P., Navarre, D., Winckler, M., Poupart, E.: Model-based training: an approach supporting operability of critical interactive systems. In: Proceedings of Engineering Interactive Computing Systems (EICS 2011), pp. 53–62 (2011)
Martinie, C., Palanque, P., Navarre, D., Barboni, E.: A development process for usable large scale interactive critical systems: application to satellite ground segments. In: Winckler, M., Forbrig, P., Bernhaupt, R. (eds.) HCSE 2012. LNCS, vol. 7623, pp. 72–93. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-34347-6_5
McCallum, D.C., Irani, P.: ARC-Pad: Absolute + Relative cursor positioning for large displays with a mobile touchscreen. In: Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology (UIST 2009), pp. 153–156. ACM, New York (2009)
Merz, B., Opwis, K., Tuch, A.N.: Perceived user experience of animated transitions in mobile user interfaces. In: Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA 2016), pp. 3152–3158. ACM, New York (2016)
Navarre, D., Palanque, P., Paternò, F., Santoro, C., Bastide, R.: A tool suite for integrating task and system models through scenarios. In: Johnson, C. (ed.) DSV-IS 2001. LNCS, vol. 2220, pp. 88–113. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-45522-1_6
Norman, K.L., Norman, K.D.: Comparison of relative versus absolute pointing devices. In: Human-Computer Interaction Technical Report, HCIL (2010)
Palanque, P., Barboni, E., Martinie C., Navarre, D., Winckler, M.: A model-based approach for supporting engineering usability evaluation of interaction techniques. In: Proceedings of Engineering Interactive Computing Systems (EICS 2011), pp. 21–30 (2011)
Palanque, P., Basnyat, S., Bernhaupt, R., Boring, R., Johnson, C., Johnson, P.: Beyond usability for safety critical systems: how to be sure (safe, usable, reliable, and evolvable)? In: CHI 2007 Extended Abstracts on Human Factors in Computing Systems (CHI EA 2007), pp. 2133–2136. ACM, New York (2007)
Palleis, H., Hussmann, H.: Indirect 2D touch panning: how does it affect spatial memory and navigation performance? In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI 2016), pp. 1947–1951. ACM, New York (2016)
Pasquero, J., Hayward, V.: Tactile feedback can assist vision during mobile interactions. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2011), pp. 3277–3280. ACM, New York (2011)
Pfeiffer, M., Kern, D., Schöning, J., Döring, T., Krüger, A., Schmidt, A.: A multi-touch enabled steering wheel: exploring the design space. In: Extended Abstracts on Human Factors in Computing Systems (CHI EA 2010), pp. 3355–3360. ACM, New York (2010)
Pietroszek, K., Lank, E.: Clicking blindly: using spatial correspondence to select targets in multi-device environments. In: Proceedings of MobileHCI 2012, pp. 331–334. ACM, San Francisco (2012)
Pirker, M., Bernhaupt, R., Mirlacher, T.: Investigating usability and user experience as possible entry barriers for touch interaction in the living room. In: Proceedings of the 8th International Interactive Conference on Interactive TV&Video (EuroITV 2010), pp. 145–154. ACM, New York (2010)
Roudaut, A., Huot, S., Lecolinet, E.: TapTap and MagStick: improving one-handed target acquisition on small touch-screens. In: Proceedings of the Working Conference on Advanced Visual Interfaces (AVI 2008), pp. 146–153. ACM, New York (2008)
Sheik-Nainar, M., Huber, J., Bose, R., Matic, N.: Force-enabled touchpad in cars: improving target selection using absolute input. In: Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA 2016), pp. 2697–2704. ACM, New York (2016)
Thomas, F., Johnston, O.: Disney Animation: The Illusion of Life. Abbeville Press, New York (1981)
Treskunov, A., Darnell, M., Wang, R.: Active haptic feedback for touch enabled TV remote. In: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction (ICMI 2015), pp. 319–322. ACM, New York (2015)
Zimmermann, S., Rümelin, S., Butz, A.: I feel it in my fingers: haptic guidance on touch surfaces. In: Proceedings of the 8th International Conference on Tangible, Embedded and Embodied Interaction (TEI 2014), pp. 9–12. ACM, New York (2014)
© 2019 IFIP International Federation for Information Processing
Bernhaupt, R., Drouet, D., Pirker, M. (2019). Absolute Indirect Touch Interaction: Impact of Haptic Marks and Animated Visual Feedback on Usability and User Experience. In: Bogdan, C., Kuusinen, K., Lárusdóttir, M., Palanque, P., Winckler, M. (eds) Human-Centered Software Engineering. HCSE 2018. Lecture Notes in Computer Science(), vol 11262. Springer, Cham. https://doi.org/10.1007/978-3-030-05909-5_15