
1 Introduction

Figure 1 shows an example of absolute indirect touch interaction: when changing the TV channel with the remote control, the area the user touches corresponds to an area on the user interface. If the finger is in the upper left corner of the touch area, the upper left element on the screen is selected; once the finger is moved one step to the right, the corresponding next element in the grid is selected. A haptic mark on the touch area allows the user to feel the finger position without needing to look at the touch area. The finger position in this interaction is absolute, as moving the finger one step to the right moves the selection one step to the right on the screen. The interaction itself is indirect because the user does not touch the user interface directly, but touches the touch area of the control grid to produce an effect in the user interface.

Fig. 1.

The touch interaction element on a remote control (left) or on a driving wheel (middle). If the finger is positioned on the orange area, the related element on the screen (right, green arrow) is highlighted. When pushing the area, the item in the user interface is selected. (Color figure online)

As touch technologies change and improve, it is important to verify whether enhancements in the technology and the material still lead to the same usability effects, and how such novel touch concepts are perceived in terms of user experience (UX).

The main research question was whether known usability effects, such as reduced task completion time, still hold, and what type of user experience is associated with such an interaction. Regarding UX enhancement, a set of animations was investigated for possible improvements of UX. A controlled experiment was performed to answer these questions.

In the following, the article presents the current state of the art on indirect and/or absolute touch interaction in terms of usability. The method is then described, followed by the results and their discussion. The article concludes with a more general discussion on how to adapt user-centered design and development processes to take into account the replication of findings when the underlying technology changes.

2 Related Work

2.1 Using Touch for Controlling User Interfaces via a Distance

Direct touch interaction on a screen is becoming a de-facto standard for interactive systems like mobile phones or tablets. While touch interaction with direct and immediate feedback on the touched area is taken up quickly, using touch to control elements on a distant screen is perceived as less usable [17]. While usability seems to be lower, industry [32] claims that using touch elements to control interaction on a distant screen enhances user experience. Such technologically novel touch input for distant screens is currently considered for in-car systems for secondary driving tasks, for in-home applications like interactive TV, and even for aircraft cockpits.

The traditional solutions for tasks including navigation, selection, or, more generally, interaction with a visual display from a distance are buttons, knobs, or sliders [32]. A recent trend is the incorporation of touch as a means of interaction, ranging from using tablets to interact with large displays [1] to incorporating touch elements in cars, especially for secondary tasks while driving [34]. Touch interaction for distant screens or displays differs from the direct touch interaction known from mobile phones, tablets, or touch screens, as the touch input is performed on an area dissociated from the output area, typically a distant screen.

For touch interaction on a distant screen there are in general two ways to map the user input to the movement on the distant screen: absolute and relative. Absolute mapping is defined as a homothetic correspondence between the position of a contact on the input surface and the position of an object on the output display [12]. In other words, it is a position-to-position and velocity-to-velocity mapping between the input and output device [18]. In contrast, relative mapping/pointing is the correspondence of the displacement on the input surface and the displacement of an object on the output display. It generally involves a non-linear transfer function to support fast movements over large distances and precise interactions with small objects [12]. Absolute pointing is claimed to be easier to learn [18], and to be more natural and convenient; however, absolute pointing also has disadvantages.
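The two mappings can be sketched in a few lines of code. This is an illustrative reconstruction, not an implementation from the cited literature; the display size, gain values, and function names are assumptions:

```python
# Illustrative sketch of absolute vs. relative mapping (all constants assumed).

OUT_W, OUT_H = 1920, 1080  # resolution of the distant output display

def absolute_map(x, y, in_w, in_h):
    """Position-to-position mapping: each point on the input surface
    corresponds to exactly one point on the output display."""
    return x / in_w * OUT_W, y / in_h * OUT_H

def relative_map(cursor, dx, dy, speed):
    """Displacement mapping with a toy non-linear transfer function:
    faster finger movement yields a higher gain, so large distances
    can be covered quickly while slow movements stay precise."""
    gain = 1.0 + 2.0 * min(speed, 1.0)
    cx, cy = cursor
    return (min(max(cx + dx * gain, 0), OUT_W),
            min(max(cy + dy * gain, 0), OUT_H))
```

With the absolute mapping the cursor appears wherever the finger lands; with the relative mapping the same finger displacement moves the cursor by different amounts depending on movement speed.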

Absolute pointing can lead to parallax error [4], a perception error due to the apparent shift of the interaction area against a background when the observer position is not aligned with the device. It can also lead to occlusion effects: when interacting, the finger, hand, and/or arm can hide part of the output device and can even totally occlude small targets [33]. Forlines et al. [11] report that absolute touch interaction is uncomfortable on large displays and/or during long use. Relative pointing is said to be less natural than absolute pointing and more difficult to learn, but it allows speeding up the interaction, as different transfer functions can support movement over longer distances [12].

Touch as interaction technique to interact with user interfaces on distant screens has been discussed controversially for different application domains, including the television [32], automotive [30] or aeronautic [15] domains.

Compared to other types of input elements, such as knobs, sliders, or other physical controls with haptic feedback, a set of disadvantages is reported for touch interactions:

  • Touch misses the dimension of immediate haptic feedback [37].

  • Touch was reported to be less efficient [32] for selection and navigation tasks.

  • Touch was reported to be less effective [32] for selection and navigation tasks.

On the other hand, touch interaction increases user experience, especially the overall hedonic quality, addressing the user’s need for novelty in a product or in an interaction technique, compared to standard interactions like buttons [32].

2.2 Absolute Indirect Touch

Absolute indirect touch is the use of a one-to-one mapping between a separate touch input device and a distant display.

Norman & Norman [25] compared the use of a Nintendo Wii Remote for a selection task in three different conditions. The first condition was absolute pointing using an infrared camera to detect the movement, the second was stabilized absolute pointing using the camera coupled with a 6-axis accelerometer, and the last was relative pointing using only the gyroscope. They conclude that the advantage of absolute pointing compared to relative pointing is its intuitiveness, which stems from the direct mapping users learn throughout their lives. However, in their study relative pointing showed better performance, and users preferred relative pointing to absolute pointing. König et al. [18], who proposed a precision-enhancing technique for absolute pointing devices, confirm the hypothesis that absolute pointing is a more natural and more convenient pointing experience, as it provides easier hand-eye coordination compared to relative pointing. However, König et al. pointed out the common problem shared by all absolute indirect pointing approaches: the lack of precision, especially when using high-resolution displays.

Gilliot et al. [12] investigated the influence of form factors on absolute indirect-touch pointing performance in two studies. In the first, they compared two different screen sizes (196 × 147 mm, 66 × 50 mm) and two visual conditions (looking at the input device, not looking at the input device). They found that users perform better when they can look at the input surface, and that scale does not affect user performance. In the second experiment, they compared several aspect ratios between the input and the output device and concluded that matching aspect ratios lead to better performance.

Pietroszek and Lank [31] investigated spatial correspondence between a smartphone screen and a projection screen for selecting targets. They investigated two different conditions: in the first, the desired target was displayed on both the projection screen and the smartphone screen, while in the other condition the desired target was only displayed on the projection screen. They found that the error rate was 3.5% of screen width when the target was mirrored on the smartphone screen, while it roughly doubled to around 6% when the target was only displayed on the distant screen.

Palleis and Hussmann [28] explored the effect of touch indirectness on spatial memory and navigation performance in a 2D panning task. Comparing direct absolute touch to indirect absolute touch, they found that spatial memory performance is not decreased by a spatial separation of touch input gesture and visual display, and that decreasing the size of the input surface increases navigation efficiency.

For the automotive domain, Sheik-Nainar et al. [34] compared three different touch interaction techniques for target selection by drivers in cars: direct absolute pointing, indirect absolute pointing, and indirect relative pointing. Their study revealed comparable performance for absolute indirect touch and absolute direct touch in terms of efficiency, effectiveness, distraction, and user preferences. Compared to relative indirect input, absolute indirect input showed better performance, lower distraction, and higher user ratings.

For interaction with large displays, ARC-Pad [22] is an indirect interaction technique based on a mobile phone’s touchscreen that combines absolute and relative pointer positioning. Tapping with ARC-Pad roughly positions the cursor at the corresponding location on the distant screen, using an absolute mapping. The user can then adjust the cursor location by sliding the finger on the touchscreen, using a relative mapping. This technique reduces clutching by half compared to a cursor acceleration technique.
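The hybrid behavior described above can be sketched as follows; the display and touchpad dimensions and the class structure are assumptions for illustration, not taken from the original ARC-Pad paper:

```python
# Sketch of ARC-Pad-style hybrid pointing: a tap places the cursor
# absolutely, a subsequent slide refines it relatively (all sizes assumed).

OUT_W, OUT_H = 1920, 1080  # distant display resolution
PAD_W, PAD_H = 320, 480    # phone touchscreen size

class ArcPadCursor:
    def __init__(self):
        self.x, self.y = 0.0, 0.0

    def tap(self, tx, ty):
        # Absolute phase: jump to the corresponding screen location.
        self.x = tx / PAD_W * OUT_W
        self.y = ty / PAD_H * OUT_H

    def slide(self, dx, dy, gain=1.0):
        # Relative phase: fine adjustment from finger displacement.
        self.x = min(max(self.x + dx * gain, 0), OUT_W)
        self.y = min(max(self.y + dy * gain, 0), OUT_H)
```

Because the tap already lands near the target, the relative phase only needs a small correction, which is what reduces clutching.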

2.3 Tactile and Visual Feedback in Touch Interaction

Burke et al. [6] compared the effect of visual-auditory and visual-tactile feedback on user performance in a meta-analysis of 43 studies. They selected studies that reported at least one comparison between single modes and multimodal combinations, and that reported a measure of error rate, reaction time, and/or performance score. They found that visual-tactile feedback provides a significant advantage over a visual-only feedback system, and that visual-tactile feedback is particularly effective when multiple tasks are being performed and under normal workload conditions.

Another finding of this meta-analysis is that while multimodal feedback seems to enhance performance, improving performance scores and reducing reaction times, it has little or no effect on error rate.

Pasquero and Hayward [29] investigated the use of tactile feedback in the task of scrolling through a long list of items. They conducted a study with two conditions: in the control condition, no tactile feedback was provided, while in the experimental condition, a short tactile pulse was provided when the user moved from one item to another, and a longer tactile pulse every 10 items. They measured how frequently users needed to look at the screen and observed that, with tactile feedback, users required on average 28% fewer glances to complete a task than without it.

Treskunov et al. [36] investigated how haptic feedback affects the user experience of a touchpad-based television remote. They conducted two user studies with two haptic prototypes. A pilot study with eight users, employing smartphones to simulate a directional touchpad, revealed that users preferred enabled haptic feedback. Encouraged by these results, they conducted a second study using a touch remote control coupled with a linear resonant actuator on the back of the remote, comparing three haptic conditions (5 ms, 25 ms, no haptic). They did not find significant effects on time, error, or ratings. However, when asked at the end of the study which haptic condition they preferred, eight of the nine participants preferred haptic feedback over no haptic feedback, although some participants did not distinguish between the 5 ms and the 25 ms conditions.

HaptiCase [9] is an interaction technique for smartphones that provides back-of-device tactile marks that users sense to estimate the position of their finger in relation to the touchscreen. By pinching the thumb against a finger at the back of the device, the finger location is transferred to the front as the thumb touches the touchscreen. The study revealed that users were more accurate at eyes-free indirect typing with HaptiCase compared to having no tactile marks. A second study investigated the impact of tactile targeting on visual targeting when both targeting strategies are combined. Users were both faster and had a lower offset to the target when they could look at the input device than when they could not. Guerreiro et al. [13] attached tactile marks to mobile devices’ touchscreens to guide blind people’s interactions. The marks showed positive effects on the acquisition of on-screen targets and were perceived as helpful by users.

2.4 Animations

Early work by Disney [35] shows that animations affect user experience in general. Chevalier et al. [8] revisited the pioneering work of Baecker and Small [2] on the place of animation in interfaces. They concluded that user experience is the most important reason for using animations. Merz et al. [23] investigated how different animation principles for animated transitions in mobile applications influence perceived user experience. In a pilot study they compared three animation styles: slow in and slow out, exaggeration, and linear. The results showed a tendency that animation style can affect the perception of UX.

2.5 Research Overview

Table 1 gives an overview of the current literature along the dimensions of absolute and relative mapping and direct and indirect mapping, complemented by the categories visual feedback, tactile feedback, and animation. As highlighted with checkmarks, the contribution of this article is to understand how a combination of absolute indirect touch input with visual feedback and/or haptic feedback influences usability, and especially the overall user experience, as this dimension is not explored in the current literature.

Table 1. Overview on the state of the art summarizing contribution on touch research for usability and user experience. The highlight marks the contribution area of this article

3 The Problem of Touch Interaction with Haptic Marks

To support absolute touch interaction in situations where the screen is out of reach for the user, we developed a touch interaction element with haptic marks that can be applied in various contexts and domains, e.g. as an interaction element in the car to control secondary tasks while driving, in a cockpit for tasks where the pilot cannot reach the screen, or for standard applications like TV to be included in a remote control. Figure 1 shows some possible usages of such an absolute touch interaction element with haptic feedback.

Contrary to absolute touch elements that map the touch input one-to-one (1:1) to the user interface, this touch interaction element has a number of fixed areas, which can be varied depending on the constraints and requirements of the different domains; each area on the touch input field maps absolutely to a fixed area on the user interface. The number of elements on the touch input depends on the application area. For tasks with high cognitive load and risk, as in cars [34], there are only 3 × 3 fields, while for areas with lower cognitive load, or more entertainment-oriented applications like TV or interaction on large screens, there are more fields (e.g., 4 × 3, 3 × 4, or larger); see also Fig. 1.
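This fixed-area mapping differs from a continuous 1:1 mapping in that touch coordinates are quantized to grid cells. A minimal sketch, with the pad dimensions and function name assumed for illustration:

```python
# Quantize a touch position to a grid cell of the fixed-area mapping.
# Pad size and grid dimensions are parameters, so the same function
# covers a 3 x 3 automotive layout or a 4 x 3 TV layout.

def touch_to_cell(x, y, pad_w, pad_h, cols, rows):
    """Return the (col, row) grid cell for a touch at (x, y)."""
    col = min(int(x / pad_w * cols), cols - 1)
    row = min(int(y / pad_h * rows), rows - 1)
    return col, row
```

The clamping with `min(...)` keeps a touch on the very edge of the pad inside the last cell instead of producing an out-of-range index.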

The haptic marks support users in achieving their goals without having to look at the input surface, as the marks can easily be felt with the fingertips. As opposed to a flat touch area, where the user has to constantly evaluate the position of the pointer on the distant screen, the haptic marks support ease of use, efficiency, and effectiveness by providing unobtrusive haptic feedback on the touchpad and simplifying target acquisition on the distant screen.

The general goal of this research is touch interaction as an input for distant displays, such as television screens, car displays, or aircraft displays. Put simply, we aimed to investigate whether haptic feedback (“to feel”), visual feedback (“to see”), or a combination of both is more important to the user, and how this affects usability and UX.

3.1 Research Question and Hypothesis

The research questions were the following: (1) How does the presence or the absence of haptic marks influence usability of the system and affect user experience? And (2) How does the presence or absence of animated visual feedback influence the usability and the overall user experience?

Hypothesis 1 (flat vs haptic marks):

There is a significant difference in terms of usability (efficiency, effectiveness, satisfaction) and user experience (naturalness, aesthetics, hedonic and pragmatic qualities) when using the flat touch interaction input element compared to using the touch interaction element with haptic marks.

Hypothesis 2a (visual feedback/no feedback): There is a significant difference in terms of usability (efficiency, effectiveness, satisfaction and naturalness) and user experience (naturalness, aesthetics, hedonic and pragmatic qualities) when using a system with animated visual feedback compared to using a system without visual feedback.

Hypothesis 2b (visual feedback with three different curves):

There are significant differences in terms of user experience (naturalness, aesthetics, hedonic and pragmatic qualities) when using a system with animated visual feedback that uses ease in combined with ease out, a linear curve, or only ease out.

3.2 Method, Participants and Procedure

A within-subjects design was used with 16 participants. The experiment consisted of two parts: in the first part, the independent variables were the remote control and the feedback condition, while in the second part, the independent variable was the type of animation used (cf. Tables 2 and 3). Both parts collected usability and user experience data through observation and logging, standard questionnaires, short semi-structured interviews upon completion of each condition, and short interviews at the end of the experiment.

Table 2. Conditions for the first part of the experiment based on the two independent variables
Table 3. Second system’s independent variables & values for the second part of the experiment

Sixteen participants (14 male and 2 female), aged 21 to 25 years (mean = 23, SD = 1.41), took part in the study. The sample was a convenience sample recruited via Facebook, mailing lists, and personal contacts. In order to avoid biases caused by a lack of familiarity with touch interaction, we recruited young people, as they are more likely to be familiar with touch interaction. All participants used a touch device at least several times a week and owned either a smartphone or used a tablet at home. All participants were right-handed, with normal or corrected-to-normal vision, and no participant indicated being color blind. Daily TV consumption ranged from none up to 4 h – 2 participants indicated never watching TV (12.5%), 6 participants watch less than 30 min a day (37.5%), one participant up to an hour (6.3%), 4 participants up to two hours (25%), one participant up to 3 h (6.3%), and 2 participants up to 4 h (12.5%).

Daily smartphone usage ranged from no usage to more than 4 h a day – one participant stated not to use a smartphone (6.3%), 2 participants use it up to an hour (12.5%), 4 persons use it up to 1.5 h a day (25%), another four persons use it up to 3 h (25%), while one person uses it up to 4 h (6.3%). Finally, 4 persons indicated that they use their smartphone for more than 4 h every day (25%).

3.3 System Information

In order to evaluate the touch element with haptic marks, two remote controls were produced: one included a standard touch interaction pad, while the other included haptic marks. Figure 2 shows the two remote controls used in the experiment. The driver software for both remote controls is identical, and both touch areas, regardless of the haptic marks, send information in a 12-byte array (for a 3 × 4 grid). Each byte indicates how close the user’s finger is to the sensor of the given area on a scale from 0 to 255; this allows interpolating the position of the user’s finger on the touch grid. The only difference between the two remote controls is that the sensors of the flat touch area are tuned slightly more sensitive to account for the difference in height of the touch area without the recessed haptic marks; this compensation yields the same effective sensor sensitivity for both remote controls.
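As an illustration of how such an interpolation could work, the following sketch computes a weighted centroid over the 12 proximity readings. The centroid method is an assumption for illustration, not the documented driver algorithm:

```python
# Interpolate a finger position from 12 proximity readings (0-255),
# one per cell of the 3 x 4 grid, via a weighted centroid of cell
# centers. Coordinates are returned in grid units (0..cols, 0..rows).

def interpolate_position(readings, cols=4, rows=3):
    total = sum(readings)
    if total == 0:
        return None  # no finger near the pad
    x = sum((i % cols + 0.5) * v for i, v in enumerate(readings)) / total
    y = sum((i // cols + 0.5) * v for i, v in enumerate(readings)) / total
    return x, y
```

A finger hovering between two cells produces roughly equal readings in both, so the centroid lands on their shared border, which is finer-grained than the raw cell resolution.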

Fig. 2.

Touch interaction element with haptic marks integrated in a remote control (left) and without haptic marks (right)

The user interface prototype consists of a page with 12 tiles (4 columns, 3 rows). During the experiment, dots appear pseudo-randomized on the tiles, and users have to press the corresponding area of the remote to select the indicated tile. Correct selections are indicated with a green checkmark, incorrect selections with a red cross on the item. Figure 3 shows the user interface for the different conditions.

Fig. 3.

User Interface with correct selection (left) and incorrect selection (right)

For the second part of the experiment, the user interface (UI) consisted simply of twelve areas with a set of TV channels (see Fig. 4 below) and images simulating a TV channel displayed in the background.

Fig. 4.

User interface with twelve areas showing TV channels

3.4 Material

In the first part of the experiment, two versions of the prototypical UI were tested in order to provide two different feedback types for the condition offering visual feedback within the user interface. The condition with feedback offered two visual cues for the interaction: a highlight of the corresponding tile in the UI when an area of the remote control was touched, and a temporary downscaling of the corresponding tile when an area of the remote control was pressed. The condition without feedback did not offer these visual cues.

The tiles in the UI have a square shape and occupy the maximum space on the screen, taking into account the gaps at the border and between two tiles (see Fig. 3).

The background of the prototype is medium gray, the tiles are black with a different opacity depending on whether they are highlighted, and the dots are white. This choice of colors was made to avoid biases related to any type of color blindness. The contrast between a dot and a tile remains high (above 50%) even if the tile is selected.

For the second part of the study, the UI only changed in terms of animations used (see Table 3). A variation of the remote control with haptic marks was used, enhanced by two buttons (left/right) that allowed changing pages within the grid.

The experiment was conducted in a usability lab that resembles a living room. The room is equipped with a 40-inch TV with 4K resolution, two sofas, and a coffee table. Two cameras recorded each session: one behind the user to provide an ‘over-the-shoulder’ view of the interaction with the remote control, and one below the TV in front of the user to capture the user’s facial expressions and posture. The prototypical user interfaces used in the study were running on a small form factor computer behind the television to give the participants the impression that they were using a normal TV with a set-top box.

The experiment started with an introduction about the general goal of the study, followed by a demographic questionnaire investigating the media consumption habits of the participants and a short pre-interview. Subsequently, participants were introduced to the user interface and were asked to perform tasks – the selection of dots on the tiles of the UI for the four experimental conditions in the first part of the experiment, and the selection of specific channels in the UI for the three experimental conditions in the second part. The experiment used a within-subjects design, where each participant evaluated all four conditions for remote control and feedback in the first part of the study and the three animation conditions in the second part. Condition order was randomized and counterbalanced within the sample, and each evaluation session lasted about 45 min.

For each task in each condition, task completion rate, task completion time, and number of errors were collected. After each condition, participants were asked to rate the ease of use of the system, how comfortable it was to use, how natural its use felt, how accurate the remote control was perceived to be, how smooth the interaction was, how responsive the system was, and how pertinent and suitable the animations were for the given tasks. Additionally, the participants filled in the SUS [5] and AttrakDiff [16] questionnaires after having completed the tasks for each condition.

After the two parts of the experiment, participants were asked in a closing interview which remote control they preferred in terms of usage and in terms of design, as well as which one they perceived as more accurate. Test subjects did not receive any compensation for their participation.

Tasks

The selection task in the first part of the experiment consisted of a sequence of 24 dots that appeared randomly on one of the 12 tiles of the user interface (two dots per tile per condition) and that the participants needed to select as quickly and as precisely as possible. The procedure was repeated for each of the four experimental conditions (without visual feedback and without haptic feedback (1); without visual feedback and with haptic feedback (2); with visual feedback and without haptic feedback (3); with visual feedback and with haptic feedback (4)).
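A sequence of this kind – 24 targets, exactly two per tile, in shuffled order – can be generated as follows; the seeding is an assumption added for reproducibility, not a detail taken from the study:

```python
# Generate a pseudo-randomized target sequence: each of the 12 tiles
# appears exactly twice, in shuffled order.
import random

def make_dot_sequence(tiles=12, repeats=2, seed=None):
    seq = list(range(tiles)) * repeats  # two dots per tile
    random.Random(seed).shuffle(seq)
    return seq
```

Balancing the targets per tile ensures every grid position contributes equally to completion time and error measurements.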

The selection task in the second part of the experiment consisted of a sequence of eight channels that the user needed to select one after the other, again as quickly and as precisely as possible. The procedure was repeated for each of the three experimental conditions of the second part (Ease In and Ease Out animation; Linear animation; Ease Out animation only).
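For reference, the three timing curves compared in these conditions correspond to standard easing functions; the quadratic formulas below are common textbook definitions and an assumption about the prototype's exact curves:

```python
# Standard easing functions for animation timing; t runs from 0 to 1
# over the animation duration, the return value is the progress.

def linear(t):
    return t

def ease_out(t):
    return 1 - (1 - t) ** 2  # fast start, decelerates toward the end

def ease_in_out(t):
    # Slow start and slow end (piecewise quadratic).
    return 2 * t * t if t < 0.5 else 1 - 2 * (1 - t) ** 2
```

All three curves start at 0 and end at 1; they differ only in how the animated transition distributes its speed over time.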

4 Results

The data of the two parts of the experiment was analyzed with respect to the experimental conditions and the underlying research hypotheses.

4.1 Impact of Haptic Marks and Visual Feedback on Usability

Usability: Task Completion Time

Haptic marks on the remote control had a significant influence on users’ performance: when using the remote control with haptic marks, users completed tasks faster (flat: mean = 00:02.29; haptic marks: mean = 00:01.30). A Mann-Whitney test indicated that task time was significantly shorter for the remote control with haptic marks (Mdn = 00:01.00) than for the flat remote control (Mdn = 00:01.30; U = 194823.5, p < .001).

Providing visual feedback increased the task completion time from 00:01.55 to 00:02.03. A Mann-Whitney test indicated that task time was significantly shorter in the No-Feedback condition (Mdn = 00:01.00) than in the Feedback condition (Mdn = 00:01.30; U = 206449, p < .001).

This is in line with previous findings that people tend to wait until the visual feedback has finished; nevertheless, feedback is important for such tasks in case of interruptions [6].

Usability: User Ratings of the Interaction

Haptic marks furthermore significantly influence users’ ratings and perceptions on the following dimensions: perceived speed, perceived likelihood of errors, perceived difficulty, comfort, naturalness, accuracy, smoothness, and responsiveness. Table 4 gives an overview of these results.

Table 4. User ratings: mean value with/without haptic marks and description of the performed test. Mean on scale 1 to 5, 1 being best.

Usability: Impact of Visual Feedback

The Feedback/No Feedback condition yielded significant results for perceived comfort (Mdn = 3 for no-feedback; Mdn = 2 for feedback), perceived naturalness (Mdn = 2 for no-feedback; Mdn = 1.5 for feedback), and pertinence of the animation (Mdn = 2.5 for no-feedback; Mdn = 1.0 for feedback), on a scale ranging from 1 (best) to 5 (worst). Additionally, attractiveness yielded significant results (Mdn = 0.71 for no-feedback; Mdn = 1.0 for feedback) on a scale from −3 (worst) to +3 (best).

The Feedback/No Feedback condition did not yield significant results for perceived speed, perceived error rate, perceived difficulty, perceived accuracy, perceived smoothness, perceived responsiveness, success percentage, or the hedonic qualities identification and stimulation.

Interaction Effects

There was no statistically significant interaction effect between the feedback condition and the type of remote control used on the combined dependent variables, F(14, 47) = 1.390, p = .196; Wilks’ Λ = .707.

4.2 Impact of Haptic Marks/Visual Feedback on User Experience

User Experience

A Kruskal-Wallis H test showed that there was a statistically significant difference in pragmatic quality as well as attractiveness between the different study conditions.

No difference was observed in the variables perceived smoothness, perceived responsiveness, hedonic quality – identification, and hedonic quality – stimulation between the different study conditions (see Fig. 5 for AttrakDiff metrics for the different study conditions).

Fig. 5.

Means of UX metrics (for study conditions)

4.3 Impact of Animation Type

Statistical analysis compared usability and user experience metrics across the different animation conditions of the second part of the study. There were no significant differences in usability and user experience metrics between the conditions ‘Ease In/Ease Out’, ‘Linear’, and ‘Ease Out’, except for error count between the ‘Linear’ and the ‘Ease Out’ condition, which was significantly higher for the ‘Linear’ condition (Mann-Whitney U = 88, N = 32, Z = −2.104, p = .035, r = .37). These results might also be biased by channel logos unfamiliar to the users, as most errors were related to confusing channel logos.

4.4 Final Interview

In the closing final interview, twelve of the sixteen participants stated they preferred the remote control with the haptic marks (75%), while four participants stated to prefer the flat remote control (25%).

Users were also asked whether they perceived a difference between the test sessions of the second part of the experiment, where only the animation type changed between tasks. The majority of participants (10 persons) indicated that they did not notice differences, while 6 persons stated they perceived differences. The differences the users observed related to the speed (4 persons), the fluidity (1 person), and the change of page in the UI (1 person).

Finally, participants were asked if they preferred one session over the others in the second part of the study. Seven of the sixteen participants did not state any preference; four preferred the Ease In/Ease Out session, three the Ease Out session, and two the Linear session.

5 Summary

Our study shows that using haptic marks significantly improves usability and some aspects of user experience. The usability indicators showed significantly better scores for the remote control with haptic landmarks (faster task completion; higher perceived speed, lower perceived likelihood of error, lower perceived difficulty, higher perceived accuracy and responsiveness). These findings were supported by the significantly better ratings for pragmatic quality in the AttrakDiff questionnaire.

In terms of user experience, the haptic marks influence the users’ perception of comfort, naturalness, and attractiveness. The hedonic quality dimension of the AttrakDiff questionnaire, with its sub-dimensions stimulation and identification, did not yield significant results. A possible explanation is that the type of task was too narrow and the prototype user interface too limited in terms of functionality to allow a full investigation of UX. In similar studies of interactive TV systems using a broader range of tasks, we were able to show influences on UX by manipulating the type of interaction technique [32].

Our second hypothesis regarding the influence of visual feedback on usability and user experience was only partially confirmed. For the usability metrics, we did not observe significant differences between the feedback/no-feedback conditions except for task time, which was significantly slower with feedback than without it; this is in line with previous findings that people tend to wait for animations to finish. Pragmatic quality in the AttrakDiff was close to significance (p = .052) but did not reach the .05 threshold.

Concerning user experience, the visual feedback conditions did not yield significant differences in the hedonic quality dimension, which could again be explained by the narrow task type and the limited functionality of the prototype user interface, but we observed significant results for perceived comfort, perceived naturalness, and attractiveness. This indicates that visual feedback has a positive impact on some aspects of UX.

Variations of the animation speed curves in the second part of the experiment did not show significant differences in terms of usability or user experience. This could be explained by a selective perception bias: users were asked to find channels without any information about changes to the animation, and the majority of them were likely so focused on finding the right channels that they did not notice the change of animation. This assumption is consistent with the final interviews, in which the majority of participants stated that they did not observe differences between the three sessions, and no clear favorite emerged when the participants were asked for their preferred session.
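The three animation speed curves compared correspond to standard easing functions; since the exact formulations used in the prototype are not specified here, the quadratic variants below are illustrative sketches of the general shapes:

```python
def linear(t: float) -> float:
    """Constant speed from start to end."""
    return t

def ease_out(t: float) -> float:
    """Fast start, decelerating toward the end (quadratic)."""
    return 1.0 - (1.0 - t) ** 2

def ease_in_out(t: float) -> float:
    """Slow start and end, fastest in the middle (piecewise quadratic)."""
    return 2.0 * t * t if t < 0.5 else 1.0 - ((-2.0 * t + 2.0) ** 2) / 2.0

# All curves map normalized time t in [0, 1] to progress in [0, 1];
# they differ only in how fast the highlighted item moves mid-animation.
for f in (linear, ease_out, ease_in_out):
    assert f(0.0) == 0.0 and f(1.0) == 1.0
```

Because all three curves share the same start and end points and total duration, differences are confined to mid-animation velocity, which may help explain why participants rarely noticed them.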

The results of our experiment suggest that touchpads enhanced with haptic marks are a possible solution to overcome current limitations of touch interaction. They also indicate that even though visual feedback had a smaller effect than tactile feedback, it still has an impact on UX and should be taken into account in absolute indirect touch interaction design.

We acknowledge that the mean age of our sample is quite low. This was intentional, as we were aiming for high familiarity with touch devices and smartphones.

6 Discussion and Future Work

In software engineering, processes have been tuned and adapted to take into account specific software qualities such as safety [3], reliability [21], learnability [20], or usability [26]. One issue with these contributions is that focusing on improving a specific property might damage others, as they are usually conflicting [10]. Moreover, what is missing in all of these processes are clear indications of when and how to re-evaluate scientific findings in response to technology changes. For instance, while the process presented in [24] allows integrating evaluation results (through scenarios) into task models, the integration of a pure repetition of evaluations is not considered. This missing re-confirmation and replication of knowledge can be a threat for the scientific community. As user-centered design and development (UCD) approaches require iterative design and detailed evaluation at each iteration step [14], evaluators’ work is not well supported, since it is difficult to compare evaluation results from different UCD stages.

In this particular case it became clear that potentially conflicting software properties, namely the users’ judgments of usability and of user experience, were aligned, but that a traditional approach for enhancing the user experience (animations) did not affect the users’ judgment. Thus, advancement in this field should focus more on haptic feedback than on visual feedback, which is outside the mainstream approaches currently applied in the field.

For the indirect absolute touch interaction itself, future work will focus on investigating the technology with a larger set of users and a broader variety of tasks to better address the UX dimension. In terms of technology, the haptic touch remote control will be enhanced with additional elements, such as a relative touch element, in order to enrich the interaction technique by combining the advantages of both mappings.

In the longer term we intend to investigate an automotive application of the haptic touch element. We will adapt our solution to this different context, e.g., by using fewer fields, and we will conduct a secondary-task experiment to investigate the effect of the haptic marks on user attention, cognitive load, and usability. Based on the current results, we expect that haptic marks will reduce distraction and provide a more eyes-free experience than other touch-based user interfaces currently on the market.

Concerning the integration of replication studies and the continuous evaluation of upcoming technologies, a series of investigations in industrial (design-oriented) contexts is underway to develop enhanced user-centered design and development processes that integrate a set of software qualities, including usability, user experience, reliability, safety, and security. This work will conclude efforts that have been underway for more than 10 years [27].