1 Introduction

As Kumin et al. indicate [1], there is an emerging research field regarding the computer skills of persons with Down Syndrome (DS). Studies have examined their use of games [2, 3], job searching [1], numerical applications [4] and mathematical learning [5], authentication in software [6], comprehension of mobile requirements [7], work-related tasks on a tablet [8], computer skills in general [9], and the use of their musculoskeletal system during tablet interactions [10].

There are three main skills that influence computer usage: cognitive, motor and perceptual [6, 11]. People with DS struggle with abstract thinking and have poor short-term memory and short attention spans [6]. Their visual and spatial memory is better than their hearing and verbal skills [12]. As Brandão et al. [13] pointed out, they have poor imitation skills, and because children with DS do not develop spontaneous learning strategies, they find problem solving very difficult. They also have trouble with decision-making, action initiative and mental calculation [14]. Other limitations involve association and task composition/decomposition [15].

In research carried out by Feng and Lazar [6], parents of DS children reported that their offspring had trouble answering who, where, when, what and why questions. The authors also mention that their perceptual skills are affected by the disability and that, in general, they need extra time to complete any task. Most people with DS have limited language communication and memory skills. They acquire fine and gross motor skills at a later age. Even though most of them have visual and hearing problems, their main strength is that they are highly visual [16].

Their low reading level also impacts their computer usage. In one study, parents reported that their children had trouble following instructions on websites and apps; they suggested the use of verbal instructions, because spelling is another problem [6].

Nevertheless, the use of technology has increased tremendously, including among people with cognitive and physical disabilities, because of the benefits it brings them [17, 18]. With enough practice, people with DS can acquire the same skills and become as dexterous as anyone. Accessible IoT devices are likely to become an important interaction channel, so studying them now is worthwhile. In this paper we propose a methodology to measure which kinds of movements are easier to use and to determine which ones should be included in software to make it more accessible. Thus, we intend to study the touch, eye-movement and body gestures that are suitable for designing applications accessible to people with DS.

2 Purpose of this Research

The Web Accessibility Initiative (WAI) [19] publishes guidelines that are regarded as international standards for Web accessibility. We would like to extend those guidelines to include people with Down syndrome and to design better interactions for touch, body and eye gestures. We expect to produce guidelines identifying the gestures that are most adequate for all kinds of tasks.

We consider that not all gestures are equally easy, so by measuring how friendly each one is, we can suggest which ones would simplify interaction with the software that people with DS commonly use, such as Spotify, Angry Birds or Netflix. At the same time, we can recommend reserving the difficult gestures for actions that should not be triggered accidentally, such as app configuration or shutdown functions.

3 Work in Progress

The proposed methodology is described here, including an explanation of how to measure the different types of gestures and how those measurements are validated with empirical observations.

3.1 Gesture Selection

Since we have had little experience with DS users, we began with empirical observations. First, we observed what types of apps they preferred and how often they used a smartphone. Results varied greatly, but some users were extremely proficient with smartphones. We decided games would be the best option for measuring gesture use.

We produced a list of the main touch gestures, shown in Figs. 1, 2, 3 and 4, and of body and eye movements, shown in Figs. 5 and 6. We devised a small game for each task, which is explained in greater detail in the corresponding gesture sections.

Fig. 1. One hand touch gestures using fingers, selected for measurement.

Fig. 2. Two hands touch gestures using fingers, selected for measurement.

Fig. 3. Whole hand touch gestures, selected for measurement.

Fig. 4. Two whole hands touch gestures, selected for measurement.

Fig. 5. Body gestures selected for measurement.

Fig. 6. Eye gestures selected for measurement.

Instructions for each game will have to be verbal, since we noted in our empirical observations that most of the users had little or no reading skills.

The games will measure the number of attempts, the number of completed gestures, the time employed to complete them, and, if a gesture was not completed, its percentage of completeness. To identify whether Fitts' law [20] determines those results, each movement will be measured by presenting the images used in the touch and eye measurements in five different sizes, shown in random order. All data collected will be analyzed to determine whether there is a correlation between successful gestures and the person's age, gender, the target size and the gesture type.
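For reference, a minimal statement of the Shannon formulation of Fitts' law, which we assume as the model behind varying the target sizes (MT is movement time, D the distance to the target, W the target width, and a, b are empirically fitted constants):

```latex
% Shannon formulation of Fitts' law (assumed here):
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

Larger targets lower the index of difficulty $\log_2(D/W + 1)$, which is why presenting the same image at five sizes lets us estimate the smallest target at which participants still succeed.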

3.2 Participants

This research is qualitative and theoretical. We follow Sampieri et al. [21], who suggest that 20 to 30 cases should be observed in each study. We intend to contact several independent institutions that help people with Down syndrome, thus ensuring a wider socio-economic range. By doing this, we make sure that ease of movement is not skewed by the participants' familiarity with the technology used.

We expect to have participants ranging from 10 to 40 years of age, of both genders, and with no restrictions on technology skills.

3.3 Touch Gestures Measurement

In order to determine which of the touch gestures are easier to use, we first listed the most popular ones; Table 1 shows all of them. We divided them according to the use of one or two hands, and according to whether the person uses several fingers or the whole hand to complete the gesture. Then, we devised a small game to ensure the user would be applying that gesture, as shown in Table 1. We programmed all of these games into a tablet app that measures the gestures.

Table 1. Gestures and the games associated with their measurement.
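As a minimal sketch of what the tablet app could log per attempt (the field names below are our own assumptions, not the app's actual schema), one record per trial is enough to support the metrics and correlations described above:

```python
from dataclasses import dataclass

@dataclass
class GestureTrial:
    """One attempt at a gesture in one of the measurement games (hypothetical schema)."""
    participant_id: str
    age: int
    gender: str
    gesture: str            # e.g. "tap", "pinch", "two-hand rotate"
    target_size_px: int     # one of the five target sizes, shown in random order
    completed: bool         # whether the gesture was finished
    completion_pct: float   # partial progress when not completed (1.0 if completed)
    duration_s: float       # time from first contact to completion or abandonment
```

Records in this form can later be grouped by gesture, target size, age and gender for the correlation analysis.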

Grossman et al. [22] suggested metrics based on task performance, which Mendoza et al. [23] adapted to measure task completion by Down Syndrome users:

1. Percentage of users that have completed the task optimally.
2. Percentage of users who completed the task without any help.
3. Ability to complete a task optimally after a certain time frame.
4. Decrease in task errors made during a certain period of time.
5. Time employed until the user completes a certain task successfully.

We adapted these metrics considering a one-time observation only, and also took into account Fitts' law [20] regarding the size of the target: all images will be randomly presented in five sizes in order to determine the minimum target size required to achieve success.

Thus, the same individual will not use the software several times, which would otherwise increase his/her learnability. The metrics that were defined are listed below (a computational sketch follows the list):

1. Percentage of users who completed the task.
2. Time used to complete the task.
3. Minimum target size for completed tasks.
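A minimal sketch of how these three metrics could be computed from per-trial logs follows (it assumes simple dictionaries whose keys mirror the trial record sketched earlier; none of this reflects the actual app code):

```python
def summarize(trials):
    """Compute the three adapted metrics for one gesture type (illustrative only).

    Each trial is a dict such as:
    {"user": "P01", "completed": True, "duration_s": 2.4, "target_size_px": 96}
    """
    users = {t["user"] for t in trials}
    completed = [t for t in trials if t["completed"]]
    users_who_completed = {t["user"] for t in completed}

    pct_completed = 100.0 * len(users_who_completed) / len(users) if users else 0.0
    mean_time = (sum(t["duration_s"] for t in completed) / len(completed)
                 if completed else None)
    min_target = (min(t["target_size_px"] for t in completed)
                  if completed else None)
    return {"pct_users_completed": pct_completed,
            "mean_completion_time_s": mean_time,
            "min_successful_target_px": min_target}


# Example usage with made-up data:
example = [
    {"user": "P01", "completed": True, "duration_s": 2.4, "target_size_px": 96},
    {"user": "P02", "completed": False, "duration_s": 6.1, "target_size_px": 48},
]
print(summarize(example))
```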

3.4 Body Gestures Measurement

To determine which body movements are easier to use, we listed the main ones; Table 2 offers a complete list of them. We created a Kinect "dancing game" in which the user copies the gestures presented by the console. We provided three choices of music to increase its appeal. Again, following Grossman's [22] suggestion, we included these measurements:

Table 2. Interviews to be conducted with DS caretakers.

1. Percentage of users that complete each of the movements.
2. Time taken to complete each task.

In this case, all gestures will be presented in the same order, and they were chosen so that they can be executed as one fluent movement.
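As an illustrative sketch only (the joint names, tolerance and timing are our assumptions, and the Kinect SDK calls that would supply the joint positions are not shown), deciding whether a presented dance gesture was completed can be reduced to checking that the tracked joints stay close enough to a target pose within the allotted time:

```python
import math
import time

# Target pose for one dance gesture: joint name -> (x, y) in metres,
# relative to the spine base (hypothetical values for illustration).
TARGET_POSE = {"hand_left": (-0.4, 0.5), "hand_right": (0.4, 0.5)}

def pose_matches(tracked, target, tolerance_m=0.15):
    """Return True if every target joint is within `tolerance_m` of its goal."""
    return all(
        math.dist(tracked[joint], goal) <= tolerance_m
        for joint, goal in target.items()
        if joint in tracked
    )

def measure_gesture(get_tracked_pose, target=TARGET_POSE, timeout_s=10.0):
    """Poll the tracker until the pose matches or the timeout expires.

    `get_tracked_pose` is assumed to return the current joint positions
    (e.g. read from the Kinect skeleton stream); it is not defined here.
    Returns (completed, elapsed_seconds).
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        if pose_matches(get_tracked_pose(), target):
            return True, time.monotonic() - start
        time.sleep(0.03)   # poll at roughly 30 Hz
    return False, timeout_s
```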

3.5 Eye Gestures Measurement

To measure whether some eye movements are easier than others, an eye-tracking test was designed. This game, called "follow the butterfly", begins by presenting a blank screen with a butterfly in each of the four corners and one in the center, as shown in Fig. 7. After this, as in Fig. 8, a butterfly is presented "hidden" among flowers but still visible. Immediately afterwards, butterflies appear in different parts of the screen; sometimes they appear in the same location as the previous one, and at other times they are hidden elsewhere on the screen. Butterflies are hidden by giving them the same color as the flowers, as in Figs. 9 and 10. We also included images with very evident distractors, such as the cat in Fig. 11, in order to observe whether users concentrate on the task or are distracted by other elements in the pictures. The size of the butterflies varies in order to measure ease of use according to Fitts' law [20].
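A minimal sketch of how each butterfly trial could be scored from the eye tracker's gaze samples follows (the sample format and the area-of-interest test are our assumptions; real eye-tracking SDKs export richer fixation data):

```python
def score_trial(gaze_samples, aoi):
    """Score one "follow the butterfly" trial from raw gaze samples.

    gaze_samples: list of (t_seconds, x_px, y_px) tuples, ordered by time.
    aoi: (left, top, right, bottom) bounding box of the butterfly in pixels.
    Returns the time until the gaze first lands on the butterfly (or None)
    and an approximate total dwell time inside the area of interest.
    """
    left, top, right, bottom = aoi
    first_hit = None
    dwell = 0.0
    prev_t = None
    for t, x, y in gaze_samples:
        inside = left <= x <= right and top <= y <= bottom
        if inside and first_hit is None:
            first_hit = t - gaze_samples[0][0]
        if inside and prev_t is not None:
            dwell += t - prev_t
        prev_t = t
    return first_hit, dwell


# Example with made-up samples at ~60 Hz and a butterfly at (400..480, 300..380):
samples = [(i / 60.0, 200 + i * 5, 320) for i in range(60)]
print(score_trial(samples, (400, 300, 480, 380)))
```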

Fig. 7. An example of a highly visible butterfly on a white background, used as instructions for "Spot the Butterfly". Variations of this image include the same butterfly in all four corners and in the center.

Fig. 8. A highly visible butterfly hidden among flowers of a different color.

Fig. 9. An orange butterfly (circled in yellow) hidden among flowers of the same color (Color figure online).

Fig. 10. A red butterfly (circled in yellow) hidden among red, white and purple flowers, on a different part of the screen (Color figure online).

Fig. 11. A cat used as a distraction; the butterfly to be found is circled in red (Color figure online).

Finally, the eye-tracking study will produce gaze plots and heat maps that can be reviewed individually or aggregated across all users.
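As a sketch of the aggregation step (NumPy and Matplotlib are our choice here; the actual eye-tracking software may provide its own heat maps), gaze points pooled from all users can be binned into a 2D histogram and rendered over screen coordinates:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_heatmap(gaze_points, screen_w=1920, screen_h=1080, bins=(96, 54)):
    """Render a simple gaze heat map from (x, y) points pooled across users."""
    xs = [p[0] for p in gaze_points]
    ys = [p[1] for p in gaze_points]
    heat, xedges, yedges = np.histogram2d(
        xs, ys, bins=bins, range=[[0, screen_w], [0, screen_h]]
    )
    plt.imshow(
        heat.T,                        # transpose so x runs horizontally
        origin="upper",                # screen coordinates grow downwards
        extent=[0, screen_w, screen_h, 0],
        cmap="hot",
        alpha=0.7,
    )
    plt.title("Aggregated gaze heat map")
    plt.xlabel("x (px)")
    plt.ylabel("y (px)")
    plt.show()

# Example usage with random points standing in for real gaze data:
rng = np.random.default_rng(0)
plot_heatmap(np.column_stack([rng.uniform(0, 1920, 500),
                              rng.uniform(0, 1080, 500)]))
```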

3.6 Interviews

Another input to our research is our empirical observation of people with DS using technology. For example, we have observed that they can be quite proficient with Spotify: even though one person could not read, he was able to find his favorite music by looking at the album images, and we noticed he used his smartphone with amazing speed. Thus, we expect to conduct interviews with teachers and staff working in DS institutions.

The script for these interviews is shown in Table 2.

4 Conclusions

Many Graphical User Interface design guidelines exist, but they tend to be too general and do not take into account specific adaptations for users with particular disabilities. This can have a negative impact on user interface design. We expect these measurements will help us decide whether some movements are easier than others. With this in mind, we may be able to create a guideline of best practices for the usability and accessibility of interfaces designed for DS users. We should remember that difficult gestures are the best design choice when we want to be sure users really intend to perform a drastic action with their device, for example, shutting it down. Therefore, the proposed guidelines can help determine which gestures should be programmed, and in which situations. We believe there is still much to be done in inclusive software and hardware design to improve the experience of people with cognitive disabilities. We hope this work is a small step in that direction.