1 Introduction

The research presented in this paper is drawn from a critical review of the last decade (2005–2015) of research on the development and evaluation of technology for disabled and older people, their needs and wishes for technological support, and attitudes to technology [1]. This large body of research work was grouped into a number of themes based on how the research and development helps older and disabled people, rather than the technology used. One of these themes, that of supporting older and disabled people in accessing and using technology, covers how people physically use technology, including input and output devices and interaction techniques. It is this theme that is presented here, together with a range of the research identified.

This paper focuses on the research to support older people, and in particular, what issues for older people have been addressed when examining the use of handheld devices. The term “handheld devices” includes tablet computers (commonly referred to as “tablets”), mobile phones and smartphones, and, since the selection criteria reach back a decade, work on Personal Digital Assistants (PDAs), although these are no longer widely used. The papers that deal with this topic are drawn from the part of the critical review that covered mainstream human computer interaction (HCI) conference proceedings and journals, as shown in Table 1, rather than specialist accessibility or gerontology outlets. This is because the aim was to understand what work was being done by the HCI community in this area, and what is available to researchers who are new to the field and who would start by looking for previous research and ideas in general HCI publications. The selection for inclusion was based on the impact factor of journals [2] and on the Australian Research Council’s rankings of journals and conferences [3].

Table 1. Journals and conferences reviewed for this paper

Two societal changes make the development of usable and acceptable handheld technologies for older people important. These changes are demographic shifts and the rapid growth in the use of handheld computing devices. With respect to demographics, as is well known, the older population is increasing. According to a report from the Population Division of the United Nations [4], in 2015 one in eight people worldwide was aged 60 years or over (60 or 65 years being typical ages to consider the beginning of “old age”). By 2030, older people are projected to account for one in six people globally. In addition, the report notes that improved longevity and the ageing of larger cohorts, particularly those born during the post-World War II “baby boom” years, mean that the older population is becoming even older. The proportion of the world’s older population (those aged 60 or over) who are aged 80 years or over is projected to rise from 14% in 2015 to more than 20% in 2050. Globally, the number of people aged 80 years or over, the “oldest-old” of the population, is growing even faster than the number of older people overall. In 2015 there were 125 million people aged 80 or over; by 2050 this number is projected to reach 434 million.

We need to bear in mind that “older people” refers to people whose age span may cover 40 years and more after the age of 60. This means that the category “older people” represents a group with much heterogeneity, not just in terms of age range, but in terms of abilities. People who are currently in the “youngest old” group may be proficient in the use of computers and may enjoy using technology, but may find as they age that they are less able to use mainstream technology because of age-related frailties. Therefore, it is important to understand what demands technologies make of the physical, perceptual and cognitive abilities of users.

The second change is the very rapid increase in the use of handheld computing devices. These are useful not only for their potential for voice telephony, but also for new forms of communication, such as short text messages (SMS) and alternative messaging systems (e.g. Skype and Viber). These devices are now “gateways onto the Internet”: they give access to information searching and to transactional applications, such as online shopping and financial services. Finally, these devices are multifunctional, acting as alarm clock, diary, camera, address book, games console, and reading device. Users can personalize them to their preferences, and customize applications to their needs.

The rest of the paper is organized as follows: in the next section a review of 25 papers on the use of handheld computing devices for older people is presented. These are arranged into five groups according to the aspect of use under consideration. This is followed by a discussion and conclusions section in which this corpus of research is related to present-day and ongoing concerns.

2 Review

Our review groups the 25 papers reviewed into five sections. These are: older people’s views on the use of these devices; research specifically concerned with interaction devices (touchscreens, pens) and interaction techniques (tapping, dragging, etc.); text and number entry; legibility and display considerations (text size, icons and images); and menu navigation.

2.1 Older People’s Views About Using Tablet Computers and Smartphones

It is important to develop a deep understanding of older people and their abilities, as these affect their use of tablet computers and smartphones. This is an area addressed by Kurniawan [5], who used a multi-method approach to investigate mobile phone use by older people. At the time of this research, mobile phones used physical buttons for input and smaller screens than today’s smartphone touchscreens. She interviewed two experts who were themselves older users or had experience of older people using mobile phones, held focus groups, and conducted a web survey. Her results showed that the functional declines typically experienced by older people, including in dexterity, touch sensation, muscle strength, visual acuity and working memory, mean that a range of design features of mobile phones were problematic. More specifically, these include:

  • Dexterity: the size, location and arrangement of buttons were problematic (often too close together); rubbery buttons that did not provide enough feedback, such as a click when depressed, meant that users did not know whether a press had been registered

  • Muscle strength: the size, shape and weight of mobile phones often made them too small to hold comfortably, but larger phones were too heavy, even though an advantage might have been larger, more usable screens

  • Vision: screens were too small, button labels too small to read, and text size too small to read, even with the help of spectacles

  • Cognitive functioning: interaction was complex due to the number of options available, the need to navigate menus and to learn how to use them; there were too many menus, which were often difficult to understand and to remember.

Taken together, many of the functionalities of older phones were difficult to use: for instance, buttons were not arranged in a familiar way, and button pressing was a skill that needed to be learned and that also placed demands on memory. This meant that texting was a problem, since text entry on some phones required users to press number keys that also stood for letters, and to understand and remember multiple key presses. Perversely, even aids to texting, such as predictive typing, became more distracting than helpful, as effort had to be put into deleting wrong predictions.

A paper by Siek et al. [6] offered many insights from observing younger and older people interacting with a PDA and recording their concerns. The older men in this study worried about pressing the PDA’s 5-way navigation button, fearing their “fat fingers” would cause them to press multiple buttons, but in fact this was not the case. Most participants held the PDA in their non-dominant hand and used the dominant hand to select buttons. When asked about icon size preference, older people wanted to clearly see the details on the icon, whereas the younger people in the study were interested in having as many icons as possible on the screen. The researchers noted that both age groups held the PDA at the same distance from their eyes, but that the older users would tilt the screen to be able to see with less glare. In terms of holding the PDA, most of the older participants held the device in two hands when doing a recording task, whereas most of the younger participants held it in their non-dominant hand and used their thumb to depress the recording button located on the side of the device. In trying to account for this difference, the researchers noted that some older people expressed a fear of breaking the PDA and held it with two hands for a better grip. In addition, the researchers observed participants using their dominant hand to stabilize the PDA while the other hand pressed the button. In a scanning task, older participants kept the scanner still and moved the items to be scanned, while younger participants moved the scanner and kept the item stationary.

The study by Siek et al. [6] was also designed to test motor control rather than cognitive effort. It revealed that older people did not have more difficulty than younger people with pressing buttons and carrying out simple voice recording and scanning tasks that require dexterity and motor coordination. However, they preferred larger icons (20 mm), whereas the younger people preferred smaller icons (5 mm or 10 mm). From these results the researchers concluded that older participants can physically interact as successfully as younger users at the level of motor tasks, provided the tasks are not cognitively demanding.

A more recent paper by Harada et al. [7] undertook to study the multi-touch nature of tablets and smartphones and how older people cope with these. They noted that in spite of these devices using direct manipulation, they also require users to learn non-intuitive multi-finger gestures, to cope with unexpected sensitivity of the touch surface, and to understand a conceptual model that differs from that of desktop computers. They carried out an observational study using focus groups and an experiment using an application that logged all multi-touch events and changes to the system state. For the experiment, the objective was to observe and analyze the usage patterns of individual participants to gain insight into errors and operation issues.

The participants in their study (age range 63–79, 12 women and 9 men) had varying levels of experience, from complete novices to active intermediate users. All of them owned a mobile phone; 8 owned a multi-touch smartphone, 14 owned a tablet, and 6 owned both devices. Each participant performed a task with an address book and a map. These tasks mimicked actual applications, as the researchers wanted to provide a context and realistic tasks.

The researchers observed three phenomena. The first was that unexpected touchscreen responses were caused by unintentional touches as well as by touches that were not registered. The first type of situation happened when participants, in gripping the device, accidentally touched screen elements located at the side of the screen, or when a finger hovering above the screen got too close and triggered a touch event. Unexpected responses also occurred when participants touched the screen but the touch did not register, primarily because their fingers were too dry. The second phenomenon was not seeing the whole screen: participants would concentrate on the soft keys area when entering text and not check what they were typing in the text box; when they needed to re-enter a telephone number, they would not check whether the number had been deleted before starting to enter it again. The researchers suggested that it is challenging to shift attention back and forth from the keys to the text box. The third phenomenon was that the participants disliked “unfriendly” features. These included pop-up menus that came up and faded away before the participant could finish reading them; soft buttons that, if pressed longer than a quick tap, brought up such menus; tap-and-hold menus, because the finger often occludes the options; and the need to dismiss a menu without making a selection by tapping on an area outside the menu, an action that might activate something else.

The researchers made several recommendations, such as not putting touch-sensitive information in screen areas that are close to where the phone is gripped; providing more feedback mechanisms (beeps or auditory messages, such as repeating the numbers pressed when making a call); and providing some instruction for users. In their discussions with participants, they observed ‘Aha!’ moments, such as when they explained gestures, in particular the “pinch in” gesture, to users. They suggested that older users may be less likely to explore an app and therefore some initial instruction is beneficial.
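
To make the first of these recommendations concrete, the following minimal sketch (in Python) shows one way a touch handler could ignore touches that land in the “grip margins” at the left and right edges of the screen. The screen width and margin values are hypothetical and chosen only for illustration; they are not taken from the study.

```python
# Minimal sketch: ignore touches that land in the "grip margins" at the
# left and right edges of a handheld screen. The screen width and margin
# width are illustrative assumptions, not values from the study.

SCREEN_WIDTH_PX = 1080
GRIP_MARGIN_PX = 48   # hypothetical dead zone where the gripping hand rests

def accept_touch(x_px: int, y_px: int) -> bool:
    """Return True if a touch at (x_px, y_px) should be passed on to the UI."""
    in_left_margin = x_px < GRIP_MARGIN_PX
    in_right_margin = x_px > SCREEN_WIDTH_PX - GRIP_MARGIN_PX
    return not (in_left_margin or in_right_margin)

# Example: a touch near the left bezel is treated as an accidental grip touch.
print(accept_touch(10, 500))    # False: probably the gripping hand
print(accept_touch(540, 500))   # True: centre of the screen
```

In practice such a margin would need to be tuned per device and combined with the feedback mechanisms suggested above, so that rejected touches do not simply look to the user like an unresponsive screen.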

2.2 Interaction Devices and Techniques

The majority of research identified in our review was on interaction devices and techniques, which breaks down into a number of topics, although by far the most work was about touch-based interaction. This is to be expected, given that touch-based devices were becoming popular during the period 2005–2010 [8] and coming into widespread use with smartphones. In addition, there is a smaller amount of research on pen-based input.

2.2.1 Touch-Based Interaction

Murata and Iwase [9] asked younger (20–29 years), middle-aged (50–59 years) and older participants (65–75 years) to undertake a number of pointing tasks with both a touch panel and a mouse. A “touch panel” was what we would now call a “touchscreen”, but here it is described as a separate piece of hardware attached as a peripheral. Two experiments with the same group of 45 people were conducted. In the first experiment, the factors were distance to target, target size and approach angle to target, tested with both direct input (touching) and an indirect selection device (mouse). In the second experiment only a touch panel was used and the experiment concerned target location.

Age made no difference in times to complete the tasks with the touch panel, but there was a strong age effect with the mouse, with older people much slower than younger people and much slower than with the touch panel. Unfortunately, the authors do not report how experienced the older people were with the mouse, and there was also a practice effect across sessions, with older participants getting significantly faster across the five sessions. It may be that the older people were not very familiar with using a mouse and this contributed to these results. The main message of the paper is that touch, or direct input, is a promising input method for older users.

Findlater et al. [10] also compared younger (19–51 years) and older people (61–86 years) using a touchscreen or a mouse to undertake a wider range of tasks (pointing, dragging, crossing and steering). They found that while older people were significantly slower overall, the touchscreen reduced this difference. For younger people, the touchscreen was 16% faster than the mouse, but for older people, it was 35% faster.

On the other hand, Rogers et al. [11] compared using a touchscreen with using a rotary control, a less common indirect input device than the mouse. They compared the performance of younger (18–28 years) and older (51–65 years) people on a number of basic tasks, including controlling sliders, up/down buttons, list boxes and text boxes. Younger participants were significantly faster than older participants on all tasks. However, for older participants there was no difference between the two devices on all but one task (long up/down buttons), whereas for younger participants there were significant differences on five of the nine tasks, mainly with performance on the touchscreen being faster. So, for older participants the device made no difference, but younger participants were faster with the touchscreen than with the rotary control. However, there was high variability amongst the older participants, particularly in the touchscreen condition, which may account for the lack of significant differences between devices for this group.

Stößel and Blessing [12] investigated the acceptability to younger (mean age 26.1 years) and older (mean age 67.0 years) people of different ways of using a touchscreen, in particular the gestures now used with multi-touch screens, for example pinching and spreading the fingers to zoom in and out. The older participants judged the proposed gestures on average as more suitable than younger participants did. In 20 of the 34 tasks, older and younger participants differed in the gesture that was rated most highly. Looking at type of gesture, direct manipulation gestures were rated similarly by younger and older participants, whereas symbolic gestures were rated more highly by older participants. “Symbolic gestures” refers to drawing an arrow, numbers or letters, rather than more abstract gestures. There were also differences in preferences for the number of fingers to be used in gestures: younger participants rated two-finger gestures more highly, whereas older participants rated one-finger gestures more highly.

Kobayashi et al. [13] investigated the use of the touchscreen by older people (aged in their 60s and 70s), but without making comparisons to either younger people or other devices. They asked 20 older people to undertake a series of tasks involving tapping, dragging and pinching gestures in two sessions, one week apart. They found that mobile touchscreens were generally easy for the older participants to use and a week’s experience generally improved their proficiency. The participants preferred dragging and pinching to tapping and particularly had difficulty tapping on small targets (e.g. a 30-pixel button). Kobayashi et al. used their results to derive a number of recommendations for the design of touchscreen-based systems for older people:

  • Use larger targets (8 mm or larger in size); a sizing sketch follows this list

  • Address the gap between intended and actual touch locations – when older people miss a target location such as a button, provide feedback on where they have touched and where they need to touch

  • Consider using drag and pinch gestures rather than taps

  • Explicitly display the current mode as the participants often did not notice changes in mode and became confused
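
As an illustration of the first recommendation, the sketch below converts a physical target size such as the suggested 8 mm minimum into pixels for a given screen density. The density values in the example are hypothetical and are not taken from the study.

```python
# Minimal sketch: convert a physical target size (mm) into pixels for a
# given screen density, to apply the "8 mm or larger" recommendation.
# The example densities are hypothetical and not taken from the study.
import math

MM_PER_INCH = 25.4

def min_target_px(target_mm: float, pixels_per_inch: float) -> int:
    """Smallest whole number of pixels that covers target_mm at this density."""
    return math.ceil(target_mm / MM_PER_INCH * pixels_per_inch)

# Example: an 8 mm target on two hypothetical screens.
print(min_target_px(8.0, 160))  # about 51 px on a low-density screen
print(min_target_px(8.0, 326))  # about 103 px on a high-density screen
```

The same millimetre guideline therefore translates into very different pixel sizes on different devices, which is why such recommendations are better expressed in physical units than in pixels.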

Gao and Sun [14] also investigated both performance and preferences for touchscreen gestures for younger (19–24 years) and older (52–81 years) people. Although these were large touchscreens such as are found on public kiosks, their results are included here as they clearly relate to the other research discussed on touchscreens for older people. For both younger and older participants, button sizes larger than 15.9 × 9 mm led to better performance and higher satisfaction. However, the effects of the spacing between buttons were only significant when buttons were small or large. The younger participants favored direct manipulation gestures using multiple fingers, whereas older participants preferred the indirect “click-to” designs (e.g. buttons that control zoom in/out by a certain amount). On the basis of their results, Gao and Sun proposed quite detailed design guidelines for touchscreen interaction for older users.

Interestingly, a paper by Wacharamanotham et al. [15] some time later, when touchscreens had become commonplace on tablets, smartphones, and even on PCs, noted that touchscreens were a problem for older users with tremor or “finger oscillation”. Tremor is of particular interest because it interferes with the interaction mode most prevalent in the use of smartphones and tablets, tapping on a touchscreen. While not a major problem in the general population, hand tremor is reported to exist in 6.3% of adults aged 60 to 65, and in 21.7% of the population aged over 95 [16]. Tremor-induced oscillations are problematic when using a touchscreen as they make it difficult to accurately tap on a target, or cause multiple inputs where they are not wanted. One helpful remedy is to increase target size, but this is not practical on small screens, where space is at a premium. Working on this problem, Wacharamanotham et al. evaluated “swabbing”, a technique whereby the user slides their finger towards a target on the screen edge to select it. It consists of three interlinked actions: touching the screen, sliding the finger towards the target and lifting the finger.

This “swabbing” replaces the “traditional” tapping for selecting items. The researchers conducted two experiments, with 10 users in the first experiment (3 female and 7 male) with differing conditions of tremor from slight (one participant), to moderate (three participants), to marked (three participants) and severe (three participants). In addition, six participants were left-handed and four were right-handed. Their hypothesis was that the finger will show less tremor when sliding on the screen, so they compared sliding left and right in a designated area with three other types of touchscreen interaction (hovering over a spot, resting on a spot, and repeatedly tapping on a spot). In the second experiment, with six users, the purpose was to measure accuracy and user satisfaction. The six participants were in the age range of 70 to 87 years, and had varied tremor conditions (one slight, one moderate, two marked and two severe). Conditions were manipulated so as not to show feedback, to prevent participants from learning the task. The participants had a training session before undertaking the experimental tasks.

The results showed that sliding was consistently lower in tremor (measured by an accelerometer recording tremor frequency) than the other types of interaction. They also showed that swabbing can lessen finger tremor, and that users were more accurate and felt more satisfaction with this technique. The researchers recommended that tapping is suitable when the targets are large, but that for small targets, although sliding is slower than tapping, users prefer its accuracy over speed.
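
The selection logic behind swabbing can be illustrated with a minimal sketch: the target is chosen from the direction of the slide rather than from the exact point touched, which is what makes the technique forgiving of tremor. The target layout, minimum slide distance and selection rule below are assumptions for illustration, not the authors’ implementation.

```python
# Minimal sketch of the idea behind "swabbing": instead of tapping a target,
# the user slides a finger towards a target at the screen edge and lifts it;
# the target whose direction best matches the slide is selected.
import math

# Hypothetical edge targets, each described by the angle (in degrees) of its
# position relative to the screen centre: right = 0, up = 90, left = 180, ...
EDGE_TARGETS = {"call": 0, "contacts": 90, "messages": 180, "settings": 270}
MIN_SLIDE_PX = 40  # ignore slides too short to indicate a clear direction

def select_by_swab(down, up):
    """Return the edge target selected by a slide from `down` to `up`."""
    dx, dy = up[0] - down[0], down[1] - up[1]  # flip y: screen y grows downwards
    if math.hypot(dx, dy) < MIN_SLIDE_PX:
        return None  # too short to count as a swab
    angle = math.degrees(math.atan2(dy, dx)) % 360
    # Pick the target whose angle is closest to the slide direction.
    return min(EDGE_TARGETS,
               key=lambda t: min(abs(EDGE_TARGETS[t] - angle),
                                 360 - abs(EDGE_TARGETS[t] - angle)))

print(select_by_swab((200, 400), (350, 395)))  # roughly rightwards -> "call"
```

Because only the overall direction of the slide matters, small tremor-induced oscillations along the way do not change which target is selected.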

2.2.2 Pen-Based Input

A number of papers have studied older users and their use of pens and styluses as input devices.

Hourcade and Berkel [17] investigated the use of tapping with pens compared to touching with pens to improve accuracy. They conducted a study with 60 people, divided into 20 aged 18–22, 20 aged 50–64, and 20 aged 65–84. Their aim was to understand whether the two types of pen-based input, at that time the standard way to interact with a handheld computer, differed in accuracy. Their premise was that tapping on a physical notebook is not as natural as touching, making marks, checks, ticks, etc. The tasks that participants undertook were to select targets, of different sizes, either by tapping or touching them with the pen. Based on descriptive statistics, all three age groups were more accurate when touching than when tapping. Some people preferred tapping, because they found touching could be tiring for the wrist, but others preferred touching because they said they did not have to concentrate to aim, but could touch near the target and then move towards it to complete a task. The study made three recommendations:

  • That targets should be larger (in the context of the study, they suggest 50%) to enable the older group to achieve similar accuracy

  • That an easy undo functionality would be of benefit for everyone, but for the older age group it could be crucial if they are likely to make mistakes one time out of 10

  • That touch interactions should be customized to left and right handed users, as touch interactions by left handed users can obscure the screen.

Following on with the theme of accuracy, Moffat and McGrenere [18] started from established findings about two problems in target acquisition in pen-based interaction: “missing the target” (i.e. landing and lifting outside the target boundary), and “slipping” (i.e. landing inside the target boundary, but slipping out before lifting the pen). Previous research had established that missing was constant across age groups, but that slipping was unique to older users and accounted for almost half their errors. The researchers noted that slipping, although less frequent than missing, is an important problem for older people, as with a slip the pen lands on the target, activates the visual feedback associated with the selection, and indicates to the user that they have been successful, when in fact they have not. Thus, slip errors are particularly confusing, and many older people are unaware of the cause of the difficulty, and do not try to correct the problem.
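
The distinction between the two error types can be made concrete with a small classification sketch, assuming a rectangular target and the pen’s landing and lifting points; the target and coordinates below are illustrative only.

```python
# Minimal sketch of the pen-selection error types described above, given a
# rectangular target and the points where the pen lands and lifts.

def inside(point, target):
    """target = (left, top, right, bottom) in screen coordinates."""
    x, y = point
    left, top, right, bottom = target
    return left <= x <= right and top <= y <= bottom

def classify_selection(land, lift, target):
    if inside(land, target) and inside(lift, target):
        return "hit"
    if inside(land, target):
        return "slip"   # landed on the target but slid off before lifting
    return "miss"       # landed outside the target boundary

button = (100, 100, 160, 140)          # illustrative target rectangle
print(classify_selection((120, 110), (125, 112), button))  # hit
print(classify_selection((120, 110), (170, 110), button))  # slip
print(classify_selection((90, 95), (92, 96), button))      # miss
```

Seen this way, it is clear why slips are so misleading: the landing point is correct, so the interface briefly behaves as if the selection had succeeded.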

Given this situation, the researchers trialed two interaction aids: “steady clicks”, designed to address mouse input errors from slipping, and the “bubble cursor”, which makes the target bigger, combining them into “steadied bubbles”. They performed experiments with 24 participants, 12 in the younger age group (19–23 years old, comprising 5 women and 7 men) and 12 in an older age group (65–86 years old, with 6 women and 6 men). Participants were right-handed, had no diagnosed motor impairments to their hands, and all had normal or corrected-to-normal eyesight. All were novices to pen-based computing. The results of the experiments showed benefits of the two techniques individually as well as when the techniques were combined, for both older and younger groups. The techniques were especially useful for the older adults. For slipping, they reduced the performance gap between older and younger adults, so that there was no significant difference between the groups. For missing, both groups benefitted, but the older group benefitted more. This was not expected, but could be the result of the targets being smaller than in previous studies.

Other findings from this paper included that older adults found the use of the pen tiring, and the researchers noted that they used 50% more force than younger adults. This would explain why it is more tiring, but also suggests that the extra force is not necessary; the difficulty lies in determining how much pressure is needed. The researchers note that the biggest benefits were achieved when targets were small, comparable to the height of a text link. They also explain that there are other mouse-based techniques that could be applied to pens. Given the considerable frustration caused by slips and misses, they recommended further investigation into pen-based research.

A final paper regarding pen-based input for PDAs was by Ren and Zhou [19]. They noted that unlike in the real world, where pens and pencils come in all shapes and sizes to suit a diverse population, the stylus is provided with a computing device and is limited to the size of the device. They recommend consideration be given to the physical aspects of the pen, specifically the length and width of the pen and the tip width. They carried out two experiments to evaluate the effect of pen size with three user groups: children (aged 10–11 years, 8 boys and 4 girls); young adults (aged 21–23 years, 9 men and 3 women); and 24 older people (two different groups: one aged 60–71 years, 7 men and 5 women; the other aged 60–79 years, average age 71, 7 men and 5 women). All participants were right-handed. The researchers did not give any information about the participants’ prior experience with technology.

Participants performed a pointing task, a handwriting task, and a steering task. The steering task required participants to move the pointer of the device a certain distance, as is done when moving the scroll bar of a window. The handwriting task was not applied to the group of older participants. The researchers gave all participants instructions about how to hold the device, how to support their arms, and how to be seated during the testing.

The results showed that the dimensions of the pen affected participant performance very little, but that participant preferences were significantly affected. Regarding pen length, older participants preferred longer pens (11–15 cm), and the researchers speculated that this was because of their use of brush pens in Asia, which are longer than the pencils and mechanical pencils that are the normal tools for Asian children and young adults respectively. A thicker pen-tip width was preferred by children and older adults, and the researchers speculated this may be related to the eyesight of older users. A thicker pen width was preferred by young and older adults, again, the researchers speculated, perhaps because their hands are larger than children’s. Thus from this work, the researchers were able to specify a pen length (11–15 cm), pen-tip width (1.0–1.5 mm) and pen width (7 mm) that were most preferred and performed the best in the pointing and steering tasks carried out by older users. The researchers concluded that there are other variables and conditions to test and that this work was a start in introducing the notion that pen design should not be dictated by device design, and that introducing a range of pens to computer users, akin to real pens and pencils for paper use, could offer benefits to users in terms of comfort and security.

2.3 Text and Number Entry

Both tablets and smartphones use text and number entry extensively, both when used as communication devices and for information seeking. On the one hand, there is the need to create email and text messages, as well as to respond to such messages. Kurniawan [5] reported that older people felt obliged to respond to texts quickly as part of common courtesy. On the other hand, entering queries into web browsers and filling in interactive applications also require text entry. In the words of Weilenmann, “learning to text is an ordeal for the elderly” [20].

Weilenmann, although noting that features such as menu navigation and text prediction are also problematic, concentrated her study on the keypad and key presses for mobile phones. She referred to handsets that use the 12-key keypad based on the International Standard ISO/IEC 9995-8, where the letters A–Z are distributed over keys 2–9 in alphabetical order. Letters with diacritical marks, such as Ä, are available under the key where A is displayed, although they themselves are not displayed; to reach them a user has to press the A key repeatedly. Using a video-taped study session of five older participants from a pensioners’ organization, who were learning to use mobile phones, the research showed that multiple presses were problematic for the participants. In addition, two focus groups and 16 interviews were carried out, in which approximately 8 of the interviews included a practical exercise of text input. Analysis showed that older people had problems understanding how to perform sequential pressing of keys, which was a requirement for many functions on the mobile phone, including texting. They tended to press too slowly, press more than one key by mistake, and keep a key depressed for too long. Thus, before they could attempt to undertake tasks with the phone, they needed to learn how to press the keys. In addition, they needed to check the output on the screen, and hold the phone comfortably.
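
The multi-press (multi-tap) scheme that caused these difficulties can be sketched as follows; this is a generic illustration of how repeated presses of a key cycle through its letters, not a model of any particular handset in the study.

```python
# Minimal sketch of multi-tap text entry on a 12-key keypad: letters are
# grouped on keys 2-9 in alphabetical order, and repeated presses of the
# same key cycle through that key's letters.

MULTITAP = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

def letter_for_presses(key: str, presses: int) -> str:
    """Letter produced by pressing `key` `presses` times in a row."""
    letters = MULTITAP[key]
    return letters[(presses - 1) % len(letters)]

# "hello" needs 13 presses: h = 44, e = 33, l = 555, l = 555, o = 666.
word = [("4", 2), ("3", 2), ("5", 3), ("5", 3), ("6", 3)]
print("".join(letter_for_presses(k, n) for k, n in word))  # hello
```

Entering a single word thus requires a precisely timed sequence of repeated presses, which makes the observed difficulties with press timing and accidental double presses easy to understand.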

In a more recent paper, Smith and Chaparro [21] compared five text input methods (physical QWERTY keyboard; onscreen QWERTY; tracing; handwriting; and voice input) and studied performance, usability and user preferences for smartphone text entry tasks. The study involved 50 people, 25 younger (aged 18–35) and 25 older (aged 60–84). Twenty-two of the younger participants and 20 of the older participants owned a phone with a numeric keypad; none of the participants owned a smartphone and none had experience of the five input methods on a smartphone. In terms of physical abilities, none had major problems with dexterity or with speech.

The results showed that for both young and old participants, voice input was the most positively rated. However, the participants themselves noted that the experiments were carried out in the laboratory, where background noise was minimal, and expressed doubt about using voice input in a more realistic setting where noise could not be controlled. The next best method was the physical QWERTY keyboard, which both age groups reported as comfortable because of their familiarity with it. In addition, they valued the space between keys and the tactile and audible beep feedback. Of the three manual touchscreen methods (tracing, onscreen keyboard, and handwriting), tracing fared the best with both groups, although it was a new technique to them. Participants performed worse with the onscreen QWERTY; besides it lacking the attributes of the physical keyboard mentioned above, they complained that their fingers obscured the keys. Older participants in particular did not like the pop-up symbol menus that appeared if a key was depressed for too long. Handwriting was the most frustrating input method, as participants needed to adapt their handwriting to get it recognized by the system. The researchers end their study, published in 2015, with a recommendation to smartphone designers to continue to make a physical QWERTY keyboard available for smartphones, and to provide voice and shape-writing (tracing) recognition input as standard options.

2.4 Legibility and Display Considerations

Given the inevitable miniaturization of screens in the move from desktop-based computing to smartphones, researchers have also studied older people and their use of various aspects of screen display.

Darroch et al. [22] investigated font size for reading text on handheld computers (PDAs), noting that there was a lack of design guidelines for small screens, and in particular for older people. The researchers’ prior work had given some indication that older people might be able to read smaller text sizes on handheld computers. They wished to determine whether different font sizes are required when designing for older people, and whether the need to scroll when reading text has an effect on the font size chosen. The value of such work rests in the fact that the quality of presentational format can have a major influence on reading speed for learning and comprehension.

Their experiment used two groups of participants, 12 people in each, with a balance of 6 men and 6 women per group. The younger group were aged 18–29 and the older group 61–78. All participants were fluent in English, had a comparable education and were tested for average reading vision. Participants had no or very little experience with handheld computers. Each participant read a set of texts in which a word had been substituted with one that rhymed with the original word (e.g. “trees” was substituted with “sneeze”). The purpose of this task was to have participants read as naturally as possible, as opposed to “scanning” for information to complete a task, for example. Thirty-two texts of two different lengths (16 each), short and long, were created; the long passages required scrolling. The texts were presented to participants each time with a choice of two font sizes, drawn from 8 font sizes (2, 4, 6, 8, 10, 12, 14, 16). The participants’ reading speed and accuracy were measured, and in addition, participants were asked their opinions and preferences on font sizes.

The preference results showed that both groups disliked the extremes: font sizes 2 and 4, and the larger sizes 14 and 16. At the smaller sizes, some older adults indicated the text was not legible, partly because the smaller the font, the less the contrast between the text and the background. The larger sizes were disliked because the “words are spread out more”, which “breaks up the flow of reading”. This qualitative comment was not borne out by the data on accuracy and reading times, which did not show any significant effect. Overall the participants preferred a font in the range 10–11, with younger participants most positive about sizes 8 and 10, and older participants commenting positively about sizes 8, 10 and 12. The researchers noted that 12 is the largest font size that required no scrolling with short passages. Although scrolling did not affect the objective measures, users expressed a preference for seeing text “on one page”.

Font size preferences were found to be smaller than those found in desktop computer reading studies, but this may be due to the pixel density of the small screen (640 × 480), such that font size 10 on the handheld computer was approximately the same physical height as size 12 at a typical desktop resolution (1024 × 768). Also, the range of font size preferences may be because participants were allowed to move the screen closer to and further away from their eyes.

Roring et al. [23] also investigated an issue related to small screens, that of understanding facial expressions and identifying emotions in small images. The researchers wanted to understand whether older adults are disadvantaged when images are displayed on small screens. The motivation for this work is that, to avoid confusion and misunderstandings, older people need to quickly identify rapidly changing facial expressions of their interlocutors, for instance during a video conferencing session. Previous work has already established that older adults have difficulty in processing negative facial expressions, as opposed to younger adults who do not show differences between negative and positive emotions.

The researchers designed an experiment to determine the extent to which smaller images diminished older people’s ability to identify basic emotions. The experiment compared three groups (younger, middle-aged, older) on their ability to match the name of an emotion to a facial expression. Dependent variables were both response time and accuracy after seeing the expression. The groups comprised 20 young adults (mean age = 23 years, SD = 4.1 years), 20 middle-aged adults (mean age = 23 years, SD = 3.3 years) and 20 older adults (mean age = 71 years, SD = 5.1 years). All participants were native English speakers and were assessed for cognitive status.

The results showed that in general older participants identified negative emotions, such as sad or fearful, less accurately than younger participants. Older participants also performed worse on surprised faces. Older participants showed no difference from the younger and middle-aged participants for disgusted or angry faces. This contradicts previous research, but the researchers speculate it may be due to angry faces being more difficult to process. All groups showed an increase in accuracy at larger image sizes. Since these technologies are expected to play an increasing role in older people’s lives, both for communication and for health monitoring, it is important that attention is given to the size and quality of images; otherwise these factors, along with older adults’ already diminished capacity to identify emotions, will further hinder the effectiveness of communication.

Leung et al. [24] investigated age-related differences in the usability of mobile device icons. They investigated whether existing graphical icons are harder to use for older people, when compared to younger people. The researchers were motivated by the importance of the use of graphical icons in mobile phones on the one hand, and on the other, the known decline in perceptual and cognitive abilities of normal aging that makes it probable that this has some effect on older people’s ability to interpret icons.

They conducted a qualitative exploratory study and followed this up with an experimental study to determine which icon characteristics help older people in initial icon usability. In the exploratory study, they had 10 participants from three age groups (20s, 60s and 70s). All participants had good or corrected eyesight, some computer experience, basic cellphone experience, but little to no experience with PDAs. A laptop computer screen was used to enlarge icons from contemporary PDAs to approximately twice their size. Participants were asked to examine each icon, to say what they thought it represented, and to say what function they thought it might be associated with. They were also asked to complete a series of icon-finding tasks on two handheld computers (e.g. finding the icon for the camera or the help button). Finally, they compared the different icons used on PDAs and laptops for the same function, and were asked to say which they thought was the more usable and to explain their choice.

The results of this exploratory study showed that the older people were less accurate than the younger people in identifying what the icons showed, and what function they represented. Also, when choosing the preferred icon in a pair they chose the one that depicted an obvious link between the image and the function.

Based upon these results, the researchers conducted an experiment to test four interrelated hypotheses: compared with younger adults, older adults would find it relatively easier to use concrete (as opposed to abstract) icons, icons with semantically close meanings, and labelled icons. The experimental design used two groups, 18 younger participants (20–37 years old, mean 30.7) and 18 older participants (65 and older, mean 71.5). Participants were required to have basic computer experience, functional eyesight, and fluency in English, and no experience with handheld computers, PDAs or advanced smartphone functions. Three sets of 20 icons were made, with icons drawn from a corpus of 149 icons used on eight popular mobile devices. The sets of icons represented various combinations of concrete/abstract and semantically close/far. When presented to the participants, the icons were shown in a screen capture that displayed all the icons that would be in the interface at the same time as the test icon. For the labelled icon condition, existing labels were retained unless they were abbreviated or included a manufacturer’s name.

The icons were enlarged and printed on paper; this was to minimize effects of icon size due to individual differences in eyesight, and to avoid the glare that would have occurred had they been presented on a computer screen, to which many older people are sensitive. Paper presentation also minimized the need to interact with actual mobile devices. The participants were shown the three sets of icons and asked a series of questions about them.

The results supported the hypothesis that existing icons are harder for older people to use. The difficulties with using icons, which lead to difficulties in using the entire interface, may partly explain why older people find mobile devices difficult to use. Some icons are good metaphors but were still difficult for people to understand, perhaps because they did not have a good mental model of the functionalities (for example the “clamp” for a compress function). The researchers suggested using everyday metaphors, since commonly used device metaphors (such as a disk for the save function or a wrench for device options) may not be known to older people, who generally have less experience with computers. They counter the argument that future generations of older people will have substantial computer experience with the observation that as each generation of technology creates new functionalities and evolves, each generation of people will have trouble keeping up. The researchers found that labels greatly help both young and older participants to initially use icons. Although in the experiment there were no significant differences, three older participants commented that they interpreted the label before the icon. Thus, the researchers also suggested using popup labels and interface customization, allowing users to select icons. Finally, the researchers noted that older people, because of age-related declines in retaining learned meanings as well as less frequent use of their devices, will not necessarily experience increased familiarity, and hence usability, of icons with long-term use. Thus concrete icons may offer stronger recall cues.

Olwal et al. [25] reported on research that centred on customization. The researchers suggested tackling the problems older people have with mobile phones from the point of view of the software interface, rather than the physical form factor of the device. They suggest a software-centric approach that goes beyond countering age-related visual decline by, for instance, making the text larger: a software kit that can be run on mid- and low-end devices (rather than smartphones) and be configured to change the behavior or the “look and feel” of the phone.

The researchers suggested supporting the functionalities most prioritized by older people, as determined by previous work. These are making calls; sending/receiving an SMS; the phone book; image storage; and zoom/scaling. Individuals configuring the device, for example a family member or a carer, can be given a choice of layouts and five “components”. These are: a label for text input areas or images; a soft button that activates a function when a corresponding physical button is pressed (e.g. delete contact); a text area that allows text input; a list that displays items in a list view; and a GUI button (a soft button) that users navigate to and then activate.
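
To give a sense of this component-based approach, the sketch below shows how a simplified interface might be described so that a family member or carer could adjust it without programming. This is an illustrative configuration only; the field names, layout name and actions are hypothetical and do not reflect OldGen’s actual format.

```python
# Illustrative sketch only (not OldGen's actual format): a simplified phone
# interface described in terms of the five component types listed above, so
# that a family member or carer could configure it rather than program it.

home_screen = {
    "layout": "large_text_single_column",   # hypothetical layout name
    "components": [
        {"type": "label",       "text": "Phone book"},
        {"type": "list",        "items_from": "contacts"},
        {"type": "soft_button", "label": "Call",
         "physical_key": "green_key", "action": "call_selected_contact"},
        {"type": "soft_button", "label": "Delete contact",
         "physical_key": "red_key", "action": "delete_selected_contact"},
        {"type": "text_area",   "label": "New message", "action": "send_sms"},
        {"type": "gui_button",  "label": "Zoom", "action": "toggle_zoom"},
    ],
}

# A carer could adjust this structure (e.g. remove the delete button) and
# load it onto the phone; the framework would then render the interface.
print(len(home_screen["components"]), "components configured")
```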

Their OldGen customizable user interface framework underwent a formative evaluation in five individual, informal test sessions with older people (63–74 years old), and a further evaluation study with 6 older women (52–76 years old). Due to the small sample size, no statistical analysis was performed. The participants completed a pre-test and post-test questionnaire, and in between undertook tasks with three different user interfaces. The first was the default user interface on a standard phone, the second was the modified OldGen interface on the standard phone, and the third was a phone specifically designed for older people. On the generic phone, all participants required help to complete the tasks. On the OldGen modified interface, several participants completed the tasks without requiring assistance; the zoom function was liked, but they had problems activating the soft buttons and with scrolling. The participants rated the OldGen phone best: they did not mind the lack of icons, they liked the visual feedback for the buttons they were pressing while writing, and most completed the tasks without assistance. This evaluation provided the researchers with information about changes to OldGen to improve its usability in terms of presentation, such as better integrated zoom, increased contrast, no icons, some renamed menu elements, avoiding scrolling, and visual feedback for pressed buttons. The researchers emphasized that their intention was to explore how a customizable interface can be used to provide a consistent user interface for older users regardless of the model and brand of phone.

2.5 Navigation

A number of papers have investigated the problem of navigation on handheld computing devices for older people, for example through menus on mobile phones and on PDAs.

Ziefle and Bay [26] conducted a study to investigate the relationship between age and the usability of mobile phones, in terms of complexity. Basing their experimental design on the cognitive complexity of two different handsets, as measured by the number of production rules needed to perform a task, the researchers estimated that one phone required 25% more production rules than the other. In their experiments, they found that both younger and older participants performed better using the phone with the lower complexity, which somewhat refutes the claim that younger users are able to master the higher complexity of technological devices. The strong differences between the phones were not reflected in the participants’ ratings of the devices, which also suggests that manufacturers need to look beyond consumer usability ratings when evaluating their products. In terms of understanding the device, the older participants showed distinctly less understanding than younger participants, and explained in the post-experiment interviews that they expected the phone to meet their needs, functions to be easy to access, and the device to be as transparent and unambiguous as possible. This is further borne out by performance data showing that older participants, once disoriented in the menu structure, were not able to find their way back, as if they could not decide which of the menu entries they had already passed and which remained to be explored. The researchers speculated that this lack of tolerance for a trial-and-error searching style might mean that older users would prefer goal-oriented instruction.

Continuing with the problem of disorientation in mobile phone menus, Ziefle and Bay [27] investigated which spatial cues support users when navigating the two-dimensional menu spaces of mobile devices. It is known from studies of spatial abilities in three-dimensional space that people use three sources of knowledge to construct a mental model. These correspond to survey knowledge (the “bird’s-eye” view), route knowledge (known paths) and landmark knowledge (using landmarks to orient themselves at decision points). The researchers implemented two navigation aids in a simulated mobile phone. The first was a “category aid” showing the name of the current category as well as a list of its contents; the other navigation aid (the tree aid) also showed more contextual information, such as the higher and lower categories surrounding the category of interest, and used indentations to show the tree structure. These corresponded to landmark knowledge and survey knowledge respectively.
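
The difference between the two aids can be illustrated with a small sketch over a hypothetical menu tree; the menu contents below are invented for illustration and are not the menus used in the study.

```python
# Minimal sketch of the two navigation aids over a hypothetical menu tree.
# The "category aid" shows only the current category and its contents; the
# "tree aid" also shows the higher-level categories above it, indented to
# reveal the tree structure (survey knowledge).

MENU = {
    "Settings": {
        "Display": {"Brightness": {}, "Font size": {}},
        "Sounds": {"Ringtone": {}, "Volume": {}},
    },
}

def category_aid(path):
    node = MENU
    for name in path:
        node = node[name]
    return [path[-1] + ":"] + ["  " + child for child in node]

def tree_aid(path):
    lines, node = [], MENU
    for depth, name in enumerate(path):
        lines.append("  " * depth + name)
        node = node[name]
    lines += ["  " * len(path) + child for child in node]
    return lines

print("\n".join(category_aid(["Settings", "Display"])))  # landmark-style aid
print("---")
print("\n".join(tree_aid(["Settings", "Display"])))       # survey-style aid
```

The tree aid shows where the current category sits within the overall structure, which is exactly the orientation information that disoriented users lacked.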

These two aids were the independent variables examined in the study, which compared the performance of two sets of participants, 16 younger people (aged 23–28 years) and 16 older people (aged 46–60; note this is not a particularly old group), with 8 men and 8 women in each group. Participants undertook a set of 9 tasks, of which 6 required navigating through the menu. The experiment tested for efficiency, measured by the time required to complete tasks, the number of times users returned to the top of the menu and the number of steps back to higher levels in the menu; effectiveness, measured by the percentage of tasks achieved within the time limit; and ease of use, measured by a rating scale. The outcomes clearly show that survey knowledge (the tree structure) is crucial for menu navigation, even for users who are proficient in mobile phone technology, as all the participants in the study were. Where older users have decreased spatial ability, it is even more important.

In a variant on navigation in mobile phone menus, Ziefle et al. [28] conducted a study to examine whether disorientation was present when navigating hyperlinks in mobile phone menus. The interest in this situation is due to mobile devices becoming a means of accessing the internet, so users need to be able to navigate the internet on the small screens of mobile devices. Ziefle et al. conducted an experiment to investigate the effects of age on navigation with hyperlinks. The study involved 20 participants, 10 younger (mean age 22.6 years, SD = 2.4) and 10 older (mean age 59 years, SD = 3.7), with equal numbers of men and women in each group. All the participants were proficient in the use of computers and the internet and all were mobile phone users. None of the older group had any strong age-related limitations. However, the study showed that although older people were proficient and knowledgeable about the hyperlinks, they did feel disoriented and did not know at which point in the menu they were. This was borne out by their performance, which showed detours and a high frequency of going back to the home button to start again. The researchers caution that these effects are likely to be exacerbated with older users with age-related limitations and in real-life situations, where they need to hold the phone in one hand, input information with the other and pay attention to their surroundings.

Arning and Ziefle [29] continued their work on navigation and age-related effects of small-screen usage by investigating in detail user characteristics in terms of spatial ability, verbal memory, confidence in using technological devices (self-efficacy), and computer expertise. They recruited 32 participants for their study: 16 younger people (18–27 years) and 16 older people (50–69 years), with equal numbers of women and men in each group. The older group were ‘younger and healthy seniors’ and actively employed, and no significant differences were found in computer expertise between older and younger groups. The tasks were to enter an appointment in a diary and to postpone an appointment.

The results showed that spatial ability was the best predictor of menu navigation performance. The researchers speculated that good spatial abilities facilitate an appropriate mental model of the menu structure, which in turn supports orientation within the menu. The significance of mental models for navigation performance confirms the connection with spatial models seen in earlier published work [27].

Further, the researchers found that older participants were often guided by an inappropriate model of navigation, or even no mental model at all. When this occurred in the younger group, it did not incur the same negative performance, possibly because younger participants are able to compensate for the lack of a model with higher cognitive abilities or computer experience. Conversely, older users with an appropriate mental model performed at the same level as younger users, showing that they can overcome age effects.

Finally, McCarthy et al. [30] noted an interesting finding as part of a study of a PDA-based application to help older people to reminisce. The researchers undertook a feasibility study to investigate whether potential users of the reminiscence application would be able to comfortably use a PDA. Fifteen participants were involved, six men and nine women, aged between 55 and 82 years. As part of the feasibility study, the users were asked to perform six tasks. For each of the six tasks, users were required to navigate the menu structure to get to the task; once there, they could continue on their own. However, the researchers noted that all the users had difficulty with navigating to the tasks and had to be helped. Thus they encountered problems even before they got to the tasks of interest to the researchers.

3 Discussion and Conclusions

This paper has reviewed 25 papers that have been published in mainstream HCI conferences and journals and that dealt with older people and their use of handheld devices such as tablets and smartphones. Taken together they give a picture of the research landscape over the last decade, a “state of the art” for researchers and practitioners interested in this area and in what has happened and is happening in HCI with regard to this subject. Of course, in more specialized publication outlets there will be more work; however, our intent was to show what has been happening in mainstream HCI, to give an account of the trajectory over time and the progress made, and to help researchers find precedents and continue to add to the research in the area.

While for some papers it may be thought that technology has moved on, for instance smartphones no longer use keypads [20] and text entry is generally no longer performed via button presses, it should not be forgotten that this legacy lives on in other devices such as ticket machines, automatic teller machines (ATMs) and card payment machines, which do require key presses and which may still be problematic for older users. It is also worth noting that older-style phones are often “inherited” by older people from younger family members, and that in less resourced societies older phones are often still in circulation. Sometimes familiarity with these phones is valued over relinquishing them for newer devices that bring with them a new cycle of learning. Similarly, the research on interpreting facial expressions and identifying emotions remains relevant: since mobile technology is often proposed as a means of helping older people keep in touch with others and avoid social isolation, it is a major issue if people cannot see the person they are talking to and read their expressions.

Thus we believe that the papers in this review represent a rich heritage of research on the various aspects of technology use that is useful to reference going forward. Even with modern touchscreens on smartphones and tablets, a number of challenges, such as navigation and text input, remain for older users in particular. These include miniaturization (from desktop computers to handheld devices), the fact of the devices being held in the hand (an additional strain on reduced muscle strength), and their being used in environments that are likely to divide the attention of users (e.g. being outside, or in a noisy environment). A 2011 comment from the Communications Consumer Panel [31] shows that there are still issues around devices such as mobile phones for older people:

“At the moment many older and disabled people have trouble using mobile phones and levels of mobile take-up are substantially lower among these groups; this places them at a significant disadvantage in a society increasingly reliant on mobile services” (p. 4).