
1 Introduction

Old age is a natural stage in human biological development. It involves a series of physiological changes reflected in sensory, perceptual, cognitive and motor-control impairments in older adults. People who reach this stage of life often develop degenerative diseases such as Alzheimer's and Parkinson's, among others [15].

The vision, hearing, cognition and movement problems that the elderly present hinder the use of existing technology, because software applications are not designed to compensate for these deficiencies. Older adults therefore face limitations in the use of technology: \(47\,\%\) of the problems reported by older adults are attributed to financial and health difficulties, and about \(25\,\%\) of these difficulties could be resolved by designing systems that take such limitations into account [3].

Several proposals have been made to support health care for older adults using mobile technologies. Many of them focus on patient monitoring [1, 6], either to follow the evolution of a disease or to verify that a treatment is being followed [7–9]. In the particular case of memory tests, applications have been developed for dementia [4, 9–12].

In particular, mobile applications focused on the elderly have been analyzed and designed to administer Luria memory tests [5, 10, 11, 13, 14]. For example, in the word-learning test, a series of unrelated words or numbers is shown to the patient, and the number of items exceeds what the patient can remember. The series usually consists of ten or twelve words or digits. After the presentation, the patient is asked to repeat the series in any order.

On the one hand, the physical deterioration of the elderly reduces the usability of mobile application user interfaces [2, 3, 15]. These impairments can be auditory, visual, motor and cognitive, and they are particular to each person. For this reason, a traditional user interface loses effectiveness when older people interact with it. On the other hand, several works argue that tablets are the best mobile devices for older adults because of their screen size and the usability of their user interface [4, 14]. One solution is therefore to provide different interaction modalities, and these modalities must be available in applications for tablets.

In this work, we present an implementation of the Luria word-learning memory test. We implemented four combinations of modalities for the test (a minimal structural sketch follows the list):

  • Vision-Haptic. In this combination, the application presents the series on the touch screen and, after the presentation, the user must type the words using the virtual touch keyboard.

  • Vision-Voice. In this case, the application displays the words on the touch screen and the user indicates the words by voice.

  • Audition-Haptic. In this combination, the application speaks the series and the user must type the series using the virtual touch keyboard.

  • Audition-Voice. In this case, the application speaks the words and the user indicates the words by voice.
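As a structural illustration only (the class and field names below are ours, not taken from the prototype's source code), the four combinations can be modeled as a pair of independent channels: how the series is presented and how the user responds. A minimal Java sketch:

```java
// Hypothetical sketch of the four modality combinations evaluated in this work.
public final class Modality {
    public enum Presentation { VISION, AUDITION }  // how the word series is presented
    public enum Response { HAPTIC, VOICE }         // how the user reports recalled words

    public final Presentation presentation;
    public final Response response;

    public Modality(Presentation presentation, Response response) {
        this.presentation = presentation;
        this.response = response;
    }

    // The four combinations described above.
    public static final Modality VISION_HAPTIC   = new Modality(Presentation.VISION,   Response.HAPTIC);
    public static final Modality VISION_VOICE    = new Modality(Presentation.VISION,   Response.VOICE);
    public static final Modality AUDITION_HAPTIC = new Modality(Presentation.AUDITION, Response.HAPTIC);
    public static final Modality AUDITION_VOICE  = new Modality(Presentation.AUDITION, Response.VOICE);
}
```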

Finally, we present the results of the interaction with older adults. The application was tested with 27 people: 24 women and 3 men, with an average age of 70 years. We note that older adults who can neither read nor write could not be tested. Older adults with no previous experience with mobile devices showed no reluctance to use the application.

2 Related Work

Related work focuses on several aspects: interaction modalities, tele-rehabilitation, and monitoring to help older adults.

From the perspective of modalities, several studies have compared the effectiveness and efficiency of multimodal interfaces against unimodal interfaces, and have clarified some mistaken assumptions about multimodal interfaces [16, 17]. Work has also been reported on finding interaction patterns that suit the capabilities of the end user or the context of use of a particular application [18]. It has been found that users rely on multimodal communication more when handling complex tasks than simple ones; the modalities tested include speech input and pen input [18]. Unimodal interfaces, such as touch, have been favorably assessed for seniors [19]. For seniors, the benefit of multimodal interfaces is still not clear: in some cases they may be beneficial [6, 20, 21] and in others not so much [22], and failures have been reported for users over 70 years old.

Furthermore, many studies focus on the effectiveness of using mobile technology and augmented reality for remote rehabilitation, and several authors have reported encouraging results [6]. Such rehabilitation ranges over systems for different types of disabilities, including motor, visual and cognitive problems, to name a few [6, 21].

The acceptance by older adults of technology that supports them in monitoring and rehabilitation processes has been reported in several studies [1]. Several authors show that older adults rely on Internet consultations about health, and that they readily adopt new technologies such as smartphones, tablets and smart TVs. Monitoring systems range from the use of cameras in a house to the use of body-worn sensors whose information is acquired through the mobile devices used by the elderly [1, 7, 8, 23–25].

3 Prototype Design

The design of the prototype had to be focused on the elderly. It is known that older adults undergo changes in their mental and physical faculties that may affect their acceptance of technology, so the prototype design followed guidelines aimed at seniors. The guidelines for designing displays for seniors were taken from [2, 3]. With these guidelines, we developed mockups for the Luria test and subsequently implemented them on tablets running the Android OS. The prototypes went through two evaluations: the first, with specialists, helped us correct the prototype; in the second, the prototype was evaluated by a pilot group of older adults to determine what changes had to be made. The resulting third prototype was used for the tests.

3.1 GUI Design for Older Adults

The interface design for seniors considered the natural impairments they may have: visual, auditory, motor and cognitive.

Vision. Physiological changes to the eye related to aging result in less light reaching the retina and a yellowing of the lens (making blue a difficult color to discern), and even the beginning stages of cataracts cause blurriness. The eye muscles are also affected; it can be more difficult for older adults to quickly change focus or adapt to fast-changing brightness. Some design solutions include: conspicuity can be enhanced by increasing contrast and taking advantage of preattentive processes, and effortful visual search can be lessened by applying Gestalt laws.

The following guidelines address how the visual aspects of an interface can interact with aging to produce difficulties.

Background images should be used sparingly, if at all, because they create visual clutter in displays. High contrast should be maintained between important text or controls and the background. Older users vary greatly in their perceptual capabilities; thus interfaces should convey information through multiple modalities (vision, hearing, touch) and even within modalities (color, organization, size, volume, texture). Within an application, consistency should be the highest priority in terms of button appearance and positioning, spatial layout, and interaction behavior. Older users are likely to have a reduced tolerance for discovery and may quit instead of hunting for the desired function.

Information should be presented in small, screen-sized chunks so that a display does not require extensive scrolling. If this cannot be avoided, alternative ways of navigating (such as a table of contents) or persistent navigation that follows the user as they scroll should be provided.
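As an illustration of how these vision guidelines could translate into the word display of an Android prototype like ours, the following Java sketch configures a large, high-contrast, clutter-free text view. The helper name and the concrete values (e.g., 48 sp) are assumptions made for the example, not measurements taken from the prototype.

```java
import android.content.Context;
import android.graphics.Color;
import android.util.TypedValue;
import android.view.Gravity;
import android.widget.TextView;

// Hypothetical helper applying the vision guidelines to the word display:
// large high-contrast text, plain background, one central element per screen.
final class WordViewFactory {
    static TextView createWordView(Context context, String word) {
        TextView view = new TextView(context);
        view.setText(word);
        view.setTextSize(TypedValue.COMPLEX_UNIT_SP, 48); // large type for reduced visual acuity
        view.setTextColor(Color.BLACK);                   // maintain high contrast with the background
        view.setBackgroundColor(Color.WHITE);             // no background image, no visual clutter
        view.setGravity(Gravity.CENTER);                  // the word is the single central element
        return view;
    }
}
```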

Hearing. A wide variety of changes can occur in hearing. Good auditory design considers both the physical changes in sound perception and the cognitive changes in the comprehension that follows the initial perception. Keeping informational sounds above background noise requires studying the display environments. The loudness of a sound is truly individual, but it can be approximated through the sound pressure levels (dB) and frequencies typically maintained in the aging ear. When hearing loss is severe enough that users wear an aid, consider how those aids interact with the interface.

The following general design guidelines can be used to improve the design of auditory menus:

  • Calculate loudness levels.
  • Consider potential background noise.
  • For tones, use low- to mid-range frequencies.
  • When designing a display device, consider its physical proximity to the ear and interactions with hearing aids.
  • Avoid computer-generated voices.
  • Use prosody.
  • Provide succinct prompts.
  • Provide context.
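The paper does not state whether the prototype's female voice (Sect. 3.3) was recorded or synthesized. If Android's standard TextToSpeech engine were used for the spoken prompts, a minimal sketch of settings consistent with these guidelines (slower rate, slightly lower pitch, succinct flushed prompts) might look as follows; the class name and exact values are illustrative assumptions.

```java
import android.content.Context;
import android.speech.tts.TextToSpeech;
import java.util.Locale;

// Hypothetical sketch: if synthesized prompts are used at all, slow them down and lower
// the pitch so they lean toward the low- to mid-range frequencies that age better in hearing.
final class PromptSpeaker implements TextToSpeech.OnInitListener {
    private final TextToSpeech tts;

    PromptSpeaker(Context context) {
        tts = new TextToSpeech(context, this);
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            tts.setLanguage(Locale.getDefault());
            tts.setSpeechRate(0.8f); // slightly slower than normal speech
            tts.setPitch(0.9f);      // slightly lower pitch
        }
    }

    void speak(String prompt) {
        // QUEUE_FLUSH: succinct prompts, spoken one at a time, never stacked up.
        tts.speak(prompt, TextToSpeech.QUEUE_FLUSH, null);
    }
}
```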

Cognition. The main objective in the design of displays is that they be easy to understand. The interface should be effective, that is, it should help users complete tasks with as little confusion and error as possible. To achieve this, we consider user abilities such as working memory, spatial ability and perceptual speed. Working memory allows the user to recall situations or items over a short period of time. Spatial ability refers to the user having a location-based representation of the environment they interact with, in our case the state of the application. Perceptual speed indicates the rate at which a person perceives and processes information. It is known that these abilities decline with age, so neither the instructions nor the information presented by the design should be confusing.

For this reason, the displays contain only information related to the test. Each task (selection, presentation, response) is associated with its own display, generating an intuitive workflow, as shown in Sect. 3.3. In each display, the action to be taken appears as the central element, which helps the user hold their attention.

Movement. Movement is an essential part of many means of interaction, because a series of movements performs the action that completes a task. Motor control refers to the accuracy and response time of a person's movements. Both accuracy and response time decay with age for various reasons, mainly due to illnesses such as Parkinson's or arthritis. Following [2], the design should allow sufficient time for inputs, provide feedback through other channels (auditory, visual, haptic), reduce the number of target elements with which the user must interact, and use words instead of images.
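As a small illustration of the feedback and target-size suggestions (not the prototype's actual code; the helper name and the minimum-height value are assumptions), each on-screen button can be given a generous touch target and can confirm a press on more than one channel:

```java
import android.view.HapticFeedbackConstants;
import android.view.View;
import android.widget.Button;

// Hypothetical sketch: reinforce each button press with feedback on more than one
// channel (the platform click sound plus a haptic pulse) and keep targets large.
final class FeedbackHelper {
    static void configure(Button button, final Runnable action) {
        button.setMinimumHeight(120);        // generous touch target (pixels on a 7-inch tablet)
        button.setSoundEffectsEnabled(true); // auditory confirmation of the press
        button.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                v.performHapticFeedback(HapticFeedbackConstants.VIRTUAL_KEY); // tactile confirmation
                action.run();
            }
        });
    }
}
```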

3.2 Words Learning Luria Memory Test

For this test, the patient is shown several unrelated words or numbers, whose number exceeds the amount the patient can remember. The series usually consists of 10 to 12 words or 8 to 10 numbers. The patient is asked to recall and repeat the series in any order. After recording the number of items retained, the specialist presents the series to the patient again and records the results once more. This process is repeated 8 to 10 times, and the data obtained are shown in a graph called the “memory curve”. After all repetitions are complete and 50 to 60 min have passed, the specialist asks the patient for the series of words without presenting it again.
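To make the bookkeeping behind the memory curve concrete, the following Java sketch records, for each trial, how many of the recalled words belong to the presented series; the class and method names are hypothetical and do not come from the prototype's source code.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the data behind the "memory curve": one retained-item count per trial.
final class MemoryCurve {
    private final Set<String> series = new HashSet<String>();            // the 10-12 presented words
    private final List<Integer> curve = new ArrayList<Integer>();        // retained items per trial

    MemoryCurve(List<String> presentedWords) {
        for (String word : presentedWords) {
            series.add(word.trim().toLowerCase());
        }
    }

    void recordTrial(List<String> recalledWords) {
        Set<String> correct = new HashSet<String>();
        for (String word : recalledWords) {
            String normalized = word.trim().toLowerCase();
            if (series.contains(normalized)) {
                correct.add(normalized); // count each series word at most once per trial
            }
        }
        curve.add(correct.size());
    }

    List<Integer> points() {
        return curve; // plotted against the trial number to obtain the memory curve
    }
}
```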

The application must download the series of words from a database containing common series for this test. It must offer a variety of these series so that they are not repeated between tests. The test allows the words to be read aloud to the patient or presented as text; in this analysis we consider both cases.

  • When the words are presented as text, the general considerations for on-screen text discussed in this analysis must be followed. The words are displayed on the screen for a fixed time so that the user has a chance to read each word and retain it. In this analysis, a display time of 5 to 10 s per word is proposed.

  • When the words are presented as audio, the mobile device must play sound files. This implies a larger data download and uses more resources, but this form is recommended for the test because it is closer to the experience of the test as carried out between the patient and the specialist. Once the series of words has been presented, the application tells the patient to enter the words that were just presented; these instructions can be displayed as text on the screen. The test suggests that the words be spoken rather than written, which requires an auditory user interface. Implementing speech recognition on the mobile device solves this problem: when the user pronounces the words they recall, the application transforms the received audio into text, allowing the recalled words to be stored as strings. After recording a word, the application must keep requesting words until the user indicates that they do not remember any more, or until the total number of words is reached; the application should display a button for the user to indicate that they remember no more words and to end this phase of the test (a minimal sketch of this voice-entry loop is shown after the list). A simpler way to implement this phase is to capture the words through text input, that is, the user writes the words they remember and these strings are registered for the test. In that case, incorrect spelling becomes a problem when validating whether an input word belongs to the series.
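The paper does not specify which speech-recognition API the prototype uses. On Android 4.2, one common option is the platform's built-in recognizer invoked through RecognizerIntent; the sketch below shows the voice-entry loop under that assumption (the activity name, prompt text, and helper method are hypothetical).

```java
import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;
import java.util.ArrayList;

// Hypothetical sketch of the voice-response phase using Android's built-in speech recognizer.
public class RecallActivity extends Activity {
    private static final int REQUEST_RECALL = 1;

    void promptForRecalledWord() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Say a word you remember"); // illustrative text
        startActivityForResult(intent, REQUEST_RECALL);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQUEST_RECALL && resultCode == RESULT_OK) {
            // The recognizer returns its transcription hypotheses as plain strings.
            ArrayList<String> results =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            if (results != null && !results.isEmpty()) {
                storeRecalledWord(results.get(0)); // keep the best hypothesis
            }
            promptForRecalledWord(); // keep asking until the user taps the "no more words" button
        }
    }

    private void storeRecalledWord(String word) {
        // Hypothetical: append to the list later compared against the presented series.
    }
}
```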

Fig. 1. Prototype design mockup

3.3 Prototype Design

The prototype is designed to support the Luria memory test with four modalities. A female voice was used for the audio messages. The prototype design lets the user select the mode of interaction, and different displays are presented depending on that selection. As shown in Fig. 1, the first display allows the user to select the interaction modality.

When the user selects the Vision-Haptic option, the user is asked to look at the words that will be shown. This instruction is presented both as audio and in an alert panel for a few seconds. Then each word is presented for a period of time (indicated by a progress bar); this display appears as many times as there are words to show. Next, the user is prompted to type the words they remember; this instruction is also presented both as audio and in an alert panel. A form is then shown in which the user writes the words they remember, using the touch QWERTY keyboard. This whole process is repeated three times, increasing the number of words in each iteration.

When the user chooses the Vision-Voice option, the user sees the words and repeats them aloud so they can be recorded. First, the user is asked to look at the words that will be shown. Then each word is presented for a period of time (indicated by a progress bar). Next, the user is prompted to repeat the words they remember, and a display is presented to record them. This whole process is repeated three times, increasing the number of words in each iteration.

When the user selects the Audition-Haptic option, the user hears the words and writes the ones they remember. First, the user is prompted to listen carefully to the words. Then a display with a progress bar appears while the application plays the audio with the words. Next, the user is prompted to type the words they remember, using the touch QWERTY keyboard. As in the previous cases, this process is repeated three times, increasing the number of words in each iteration.

When the user selects the Audition-Voice option, the user hears the words and must say the ones they remember so they can be recorded. First, the user is prompted to listen carefully to the words; this instruction is presented both as audio and in an alert panel for a few seconds. Then a display with a progress bar appears while the application plays the audio with the words. Next, the user is prompted to repeat the words they remember, and a display is presented to record them. This whole process is repeated three times, increasing the number of words in each iteration.
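All four variants share the same three-iteration structure, differing only in how the words are presented and how the responses are collected. The following sketch factors that common flow over two hypothetical interfaces; the interface names, the blocking style, and the assumption that each iteration presents a growing prefix of the series are ours, since the paper only states that the number of words increases in each iteration.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the shared test flow behind the four modality screens.
final class TestFlow {
    interface Presenter { void present(List<String> words); }  // on-screen text or audio playback
    interface Collector { List<String> collect(); }            // touch keyboard or voice input

    static List<List<String>> run(List<String> series, Presenter presenter, Collector collector) {
        List<List<String>> responses = new ArrayList<List<String>>();
        // Assumption: the three iterations present growing prefixes of the full series.
        int[] sizes = { series.size() - 4, series.size() - 2, series.size() };
        for (int size : sizes) {
            presenter.present(series.subList(0, size)); // Vision or Audition channel
            responses.add(collector.collect());         // Haptic (typed) or Voice channel
        }
        return responses;
    }
}
```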

4 Test and Results

The prototype of the Luria 10-word learning test was applied to 27 seniors with an average age of 70 years; \(90\,\%\) are female and \(10\,\%\) male. All seniors in our sample can read and write and are in generally healthy mental condition; \(25\,\%\) had visual problems (caused by diabetes) and \(25\,\%\) had hearing problems. One adult in our sample suffers from Alzheimer's and another three are diagnosed with arthritis. Only three users had experience using tablets.

Fig. 2. Problems with the GUI design. This graphic shows the number of older adults that had some problem with the prototype GUI.

Fig. 3. Modality preferences of older adults. This graphic shows the number of older adults that preferred each interaction modality.

The prototype we used went through two phases of review: one by a specialist in the field of neuropsychology and one by a pilot group of older adults. The prototype was developed for the Android platform, version 4.2, and was deployed on 7-inch tablets.

To apply the test, the running prototype was first shown and explained to the users. They were then asked to take the test, and finally they answered a questionnaire on usability and effectiveness.

Figure 2 shows the problems with the GUI components detected by the users. It can be seen that the prototype has only minor errors in its GUI design for older adults: 21 users had no problems using the prototype, three users had problems with the instructions, and one user had problems with the size of the letters or the size of the buttons.

Figure 3 shows which modality was easiest to use. It can be seen that Audition-Voice and Vision-Voice had the most acceptance, the Audition-Haptic modality had little acceptance, and the Vision-Haptic modality had none. This graph was obtained by asking users which modality they preferred. The modality most acceptable to the users is voice.

5 Conclusion

In this paper we have presented a prototype of the Luria 10-word learning test. We have focused on the effectiveness of the interface rather than on the effectiveness of the test itself; however, this will help us trust the results on the effectiveness of the test in future work.

We can see that for healthy older adults who can read and write, there is no problem in using the modalities. However, we note that older adults prefer the voice modality over the haptic modality. Ninety-five percent of the users in our sample have only a basic education level, so spelling problems may have caused the rejection of the haptic modality.

Even though only three users had experience using tablets, \(100\,\%\) of the users were able to complete the tasks of the prototype. Older adults without mental illness, as is the case for most of the sample used in this work, show no reluctance to use technology. We attribute this largely to the use of a tablet and of design guidelines for older adults in the prototype. Our experiment shows that the sizes of the buttons and letters were adequate, so they were not the reason users avoided the haptic modality; older adults simply prefer to use the voice modality [26].