In recent years, AT researchers have explored computer input devices extensively to help individuals with mobility impairments caused by conditions including spinal muscular atrophy (SMA), quadriplegia, muscular dystrophy (MD), locked-in syndrome, amyotrophic lateral sclerosis (ALS), multiple sclerosis (MS), cerebral palsy (CP), and spinal cord injuries [Pinheiro et al.
2011]. The main challenge in designing and building assistive computer-human interfaces is that the proposed devices need to accommodate the special needs of the target individual. Unique personal characteristics and preferences have a significant impact on the kind of sensors that can be used [Tarng et al.
1997], as well as on the actuators and their placement, even though the resulting device may provide the same functionality across different users. The design of an AT device necessitates maximizing information flow while simultaneously minimizing the physical and mental effort of the end user [Abascal
2008]. Consequently, the majority of current AT techniques for people with motor impairments rely on collecting signals from parts of the body that are often under the individual’s voluntary control, such as the tongue, brain, or muscles.
In this section, we provide a brief overview of prior work related to input devices for individuals with motor impairments and vibrotactile feedback provided by assistive devices.
3.1 Assistive Input Techniques for Individuals with Motor Impairments
There is a need for hands-free input devices for users with severe hand motor impairments. Brain-Computer Interfaces (BCIs), eye tracking, tongue-based interfaces, and voice input have been explored in prior work. BCIs have been used for brain-to-text communication [Willett et al.
2021] and hands-free wheelchair control [Singla et al.
2014], enabling individuals with paralysis or motor impairments to interact with computers and move without assistance. There are two main types of BCI systems: (1) non-invasive approaches that predominantly use electroencephalography (EEG) data, which is analyzed and deciphered using signal processing and machine learning methods [Birbaumer et al.
1999; McFarland et al.
2008; Vidal
1973; Wolpaw et al.
2002], and (2) invasive methods that involve brain surgery to implant an electronic port in direct physical contact with brain tissue. However, invasive BCI techniques are usually inaccessible outside of research labs [O’Doherty et al.
2011]. Recent research has shown significant advances in BCI. For example, a novel hybrid EEG-based BCI system that merges motor imagery with P300 signals has been developed for efficient 2D cursor movement and target selection [Long et al.
2011]. Another framework utilizes EEG signals to control operating system functionalities [Gannouni et al.
2022]. A performance comparison of a non-invasive P300-based BCI mouse with a head-mouse for people with spinal cord injuries revealed that the P300-BCI mouse offered a promising alternative for users with severe motor impairments, showing potential for everyday use [Gannouni et al.
2022]. There is also a growing number of consumer EEG-enabled BCI devices. Emotiv, Advanced Brain Monitoring, and Muse are commercial devices that integrate various brain signals into a single headset for use in daily life, although their limitations preclude the continuous, extended wear necessary for interaction with computers. Despite more than four decades of extensive BCI research, most BCI devices remain limited in use because EEG signals are highly susceptible to noise from both the user and the environment.
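To make the non-invasive pipeline concrete, the following minimal sketch illustrates one common EEG decoding recipe: band-pass filtering into the mu/beta band, log-variance features per channel, and a linear classifier. It is illustrative only; the sampling rate, channel count, and synthetic calibration data are assumptions, and deployed systems add artifact rejection, spatial filtering (e.g., CSP), and per-user calibration.

    # Minimal sketch of a non-invasive motor-imagery BCI pipeline (assumed
    # parameters; synthetic placeholder data stands in for recorded EEG).
    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    FS = 250  # sampling rate in Hz (assumed)

    def extract_features(epochs):
        """epochs: array of shape (n_trials, n_channels, n_samples)."""
        b, a = butter(4, [8, 30], btype="bandpass", fs=FS)  # mu/beta band
        filtered = filtfilt(b, a, epochs, axis=-1)
        return np.log(np.var(filtered, axis=-1))  # log-variance per channel

    # Hypothetical calibration data: labeled trials of imagined left/right
    # hand movement (0 = left, 1 = right).
    rng = np.random.default_rng(0)
    train_epochs = rng.standard_normal((40, 8, 2 * FS))
    train_labels = rng.integers(0, 2, size=40)

    clf = LinearDiscriminantAnalysis()
    clf.fit(extract_features(train_epochs), train_labels)

    # At runtime, each new epoch is decoded into a cursor command.
    new_epoch = rng.standard_normal((1, 8, 2 * FS))
    command = "left" if clf.predict(extract_features(new_epoch))[0] == 0 else "right"
    print(command)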
Voice input has been the subject of considerable research and development as a hands-free input technique. For voice-based interaction, sounds are converted into digital instructions [Dai et al.
2003; Harada et al.
2009; Igarashi and Hughes
2001; Polacek et al.
2011; Rosenblatt et al.
2018], whether they are speech or non-speech sounds (e.g., whistling, humming, or hissing) [Bilmes et al.
2005; Harada et al.
2006]. Most speech-based methods have been trained on speech by native speakers of a language, making it challenging for the system to recognize accented speech [Metallinou and Cheng
2014; Ping
2008]. Notable advancements include the Voice Controlled Mouse Pointer (VCMP), which uses voice commands for cursor movement and operating system functions, offering accessibility for people with disabilities without requiring a user’s voice database [Kaki
2013]. Another innovation is a voice-controlled cursor for point-and-click tasks using non-verbal sounds, demonstrating higher accuracy and user preference over traditional spoken digit recognition methods [Chanjaradwichai et al.
2010]. A recent technique combining eye tracking and voice recognition enables laptop operation for those with physical challenges, using cameras for eye movement tracking and converting speech into commands [Kalyanakumar et al.
2023]. These developments illustrate the ongoing progress in voice-based input technologies, enhancing the interaction experience for users with various needs. However, for any of these voice-based methods to be effective, a relatively quiet environment is often necessary, since ambient noise can degrade their performance, although ambient noise-canceling methods are improving this. In individuals with neuromuscular diseases, speech clarity can be significantly affected by compromised tongue muscle function, limiting the usability of voice-based systems [Kooi-van Es et al.
2023]. Additionally, these systems may struggle to reliably recognize varying accents, potentially leading to user frustration.
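As a concrete illustration of how recognized sounds are converted into digital instructions, the sketch below pairs the third-party speech_recognition and pyautogui packages; the five-word command vocabulary and step size are our assumptions for illustration, not a scheme taken from the cited systems.

    # Illustrative speech-to-command cursor control loop (assumed vocabulary).
    import speech_recognition as sr
    import pyautogui

    STEP = 40  # cursor movement per command, in pixels (assumed)
    ACTIONS = {
        "up":    lambda: pyautogui.moveRel(0, -STEP),
        "down":  lambda: pyautogui.moveRel(0, STEP),
        "left":  lambda: pyautogui.moveRel(-STEP, 0),
        "right": lambda: pyautogui.moveRel(STEP, 0),
        "click": lambda: pyautogui.click(),
    }

    recognizer = sr.Recognizer()
    with sr.Microphone() as mic:
        # Calibrate for background noise; ambient noise degrades accuracy.
        recognizer.adjust_for_ambient_noise(mic)
        while True:  # runs until interrupted
            audio = recognizer.listen(mic, phrase_time_limit=2)
            try:
                word = recognizer.recognize_google(audio).lower().strip()
            except (sr.UnknownValueError, sr.RequestError):
                continue  # unrecognized sound or service error; ignore
            if word in ACTIONS:
                ACTIONS[word]()  # execute the mapped cursor instruction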
Eye gaze tracking has been extensively explored as an input modality. An eye gaze tracking system works by detecting, tracing, and mapping the movements of the user’s eyes to the controls on a computer screen, first demonstrated by Jacob [
1991]. Following this work, AT experts have studied eye gaze tracking in more detail to minimize the errors associated with this kind of method and increase performance [Adjouadi et al.
2004; Deepika and Murugesan
2015; Rajanna and Hammond
2018; Sesin et al.
2008]. It has been noted that using this type of interaction method over a prolonged period of time can cause headaches [Liossi et al.
2014]. The slower input speed, lower accuracy, and the need to wear a device all make eye gaze tracking challenging to use for prolonged periods as a primary input method.
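A recurring engineering concern behind the accuracy problems noted above is the jitter of raw gaze estimates. The sketch below shows one generic mitigation, an exponential moving average that maps normalized gaze samples to screen pixels; the gaze source, screen resolution, and smoothing factor are all assumptions, and this is not a method from the cited systems.

    # Sketch: smooth normalized gaze samples and map them to pixels.
    SCREEN_W, SCREEN_H = 1920, 1080  # assumed display resolution
    ALPHA = 0.2  # smoothing factor: lower = steadier but laggier cursor

    def smooth_gaze_stream(gaze_samples):
        """gaze_samples: iterable of (x, y) in [0, 1] from an eye tracker."""
        sx = sy = None
        for gx, gy in gaze_samples:
            if sx is None:
                sx, sy = gx, gy  # initialize on the first sample
            else:
                # Exponential moving average suppresses high-frequency jitter.
                sx = ALPHA * gx + (1 - ALPHA) * sx
                sy = ALPHA * gy + (1 - ALPHA) * sy
            yield int(sx * SCREEN_W), int(sy * SCREEN_H)

    # Example with a short synthetic gaze trace.
    trace = [(0.50, 0.50), (0.52, 0.49), (0.51, 0.51)]
    for px, py in smooth_gaze_stream(trace):
        print(px, py)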
Another area of exploration has been Tongue-Computer Interfaces (TCIs). TCIs use sensors mounted on the tongue to measure movement and pressure [Wakumoto et al.
1998]. These types of systems have been used to help perform various tasks. For instance, the Tongue-Drive System (TDS), capable of generating 9 distinct signals [Chu et al.
2018], has been used for operating computers [Kong et al.
2019], managing a hand exoskeleton with one degree of movement control [Ostadabbas et al.
2016], and controlling a power wheelchair [Huo et al.
2008]. The Inductive Tongue-Computer Interface (ITCI), which Struijk initially introduced [Struijk
2006], offers 18 command signals [Andreasen Struijk et al.
2017]. It has been employed as a control interface for multiple applications. The Itongue®, a commercial variant of the ITCI, enables users to operate personal computers and power wheelchairs. ITCI’s performance has been tested on individuals with and without disabilities through various tasks such as typing [Caltenco et al.
2014; N. S. Andreasen Struijk et al.
2017], cursor control on a computer [Caltenco et al.
2014; Mohammadi et al.
2019], and managing an assistive robotic arm [Andreasen Struijk et al.
2017; Mohammadi et al.
2021]. A significant drawback of these sensors is their placement in the mouth, which can cause fatigue and discomfort with extended use.
In addition to these technologies, another area that complements the spectrum of hands-free input methods is the development of head-controlled systems and Camera Mouse technology. These innovations specifically target individuals who, while capable of head movement, face challenges with hand-based interactions, thereby broadening the range of ATs available for diverse motor impairments. Head-controlled systems generally use a piece of equipment, like a transmitter or reflector, attached to the user’s head, designed to interpret the user’s head movements and map them into the cursor’s movements on a computer screen [Chen et al.
2003; Fitzgerald et al.
2009]. An additional switch often substitutes for the mouse button. The Camera Mouse uses a front-facing camera without the need for head attachments [Betke et al.
2002; Magee et al.
2011]. It tracks head movements via computer vision, translating them into on-screen cursor movements. Mouse clicks are enabled through a dwell-time-based customizable process (sketched after this paragraph). As stated earlier, these systems require users to have full control over their head movements; thus, those who are unable to stabilize and control head movements may find it challenging or impossible to use these systems effectively [Heitger et al.
2006]. Lastly, there are hands-free tools and approaches designed for people who, while unable to move their heads using the aforementioned systems, still retain the voluntary control to move facial muscles and make facial expressions [Taheri et al.
2021a,
2021b].
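The dwell-based clicking mentioned above reduces to a simple rule: if the cursor stays within a small radius for long enough, a click is issued. The thresholds below are assumptions; systems like Camera Mouse expose them as user-customizable settings.

    # Sketch of dwell-time clicking (assumed thresholds).
    import math
    import time
    import pyautogui

    DWELL_SECONDS = 1.0  # how long the cursor must hover (assumed)
    RADIUS_PX = 15       # how much drift still counts as dwelling (assumed)

    anchor = pyautogui.position()
    anchor_time = time.monotonic()

    while True:  # runs until interrupted
        pos = pyautogui.position()
        if math.hypot(pos.x - anchor.x, pos.y - anchor.y) > RADIUS_PX:
            # Cursor moved away: restart the dwell timer at the new spot.
            anchor, anchor_time = pos, time.monotonic()
        elif time.monotonic() - anchor_time >= DWELL_SECONDS:
            pyautogui.click()
            anchor_time = time.monotonic()  # re-arm to avoid repeated clicks
        time.sleep(0.05)  # poll at ~20 Hz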
The diverse range of hands-free input technologies, from BCIs to head-controlled systems, has expanded interaction options, including alternatives to computer mice, for individuals with severe motor impairments. However, there remains a need for more conventional yet adapted input devices. These devices can cater to individuals whose hand impairments may not be severe enough to require entirely hands-free solutions but who still face challenges with standard input methods. For example, trackballs, as an alternative to computer mice, offer ease of use for those who have difficulty with wrist movements or grasping. They can be operated using fingers, palms, or even the side of the hand, providing flexibility in control methods. However, research has shown that the use of trackballs decreases strain on the shoulder muscles but increases strain on the wrist [Harvey and Peper
1997]. Another alternative pointing device is the joystick. Joysticks are typically used to help individuals with mobility impairments operate their power wheelchairs; in other contexts, they are commonly used as game controllers. However, operating a joystick requires a certain level of fine motor control and coordination, and some individuals may find it difficult to grasp, move, or manipulate the joystick with the required precision due to limited motor strength, dexterity, or coordination [Aspelund et al.
2020; Martins et al.
2022]. In addition, an extra button is typically necessary for clicking, requiring users to alternate hand movements between the button and the knob. Touchpads or trackpads, commonly built into laptops, require minimal wrist movement and no grasping. They support basic gestures, such as tapping for clicks. However, multi-finger gestures, often needed for double-clicking, dragging screen elements, and so on, may not be feasible for individuals with hand motor impairments who are unable to use more than one finger.
While all these techniques enable interaction with computers, they have limitations related to cost, efficiency, feedback, comfort, and speed. To overcome some of these limitations, we designed MouseClicker to work with a device-free input method based on facial expression recognition using a common webcam [Taheri et al.
2021a,
2021b], particularly because, without the input functionality, it would be challenging to demonstrate the correlated haptic feedback. This input method may not work for all individuals with motor impairments, but it was easy for Taheri to use, as she can voluntarily control her facial muscles.
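For illustration only, the following sketch shows the overall shape of such a webcam loop. The detect_expression() classifier is a hypothetical placeholder; the actual recognition pipeline is described by Taheri et al. [2021a, 2021b].

    # Illustrative webcam loop for expression-triggered clicking.
    import cv2
    import pyautogui

    def detect_expression(frame):
        """Placeholder: return e.g. 'left_click' or 'right_click' when a
        deliberate facial expression is recognized in the frame; otherwise
        None. Replace with a trained expression classifier."""
        return None

    cap = cv2.VideoCapture(0)  # common webcam, no wearable hardware
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break  # camera unavailable
            gesture = detect_expression(frame)
            if gesture == "left_click":
                pyautogui.click(button="left")
            elif gesture == "right_click":
                pyautogui.click(button="right")
    finally:
        cap.release()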
3.2 Haptic Feedback in AT
Haptic feedback in AT encompasses a variety of modalities, each offering unique benefits that enhance user interaction. This spectrum includes force feedback, tactile feedback, and vibrotactile stimulation, each playing a distinct role in augmenting the user experience.
Force Feedback in AT: Force feedback, or direct pressure, often seen in virtual reality and rehabilitation devices, offers users a tangible sense of resistance or pressure. These systems simulate real-world physical interactions, providing crucial sensory input that aids in motor skill recovery and spatial awareness. For instance, previous studies have shown the effectiveness of haptic feedback in improving finger independence and dexterity in post-stroke patients [Lin et al.
2016; Thielbar et al.
2014], enhancing grasp control in individuals with multiple sclerosis [Jiang et al.
2009], and supporting hand rehabilitation in people with tetraplegia [Markow et al.
2010].
Texture Perception in AT: Tactile feedback encompasses a broad array of sensations, from basic touch to intricate textural information. This type of feedback is particularly beneficial in assistive devices for individuals with sensory impairments, where the tactile sensation can substitute for or augment visual or auditory input. Devices like tactile gloves and Braille displays are prime examples where tactile feedback has been revolutionary.
Vibrotactile Feedback in AT: Within the tactile feedback category, vibrotactile stimulation is a widely used form, commonly produced by vibration motors and piezo-actuators. Initially popularized for mobile device alerts, vibrations notified users of incoming calls or messages, system states, and setting changes [Brown et al.
2005; Kaaresoja and Linjama
2005], with rhythmic and amplitude-varied feedback. Over time, vibrotactile feedback has become a dominant haptic modality in VR experiences. In the realm of touchscreen devices, which lack inherent tactile response, vibrotactile feedback has been pivotal in emulating the sensation of physical buttons, enhancing text entry performance and user experience [Hoggan et al.
2008; Koskinen et al.
2008]. Beyond general usage, vibrotactile feedback has shown immense value in supporting users with various disabilities. It has been effectively employed in AT for blind or visually impaired users, providing an alternative sensory channel, and conveying information that would typically be visual. This approach has been used effectively for shape recognition, reading enhancement through tactile representation of Braille, and navigation assistance, where tactile cues replace visual ones [Kaczmarek and Haase
2003; Sampaio et al.
2001; Velázquez et al.
2018; Zelek
2005]. Similarly, in the context of rehabilitation, vibrotactile signals have aided in improving fine motor skills and grasp control in individuals with motor impairments [Alamri et al.
2007; Feintuch et al.
2006]. For individuals recovering from stroke or those with brain and spinal cord injuries leading to sensorimotor impairments, vibrotactile feedback has been particularly valuable. It provides guided feedback for improvement and correction of movements, potentially reducing the need for constant supervision by therapists [Bao et al.
2018; Bark et al.
2014].
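At the actuator level, the rhythm- and amplitude-varied cues described above reduce to two parameters: PWM duty cycle (perceived intensity) and on/off timing (rhythm). The sketch below drives a coin vibration motor from a Raspberry Pi under assumed wiring (switched through a transistor on GPIO 18); it is a generic sketch, not our exact driver circuit or firmware.

    # Sketch of PWM-driven vibrotactile patterns on a Raspberry Pi
    # (assumed wiring: motor via transistor on GPIO 18).
    import time
    import RPi.GPIO as GPIO

    MOTOR_PIN = 18  # assumed pin

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(MOTOR_PIN, GPIO.OUT)
    pwm = GPIO.PWM(MOTOR_PIN, 200)  # 200 Hz PWM carrier
    pwm.start(0)

    def pulse(intensity_pct, on_s, off_s, repeats):
        """Play a rhythmic vibration pattern at a given intensity."""
        for _ in range(repeats):
            pwm.ChangeDutyCycle(intensity_pct)  # amplitude of the cue
            time.sleep(on_s)
            pwm.ChangeDutyCycle(0)              # gap sets the rhythm
            time.sleep(off_s)

    pulse(100, 0.15, 0.10, 2)  # e.g., strong double-buzz as a "click" cue
    pwm.stop()
    GPIO.cleanup()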
Our focus on vibrotactile feedback for the MouseClicker system, particularly through coin motors, is grounded in its blend of efficacy, simplicity, widespread use, and user accessibility. The choice was driven by the need for a lightweight, compact, and cost-effective haptic actuator that has been widely explored and can be easily integrated into devices. While other forms of tactile feedback, such as pneumatic or electromagnetic actuators, offer different benefits, vibration motors provide an optimal balance of feedback quality, device miniaturization, and affordability. This balance is crucial in AT, where user comfort and device accessibility are paramount. Our approach aims to ensure that MouseClicker is not only technically effective but also practically accessible to a wide range of users with severe motor impairments or quadriplegia. We plan to open-source our design with the hope that friends and family members of people with severe motor impairments can experiment with it and modify it as needed without incurring the high costs typical of ATs.