
MouseClicker: Exploring Tactile Feedback and Physical Agency for People with Hand Motor Impairments

Published: 28 March 2024

Abstract

Assistive technology (AT) design is critical in enabling functionality for people with disabilities, blending essential elements of both practical utility and user experience. Traditionally, AT has successfully addressed core functional needs, such as enabling cursor movement and clicking actions with devices like computer mice. However, a comprehensive approach to AT design also necessitates a thorough consideration of sensory feedback, including tactile sensations, ergonomics, and auditory cues like button click sounds. These aspects are not merely supplementary but are integral to the device’s functionality, enhancing user interaction and long-term comfort, especially for individuals with motor impairments. In this work, we present MouseClicker, a mechatronic AT that provides surrogate physical agency over a computer mouse and fosters the haptic sensory experience of clicking it, tailored specifically for an individual with Spinal Muscular Atrophy (SMA) who faces challenges in using a standard mouse due to severe hand motor impairments. Our design aims at replicating the holistic experience of clicking a mouse, from its functional mechanical actions to its nuanced tactile and auditory feedback. This work details the MouseClicker’s design and reports on an exploratory user study aimed at identifying optimal vibrotactile feedback parameters – such as location and intensity – that represent mouse button clicks. MouseClicker presents a step forward in AT design by integrating functionality, sensory feedback, and the overall experience of taking control over non-AT devices.

1 Introduction

The World Health Organization (WHO) defines Assistive Technology (AT) as technology that “enables and promotes inclusion and participation,” primarily through the design of tools that can “improve an individual’s functioning and independence.” This is usually interpreted to mean functional inclusion rather than holistic inclusion, i.e., consideration is given to what a tool can “do” and not to what a tool “is” [Newell 2003], despite the WHO mandate to be holistically inclusive.
For people with hand motor impairments, a number of devices have been developed to interact with computers such as mouth and tongue controllers and joysticks [Krishnamurthy and Ghovanloo 2006; Peng and Budinger 2007; Quadstick 2020], brain-computer interfaces (BCIs) [Pires et al. 2012], eye-gaze controllers [Gips and Olivieri 1996; Smith and Graham 2006], facial expression based input [Taheri et al. 2021a, 2021b], and trackpads (if the person has any level of control over their hand muscles). While all these interaction modalities enable input functionality, none provide haptic feedback inherent in physical tools, such as a computer mouse or keyboard.
Studies with users have revealed that thinking and creativity are facilitated by the tools we use, from the versatile pencil and paper to the more purpose-oriented computer keyboards. Psychologist Vygotsky proposed the idea that tools enhance a person’s problem-solving abilities by expanding their Zone of Proximal Development (ZPD) [Vygotsky and Cole 1978]. According to him, tools play a crucial role in shaping and extending human cognitive abilities. That work laid the foundation for understanding how tools and external aids contribute to problem-solving and higher-order thinking processes. To further expand upon the idea, sociologist Sherry Turkle highlighted the significance of objects in providing a sense of materiality, embodiment, and connection to the physical world [Turkle 2011]. She further suggested that physical objects, such as pencils, can engage our senses and enable deep thinking, reflection, and imaginative exploration, offering sensory richness and a tangible presence that digital technologies often lack. Continuing the discourse on the importance of tangibility, designer Don Norman contended that physical objects can shape and influence human cognition and problem-solving [Norman 2013]. His work, particularly the book The Design of Everyday Things, showcased the impact of physical object design on the way we think and interact with the world.
In addition to psychologists and designers highlighting the role of objects in problem-solving and thought construction, artists and musicians, like pianists, also stress the significance of touch in creating and playing music—a sensation that on-screen pianos frequently fail to reproduce. Similarly, computer keyboard enthusiasts wax poetic about the sound, travel, and tactile feedback of mechanical keyboards, and how their hands glide over the keys for effortless input. The same cannot be said of on-screen keyboards, often the dominant input option for people with hand motor impairments. Social scientist Howard Gardner talks about the impact of the tactile sensation of his fingers on keys, noting the soothing influence of typing, which he finds to be more significant than the satisfaction derived from creating well-written content. If presented with the choice, he would choose to use the keyboard rather than bypass it to directly transmit thoughts from his mind to the computer [Gardner and Davis 2013].
Ideas from psychology and design, emphasizing the significance of physical objects, provide the foundation for our work, which draws inspiration from their insights to focus on integrating tactile sensations into the design of ATs. Building the inclusive input device presented in this work began with a conversation about input methods with our co-designer, Taheri. Taheri is a graduate engineering student with spinal muscular atrophy (SMA) who relies exclusively on a single functional thumb to interact with an on-screen keyboard using a touchpad. Beyond the need for an effective input method, met by her on-screen keyboard, Taheri expressed the desire for tactile sensation and a sense of “physical” connection with the input device. In particular, she cited the example of a computer mouse, saying that despite being able to use a mouse with her touchpad she had no idea what using an actual computer mouse felt like. Our ongoing conversations with Taheri revealed a broader desire for the tactile experience of using physical tools like keyboards and musical instruments. While their virtual counterparts are accessible and functional - such as on-screen keyboards and digital music production software like GarageBand - the physical sensation offered by those tools was missing for Taheri. Her feeling of exclusion from the authentic experience of using a physical object highlights a critical gap in current AT designs, which typically prioritize functionality over the holistic experience of using an object [Turkle 2011]. This presented us with both a challenge and an opportunity to investigate the integration of other sensory modalities—such as touch and sound—into an input device, with the aim of creating a more inclusive experience for Taheri.
In this work, we present MouseClicker, an input method co-designed with Taheri that enables physical agency over a computer mouse and provides the tactile sensation of clicking it (Figure 1). Our initial design for recreating the computer mouse experience was based on prior approaches that emphasize the importance of direct interaction in the design process [Kim et al. 2014]. These approaches suggest that direct interaction is often desirable as it allows for a more efficient and comfortable experience. Based on this, we began with an orthoprosthetic glove, emphasizing direct interaction between the user and the device to provide tactile feedback at the point of interaction. However, upon further investigation, we recognized three serious drawbacks to this approach for Taheri. Our primary concern was the substantial expense associated with developing an exoskeleton. Our work’s objective centers around making our design open-source, allowing others to recreate it inexpensively in their own homes. Furthermore, a publicly available orthoprosthetic glove might not accommodate various hand sizes and shapes, as it would be specifically tailored for Taheri. Lastly, while the orthoprosthetic or exoskeleton solution would allow using the mouse directly by externally controlling Taheri’s fingers to click the mouse buttons, it inadvertently risked diminishing her thumb control due to the mechanics involved in the glove design. Given the priority to maintain Taheri’s existing level of thumb functionality, we (including Taheri) deemed this tradeoff to be unacceptable. This insight prompted a shift in our design approach from direct to decoupled interaction.
Fig. 1.
Fig. 1. Left: Side view of the MouseClicker device showing the microprocessor, lever mechanism, vibrotactile haptic interface, and base of the MouseClicker. Right Top: Taheri using the MouseClicker. Right Bottom: The vibration motors attached to Taheri’s fingers for providing haptic feedback corresponding to the mouse clicks.
Unlike assistive input devices that often require the design of entirely new devices to replicate a desired functionality, our prototype leverages the standard computer mouse to recreate the tactile sensations and provide a physical representation of the user’s actions, which is missing in digital-only options. This approach not only allows us to capitalize on the existing functionality of commonly available input devices, but it also helps ensure affordability and simplicity, presenting a novel pathway toward holistic inclusion in AT design. It is known that the sense of touch can augment any interaction method, where haptic feedback serves as a complementary communication channel to other senses such as vision and hearing [Hwang et al. 2001]. Nevertheless, while vibrotactile feedback is significant for interaction design, not much attention has been given to this modality as an accessibility feature for individuals with hand motor impairments.
MouseClicker integrates tactile feedback corresponding to each mouse click. To test our decoupled interaction and tactile feedback, we performed a user study to understand the range and location of tactile feedback that most effectively simulates the sensation of clicking the left and right mouse buttons for Taheri. By conducting a study with individuals without hand motor impairments and Taheri who has a hand motor impairment, our objective was to ensure that the haptic feedback design is grounded in a universally recognizable standard, enhancing its potential efficacy for all users, including those with motor impairments. This methodological choice stems from prior work in AT design [Ammar and Taileb 2017; Fitzgerald et al. 2009; Markow et al. 2010]. We provide details in the study design section (Section 6).
As with other AT designs, our work is tailored to suit the unique needs of one individual. However, our hope is that the tactile parameters learned through our study are generalizable, and can be integrated with other forms of hands-free computer input methods.
Our main contributions are as follows:
A proof-of-concept prototype that is collaboratively designed with and tested by Taheri who has severe hand motor impairments due to a progressive neuromuscular disease.
A user study with 10 participants without motor impairments and with Taheri, aimed at comparing Taheri’s haptic perception with that of the other participants, with a focus on mechanoreceptor functionality, and at testing the location and intensity of vibrotactile feedback, as well as the click sound, representing physical mouse clicks.
Appropriation of a computer mouse to demonstrate our vision of holistic inclusion (functional + sensory) through the integration of multimodal feedback for the design of an AT input device.
Our interpretation of holistic inclusion recognizes that the way we engage with technology goes beyond mere functionality. It considers the sensory aspects, aesthetics, and emotional connection to an object, acknowledging that these elements greatly impact user experience and overall satisfaction. By incorporating a more holistic approach that balances functionality and sensory experiences, we believe AT design created by modifying existing devices has the potential to leverage familiarity, promote sustainability, and democratize worldwide access.

2 Background

Using a mouse for a person without a hand motor impairment involves physically moving the mouse (proprioceptive and kinesthetic feedback), which causes a cursor to move on the screen (functional feedback), and/or clicking one of the buttons to invoke an action (visual, haptic and aural feedback). The multimodal feedback enables a user to establish the link between their intention of clicking a mouse button by moving the mouse cursor, the action of pressing the mouse button, and the evaluation of their action by hearing the click sound, feeling the button press down, and seeing the resulting outcome on screen. The link established between the haptic feedback and the click is coherent as the site of action and feedback received is in the same localized appendage (and sensory modality) for a person without a disability. However, for someone with hand motor impairments, when the intention is exercised through another input modality such as facial expression or gaze, and feedback is received on the hand—this relationship changes and is no longer analogous. In this scenario, the third-party (either human or machine performing input) needs to: (1) acknowledge the user’s intention, (2) have the physical ability to perform the action or an equivalent, (3) recognize when the action is taken successfully, and (4) provide some form of feedback to the user in order for them to evaluate success or failure.
This raises an interesting question: “can the decoupled haptic feedback serve as a complementary communication channel for scenarios where the sites of action and feedback are not the same?” Answering this question can help us understand whether and how to integrate haptic feedback in the design of AT devices for people with hand motor impairments. Our current work is a first step toward answering this question by integrating haptic feedback that corresponds to input and establishing the feedback parameters for Taheri, knowing that for some people with neuromuscular diseases, tactile sensations are also affected [van der Linden et al. 2010] and therefore a baseline of vibrotactile sensations may need to be established for each user.
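To make this decoupled loop concrete, the following minimal Python sketch restates the four steps above as a single control cycle. The function names are hypothetical placeholders passed in as arguments; they do not correspond to the actual MouseClicker implementation.

# Minimal sketch of the four-step loop for third-party (machine-performed) input.
# All function arguments are hypothetical placeholders, not MouseClicker code.
def assistive_click_cycle(recognize_intention, perform_action, confirm_action, send_feedback):
    """One acknowledge -> act -> confirm -> feedback cycle."""
    intention = recognize_intention()        # (1) acknowledge the user's intention (e.g., a facial expression)
    if intention is None:
        return False
    perform_action(intention)                # (2) physically perform the action or an equivalent
    success = confirm_action(intention)      # (3) recognize whether the action was taken successfully
    send_feedback(intention, success)        # (4) feedback the user can perceive (e.g., vibration on the fingers)
    return success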

3 Related Work

In recent years, AT researchers have explored computer input devices extensively, to help individuals with mobility impairment due to diseases including SMA, quadriplegia, muscular dystrophy (MD), locked-in syndrome, amyotrophic lateral sclerosis (ALS), multiple sclerosis (MS), cerebral palsy (CP), and spinal cord injuries [Pinheiro et al. 2011]. The main challenge to design and build assistive computer-human interfaces is that the proposed devices need to accommodate the special needs of the target individual. Unique personal characteristics and preferences have a significant impact on the kind of sensors that can be used [Tarng et al. 1997], as well as on the actuators and their placement, even though the resulting device may provide the same functionality across different users. The design of an AT device necessitates maximizing information flow while simultaneously minimizing the physical and mental effort of the end user [Abascal 2008]. Consequently, the majority of current AT techniques for people with motor impairments rely on the collection of signals from different parts of the body, such as the tongue, brain, or muscles which are often under the individual’s voluntary control.
In this section, we provide a brief overview of prior work related to input devices for individuals with motor impairments and vibrotactile feedback provided by assistive devices.

3.1 Assistive Input Techniques for Individuals with Motor Impairments

There is a need for hands-free input devices for users with severe hand motor impairments. Brain-Computer Interfaces (BCIs), eye tracking, tongue-based interfaces, and voice input have been explored in prior work. BCIs have been used for brain-to-text communication [Willett et al. 2021] and hands-free wheelchair control [Singla et al. 2014], enabling interaction with computers and movement without assistance and empowering individuals with paralysis or motor impairments. There are two main types of BCI systems: (1) non-invasive approaches that predominantly use electroencephalography (EEG) data, which is analyzed and deciphered using signal processing and machine learning methods [Birbaumer et al. 1999; McFarland et al. 2008; Vidal 1973; Wolpaw et al. 2002], and (2) invasive methods that involve brain surgery to implant an electronic port physically connected to the brain anatomy. However, invasive BCI techniques are usually inaccessible outside of research labs [O’Doherty et al. 2011]. Recent research has shown significant advances in BCI. For example, a novel hybrid EEG-based BCI system that merges motor imagery with P300 signals has been developed for efficient 2D cursor movement and target selection [Long et al. 2011]. Another framework utilizes EEG signals to control operating system functionalities [Gannouni et al. 2022]. A performance comparison of a non-invasive P300-based BCI mouse to a head-mouse for people with spinal cord injuries revealed that the P300-BCI mouse offered a promising alternative for users with severe motor impairments, showing potential for everyday use [Gannouni et al. 2022]. There is also a growing number of EEG-enabled BCI devices for consumers. Emotiv,1 Advanced Brain Monitoring,2 and Muse3 are some commercial devices that allow integration of various brain signals into a single headset for use in daily life, even though their limitations preclude the continuous wearing for extended periods of time that interaction with computers requires. Despite extensive research in BCIs for over four decades, most BCI devices are limited in their use because EEG signals are highly susceptible to noise from both the user and their environment.
Voice input has been the subject of considerable research and development as a hands-free input technique. For voice-based interaction, sounds are converted into digital instructions [Dai et al. 2003; Harada et al. 2009; Igarashi and Hughes 2001; Polacek et al. 2011; Rosenblatt et al. 2018], whether they are speech or non-speech sounds (e.g., whistling, humming, or hissing) [Bilmes et al. 2005; Harada et al. 2006]. Most speech-based methods have been trained on speech by native speakers of a language, making it challenging for the system to recognize accented speech [Metallinou and Cheng 2014; Ping 2008]. Notable advancements include the Voice Controlled Mouse Pointer (VCMP), which uses voice commands for cursor movement and operating system functions, offering accessibility for people with disabilities without requiring a user’s voice database [Kaki 2013]. Another innovation is a voice-controlled cursor for point-and-click tasks using non-verbal sounds, demonstrating higher accuracy and user preference over traditional spoken digit recognition methods [Chanjaradwichai et al. 2010]. A recent technique combining eye tracking and voice recognition enables laptop operation for those with physical challenges, using cameras for eye movement tracking and converting speech into commands [Kalyanakumar et al. 2023]. These developments illustrate the ongoing progress in voice-based input technologies, enhancing the interaction experience for users with various needs. However, for any of these voice-based methods to be effective, a relatively quiet environment is often necessary, since ambient noise can degrade their performance, though this is improving with ambient noise-canceling methods. In individuals with neuromuscular diseases, speech clarity can be significantly affected by compromised tongue muscle function, influencing the usage of voice-based systems [Kooi-van Es et al. 2023]. Additionally, these systems may struggle to reliably recognize varying accents, potentially leading to user frustration.
Eye gaze tracking has been extensively explored as an input modality. An eye gaze tracking system works by detecting, tracing, and mapping the movements of the user’s eyes to the controls on a computer screen, first demonstrated by Jacob [1991]. Following that work, AT experts have studied eye gaze tracking in more detail to minimize the errors associated with this kind of method and increase performance [Adjouadi et al. 2004; Deepika and Murugesan 2015; Rajanna and Hammond 2018; Sesin et al. 2008]. It has been noted that using this type of interaction method over a prolonged period of time can cause headaches [Liossi et al. 2014]. The slower speed of input, lower accuracy, and the need to wear a device all make it challenging for prolonged use as a primary input method.
Another area of exploration has been Tongue-Computer Interfaces (TCIs). TCIs use sensors mounted on the tongue to measure movement and pressure [Wakumoto et al. 1998]. These types of systems have been used to help perform various tasks. For instance, the Tongue-Drive System (TDS), capable of generating 9 distinct signals [Chu et al. 2018], has been used for operating computers [Kong et al. 2019], managing a hand exoskeleton with one degree of movement control [Ostadabbas et al. 2016], and controlling a power wheelchair [Huo et al. 2008]. The Inductive Tongue-Computer Interface (ITCI), which Struijk initially introduced [Struijk 2006], offers 18 command signals [Andreasen Struijk et al. 2017]. It has been employed as a control interface for multiple applications. The Itongue®,4 a commercial variant of the ITCI, enables users to operate personal computers and power wheelchairs. ITCI’s performance has been tested on individuals with and without disabilities through various tasks such as typing [Caltenco et al. 2014; N. S. Andreasen Struijk et al. 2017], cursor control on a computer [Caltenco et al. 2014; Mohammadi et al. 2019], and managing an assistive robotic arm [Andreasen Struijk et al. 2017; Mohammadi et al. 2021]. A significant drawback of these sensors is their placement in the mouth, which can cause fatigue and discomfort with extended use.
In addition to these technologies, another area that complements the spectrum of hands-free input methods is the development of head-controlled systems and Camera Mouse technology. These innovations specifically target individuals who, while capable of head movement, face challenges with hand-based interactions, thereby broadening the range of ATs available for diverse motor impairments. Head-controlled systems generally use a piece of equipment, like a transmitter or reflector, attached to the user’s head, designed to interpret the user’s head movements and map them into the cursor’s movements on a computer screen [Chen et al. 2003; Fitzgerald et al. 2009]. An additional switch often substitutes for the mouse button. The Camera Mouse uses a front-facing camera without the need for head attachments [Betke et al. 2002; Magee et al. 2011]. It tracks head movements via computer vision, translating them into on-screen cursor movements. Mouse clicks are enabled through a dwell-time-based customizable process. As stated earlier, these systems require users to have full control over their head’s movements, thus, those who are unable to stabilize and control head movements may find it challenging or impossible to effectively use these systems [Heitger et al. 2006]. Lastly, there are hands-free tools and approaches designed for people who, while unable to move their heads using the aforementioned systems, still retain the voluntary control to move facial muscles and make facial expressions [Taheri et al. 2021a, 2021b].
The diverse range of hands-free input technologies, from BCIs to head-controlled systems, has expanded interaction options, including alternatives to computer mice, for individuals with severe motor impairments. However, there remains a need for more conventional yet adapted input devices. These devices can cater to individuals whose hand impairments may not be severe enough to require entirely hands-free solutions but who still face challenges with standard input methods. For example, trackballs as an alternative to computer mice offer ease of use for those who have difficulty with wrist movements or grasping. They can be operated using fingers, palms, or even the side of the hand, providing flexibility in control methods. However, research has shown that the use of trackballs decreases the strain on shoulder muscles but increases the strain on the wrist [Harvey and Peper 1997]. Another tool that can be used as an alternative to pointing devices is the joystick. Joysticks are typically used to help individuals with mobility impairments operate their power wheelchairs, while in other contexts, they are commonly utilized as game controllers. However, operating a joystick requires a certain level of fine motor control and coordination. Some individuals may find it difficult to grasp, move, or manipulate the joystick with the precision required due to limited motor strength, dexterity, or coordination [Aspelund et al. 2020; Martins et al. 2022]. In addition, an extra button is typically necessary for clicking, requiring users to alternate hand movements between the button and the knob. Touchpads or trackpads, commonly built into laptops, require minimal wrist movement and no grasping. They support basic gestures like tapping for clicks. However, multi-gesture actions often needed for double-clicks, grabbing screen elements, and so on, may not be feasible for individuals with hand motor impairments who are unable to use more than one finger.
While all these techniques enable interaction with computers, they have their limitations of cost, efficiency, feedback, discomfort, and speed. To overcome some of these limitations, we designed MouseClicker to work with a device-free input method based on facial expression recognition using a common webcam [Taheri et al. 2021a, 2021b], particularly since without the input functionality, it is challenging to demonstrate the correlated haptic feedback. This input method may not work for all individuals with motor impairments, but it was easy for Taheri to use as she is able to voluntarily control her facial muscles.

3.2 Haptic Feedback in AT

Haptic feedback in AT encompasses a variety of modalities, each offering unique benefits to enhance user interaction. This feedback spectrum includes force feedback, tactile feedback, and vibrotactile stimulation, each playing a unique role in augmenting user experience.
Force Feedback in AT: Force feedback or direct pressure, often seen in virtual reality and rehabilitation devices, offers users a tangible sense of resistance or pressure. These systems simulate real-world physical interactions, providing crucial sensory input that aids in motor skill recovery and spatial awareness. For instance, previous studies have shown the effectiveness of haptic feedback in improving finger independence and dexterity in post-stroke patients [Lin et al. 2016; Thielbar et al. 2014], enhancing grasp control in individuals with multiple sclerosis [Jiang et al. 2009], and supporting hand rehabilitation in people with tetraplegia [Markow et al. 2010].
Texture Perception in AT: Tactile feedback encompasses a broad array of sensations, from basic touch to intricate textural information. This type of feedback is particularly beneficial in assistive devices for individuals with sensory impairments, where the tactile sensation can substitute for or augment visual or auditory input. Devices like tactile gloves and Braille displays are prime examples where tactile feedback has been revolutionary.
Vibrotactile Feedback in AT: Within the tactile feedback category, vibrotactile stimulation is a widely used form. Vibration motors and piezo-actuators are commonly used to produce vibrotactile stimulation. Initially popularized for mobile device alerts, vibrations notified users of incoming calls or messages, system states and setting changes [Brown et al. 2005; Kaaresoja and Linjama 2005], with rhythmic and amplitude-varied feedback. Over time, vibrotactile feedback has become a dominant haptic modality in VR experiences. In the realm of touchscreen devices, which lack inherent tactile response, vibrotactile feedback has been pivotal in emulating the sensation of physical buttons, enhancing text entry performance and user experience [Hoggan et al. 2008; Koskinen et al. 2008]. Beyond general usage, vibrotactile feedback has shown immense value in supporting users with various disabilities. It has been effectively employed in AT for blind or visually impaired users, providing an alternative sensory channel, and conveying information that would typically be visual. This approach has been used effectively for shape recognition, reading enhancement through tactile representation of Braille, and navigation assistance, where tactile cues replace visual ones [Kaczmarek and Haase 2003; Sampaio et al. 2001; Velázquez et al. 2018; Zelek 2005]. Similarly, in the context of rehabilitation, vibrotactile signals have aided in improving fine motor skills and grasp control in individuals with motor impairments [Alamri et al. 2007; Feintuch et al. 2006]. For individuals recovering from stroke or those with brain and spinal cord injuries leading to sensorimotor impairments, vibrotactile feedback has been particularly valuable. It provides guided feedback for improvement and correction of movements, potentially reducing the need for constant supervision by therapists [Bao et al. 2018; Bark et al. 2014].
Our focus on vibrotactile feedback for the MouseClicker system, particularly through coin motors, is grounded in its blend of efficacy, simplicity, widespread use, and user accessibility. The choice was driven by the need for a lightweight, compact, and cost-effective haptic actuator that had been previously widely explored, and could be easily integrated into devices. While other forms of tactile feedback, such as pneumatic or electromagnetic actuators, offer different benefits, vibration motors provide an optimal balance of feedback quality, device miniaturization, and affordability. This balance is crucial in AT, where user comfort and device accessibility are paramount. Our approach aims at ensuring that MouseClicker is not only technically effective but also practically accessible to a wide range of users with severe motor impairments or quadriplegia. We plan to open-source our design with the hope that friends and family members of people with severe motor impairments experiment with our design and modify it as needed without incurring high costs typical of ATs.

4 Design Process

Most interaction technologies are designed taking into account the needs and abilities of the vast majority of people, i.e., approaching the problem from a one-size-fits-all perspective. AT design, on the other hand, is frequently centered on addressing the needs of one individual with the best effects seen when it is developed with and tested by the end user [Šumak et al. 2019]. Our development of MouseClicker is deeply rooted in the principles of participatory design, especially given the unique requirements of individuals with severe hand motor impairments. In this regard, Taheri played a pivotal role throughout the entire process. We followed an approach similar to that of Lin et al. [2014] to help build a device uniquely suited to the specific needs and desires of Taheri. Taheri’s participation went beyond simple consultation; she was actively involved in shaping the core design, functionality, and user experience of MouseClicker.
Our iterative design cycle is shown in Figure 2. There are three fundamental questions that we tried to address during this design process: (1) Does the prototype meet Taheri’s needs? (2) Does the prototype create a desired user experience? and (3) Is vibration a suitable method for providing haptic feedback corresponding to mouse clicks for Taheri? To answer these questions, Taheri was involved in every step — from initial brainstorming and defining requirements to participating in all design and evaluation stages, i.e., (1) haptic feedback, (2) device functionality, and (3) full system. This is shown in Figure 2. To establish the target solution, we began by identifying the requirements and hopes that Taheri had and focused on obtaining her insights through a series of brainstorming sessions followed by hardware prototype design iterations.
Fig. 2.
Fig. 2. Design cycle of the MouseClicker system with Taheri, the co-designer, who is a graduate engineering student with SMA and who relies on one functional thumb to interact with an on-screen keyboard via a touchpad.

4.1 Co-Designer

Taheri is a graduate engineering student with severe motor impairments due to the neuromuscular disease spinal muscular atrophy (SMA). Taheri can voluntarily control only one finger (right thumb) and uses a trackpad to interact with the computer (Figure 1). Taheri taps on the trackpad with her thumb to perform the left click, but she explained that in some situations, such as when she is tired or her hands are cold, even tapping (left click) becomes challenging and error-prone and requires the help of her caregiver. She is unable to perform complex keyboard + mouse input combinations as she can neither press any buttons nor perform multi-gestures on the trackpad. Instead, her caregiver assists her in performing complex inputs, limiting her ability to work independently. For right-clicking, Taheri uses additional software (PhaseExpress5 in combination with Macro Recorder6) that has about a 1-sec lag before she sees the result on the screen. She is unable to do a continuous click or scroll, common interactions that most take for granted. The bottom-right image in Figure 1 shows how Taheri’s hand needs to be placed near the trackpad by her caregiver to allow her to use it for typing with an on-screen keyboard. After discussing with Taheri and observing her interaction with the computer, it became clear that her needs extend beyond just having full mouse functionality. Taheri also expressed a desire for the sensory feedback that comes with using a physical mouse.

5 MouseClicker

Since enabling mouse functionality was not our primary goal, we repurposed an existing mouse to focus on recreating the tactile sensations of using a physical mouse for Taheri. To support input, our prototype uses a hands-free control mechanism to perform common functions such as a single-click, double-click, and continuous press, on the left and right buttons. Facial expressions were chosen as the input method because they provided an easy-to-use hands-free control mechanism for Taheri. Two actuators, controlled by the user’s facial expressions, physically press the left and right buttons to best approximate the experience of using a physical mouse (single click, double click, and continuous press). Vibrotactile feedback is provided on the index and middle fingers (common mouse button click fingers) through two small electric motors, rather than collocated with the site of facial input, in order to provide a haptic experience analogous to the sensations most users associate with using a physical mouse. The decision to deliver feedback to the fingertips acknowledges that the core physical sensation of a mouse click centers on that tactile confirmation in the fingers. While some might expect collocated feedback, our focus was on the authenticity of the haptic sensation itself. By positioning the vibrotactile feedback at the index and middle fingers, we attempt to bridge the gap between the non-collocated input mechanism and the tactile expectations associated with using a conventional mouse. This design decision potentially delivers a more faithful representation of a mouse click than collocated feedback would.

5.1 Target System Operation and Feedback

Full control over a mouse involves moving the mouse on an XY-plane (2 axes = 2 variables) and clicking the mouse (2 buttons = 2 variables). Considering that Taheri is able to control the pointer’s position using a trackpad independently, our design focuses on the physical button clicking, from both a functional and a feedback standpoint.

5.1.1 Analysis of a Computer Mouse Button.

To design a system that performs clicks, we first analyzed the mechanical function of a common two-button mouse. The electrical switches that are implemented as buttons on the mouse produce a distinctive clicking sound. Usually, they are Single Pole Double Throw (SPDT) switches designed with a snapping mechanism in the moving contact. These types of SPDT switches produce one click when pressed and another click when released. Both clicks are easily identifiable when performing a long-press-and-hold action using the left mouse button, such as dragging an item on the computer screen. When performing a fast push/release action on the mouse’s buttons (such as clicking on a link), the two click sounds are often perceived as one.

5.2 Hardware

We appropriated a commonly available two-button mouse for our design to fulfill the user-specific requirements of our primary user Taheri, who helped ideate and design a solution that included physical actuation of a standard computer mouse. The ability to visually observe the mouse buttons being clicked and simultaneously feel the clicks on the fingers resonated with Taheri’s aspiration to avoid feeling excluded, providing her with a heightened sense of “physical” interaction with her input device.
The total cost of materials for our proposed prototype is approximately $100. We believe this makes it an affordable assistive technology that not only provides the functionality of a mouse but more importantly, provides the tactile sensation of using a mouse through haptic feedback on the fingers.

5.2.1 Clicking Mechanism.

The clicking mechanism presents a dual mechanical design for independently operating the left and right mouse buttons. Each side is actuated by an all-metal-geared micro-servo motor coupled with a lever mechanism in a cam-and-follower configuration. The lever position and pushing force of the end effector are acquired through its closed-loop circuitry. This enables the clicking mechanism to click on a wide range of computer mice and compensates for loose manufacturing tolerances (characteristics appreciated by the maker community).
In our current prototype design (Figure 1), there are two main states of the device that correspond to two servomotor-shaft positions: (a) rest, and (b) clicking (pushing the mouse button down), as shown in Figure 3. The input of the lever mechanism is the shaft rotation of the servomotor, and the output at the end-effector is the rounded tip of the lever that pushes down the mouse button. This allows us to provide the user with four types of feedback: (1) vibrotactile haptic feedback on two fingers to create the holistic experience of using a mouse, (2) auditory feedback in the form of a click sound from the mouse itself, which was systematically tested in our experiments, (3) visual feedback from the mouse buttons being pressed down by the lever, and an LED bar graph representing the force applied during the push, although we did not experimentally manipulate it as the benefits of visual components in enhancing user interaction have been extensively documented by prior research [Spence and McDonald 2004], and (4) functional feedback resulting from actions triggered by the mouse clicks on the computer.
Fig. 3.
Fig. 3. Left: The MouseClicker device showing the two main lever/pusher positions: (a) rest, and (b) clicking. Right: Overview showing all the parts of the clicker mechanism.
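As an illustration of the two-state lever control described above, the sketch below assumes the servomotors are reached from the host over a simple serial protocol; the port name, baud rate, command format, and shaft angles are illustrative assumptions rather than the actual MouseClicker firmware interface.

import time
import serial  # pyserial

REST_ANGLE = 0    # assumed shaft position for state (a): rest
CLICK_ANGLE = 35  # assumed shaft position for state (b): clicking (button pushed down)

def click_button(link, button, hold_s=0.1):
    """Perform one press/release cycle on the 'left' or 'right' lever."""
    link.write(f"{button},{CLICK_ANGLE}\n".encode())  # lever pushes the mouse button down
    time.sleep(hold_s)                                # a longer hold gives a continuous press
    link.write(f"{button},{REST_ANGLE}\n".encode())   # lever returns to rest

if __name__ == "__main__":
    with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as link:  # assumed port and baud rate
        click_button(link, "left")                               # single click
        click_button(link, "left"); click_button(link, "left")   # double click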

5.2.2 Haptic Feedback.

To provide vibrotactile feedback that could be considered a tactile representation of clicking a mouse, we focused on: (1) placement of the stimuli source on the body, (2) nature of the stimuli or the actuator, and (3) stimuli specifics, i.e., intensity, duration, and profile. We selected eccentric rotating mass (ERM) motors over reciprocating ones for vibration actuators as they are smaller and lighter.

5.3 Software

For functional mouse input, we wanted to use an off-the-shelf solution, as that was not the focus of our design but a necessary requirement to match the haptic feedback. A hands-free input method was necessary, and after discussions with Taheri, it was decided to use a webcam-based facial expression recognition system for input, which was easy to implement and did not require a wearable device, as gaze-based or BCI input methods do. To trigger the MouseClicker, we based our input method on work by Taheri et al. [2021a, 2021b] who presented a facial expression-based system for allowing individuals with motor impairments to play video games. In their work, Facial Action Unit (FAU) recognition [Baltrušaitis et al. 2015] is provided by the OpenFace 2.0 toolkit [Baltrusaitis et al. 2018], which is capable of extracting Action Units (AUs) from a real-time video feed of the user’s face. The authors mapped different combinations of the extracted AUs (defined as facial expressions) to actions in video games. We utilized the first part of their pipeline, i.e., extracting AUs from the webcam stream and combining them into facial expressions to trigger MouseClicker. The facial expressions were mapped onto different clicking modes (single, double, and continuous press) for the left and right clicks as shown in Table 1.
Table 1.
Button   Click Mode          Facial Expression
Left     Single Click        Disgust
         Double Click        Smile
         Continuous Press    Sad
Right    Single Click        Wide Eyes
         Double Click        Contempt
         Continuous Press    Jaw Drop
Table 1. The Mouse Clicking Modes and Corresponding Facial Expressions from Taheri et al. [2021a] that are Mapped to the Mouse Clicks
Taheri selected these mappings through an interface we designed to allow customizing MouseClicker according to personal preference.
Similar to Taheri et al. [2021a], we incorporate a speech recognition system using Google Cloud Speech API7 to activate and deactivate the prototype. This ensures that the system does not operate accidentally if the user makes a facial expression without intending to click. Lastly, like Taheri et al. [2021a], we also created a GUI to allow easy customization of the system’s sensitivity to the user’s facial muscle movements. Since users with motor impairments may present a wide range of abilities in the movement of their facial muscles, this interface can increase the adaptability of the proposed system beyond Taheri. The user can choose which facial expression gets mapped to which clicking mode on MouseClicker. A list of all the available facial expressions and clicking modes for each button is provided in Table 1. The mappings shown in this table are the ones that Taheri felt most comfortable using, with the easiest expressions mapped to the most frequently used mouse clicks.
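The sketch below illustrates the expression-to-click mapping layer in Python, assuming per-frame AU intensities from OpenFace 2.0 are already available as a dictionary. The AU combinations and the activation threshold are illustrative assumptions; the actual expression definitions and customization follow Taheri et al. [2021a].

# Mapping from Table 1: the (button, click mode) Taheri assigned to each expression.
EXPRESSION_TO_CLICK = {
    "disgust":   ("left", "single"),
    "smile":     ("left", "double"),
    "sad":       ("left", "continuous"),
    "wide_eyes": ("right", "single"),
    "contempt":  ("right", "double"),
    "jaw_drop":  ("right", "continuous"),
}

# Hypothetical AU combinations defining each expression (AU codes as reported by OpenFace 2.0).
EXPRESSION_AUS = {
    "smile":     ["AU06", "AU12"],
    "disgust":   ["AU09", "AU10"],
    "sad":       ["AU01", "AU04", "AU15"],
    "wide_eyes": ["AU05"],
    "contempt":  ["AU14"],
    "jaw_drop":  ["AU26"],
}

THRESHOLD = 2.0  # assumed minimum AU intensity (OpenFace reports intensities on a 0-5 scale)

def detect_click(au_intensities):
    """Return (button, click_mode) if a configured expression is active in this frame, else None."""
    for expression, aus in EXPRESSION_AUS.items():
        if all(au_intensities.get(au, 0.0) >= THRESHOLD for au in aus):
            return EXPRESSION_TO_CLICK[expression]
    return None

# Example: a frame with a strong smile maps to a left double click.
print(detect_click({"AU06": 3.1, "AU12": 2.8}))  # -> ('left', 'double')

In the full system, the detected button and click mode would be forwarded to the clicking mechanism only while the speech-recognition toggle has activated the prototype.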

6 Evaluation

Through this study, our goal is to establish a correlation between the act of clicking a mouse by controlling robotized levers through facial expressions and the corresponding vibrotactile feedback on the index and middle fingers for Taheri. By creating this connection, we aim at providing Taheri with a haptic experience that confirms successful mouse clicks, allowing independent and reliable computer mouse interaction without the need for assistance. Our design does not attempt to fully replicate the proprioceptive experience of hand movements involved in using a mouse, but instead focuses on providing salient sensory feedback to indicate mouse clicks have been executed based on recognized facial expressions. Taheri’s involvement in the design process, from the original orthoprosthetic idea to the decoupled interaction in MouseClicker, ensured that the final experience was something she desired. Comparing the haptic preferences of Taheri to those of people without hand motor impairments could help identify any differences or similarities in the way they perceive and respond to the haptic experience of clicking a mouse. If the preferences of Taheri closely align with those without hand motor impairments, it suggests that the current haptic feedback provided through vibrotactile means is suitable for Taheri. However, significant discrepancies might indicate a need for further refinement and personalization of the haptic feedback to better accommodate the unique needs and tactile sensory abilities of Taheri. By exploring this comparison in our study, we can gain valuable insights into the feasibility of perceptual mapping for haptic experiences, specifically in relation to the decoupled feedback (Section 2). The inclusion of visual feedback through the LED, in conjunction with vibrotactile feedback, was important for Taheri to confirm that mouse clicks were successfully executed based on her facial expression inputs. This multimodal feedback provided Taheri assurance and confidence in controlling the mouse independently.
We conducted the evaluation in two parts: (1) a pilot test with Taheri and two other individuals without hand motor impairments, to set an initial input frequency working range for the vibration motors, and (2) a split-plot user study with three experimental variables (position of the motor, intensity of the vibration, and click sound) in 80 randomized experimental states conducted with 10 participants without hand motor impairments. We also conducted this study with Taheri. This study method is similar to other studies for AT design [Ammar and Taileb 2017; Fitzgerald et al. 2009; Markow et al. 2010]. A practical challenge when evaluating our AT design was the difficulty in recruiting a sufficient number of participants with hand motor impairments. These participants were required to be not only available and willing, but also capable of coming to campus for an in-person study. To overcome this, we adopted a broader evaluation approach. Our aim was to identify and correlate perceptions shared by individuals with and without hand motor impairments. This strategy can help expedite the development of ATs that incorporate haptic feedback. The first part focused on determining an initial range of vibration parameters. We particularly wanted to learn if the vibration was consistently identifiable, and if the left and right clicks were distinguishable. Lastly, we were looking for a comfortable range of vibration intensity for all-day use.
For the second part, we employed a split-plot experimental design that further explored the outcomes from the first part. The 10-person user study was conducted to evaluate the use of ERM motors for providing haptic feedback and to collect the specifications that would allow the feedback model to be a reliable representation of the sensation experienced when clicking a mouse. To this end, we needed participants who had prior experience using a computer mouse, i.e., participants without any hand motor impairments.

6.1 Motor Vibration Frequency Pilot Study

We conducted an early evaluation with Taheri and two other individuals (hereafter referred to as U1 and U2) to heuristically determine the preferred values for vibration frequency and duty cycles that could mimic the sensation of a mouse click. The test equipment consisted of a wave signal generator (Siglent SDG 1032X), general purpose n-channel MOSFETs, a 5-volt power supply (Siglent SPD 303X-E), and 8mm ERM motors, i.e., coin vibrators. We applied square wave signals, sweeping a range of frequencies from 0.2 to 100 Hz and duty cycles ranging from 5% to 100%, allowing the three participants (Taheri, U1, and U2) to freely explore the signals. We did not normalize the duration of the vibration in this test, and the participants could repeat the input signals as many times as they wanted. Table 2 presents the ranges of frequencies and duty cycles that each participant selected as representative of what it might feel like (for Taheri) or what it felt like (for U1 and U2) to click a mouse.
Table 2.
Participant   Frequency      Duty Cycle
Taheri        2–3 Hz         7%–10%
U1            0.75–2 Hz      25%
U2            1.5–3 Hz       8%–25%
Table 2. Ranges of Frequencies and Duty Cycles which Each Participant Chose as Representative of the Sensation Perceived when Clicking a Mouse
The frequencies and duty cycles reflect the signal modulation settings on the function generator, corresponding to the energized time of each vibration pulse. The vibration frequencies and amplitudes perceived by the participants depended on the properties of the coin motor and were not measured directly here.

6.2 Split-plot Experimental Design

This was the main experimental phase of our research, designed to test and refine the vibrotactile feedback in MouseClicker. The study incorporated a split-plot experimental design, encompassing three key experimental variables: (1) motor position on the finger, (2) vibration intensity, and (3) click sound. We explored combinations of these three variables with 8 motor positions, 5 vibration intensities, and 2 click-sounds, amounting to 80 distinct experimental states. The layout of the 80 experimental combinations is depicted on the left side of Figure 4.
Fig. 4.
Fig. 4. Split-plot experimental design: The 80 experimental states or “events”, categorized according to three experimental variables: (1) the signal fed to the vibrator, (2) the position of the vibrator on the participant’s index or middle finger, and (3) vibration with and without a clicking sound.
(1)
Motor Position: This variable determined where on the finger the vibration motor was located. We explored eight distinct motor positions, with four positions each on the index and middle fingers (values 1 to 4 for the index finger and values 5 to 8 for the middle finger). Additionally, a 0 value was used for instances when participants were unsure of the vibration motor location or when no vibration was perceived. The specific motor positions of interest are shown on the right side of Figure 4. The rationale for using eight positions, as opposed to the two used in MouseClicker was the need to comprehensively evaluate the spatial acuity of haptic perception, and to go beyond just identifying feedback on the two mouse-click fingers. This expanded range allowed us to explore a broader spectrum of motor placements, facilitating the identification of the most discernible and effective locations for haptic feedback. This approach not only streamlined the experimental process by minimizing biases that could arise from relocating motors but also increased our understanding of how to tailor haptic interfaces to accommodate diverse user needs. Ultimately, the results can help provide crucial insights for developing nuanced haptic interfaces tailored to users with varying motor abilities.
(2)
Vibration Intensity: This variable encompassed different levels of vibration strength, allowing us to understand the most effective intensity for simulating a mouse click. We evaluated five levels of vibration intensity, where level 1 was a no-vibration setting (or zero intensity), level 2 was barely perceivable, level 3 was considered comfortable, level 4 was considered strong, and level 5 was uncomfortably strong and potentially overwhelming.
(3)
Click Sound: This variable explored the impact of the presence or absence of an auditory click sound by simulating the sound of a mouse click in 40 of the 80 experimental states.
Each participant in the study was randomly presented with these 80 combinations of the three variables, without any repetition. The randomization without replacement was done to ensure that each participant’s experience and feedback were unique and unbiased, to help enhance the reliability of our findings. The randomized presentation also allowed us to mitigate any potential learning effects that might skew the results. A detailed illustration of an event, showcasing the varying vibration intensities, can be found in Figure 5. The study’s depth goes beyond identifying which finger received feedback. It delves into pinpointing specific finger parts, which is crucial for tailoring effective haptic interfaces.
Fig. 5.
Fig. 5. Anatomy of an event: A visual description of the event that study participants encountered at each trial. Each event is characterized by haptic feedback in the form of vibration, created by one of the five electrical signals induced in one of the eight electric motors.
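A minimal sketch of how the 80 experimental states can be enumerated and shuffled without replacement is given below; the labels are illustrative, and only the 8 × 5 × 2 factorial structure and the randomization come from the study design.

import itertools
import random

MOTOR_POSITIONS = range(1, 9)   # 1-4: index finger, 5-8: middle finger
INTENSITY_LEVELS = range(1, 6)  # level 1 = no vibration ... level 5 = uncomfortably strong
CLICK_SOUND = (False, True)     # haptic feedback without / with the simulated click sound

def build_event_sequence(seed=None):
    """Return the 80 (position, intensity, sound) events in a random order without repetition."""
    events = list(itertools.product(MOTOR_POSITIONS, INTENSITY_LEVELS, CLICK_SOUND))
    assert len(events) == 80
    random.Random(seed).shuffle(events)  # each combination is presented exactly once
    return events

# Example: an independent random presentation order for each participant.
print(build_event_sequence(seed=1)[:3])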

6.2.1 Test Bench for User Study.

The test bench consisted of 8 coin vibrators, a relay (SPDT, 5-volt), a 5-volt supply (a 9-volt battery through a linear voltage regulator), and an LED, all driven by general-purpose n-channel MOSFETs and an Arduino Uno connected to a laptop through the serial port. The LED turned on for 1650 ms in all 80 events. Given that clicking a mouse happens one finger at a time, one vibration source was activated per event, as shown in Figure 6. The LED turned on 750 ms before and remained on 750 ms after a 150 ms haptic feedback period. The click sound was produced by the relay that was energized at the beginning of the haptic feedback period and turned off at the end (Figure 5).
Fig. 6.
Fig. 6. Test bench for the user study. On the left is a participant wearing the 8 vibration motors where only one is activated per event; On the right are the parts of the test bench to achieve the implementation of the complete factorial design with, 3 experimental variables for the 80 randomized and automatized experimental states.
The vibration motors were positioned on the index and middle fingers at four specific locations: (1) Ventral Middle Phalanx (hereafter referred to as Ventral), (2) Ventral Distal Phalanx (or Fingertip), (3) Dorsal Middle Phalanx (hereafter referred to as Dorsal), and (4) Dorsal Distal Phalanx (or Nail). The vibration intensity was controlled through pulse trains powering the coin vibrators. The test bench sent one of the five electrical signals at a time to one of the eight motors (four on the index finger and the other four on the middle finger). The five electrical signals were: “intensity 0” (or no vibration), “intensity 1” (duty cycle (d.c.): 10%, period: 50 ms, freq: 20 Hz, cycles: 3), “intensity 2” (d.c.: 33%, period: 15 ms, freq: 66.67 Hz, cycles: 10), “intensity 3” (d.c.: 50%, period: 20 ms, freq: 50 Hz, cycles: 7), and “intensity 4” (d.c.: 100%, period: 150 ms, freq: 6.67 Hz, cycles: 1). Notably, the duty cycles represent the on-state duration of the coin motors; in other words, the on-state was a fraction of the corresponding full period.
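The timing of an event and the five intensity signals can be sketched as follows. This is a host-side illustration only: set_led, set_motor, and set_relay stand in for the Arduino pin control reached over the serial port and are hypothetical, while the timing values are those reported above.

import time

# (duty cycle, period in seconds, cycles) for intensities 1-4; intensity 0 is no vibration.
INTENSITY_SIGNALS = {
    0: None,
    1: (0.10, 0.050, 3),
    2: (0.33, 0.015, 10),
    3: (0.50, 0.020, 7),
    4: (1.00, 0.150, 1),
}

def pulse_train(set_motor, signal):
    """Drive one coin motor with the on/off pattern of the selected intensity signal."""
    if signal is None:
        time.sleep(0.150)  # intensity 0: keep the ~150 ms haptic window without vibration
        return
    duty, period, cycles = signal
    for _ in range(cycles):
        set_motor(True);  time.sleep(duty * period)          # on-state fraction of each period
        set_motor(False); time.sleep((1.0 - duty) * period)

def run_event(set_led, set_motor, set_relay, intensity, click_sound):
    """One 1650 ms event: LED on, 750 ms, ~150 ms haptic window, 750 ms, LED off."""
    set_led(True)
    time.sleep(0.750)
    if click_sound:
        set_relay(True)    # relay energized at the start of the haptic window
    pulse_train(set_motor, INTENSITY_SIGNALS[intensity])
    if click_sound:
        set_relay(False)   # relay released at the end, completing the click sound
    time.sleep(0.750)
    set_led(False)

# Example with print-based stand-ins for the hardware outputs.
if __name__ == "__main__":
    stub = lambda name: (lambda state: print(f"{name} -> {state}"))
    run_event(stub("LED"), stub("motor 3"), stub("relay"), intensity=3, click_sound=True)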

6.2.2 Participants.

We recruited participants who had prior experience using a computer mouse in order to evaluate the vibration feedback. Ten participants (3 females, age range 18–36) were recruited by sending e-mails to department mailing lists. Participants were asked to complete a pre-study questionnaire that included demographic questions and information about how frequently they use a computer mouse and what they use as their primary input method for working with the computer. Eight participants selected the computer mouse (either built-in or external) as their primary input device other than the keyboard, while the other two selected the touchpad. Filling out the questionnaire took less than five minutes.

6.2.3 Experimental Trials.

We conducted the study in-person at our lab. Participants provided informed consent prior to beginning the study. The study protocol was reviewed and approved by the UCSB Human Subjects Committee. Each participant took 20 minutes on average to complete the study. Before beginning the study, we attached eight coin vibrators to the participant’s dominant hand (4 on the index finger and 4 on the middle finger), as shown in Figure 6. Each coin motor was fitted as a ring-like wearable interface.
Participants were asked to look at the green LED on the test bench (Figure 6) in order to evaluate each event. Every time the LED turned on and off, it indicated that an event had taken place. After the event, we asked the participants to identify which motor vibrated and to indicate how strong the vibration felt on a 5-point Likert scale (1 = not at all, 5 = very strong). Participants identified motor location by pointing at the motor (Figure 4). This process was integral to understanding how users differentiate haptic feedback, guiding the development of more nuanced haptic systems. For each event, the test bench sent a signal corresponding to one of the five intensities to one of the 8 vibration motors. Half of the events produced a click sound similar to those produced by a computer mouse, while the other half did not. Each participant answered one question regarding the click sounds after all 80 events.

6.2.4 Motor Location Test with the Co-designer.

We conducted the same test with Taheri as with the 10 participants. Taheri reported her evaluation of the vibration motor locations by saying the motor's number aloud, according to the finger drawing shown in Figure 4, rather than pointing at it with her other hand, which she was unable to do. Needing to remember the motor numbers and their locations on the fingers likely imposed some cognitive load on Taheri that the other participants did not experience. Evaluating the ability to identify specific motor locations on the fingers was essential for assessing the precision of Taheri's haptic perception, a key factor in designing an effective MouseClicker for her.

7 Results

The results of the experimental split-plot design are presented in a 3-variable plot illustrating the motor position, the actual and perceived intensity of vibration, and the click sound. Figure 7 shows the average outcomes from the 10 participants, while Figure 8 presents the results of the study with Taheri, which align with the findings from the other participants. The results for the group, as depicted in Figure 7, show a smooth surface resembling an inclined plane. This suggests that the participants perceived vibration intensity values consistent with the actual experimental settings at all motor positions. It is noteworthy that these plots rearrange the experimental events, which were presented in randomized order; the randomization eliminated the possibility for participants to learn or compare successive intensity values. The surface plot for Taheri shows greater variability, since it reflects a single participant's responses rather than a group average, and averaging tends to smooth out variability.
Fig. 7.
Fig. 7. Group average (10 participants) of perceived intensity plotted against actual intensity for the eight motors, with and without click sound; 800 vibration-intensity data points in total (80 per participant).
Fig. 8.
Fig. 8. Perceived intensity values of the 80 experimental events reported by Taheri, the co-designer, plotted against the actual vibration intensities provided during the experimental trial.
This visual representation also shows that the effect of the click sound was null or minimal, as there was no clear pattern related to the presence or absence of the click sound. Therefore, we focused our evaluation and analysis on the other two experimental variables, motor position and vibration intensity, and disregarded the click sound variable. We also found an indicator of orthogonality between location and intensity in the experimental results, given that the two responses varied independently: the accuracy of localizing the vibration varied from person to person, while the accuracy of classifying the vibration intensity was consistent across all participants.

7.1 Experimental Results of the User Study: Haptic Feedback

Overall, the experimental results for the vibration intensity showed high consistency across all participants.

7.1.1 Duration of the Haptic Feedback.

In line with our aim of producing a haptic experience that represents pressing a mouse button, we determined that the duration of the vibration should be similar to the duration of a mouse button press. Therefore, we asked participants to click a computer mouse five times while we measured the duration of each click using an oscilloscope (Siglent SDS-1204X-E) connected to the mouse. The group's average click duration was 128 ms, with a range of 74 ms to 202 ms. Our haptic feedback duration of 150 ms falls within this observed range. Consequently, the vibrotactile stimuli that we used during our evaluation can be considered a fair and reasonable representation of clicking a computer mouse.

7.1.2 Click Sound.

The group average accuracy for localizing motors for the events with and without click sound was 98%. Thus, we considered the effect of the click sound to be very weak and proceeded to use this experimental variable to test participants' self-consistency by comparing the 40 events with the click sound against their corresponding 40 events without the click sound. The results of this test are shown in Table 3.
Table 3.
Participants Self-Consistency

Participant | Motor Position | Vibration Intensity
P1 | 68% | 78%
P2 | 78% | 58%
P3 | 80% | 68%
P4 | 63% | 75%
P5 | 60% | 73%
P6 | 63% | 78%
P7 | 65% | 48%
P8 | 55% | 68%
P9 | 83% | 73%
P10 | 93% | 78%
Taheri | 70% | 58%
Grand Averages | 70% | 69%
Table 3. Given the Split-Plot Experimental Design, we Compare Each Participant’s Self-Consistency
The results show that, when encountering the same event again, participants gave the same evaluation on average 70% of the time for the motor position and 69% of the time for the vibration intensity.
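For concreteness, the sketch below shows one way this pairing-based self-consistency could be computed, assuming each (motor, intensity) combination appears exactly once with and once without the click sound. The field and function names are ours; the paper does not publish its analysis code.

```python
# Sketch: compute a participant's self-consistency by pairing each (motor, intensity)
# combination's with-sound event against its without-sound twin and checking whether
# the two responses match. Illustrative only.
from collections import defaultdict

def self_consistency(responses):
    """responses: list of dicts with keys
       'motor', 'intensity', 'sound', 'perceived_motor', 'perceived_intensity'."""
    pairs = defaultdict(dict)
    for r in responses:
        pairs[(r["motor"], r["intensity"])][r["sound"]] = r

    same_position = same_intensity = total = 0
    for by_sound in pairs.values():
        if True in by_sound and False in by_sound:       # both halves of the pair present
            a, b = by_sound[True], by_sound[False]
            total += 1
            same_position += a["perceived_motor"] == b["perceived_motor"]
            same_intensity += a["perceived_intensity"] == b["perceived_intensity"]
    return same_position / total, same_intensity / total
```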
The effect of the click sound was assessed after the completion of all 80 events. Participants were asked whether they had heard any click sounds during the experiment. Nine of the ten confirmed hearing the sounds, with varied interpretations: (a) they did not recognize the sound until it was mentioned by the researcher, (b) they acknowledged hearing it but associated it with the normal operation of the test bench, assuming the sound was a constant feature, or (c) they heard it intermittently and attributed it to the functioning of the motors. Only one participant reported noticing that the click sound occurred in only some of the events; nevertheless, their accuracy was not affected by it.

7.1.3 Position of the Vibration Motors.

A confusion matrix shows the actual motor positions (rows of the matrix) and the perceived motor positions (columns of the matrix). An entry on row \(x\) and column \(y\) indicates the total number of times that the vibration of the motor at position \(x\) was perceived at position \(y\). Ideally, each entry on the matrix diagonal should be 8 and the rest should be zero, meaning that every vibration was correctly localized. The element-wise average of the participants' confusion matrices is visualized as a heatmap in Figure 9 (left). According to the averaged heatmap, the motors on the nails were selected most frequently, suggesting that a vibration on the fingers has a higher likelihood of being perceived as coming from a motor on the nail. Figure 9 (right) shows the heatmap obtained from Taheri's feedback. The results were slightly different for Taheri, with the motors on the ventral side of both fingers being selected more frequently than the others; the motors on the nails were the second most frequently selected.
Fig. 9.
Fig. 9. A confusion matrix with the actual motor positions as rows and the perceived motor positions as columns was built for each participant. Each entry reflects the number of times that a vibration at the motor position corresponding to the row was perceived at the motor position corresponding to the column. Left: heatmap visualization of the element-wise average of all participants' matrices. Right: heatmap visualization of Taheri's confusion matrix.
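A minimal sketch of the confusion-matrix construction described above is shown below, with the averaging step mirroring the group heatmap in Figure 9 (left). The function names are ours, not the authors'.

```python
# Sketch: build an 8x8 confusion matrix of actual vs. perceived motor positions for
# each participant and average the matrices element-wise for the group heatmap.
import numpy as np

N_MOTORS = 8

def confusion_matrix(actual, perceived):
    """actual, perceived: equal-length sequences of motor indices in 0..7."""
    matrix = np.zeros((N_MOTORS, N_MOTORS), dtype=int)
    for a, p in zip(actual, perceived):
        matrix[a, p] += 1          # row = actual position, column = perceived position
    return matrix

def localization_accuracy(matrix):
    """Fraction of events whose perceived position matched the actual position."""
    return np.trace(matrix) / matrix.sum()

def group_heatmap(per_participant_matrices):
    """Element-wise average across participants, as visualized in Figure 9 (left)."""
    return np.mean(np.stack(per_participant_matrices), axis=0)
```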
We found these results to be particularly interesting, as we had initially anticipated that the fingertips would be the most commonly perceived motor location due to their high concentration of mechanoreceptors. However, our findings indicate that despite the sensitivity of the fingertip, it was the nail that was often mistakenly identified as the vibrating location.
In two of the trials in Taheri’s study, even though the correct motor position was identified, it was attributed to the wrong finger. We believe that this discrepancy may have been due to the need to provide feedback by motor number rather than directly pointing at the motor.
The experimental results for user accuracy at each motor position are shown in Table 4. The results have two aspects. First, at the group level, the per-zone averages show that the most effective place to attach a vibration motor is directly on the nail, with 61% for the index fingernail and 69% for the middle fingernail. The second best is the fingertip, with 52% for the index and 48% for the middle finger. Nevertheless, these “best” options are tempered by the fact that the average across all zones of both fingers is around 51% (shown in Table 5). Second, comparing these group averages with the per-participant results, 9 of the 10 participants had a zone whose accuracy exceeded the group average, ranging from 63% up to 100%, with an average of 75% for the index and 77% for the middle finger. These results highlight the need for personal haptic assessments in order to create the most efficient haptic interface for a target user; a minimal zone-selection sketch following Table 5 illustrates this idea.
Table 4.
Participants’ Accuracy per Finger (%)

IF Zone | MP | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | Taheri | Avg/MP
Ventral | 1 | 13 | 75 | 75 | 25 | 50 | 38 | 50 | 38 | 50 | 88 | 75 | 52
Tip | 2 | 13 | 100 | 25 | 25 | 38 | 38 | 13 | 63 | 63 | 100 | 50 | 48
Dorsal | 3 | 63 | 75 | 75 | 13 | 88 | 13 | 0 | 0 | 50 | 75 | 75 | 48
Nail | 4 | 38 | 50 | 63 | 88 | 75 | 50 | 63 | 38 | 63 | 75 | 75 | 61
Finger Avg | – | 31 | 75 | 59 | 38 | 63 | 34 | 31 | 34 | 56 | 84 | 69 | 52

MF Zone | MP | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | Taheri | Avg/MP
Ventral | 5 | 0 | 100 | 25 | 0 | 38 | 13 | 88 | 50 | 63 | 75 | 75 | 48
Tip | 6 | 13 | 100 | 0 | 0 | 75 | 50 | 0 | 63 | 0 | 100 | 63 | 42
Dorsal | 7 | 75 | 75 | 50 | 0 | 25 | 13 | 0 | 38 | 75 | 75 | 25 | 41
Nail | 8 | 63 | 75 | 50 | 75 | 88 | 63 | 75 | 63 | 75 | 63 | 75 | 69
Finger Avg | – | 38 | 88 | 31 | 19 | 56 | 34 | 41 | 53 | 53 | 78 | 59 | 50
Table 4. The Results of the User Accuracy for Each Motor Position [MP] from the 10 Participants and Taheri
The top table is for the index finger [IF] and the bottom one is for the middle finger [MF]. The most accurate zone per participant is shown in bold numbers, as are the group averages. Individual sensitivity data on perception location can provide a starting point for designing haptic feedback devices such as gloves.
Table 5.
Participants’ Combined Accuracy per Zone (%)

F Zone | MPs | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | Taheri | Avg/MP
Ventral | 1, 5 | 6 | 88 | 50 | 13 | 44 | 25 | 69 | 44 | 56 | 81 | 75 | 50
Tip | 2, 6 | 13 | 100 | 13 | 13 | 56 | 44 | 6 | 63 | 31 | 100 | 56 | 45
Dorsal | 3, 7 | 69 | 75 | 63 | 6 | 56 | 13 | 0 | 19 | 63 | 75 | 50 | 44
Nail | 4, 8 | 50 | 63 | 56 | 81 | 81 | 56 | 69 | 50 | 69 | 69 | 75 | 65
Avg | – | 34 | 81 | 45 | 28 | 59 | 34 | 36 | 44 | 55 | 81 | 64 | 51
Table 5. The Combined Average Accuracy for Motor Positions [MPs] Across the Finger Zone (F Zone)
The participants showed different patterns of accuracy. However, there was similarity between the most accurate zones of the index and middle fingers for each participant. These accuracies are indicated with bold numbers.
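To make the personalization point concrete, the sketch below selects a participant's most accurate zone from the combined per-zone accuracies in Table 5 (only a few participants are shown). The values are copied from the reconstructed table; the selection logic itself is ours, not the authors'.

```python
# Sketch: choose each participant's most accurate finger zone from Table 5, as a
# starting point for a personalized placement of the vibration motors. Illustrative only.
ZONE_ACCURACY = {            # participant -> {zone: combined localization accuracy (%)}
    "P1":  {"Ventral": 6,  "Tip": 13,  "Dorsal": 69, "Nail": 50},
    "P4":  {"Ventral": 13, "Tip": 13,  "Dorsal": 6,  "Nail": 81},
    "P10": {"Ventral": 81, "Tip": 100, "Dorsal": 75, "Nail": 69},
}

def best_zone(participant: str) -> str:
    zones = ZONE_ACCURACY[participant]
    return max(zones, key=zones.get)     # zone with the highest localization accuracy

# e.g. best_zone("P1") -> "Dorsal", best_zone("P4") -> "Nail", best_zone("P10") -> "Tip"
```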

7.1.4 Intensity of the Vibration.

The signal intensity was varied from 0% to 100% across the experimental trials. In most trials, the perceived intensity behaved similarly to the actual intensity and closely mirrored it. This indicates that the finger skin is sensitive to intensity variations regardless of where on the finger the vibration is applied. It also shows that localizing the motor position on the finger has larger variability than determining the intensity of the vibration signal (Figure 7). That is to say, participants were consistently better at determining the vibration intensity than the vibration location.
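One simple way to quantify the agreement shown in Figure 7 is a per-participant correlation between the programmed and perceived intensity. The paper reports this relationship visually, so the metric below is our illustrative choice rather than the authors' analysis.

```python
# Sketch: measure how closely perceived intensity tracks the actual level for one
# participant, using a Pearson correlation. Illustrative only.
import numpy as np

def intensity_agreement(actual_levels, perceived_ratings):
    """actual_levels: programmed intensities (0-4); perceived_ratings: Likert responses (1-5)."""
    actual = np.asarray(actual_levels, dtype=float)
    perceived = np.asarray(perceived_ratings, dtype=float)
    return np.corrcoef(actual, perceived)[0, 1]   # 1.0 = perfectly linear agreement
```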
Figure 10 illustrates P4’s perception of both vibration motor positions and intensities on both index and middle fingers. The results of this participant are presented because they are representative, especially because all users selected the motors on the nails more frequently than on the other locations.
Fig. 10.
Fig. 10. Visual illustration of actual motor positions [AA], perceived motor positions [PP], actual intensities [AI], perceived intensities [PI] showing P4’s data: the top 4 plots correspond to the index finger, and the bottom 4 plots are for the middle finger.
In Figure 11, we also present the results obtained from the pilot study with Taheri, the co-designer. As shown, the perceived intensities in both fingers are slightly more consistent with the actual intensities. As previously mentioned, however, Taheri’s perception of motor positions reveals that the ventral side and nails are both frequently selected as places where the vibration is felt.
Fig. 11.
Fig. 11. Visual illustration of actual motor positions [AA], perceived motor positions [PP], actual intensities [AI], perceived intensities [PI] obtained from a pilot study conducted with Taheri, the co-designer: the top 4 plots correspond to the index finger, and the bottom 4 plots are for the middle finger.

8 Co-designer’s Reflections On the Full System

We designed our prototype iteratively in collaboration with Taheri, the co-designer, through a series of tests and development iterations. Once the prototype was completed, the device was given to Taheri so that they could test it at home for an extended period of time and provide feedback on it. The device provided to Taheri offered all four types of feedback:
(1)
Vibrotactile haptic feedback on two fingers (index and middle) to create the sensation of clicking a mouse. The two coin vibration motors were attached to the index and middle fingers on the ventral side.
(2)
Auditory feedback in the form of a click sound from the mouse itself.
(3)
Visual feedback from the mouse buttons being pressed down by the levers, and a LED bar-graph representing the force applied during the push.
(4)
On-screen functional feedback resulting from actions triggered by mouse clicks through facial expressions.
Prior to using MouseClicker, Taheri had to rely on assistance for tasks such as right-clicking or using keyboard and mouse simultaneously, particularly in 3D applications such as the Unity game engine, a commonly used tool for AR/VR development.
After a month of using the device, Taheri informed us that she had been able to perform her desired clicking actions whenever she needed, without assistance. She stated:
“Before, I had to ask my mom for help with doing right clicks, or when I wanted to work in Unity as an example, I needed to use keyboard and mouse simultaneously at times and that made me in need of someone to sit next to me the whole time because using virtual keyboard and continuously pressing mouse buttons were not possible. The situation could become frustrating at some points for me and for that person. The lack of a practical solution, therefore, restricted my ability to work with various applications, such as Unity. But with the MouseClicker, I feel more independent now and that is very comforting for me.”
Most importantly, Taheri noted that receiving vibrotactile feedback from the prototype, similar to what one receives when clicking a physical mouse, was something she had wished for but had been unable to imagine how it would feel:
“Even though I was aware in advance that I would experience the sensation of clicking, I was nonetheless surprised by it. It was a very exciting experience. Not only could I see the result of each clicking action on the screen, which was the functional feedback from the device, but also I was getting a sensation on my fingers’ skin as if I was pressing the buttons with my own fingers. It was wonderful.”
Taheri reported the device being more helpful for her with the feedback than without:
“[T]he fact that I would be able to experience the actual clicking on a physical mouse that everyone else can use pretty easily is something I could not experience with any available assistive devices or software. It gives me a sense of being included because not only I can use the same mouse for clicking as my other non-disabled peers use, but also I can feel on my fingers the same feeling that they experience while using the mouse.”
After several hours of usage on a daily basis, Taheri reported that the device remained comfortable and effective for continuous operation. Specifically, Taheri highlighted that the lightweight vibration motors were easy to wear for long periods without causing discomfort, fatigue, or skin irritation. Taheri emphasized the importance of comfort for ATs intended for daily computer use and felt that MouseClicker successfully met this critical requirement.
Additional feedback focused on Taheri's desire to feel the scroll wheel and to use the device for moving the mouse cursor, which would allow it to be a standalone device. The current prototype primarily focuses on enabling users to click hands-free and receive simulated haptic feedback on their fingers. To the best of our knowledge, such a system has not been explored before. However, we plan to expand the device's capabilities in the near future.
Another comment made by Taheri focused on the use of a common mouse available on the market. They found repurposing a commonly used physical mouse to be a particularly useful idea and appreciated that our design allowed for easy and inexpensive modification of a mouse that they could use. They added:
“One of the most intriguing features of this device is that by making a facial expression I could see some actual physical action taking place in front of me.”
Using a facial expression to click a mouse button serves a practical purpose, but when that click was accompanied by a tangible sensation, it became a newfound ability for Taheri. She felt empowered to manipulate a physical object and control the complete cycle of action and feedback, a capability that was previously unattainable. While we could have integrated haptic feedback that corresponds to Taheri’s trackpad input, our current system not only gives Taheri physical agency, it also enables a degree of future-proofing of Taheri’s input capabilities, considering the progressive nature of SMA and Taheri’s anticipated loss of thumb control in the future. Controlling a physical object for input, having physical agency complemented by the freedom to choose the intensity and location of the haptic feedback, has provided Taheri with an immense sense of joy and independence, as evident from the quotes above. We firmly believe that our decoupled approach represents a small yet significant step toward supporting Taheri’s independence, which can be generalized to a broader audience with similar hand motor impairments.

9 Discussion

In this study, we explored personalized vibrotactile feedback to effectively simulate mouse click sensations for individuals with severe motor impairments and quadriplegia. Through a participatory design approach with our co-designer, Taheri, who has severe motor impairments due to SMA, we developed and evaluated MouseClicker, which provides vibrotactile feedback on the index and middle fingers, corresponding to the hands-free clicking of a physical mouse. Our multi-part study analyzed parameters such as vibration intensity, location, and duration to quantify the tactile experience of mouse clicks. By testing with 10 participants without hand motor impairments alongside Taheri, we aimed at drawing comparisons in haptic perception between those with and without motor impairments. A key objective was to assess whether the simulated vibrotactile feedback feels authentic and intuitive even when decoupled from the site of the clicking action, which is essential for non-collocated input modalities.
The results of our study provide valuable insights into optimizing vibrotactile feedback parameters to effectively simulate the sensation of a mouse click. A key finding was the considerable variability in localizing vibration motors on the fingers, contrasted with the consistent perception of vibration intensity among participants. This indicates that while intensity can be standardized, localization needs to be personalized when designing effective haptic feedback. Our use of vibration motors as haptic actuators is well suited to rapidly adapting mechanoreceptors in the skin, specifically Meissner corpuscles [Goldstein 1989], which respond to low-frequency vibrations in a range of roughly 10–65 Hz [Oroszi et al. 2020]. In our study, the vibration frequencies delivered by the coin motors fell between 6 and 67 Hz, which aligns with the peak sensitivity of Meissner corpuscles. This explains why participants were able to reliably detect and differentiate the intensity of vibrotactile stimulation in our experiments. We also unexpectedly found that the nail was the most frequently perceived location. This was surprising, given that fingertips are the most sensitive area, considering their high density of mechanoreceptors [Deflorio et al. 2022]. Interestingly, fingertips emerged as the second most perceived location. Nevertheless, the intra-participant variability highlights the need for customization, as simply targeting sensitive areas does not guarantee consistent localization. The confusion matrix heatmaps in Figure 9 visualize this variability, showing which locations were perceived most frequently. Accounting for this individual variability can optimize the wearing comfort and effectiveness of vibrotactile devices like our prototype.
An interesting observation from the results of Taheri, the co-designer, was her tendency to localize vibrations on the ventral side of the finger. This contrasts with the group trend of localization to the nails. The discrepancy highlights the value of working with end-users in AT development to account for unique tactile perception capabilities in disability contexts. It also reinforces that localization requires personalization.
Including participants without motor impairments allowed us to address the question, “Did severe hand motor impairment alter Taheri's tactile perception?” Our user study suggests that it did not: Taheri's finger sensitivity aligns with the parameters observed in testing with participants without hand motor impairments, as shown in Tables 3 and 4. This emphasizes the importance of conducting comparative user studies between able-bodied users and AT users. In particular, such comparisons can contribute to advancing the collective knowledge of human factors and ergonomics for AT users, who are often underrepresented in the ergonomics literature. By acknowledging that, experimentally, our similarities outweigh our differences, we can effectively pinpoint specific ergonomic characteristics of interest, thereby facilitating and expediting AT research and generating more comprehensive points of comparison.
Our findings on vibration intensity help standardize that parameter for haptic mouse simulations. Across participants, perceived intensity reliably matched actual values and showed less variability than localization. This allows the intensity to be objectively mapped to the mouse click intensity. The 150 ms pulse aligns with the average duration of human clicks, making the simulation perceptually accurate.
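As one possible form of such a mapping, the sketch below bins a normalized click-force reading (the prototype already visualizes force on an LED bar graph) into the five intensity levels used in the study. The thresholds and the function itself are hypothetical; the paper does not specify a force-to-intensity mapping.

```python
# Sketch: map a normalized click force (0.0-1.0) to one of the five vibration levels.
# Equal-width bins are a hypothetical choice, not the authors' specification.
def force_to_intensity(force: float) -> int:
    """Return an intensity level in 0..4 for a normalized force reading."""
    force = min(max(force, 0.0), 1.0)    # clamp to the valid range
    return min(int(force * 5), 4)        # [0, 0.2) -> 0 ... [0.8, 1.0] -> 4
```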
An unexpected finding was the minimal effect of clicking sound on vibrotactile localization. We had anticipated an interaction between auditory and tactile cues. However, the clicking sound did not enhance vibration detection. This suggests that the vibrotactile sensation alone is sufficiently salient. Minimizing multimodal feedback channels can simplify the design of devices.
This study underscores the importance of capturing vibration localization, in addition to characterizing intensity, for effective vibrotactile feedback in AT design. Through our quantitative analysis, we establish guidelines for parameters such as intensity, location, and duration, which are crucial for simulating realistic experiences such as mouse clicks. Our work also highlights the need to account for individual variability in tactile perception, underscoring the role of participatory design with end-users. This approach not only facilitates an authentic haptic experience and optimized device ergonomics, but also aids the development of customized vibrotactile systems, thereby creating accessible and empowering AT devices.

10 Limitations and Future Work

To the best of our knowledge, our work presents a first-of-its-kind exploration of repurposing an existing input device to serve the needs of a user with motor impairments, especially in terms of enabling her to feel similar sensations from using the device as a user without those impairments. We focused on the haptic aspects of interacting with a common computer mouse and our system has a few limitations. First, given the mechatronic design, it will work only with a two-button mouse. Second, the facial expression-based input for triggering the mouse clicks may not be ideal for all users depending on their ability to control their facial muscles. While we provide an interface to customize the sensitivity of expression detection and mapping to clicks, it may still not be the ideal input trigger for other users with motor impairments who may require a different type of hands-free input mechanism (e.g., eye-gaze, BCI). Third, as is common with generated haptic feedback, the vibrotactile feedback is a limited proxy of the actual tactile feedback that a person feels when using a mouse (e.g., smoothness of the plastic, temperature of the material, feel of the mouse grasp in the whole hand). Lastly, we acknowledge the limitation presented by the small sample size in our split-plot experiment, which is particularly relevant given the variability in human sensory functions. While our study design yielded detailed information sufficient for validating our proof-of-concept, we recognize that a broader sample would enhance the robustness of our findings. Despite this, the data gathered provide meaningful insights, laying a foundation for future research that we anticipate will build upon and expand the generalizability of our results.
There are several avenues for future work stemming from this first prototype. The most immediate is adding scrolling and cursor movement to make MouseClicker a stand-alone AT device. More broadly, we encourage further exploration of repurposing existing devices, using rapid prototyping methods, as low-cost accessibility solutions that can address users' needs without the potentially long wait for consumer products that may eventually solve similar problems for our target audience.

11 Conclusion

In this work, we presented the design, implementation, and evaluation of MouseClicker, a mechatronic prototype that demonstrates the potential of repurposing existing input devices to integrate haptic feedback for people with hand motor impairments. MouseClicker enables our co-designer, who has severe hand motor impairments, to experience the sensation of using a computer mouse as a physical object, beyond the functionality it provides. In our user study evaluating the haptic feedback, we found that localizing the vibration motor position had more variability than judging the intensity of the vibration signal. Notably, each participant had a specific zone of greater perception on the index or middle finger, implying the need for personalized haptic feedback design. With this work, we hope to encourage researchers and designers of AT devices to consider adding sensory feedback, in addition to replicating functionality, for people with hand motor impairments.

Acknowledgments

We thank Alejandro Aponte for reviewing an early draft and Purav Bhardwaj for his invaluable feedback and insights.

Footnotes

2
Advanced Brain Monitoring: https://www.advancedbrainmonitoring.com/
7
Speech-to-Text: Automatic Speech Recognition | Google Cloud: https://cloud.google.com/speech-to-text/

References

[1]
Julio Abascal. 2008. Users with disabilities: Maximum control with minimum effort. In Proceedings of the International Conference on Articulated Motion and Deformable Objects. Springer, 449–456.
[2]
Malek Adjouadi, Anaelis Sesin, Melvin Ayala, and Mercedes Cabrerizo. 2004. Remote eye gaze tracking system as a computer interface for persons with severe motor disability. In Proceedings of the International Conference on Computers for Handicapped Persons. Springer, 761–769.
[3]
Atif Alamri, Rosa Iglesias, Mohamad Eid, Abdulmotaleb El Saddik, Shervin Shirmohammadi, and Edward Lemaire. 2007. Haptic exercises for measuring improvement of post-stroke rehabilitation patients. In Proceedings of the 2007 IEEE International Workshop on Medical Measurement and Applications. IEEE, 1–6.
[4]
Heyfa Ammar and Mounira Taileb. 2017. SmileToPhone: A mobile phone system for quadriplegic users controlled by EEG signals. International Journal of Advanced Computer Science and Applications 8, 5 (2017), 537–541.
[5]
Lotte NS Andreasen Struijk, Line Lindhardt Egsgaard, Romulus Lontis, Michael Gaihede, and Bo Bentsen. 2017. Wireless intraoral tongue control of an assistive robotic arm for individuals with tetraplegia. Journal of Neuroengineering and Rehabilitation 14, 1 (2017), 1–8.
[6]
Sanders Aspelund, Priya Patel, Mei-Hua Lee, Florian A. Kagerer, Rajiv Ranganathan, and Ranjan Mukherjee. 2020. Controlling a robotic arm for functional tasks using a wireless head-joystick: A case study of a child with congenital absence of upper and lower limbs. Plos One 15, 8 (2020), e0226052.
[7]
Tadas Baltrušaitis, Marwa Mahmoud, and Peter Robinson. 2015. Cross-dataset learning and person-specific normalisation for automatic action unit detection. In Proceedings of the 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition. IEEE, 1–6.
[8]
Tadas Baltrusaitis, Amir Zadeh, Yao Chong Lim, and Louis-Philippe Morency. 2018. Openface 2.0: Facial behavior analysis toolkit. In Proceedings of the 2018 13th IEEE International Conference on Automatic Face and Gesture Recognition. IEEE, 59–66.
[9]
Tian Bao, Wendy J. Carender, Catherine Kinnaird, Vincent J. Barone, Geeta Peethambaran, Susan L. Whitney, Mohammed Kabeto, Rachael D. Seidler, and Kathleen H. Sienko. 2018. Effects of long-term balance training with vibrotactile sensory augmentation among community-dwelling healthy older adults: a randomized preliminary study. Journal of Neuroengineering and Rehabilitation 15, 1 (2018), 1–13.
[10]
Karlin Bark, Emily Hyman, Frank Tan, Elizabeth Cha, Steven A. Jax, Laurel J. Buxbaum, and Katherine J. Kuchenbecker. 2014. Effects of vibrotactile feedback on human learning of arm motions. IEEE Transactions on Neural Systems and Rehabilitation Engineering 23, 1 (2014), 51–63.
[11]
Margrit Betke, James Gips, and Peter Fleming. 2002. The camera mouse: Visual tracking of body features to provide computer access for people with severe disabilities. IEEE Transactions on Neural Systems and Rehabilitation Engineering 10, 1 (2002), 1–10.
[12]
Jeff A. Bilmes, Xiao Li, Jonathan Malkin, Kelley Kilanski, Richard Wright, Katrin Kirchhoff, Amarnag Subramanya, Susumu Harada, James A. Landay, Patricia Dowden, and Howard Chizeck. 2005. The vocal Joystick: A voice-based human-computer interface for individuals with motor impairments. In Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing. 995–1002.
[13]
Niels Birbaumer, Nimr Ghanayim, Thilo Hinterberger, Iver Iversen, Boris Kotchoubey, Andrea Kübler, Juri Perelmouter, Edward Taub, and Herta Flor. 1999. A spelling device for the paralysed. Nature 398, 6725 (1999), 297–298.
[14]
Lorna M. Brown, Stephen A. Brewster, and Helen C. Purchase. 2005. A first investigation into the effectiveness of tactons. In Proceedings of the 1st Joint Eurohaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems. World Haptics Conference. IEEE, 167–176.
[15]
Héctor A. Caltenco, Eugen R. Lontis, Bo Bentsen, and Lotte N. S. Andreasen Struijk. 2014. The impact of function location on typing and pointing tasks with an intraoral tongue–computer interface. International Journal of Human-Computer Interaction 30, 4 (2014), 267–277.
[16]
Supadaech Chanjaradwichai, Proadpran Punyabukkana, and Atiwong Suchato. 2010. Design and evaluation of a non-verbal voice-controlled cursor for point-and-click tasks. In Proceedings of the 4th International Convention on Rehabilitation Engineering & Assistive Technology. 1–4.
[17]
Show-Hong Chen, Yu-Luen Chen, Ying-Han Chiou, Jen-Cheng Tsai, and Te-Son Kuo. 2003. Head-controlled device with m3s-based for people with disabilities. In Proceedings of the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 1587–1589.
[18]
Fu-Jen Chu, Ruinian Xu, Zhenxuan Zhang, Patricio A. Vela, and Maysam Ghovanloo. 2018. The helping hand: An assistive manipulation framework using augmented reality and tongue-drive interfaces. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 2158–2161.
[19]
Liwei Dai, Rich Goldman, Andrew Sears, and Jeremy Lozier. 2003. Speech-based cursor control: A study of grid-based solutions. ACM SIGACCESS Accessibility and Computing 77–78 (2003), 94–101.
[20]
S. S. Deepika and G. Murugesan. 2015. A novel approach for Human Computer Interface based on eye movements for disabled people. In Proceedings of the 2015 IEEE International Conference on Electrical, Computer and Communication Technologies. IEEE, 1–3.
[21]
Davide Deflorio, Massimiliano Di Luca, and Alan M. Wing. 2022. Skin and mechanoreceptor contribution to tactile input for perception: A review of simulation models. Frontiers in Human Neuroscience 16 (2022), 862344. DOI:
[22]
Uri Feintuch, Liat Raz, Jane Hwang, Naomi Josman, Noomi Katz, Rachel Kizony, Debbie Rand, Albert “Skip” Rizzo, Meir Shahar, Jang Yongseok, and Patrice L. (Tamar) Weiss. 2006. Integrating haptic-tactile feedback into a video-capture–based virtual environment for rehabilitation. CyberPsychology & Behavior 9, 2 (2006), 129–132.
[23]
Meaghan M. Fitzgerald, Brenda Sposato, Patricia Politano, John Hetling, and William O’Neill. 2009. Comparison of three head-controlled mouse emulators in three light conditions. Augmentative and Alternative Communication 25, 1 (2009), 32–41.
[24]
Sofien Gannouni, Kais Belwafi, Mohammad Reshood Al-Sulmi, Meshal Dawood Al-Farhood, Omar Ali Al-Obaid, Abdullah Mohammed Al-Awadh, Hatim Aboalsamh, and Abdelfettah Belghith. 2022. A brain controlled command-line interface to enhance the accessibility of severe motor disabled people to personnel computer. Brain Sciences 12, 7 (2022), 926.
[25]
Howard Gardner and Katie Davis. 2013. The App Generation: How Today’s Youth Navigate Identity, Intimacy, and Imagination in a Digital World. Yale University Press.
[26]
James Gips and Peter Olivieri. 1996. EagleEyes: An eye control system for persons with disabilities. In Proceedings of the 11th International Conference on Technology and Persons with Disabilities.
[27]
E. Bruce Goldstein. 1989. Sensation and Perception. Wadsworth/Thomson Learning.
[28]
Susumu Harada, James A. Landay, Jonathan Malkin, Xiao Li, and Jeff A. Bilmes. 2006. The vocal joystick: Evaluation of voice-based cursor control techniques. In Proceedings of the 8th International ACM SIGACCESS Conference on Computers and Accessibility. 197–204.
[29]
Susumu Harada, Jacob O. Wobbrock, Jonathan Malkin, Jeff A. Bilmes, and James A. Landay. 2009. Longitudinal study of people learning to use continuous voice-based cursor control. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Boston, MA, USA). Association for Computing Machinery, New York, NY, USA, 347–356. DOI:
[30]
Richard Harvey and Erik Peper. 1997. Surface electromyography and mouse use position. Ergonomics 40, 8 (1997), 781–789.
[31]
Marcus H. Heitger, Richard D. Jones, John C. Dalrymple-Alford, Chris M. Frampton, Michael W. Ardagh, and Tim J. Anderson. 2006. Motor deficits and recovery during the first year following mild closed head injury. Brain Injury 20, 8 (2006), 807–824.
[32]
Eve Hoggan, Stephen A. Brewster, and Jody Johnston. 2008. Investigating the effectiveness of tactile feedback for mobile touchscreens. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 1573–1582.
[33]
Xueliang Huo, Jia Wang, and Maysam Ghovanloo. 2008. Wireless control of powered wheelchairs with tongue motion using tongue drive assistive technology. In Proceedings of the 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 4199–4202.
[34]
Faustina Hwang, Simeon Keates, Patrick Langdon, P. John Clarkson, and Peter Robinson. 2001. Perception and Haptics: Towards more accessible computers for motion-impaired users. In Proceedings of the 2001 Workshop on Perceptive User Interfaces (Orlando, Florida, USA). Association for Computing Machinery, New York, NY, USA, 1–9. DOI:
[35]
Takeo Igarashi and John F. Hughes. 2001. Voice as sound: Using non-verbal voice input for interactive control. In Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology. 155–156.
[36]
Robert J. K. Jacob. 1991. The use of eye movements in human-computer interaction techniques: What you look at is what you get. ACM Transactions on Information Systems 9, 2 (1991), 152–169.
[37]
Li Jiang, Mark R. Cutkosky, Juhani Ruutiainen, and Roope Raisamo. 2009. Using haptic feedback to improve grasp force control in multiple sclerosis patients. IEEE Transactions on Robotics 25, 3 (2009), 593–601.
[38]
Topi Kaaresoja and Jukka Linjama. 2005. Perception of short tactile pulses generated by a vibration motor in a mobile phone. In Proceedings of the 1st Joint Eurohaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems. World Haptics Conference. IEEE, 471–472.
[39]
Kurt A. Kaczmarek and Steven J. Haase. 2003. Pattern identification and perceived stimulus quality as a function of stimulation waveform on a fingertip-scanned electrotactile display. IEEE Transactions on Neural Systems and Rehabilitation Engineering 11, 1 (2003), 9–16.
[40]
Kiran Kumar Kaki. 2013. Mouse cursor’s movements using voice controlled mouse pointer. International Journal of Computer Applications 71, 7 (2013), 27–34.
[41]
P. Kalyanakumar, P. Dinesh Kumar, M. Sivavignesh, S. Jeyapandi, G. Vinoth Rajkumar, et al. 2023. Facial movement and voice recognition based mouse cursor control. In Proceedings of the 2023 4th International Conference on Smart Electronics and Communication. IEEE, 133–140.
[42]
Jeong Ho Kim, Lovenoor Aulck, Michael C. Bartha, Christy A. Harper, and Peter W. Johnson. 2014. Differences in typing forces, muscle activity, comfort, and typing performance among virtual, notebook, and desktop keyboards. Applied Ergonomics 45, 6 (2014), 1406–1413.
[43]
Fanpeng Kong, Md Nazmus Sahadat, Maysam Ghovanloo, and Gregory D. Durgin. 2019. A stand-alone intraoral tongue-controlled computer interface for people with tetraplegia. IEEE Transactions on Biomedical Circuits and Systems 13, 5 (2019), 848–857.
[44]
Mieke Kooi-van Es, Corrie E. Erasmus, Nicoline B. M. Voet, Lenie van den Engel-Hoek, and Philip J. van der Wees. 2023. Best practice recommendations for speech-language pathology in children with neuromuscular disorders: A delphi-based consensus study. International Journal of Speech-Language Pathology 26, 1 (2023), 1–14.
[45]
Emilia Koskinen, Topi Kaaresoja, and Pauli Laitinen. 2008. Feel-good touch: Finding the most pleasant tactile feedback for a mobile touch screen button. In Proceedings of the 10th International Conference on Multimodal Interfaces. 297–304.
[46]
Gautham Krishnamurthy and Maysam Ghovanloo. 2006. Tongue drive: A tongue operated magnetic sensor based wireless assistive technology for people with severe disabilities. In Proceedings of the 2006 IEEE International Symposium on Circuits and Systems. IEEE, 4–pp.
[47]
Chi-Ying Lin, Chia-Min Tsai, Pei-Cheng Shih, and Hsiao-Ching Wu. 2016. Development of a novel haptic glove for improving finger dexterity in poststroke rehabilitation. Technology and Health Care 24, s1 (2016), S97–S103.
[48]
Henry W. J. Lin, Leila Aflatoony, and Ron Wakkary. 2014. Design for one: A game controller for a quadriplegic gamer. In Proceedings of the Extended Abstracts on Human Factors in Computing Systems. 1243–1248.
[49]
Christina Liossi, Daniel E. Schoth, Hayward J. Godwin, and Simon P. Liversedge. 2014. Using eye movements to investigate selective attention in chronic daily headache. PAIN® 155, 3 (2014), 503–510.
[50]
Jinyi Long, Yuanqing Li, Tianyou Yu, and Zhenghui Gu. 2011. Target selection with hybrid feature for BCI-based 2-D cursor control. IEEE Transactions on Biomedical Engineering 59, 1 (2011), 132–140.
[51]
John J. Magee, Samuel Epstein, Eric S. Missimer, Christopher Kwan, and Margrit Betke. 2011. Adaptive mouse-replacement interface control functions for users with disabilities. In Universal Access in Human-Computer Interaction. Users Diversity: 6th International Conference, UAHCI 2011, Held as Part of HCI International 2011, Orlando, FL, USA, July 9–14, 2011, Proceedings, Part II 6. Springer, 332–341.
[52]
Tanya Markow, Narayanan Ramakrishnan, Kevin Huang, Thad Starner, Matthew Eicholtz, Stephen Garrett, Halley Profita, Alex Scarlata, Charles Schooler, Aneesh Tarun, and Deborah Backus. 2010. Mobile music touch: Vibration stimulus in hand rehabilitation. In Proceedings of the 2010 4th International Conference on Pervasive Computing Technologies for Healthcare. IEEE, 1–8.
[53]
Felipe R. Martins, Eduardo L. M. Naves, Yann Morère, and Angela A. R. de Sá. 2022. Preliminary assessment of a multimodal electric-powered wheelchair simulator for training of activities of daily living. Journal on Multimodal User Interfaces 16, 2 (2022), 193–205.
[54]
Dennis J. McFarland, Dean J. Krusienski, William A. Sarnacki, and Jonathan R. Wolpaw. 2008. Emulation of computer mouse control with a noninvasive brain–computer interface. Journal of Neural Engineering 5, 2 (2008), 101.
[55]
Angeliki Metallinou and Jian Cheng. 2014. Using deep neural networks to improve proficiency assessment for children English language learners. In Proceedings of the 15th Annual Conference of the International Speech Communication Association.
[56]
Mostafa Mohammadi, Hendrik Knoche, Michael Gaihede, Bo Bentsen, and Lotte N. S. Andreasen Struijk. 2019. A high-resolution tongue-based joystick to enable robot control for individuals with severe disabilities. In Proceedings of the 2019 IEEE 16th International Conference on Rehabilitation Robotics. IEEE, 1043–1048.
[57]
Mostafa Mohammadi, Hendrik Knoche, and Lotte N. S. Andreasen Struijk. 2021. Continuous tongue robot mapping for paralyzed individuals improves the functional performance of tongue-based robotic assistance. IEEE Transactions on Biomedical Engineering 68, 8 (2021), 2552–2562.
[58]
Alan Newell. 2003. Inclusive design or assistive technology. Inclusive Design: Design for the Whole Population (2003), 172–181.
[59]
Don Norman. 2013. The Design of Everyday Things: Revised and Expanded Edition. Basic books.
[60]
Lotte N. S. Andreasen Struijk, Eugen R. Lontis, Michael Gaihede, Hector A. Caltenco, Morten Enemark Lund, Henrik Schioeler, and Bo Bentsen. 2017. Development and functional demonstration of a wireless intraoral inductive tongue computer interface for severely disabled persons. Disability and Rehabilitation: Assistive Technology 12, 6 (2017), 631–640.
[61]
Tamás Oroszi, Marieke J. G. van Heuvelen, Csaba Nyakas, and Eddy A. van der Zee. 2020. Vibration detection: Its function and recent advances in medical applications. F1000Research 9 (2020), 1–9. DOI:
[62]
Sarah Ostadabbas, Stephen N. Housley, Nordine Sebkhi, Kimberly Richards, David Wu, Zhenxuan Zhang, Maria Garcia Rodriguez, Lindsey Warthen, Crystal Yarbrough, Samir Belagaje, Andrew J. Butler, and Maysam Ghovanloo. 2016. Tongue-controlled robotic rehabilitation: A feasibility study in people with stroke. Journal of Rehabilitation Research & Development 53, 6 (2016), 989–1006.
[63]
Joseph E. O’Doherty, Mikhail A. Lebedev, Peter J. Ifft, Katie Z. Zhuang, Solaiman Shokur, Hannes Bleuler, and Miguel AL Nicolelis. 2011. Active tactile exploration using a brain–machine–brain interface. Nature 479, 7372 (2011), 228–231.
[64]
Qiyu Peng and Thomas F. Budinger. 2007. ZigBee-based wireless intra-oral control system for quadriplegic patients. In Proceedings of the 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 1647–1650.
[65]
Tan Tien Ping. 2008. Automatic Speech Recognition for Non-Native Speakers. Ph. D. Dissertation. Université Joseph-Fourier-Grenoble I.
[66]
Carlos G. Pinheiro, Eduardo L. M. Naves, Pierre Pino, Etienne Losson, Adriano O. Andrade, and Guy Bourhis. 2011. Alternative communication systems for people with severe motor disabilities: A survey. Biomedical Engineering Online 10, 1 (2011), 1–28.
[67]
Gabriel Pires, Urbano Nunes, and Miguel Castelo-Branco. 2012. Evaluation of brain-computer interfaces in accessing computer and other devices by people with severe motor impairments. Procedia Computer Science 14 (2012), 283–292. DOI:
[68]
Ondrej Polacek, Zdenek Mikovec, Adam J. Sporka, and Pavel Slavik. 2011. Humsher: A predictive keyboard operated by humming. In Proceedings of the 13th International ACM SIGACCESS Conference on Computers and Accessibility. 75–82.
[69]
Quadstick. 2020. Quadstick: A Game Controller for Quadriplegics. Retrieved December 23, 2021 from https://www.quadstick.com/shop/quadstick-fps-game-controller
[70]
Vijay Rajanna and Tracy Hammond. 2018. A gaze-assisted multimodal approach to rich and accessible human-computer interaction. arXiv:1803.04713. Retrieved from https://arxiv.org/abs/1803.04713
[71]
Lucas Rosenblatt, Patrick Carrington, Kotaro Hara, and Jeffrey P. Bigham. 2018. Vocal programming for people with upper-body motor impairments. In Proceedings of the 15th International Web for All Conference. Association for Computing Machinery, New York, NY, USA, 10 pages. DOI:
[72]
Eliana Sampaio, Stéphane Maris, and Paul Bach-y-Rita. 2001. Brain plasticity: ‘Visual’ acuity of blind persons via the tongue. Brain Research 908, 2 (2001), 204–207.
[73]
Anaelis Sesin, Malek Adjouadi, Mercedes Cabrerizo, Melvin Ayala, and Armando Barreto. 2008. Adaptive eye-gaze tracking using neural-network-based user profiles to assist people with motor disability. Journal of Rehabilitation Research and Development 45, 6 (2008), 801–817. https://www.proquest.com/scholarly-journals/adaptive-eye-gaze-tracking-using-neural-network/docview/215282647/se-2
[74]
Rajesh Singla, Arun Khosla, and Rameshwar Jha. 2014. Influence of stimuli colour in SSVEP-based BCI wheelchair control using support vector machines. Journal of Medical Engineering & Technology 38, 3 (2014), 125–134.
[75]
J. David Smith and T. C. Nicholas Graham. 2006. Use of eye movements for video game control. In Proceedings of the 2006 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology. 20–es.
[76]
Charles Spence and John McDonald. 2004. The cross-modal consequences of the exogenous spatial orienting of attention. The Handbook of Multisensory Processes, Boston Review, Cambridge, MA, 3–25.
[77]
Lotte N. S. Andreasen Struijk. 2006. An inductive tongue computer interface for control of computers and assistive devices. IEEE Transactions on Biomedical Engineering 53, 12 (2006), 2594–2597.
[78]
Boštjan Šumak, Matic Špindler, Mojca Debeljak, Marjan Heričko, and Maja Pušnik. 2019. An empirical evaluation of a hands-free computer interaction for users with motor disabilities. Journal of Biomedical Informatics 96 (2019), 103249.
[79]
A. Taheri, Z. Weissman, and M. Sra. 2021a. Design and evaluation of a hands-free video game controller for individuals with motor impairments. Frontiers in Computer Science 3 (2021), 751455. DOI:
[80]
Atieh Taheri, Ziv Weissman, and Misha Sra. 2021b. Exploratory design of a hands-free video game controller for a quadriplegic individual. In Proceedings of the Augmented Humans Conference 2021. 131–140.
[81]
Ying-Horng Tarng, Gwo-Ching Chang, Jin-Shin Lai, and Te-Son Kuo. 1997. Design of the human/computer interface for human with disability using myoelectric signal control. In Proceedings of the 19th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. ‘Magnificent Milestones and Emerging Opportunities in Medical Engineering’. IEEE, 1909–1910.
[82]
Kelly O. Thielbar, Thomas J. Lord, Heidi C. Fischer, Emily C. Lazzaro, Kristin C. Barth, Mary E. Stoykov, Kristen M. Triandafilou, and Derek G. Kamper. 2014. Training finger individuation with a mechatronic-virtual reality system leads to improved fine motor control post-stroke. Journal of Neuroengineering and Rehabilitation 11, 1 (2014), 1–11.
[83]
Sherry Turkle. 2011. Evocative Objects: Things we Think With. MIT press.
[84]
Marleen H. van der Linden, Saskia C. van der Linden, Henk T. Hendricks, Baziel G. M. van Engelen, and Alexander C. H. Geurts. 2010. Postural instability in Charcot-Marie-Tooth type 1A patients is strongly associated with reduced somatosensation. Gait & Posture 31, 4 (2010), 483–488.
[85]
Ramiro Velázquez, Edwige Pissaloux, Pedro Rodrigo, Miguel Carrasco, Nicola Ivan Giannoccaro, and Aimé Lay-Ekuakille. 2018. An outdoor navigation system for blind pedestrians using GPS and tactile-foot feedback. Applied Sciences 8, 4 (2018), 578.
[86]
Jacques J. Vidal. 1973. Toward direct brain-computer communication. Annual Review of Biophysics and Bioengineering 2, 1 (1973), 157–180.
[87]
Lev Semenovich Vygotsky and Michael Cole. 1978. Mind in Society: Development of Higher Psychological Processes. Harvard University Press.
[88]
Masahiko Wakumoto, Shinobu Masaki, Kiyoshi Honda, and Toshikazu Ohue. 1998. A pressure sensitive palatography: Application of new pressure sensitive sheet for measuring tongue-palatal contact pressure. In Proceedings of the ICSLP.
[89]
Francis R. Willett, Donald T. Avansino, Leigh R. Hochberg, Jaimie M. Henderson, and Krishna V. Shenoy. 2021. High-performance brain-to-text communication via handwriting. Nature 593, 7858 (2021), 249–254.
[90]
Jonathan R. Wolpaw, Niels Birbaumer, Dennis J. McFarland, Gert Pfurtscheller, and Theresa M. Vaughan. 2002. Brain–computer interfaces for communication and control. Clinical Neurophysiology 113, 6 (2002), 767–791.
[91]
John S. Zelek. 2005. Seeing by touch (haptics) for wayfinding. In Proceedings of the International Congress Series. Elsevier, 1108–1112.
