DOI: 10.1145/1090785.1090801

The migratory cursor: accurate speech-based cursor movement by moving multiple ghost cursors using non-verbal vocalizations

Published: 09 October 2005

Abstract

We present the migratory cursor, an interactive interface that enables users to move a cursor to any desired position quickly and accurately using voice alone. The migratory cursor combines discrete specification, which lets a user specify a location quickly but approximately, with continuous specification, which lets the user specify a location precisely but slowly. It displays multiple ghost cursors aligned vertically or horizontally with the actual cursor. The user quickly specifies an approximate position by referring to the ghost cursor nearest the desired position, and then uses non-verbal vocalizations to move the ghost cursors continuously until one reaches that position. Because continuous specification is used only for this fine refinement, little time is spent in the slower mode. In addition, the migratory cursor employs only two movement directions, vertical and horizontal, so the user can move it quickly to any desired position. Moreover, the user can easily and accurately stop cursor movement by falling silent when the cursor reaches the desired position. Our evaluation showed that users could move the cursor to a desired position quickly and accurately.
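
The two-phase interaction described above can be restated as a simple control loop. Below is a minimal sketch of that idea in Python, not the authors' implementation: the names (GhostCursorAxis, discrete_jump, continuous_move) and the spacing and step values are hypothetical, and real-time audio detection is abstracted into a voicing() callback that reports whether the user is currently vocalizing.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class GhostCursorAxis:
        position: int        # actual cursor position along one axis, in pixels
        spacing: int = 100   # distance between adjacent ghost cursors (assumed value)
        count: int = 5       # ghost cursors on each side of the actual cursor

        def ghosts(self) -> list:
            # Ghost cursors are aligned at regular intervals around the
            # actual cursor, mirroring the paper's vertical/horizontal rows.
            return [self.position + i * self.spacing
                    for i in range(-self.count, self.count + 1)]

        def discrete_jump(self, target: int) -> int:
            # Phase 1 (fast, approximate): jump to the ghost cursor
            # nearest the desired position.
            self.position = min(self.ghosts(), key=lambda g: abs(g - target))
            return self.position

        def continuous_move(self, voicing: Callable[[], bool],
                            direction: int, step: int = 2) -> int:
            # Phase 2 (slow, precise): drift while a sustained non-verbal
            # vocalization is detected; silence stops the cursor immediately.
            while voicing():
                self.position += direction * step
            return self.position

For example, with the cursor at x = 0 and a spacing of 100 pixels, a target at x = 430 jumps first to the ghost cursor at x = 400, leaving only 30 pixels of slow continuous refinement; falling silent stops the loop at once.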

Supplementary Material

Low Resolution (p76-mihara56k.mp4)
High Resolution (p76-mihara768k.mp4)



Published In

Assets '05: Proceedings of the 7th International ACM SIGACCESS Conference on Computers and Accessibility
October 2005, 232 pages
ISBN: 1595931597
DOI: 10.1145/1090785
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. non-verbal voice input
  2. speech-based cursor movement

Qualifiers

  • Article

Conference

ASSETS05

Acceptance Rates

Overall Acceptance Rate 436 of 1,556 submissions, 28%


Bibliometrics & Citations

Bibliometrics

Article Metrics

  • Downloads (last 12 months): 10
  • Downloads (last 6 weeks): 0
Reflects downloads up to 10 Feb 2025


Cited By

  • (2024) Improving Error Correction and Text Editing Using Voice and Mouse Multimodal Interface. International Journal of Human–Computer Interaction, pp. 1-24. DOI: 10.1080/10447318.2024.2352932. Online publication date: 22-May-2024
  • (2020) Reviewing Speech Input with Audio. ACM Transactions on Accessible Computing, 13(1), pp. 1-28. DOI: 10.1145/3382039. Online publication date: 21-Apr-2020
  • (2020) Non-Verbal Auditory Input for Controlling Binary, Discrete, and Continuous Input in Automotive User Interfaces. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-13. DOI: 10.1145/3313831.3376816. Online publication date: 21-Apr-2020
  • (2020) Leveraging Error Correction in Voice-based Text Entry by Talk-and-Gaze. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-11. DOI: 10.1145/3313831.3376579. Online publication date: 21-Apr-2020
  • (2020) The Study of Two Novel Speech-Based Selection Techniques in Voice-User Interfaces. IEEE Access, 8, pp. 217024-217032. DOI: 10.1109/ACCESS.2020.3041649. Online publication date: 2020
  • (2019) KeySlide. Proceedings of the 10th Indian Conference on Human-Computer Interaction, pp. 1-11. DOI: 10.1145/3364183.3364186. Online publication date: 1-Nov-2019
  • (2018) Operating a Robot by Nonverbal Voice Expressed with Acoustic Features. Intelligent Autonomous Systems 15, pp. 573-584. DOI: 10.1007/978-3-030-01370-7_45. Online publication date: 31-Dec-2018
  • (2017) Operating a robot by nonverbal voice based on ranges of formants. 2017 3rd International Conference on Control, Automation and Robotics (ICCAR), pp. 202-205. DOI: 10.1109/ICCAR.2017.7942687. Online publication date: Apr-2017
  • (2013) The adjustable grid. Proceedings of the 51st annual ACM Southeast Conference, pp. 1-6. DOI: 10.1145/2498328.2500084. Online publication date: 4-Apr-2013
  • (2011) Speech-based navigation and error correction: a comprehensive comparison of two solutions. Universal Access in the Information Society, 10(1), pp. 17-31. DOI: 10.1007/s10209-010-0185-9. Online publication date: 1-Mar-2011
