DOI: 10.1145/1294211.1294219

Eyepatch: prototyping camera-based interaction through examples

Published: 07 October 2007

Abstract

Cameras are a useful source of input for many interactive applications, but computer vision programming is difficult and requires specialized knowledge that is out of reach for many HCI practitioners. In an effort to learn what makes a useful computer vision design tool, we created Eyepatch, a tool for designing camera-based interactions, and evaluated the Eyepatch prototype through deployment to students in an HCI course. This paper describes the lessons we learned about making computer vision more accessible, while retaining enough power and flexibility to be useful in a wide variety of interaction scenarios.



Published In

UIST '07: Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology
October 2007, 306 pages
ISBN: 9781595936790
DOI: 10.1145/1294211

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. classification
  2. computer vision
  3. image processing
  4. interaction
  5. machine learning


Acceptance Rates

Overall acceptance rate: 561 of 2,567 submissions (22%)



Cited By

  • (2024) VIME: Visual Interactive Model Explorer for Identifying Capabilities and Limitations of Machine Learning Models for Sequential Decision-Making. Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, pp. 1-21. DOI: 10.1145/3654777.3676323
  • (2023) Design and Evaluation of Protobject: A Tool for Rapid Prototyping of Interactive Products. IEEE Access, 11, pp. 13280-13292. DOI: 10.1109/ACCESS.2023.3242873
  • (2022) Iteratively Designing Gesture Vocabularies: A Survey and Analysis of Best Practices in the HCI Literature. ACM Transactions on Computer-Human Interaction, 29(4), pp. 1-54. DOI: 10.1145/3503537
  • (2021) UMLAUT: Debugging Deep Learning Programs using Program Structure and Model Behavior. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1-16. DOI: 10.1145/3411764.3445538
  • (2021) ML Tools for the Web: A Way for Rapid Prototyping and HCI Research. Artificial Intelligence for Human Computer Interaction: A Modern Approach, pp. 315-343. DOI: 10.1007/978-3-030-82681-9_10
  • (2020) Integrated Development Environment with Interactive Scatter Plot for Examining Statistical Modeling. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-7. DOI: 10.1145/3313831.3376455
  • (2019) Frame-Based Elicitation of Mid-Air Gestures for a Smart Home Device Ecosystem. Informatics, 6(2), 23. DOI: 10.3390/informatics6020023
  • (2019) Mallard: Turn the Web into a Contextualized Prototyping Environment for Machine Learning. Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, pp. 605-618. DOI: 10.1145/3332165.3347936
  • (2019) LabelAR. Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, pp. 987-998. DOI: 10.1145/3332165.3347927
  • (2019) Instrumenting and Analyzing Fabrication Activities, Users, and Expertise. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-14. DOI: 10.1145/3290605.3300554
