The HealthSign Project: Vision and Objectives

Published: 26 June 2018

Abstract

This paper presents the HealthSign project, which addresses sign language recognition with a focus on medical interaction scenarios. Deaf users will be able to communicate with a physician in their native sign language: continuous signing will be translated to text and presented to the physician, while the physician's speech will be recognized and presented as text to the deaf user. Two alternative versions of the system will be developed, one performing the recognition on a server and the other performing it on the mobile device itself.
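The abstract describes two deployment variants of the recognition pipeline: one that sends the signing video to a server for recognition and one that runs recognition on the mobile device. The paper page gives no implementation details, so the sketch below is only an illustration of how such a split might be structured; every class name, file name, and URL in it is a hypothetical placeholder, not part of the HealthSign system.

```python
# Illustrative sketch only (not from the paper): one possible way to structure
# the two deployment variants mentioned in the abstract, i.e. recognition
# running either on a remote server or directly on the mobile device.
# All names below are hypothetical placeholders.

from dataclasses import dataclass
from typing import List, Protocol


class SignRecognizer(Protocol):
    def recognize(self, frames: List[bytes]) -> str:
        """Translate a continuous signing clip (a list of encoded frames) to text."""
        ...


@dataclass
class OnDeviceRecognizer:
    """Runs a compact model locally on the mobile device (hypothetical)."""
    model_path: str = "healthsign_mobile.tflite"  # placeholder file name

    def recognize(self, frames: List[bytes]) -> str:
        # A real implementation would run an embedded network over the frames.
        return f"<on-device transcript of {len(frames)} frames>"


@dataclass
class ServerRecognizer:
    """Sends the clip to a recognition server and returns its transcript (hypothetical)."""
    endpoint: str = "https://example.org/healthsign/recognize"  # placeholder URL

    def recognize(self, frames: List[bytes]) -> str:
        # A real implementation would POST the frames to `endpoint` and parse the reply.
        return f"<server transcript of {len(frames)} frames>"


def transcribe_for_physician(frames: List[bytes], offline: bool) -> str:
    """Choose a deployment variant and return the text shown to the physician."""
    recognizer: SignRecognizer = OnDeviceRecognizer() if offline else ServerRecognizer()
    return recognizer.recognize(frames)


if __name__ == "__main__":
    clip = [b"frame"] * 30  # stand-in for a short signing clip
    print(transcribe_for_physician(clip, offline=True))
```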


Cited By

  • Mobile learning for hearing-impaired children: Review and analysis. Universal Access in the Information Society 22, 2 (2021), 635-653. https://doi.org/10.1007/s10209-021-00841-z

Published In

PETRA '18: Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference
June 2018
591 pages
ISBN: 9781450363907
DOI: 10.1145/3197768

In-Cooperation

  • NSF: National Science Foundation

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. embedded processing
  2. sign language recognition

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

PETRA '18
