Research Article · DOI: 10.1145/3264746.3264788

A prototype of a self-motion training system based on deep convolutional neural network and multiple FAMirror

Published: 09 October 2018

ABSTRACT

With the development of deep learning, motion and speech recognition technologies have advanced significantly and have become common interaction modalities in Human-Computer Interaction (HCI). In addition, mirrors are easily found all around us, and the mirror metaphor has become a display for augmented reality (AR) because it lets participants observe themselves. This paper proposes a prototype of a self-motion training AR system built on these two aspects. Within the self-motion training system, we propose a method that represents one motion as a single image, which enables faster deep learning and motion recognition. A self-motion training system has two essential requirements. First, participants must be able to observe their own motion alongside a reference motion model and correct their motion by comparing the two. Second, the system must be able to recognize a participant's motion from among the various motion models in a database. Here, we introduce the configuration of the AR-based self-motion training system and its implementation details. In addition, the system evaluates the accuracy of the participant's motion against a reference motion model.
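
The abstract does not spell out how one motion is packed into one image, so the following is a minimal illustrative sketch in Python (NumPy only), assuming Kinect-style skeleton clips of shape (frames, joints, xyz). The names motion_to_image and motion_accuracy, the time resampling, and the per-channel scaling are assumptions made for illustration, not the authors' actual encoding or evaluation procedure.

    import numpy as np

    def motion_to_image(frames, height=64):
        """Encode one motion clip as one grayscale image.

        frames : (T, J, 3) array -- T skeleton frames of J joints with
                 (x, y, z) coordinates (e.g., captured with a Kinect).
        Returns a (height, J*3) uint8 image: each row is one resampled
        frame, each column one joint coordinate, scaled to 0..255.
        """
        frames = np.asarray(frames, dtype=np.float32)
        t, j, c = frames.shape
        flat = frames.reshape(t, j * c)          # one row per frame

        # Resample to a fixed number of rows so clips of any duration
        # map to images of the same size (linear interpolation in time).
        idx = np.linspace(0.0, t - 1.0, height)
        lo, hi = np.floor(idx).astype(int), np.ceil(idx).astype(int)
        w = (idx - lo)[:, None]
        rows = (1.0 - w) * flat[lo] + w * flat[hi]

        # Normalize every coordinate channel into the 0..255 pixel range.
        mins = rows.min(axis=0, keepdims=True)
        rng = rows.max(axis=0, keepdims=True) - mins + 1e-6
        return ((rows - mins) / rng * 255.0).astype(np.uint8)

    def motion_accuracy(participant, reference):
        """Mean per-joint Euclidean distance between two equal-length
        clips (lower is better); a stand-in for an accuracy check
        against a reference motion model."""
        p = np.asarray(participant, dtype=np.float32)
        r = np.asarray(reference, dtype=np.float32)
        return float(np.linalg.norm(p - r, axis=-1).mean())

    # Example: a 90-frame clip of 25 joints becomes a 64x75 image that
    # an off-the-shelf CNN classifier could consume.
    clip = np.random.rand(90, 25, 3)
    print(motion_to_image(clip).shape)        # (64, 75)
    print(motion_accuracy(clip, clip))        # 0.0 for identical motions

The point of such an encoding is that a whole clip becomes a fixed-size image, so a standard image-classification CNN can be trained and queried on motions without any recurrent or temporal modeling.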


Published in

RACS '18: Proceedings of the 2018 Conference on Research in Adaptive and Convergent Systems
October 2018, 355 pages
ISBN: 978-1-4503-5885-9
DOI: 10.1145/3264746
Copyright © 2018 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rate: 393 of 1,581 submissions, 25%
