ABSTRACT
With the development of deep learning methods, motion and speech recognition technologies have advanced significantly and become common modalities in Human-Computer Interaction (HCI). In addition, the mirror is an everyday object, and the mirror metaphor has become a natural display for augmented reality (AR) because it lets participants observe themselves. Building on these two aspects, this paper proposes a prototype of a self-motion training AR system. Within this system, we propose a method that represents one motion as one image, which enables faster deep-learning training and motion recognition. A self-motion training system has two essential requirements. First, participants must be able to observe their own motion alongside a reference motion model, and to correct their motion by comparing the two. Second, the system must be able to recognize a participant's motion from among the various motion models in a database. Here, we describe the configuration of the AR-based self-motion training system and its implementation details. In addition, the system evaluates the accuracy of the participant's motion against the reference motion model.
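The abstract's central idea of representing one motion as one image can be illustrated with a minimal sketch. The paper does not specify its exact encoding, so the layout below is an assumption: a clip of 3-D skeletal joint positions (e.g., from a Kinect-style sensor) is normalized per coordinate channel and packed into a single RGB image, with joints along the vertical axis, time along the horizontal axis, and (x, y, z) mapped to (R, G, B). Such a fixed-size image can then be fed to an off-the-shelf CNN classifier.

```python
import numpy as np

def motion_to_image(frames):
    """Encode a motion clip as a single RGB image (hypothetical layout).

    frames: array of shape (T, J, 3) holding 3-D positions of J skeletal
    joints over T captured frames. Each coordinate channel (x, y, z) is
    min-max normalized independently and mapped to (R, G, B). The result
    has shape (J, T, 3): joints down the image, time across it.
    """
    frames = np.asarray(frames, dtype=np.float64)
    lo = frames.min(axis=(0, 1), keepdims=True)        # per-channel minimum
    hi = frames.max(axis=(0, 1), keepdims=True)        # per-channel maximum
    norm = (frames - lo) / np.maximum(hi - lo, 1e-8)   # scale to [0, 1]
    img = (norm * 255.0).astype(np.uint8)              # (T, J, 3) pixels
    return np.transpose(img, (1, 0, 2))                # (J, T, 3)
```

Because the whole clip collapses into one image, recognition reduces to a single forward pass of an image CNN rather than per-frame sequence processing, which is consistent with the abstract's claim of faster learning and recognition.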
Index Terms
- A prototype of a self-motion training system based on deep convolutional neural network and multiple FAMirror