Abstract
Interactively browsing augmented reality (AR) content through a smart glass and displaying the proper 3D visual content on it are challenging research issues. In this paper, we propose using a depth camera to detect a human subject in a real 3D space, while the orientation sensors on a smart glass reveal the attitude and orientation of the user’s head for pose estimation in an AR application. By implementing a prototype that detects a user’s head and measures its orientation, the proposed method provides three contributions: (i) a top-view depth camera detects the user’s head position, (ii) the orientation sensors on the smart glass reveal the attitude and orientation properties of the head, and (iii) the AR content displayed in the virtual space is properly mapped from the real 3D space. The experimental results demonstrate the spatial displaying accuracy in three testing spaces: a research lab, an office, and the center for art and technology. In addition, the proposed method is applied in a tech-art installation to allow the audience to reliably view the AR content on a smart glass.
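The pipeline the abstract describes — combining the head position reported by a top-view depth camera with the head orientation reported by the smart glass’s sensors to map a world-space AR anchor into the viewer’s frame — can be sketched as follows. This is a minimal illustration, not the authors’ implementation: the Z-Y-X Euler convention, the function names, and the use of NumPy are all assumptions made for the example.

```python
import numpy as np

def rotation_from_euler(yaw, pitch, roll):
    """Build a rotation matrix (Z-Y-X convention, radians) from the head
    orientation angles reported by the smart glass's orientation sensors."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def world_to_view(anchor_world, head_pos, yaw, pitch, roll):
    """Map a world-space anchor point into the viewer's (head) frame,
    using the head position from the top-view depth camera and the
    head orientation from the glass's sensors."""
    R = rotation_from_euler(yaw, pitch, roll)
    # Inverse rigid transform: p_view = R^T (p_world - head_pos)
    return R.T @ (np.asarray(anchor_world, float) - np.asarray(head_pos, float))
```

The resulting view-space coordinates would then be handed to the glass’s renderer so that the virtual content stays registered to its real-world position as the user’s head moves and turns.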
Acknowledgments
This work was supported in part by the Ministry of Science and Technology, Taiwan, under Grant MOST 106-2221-E-119-002.
Additional information
This article belongs to the Topical Collection: Special Issue on Social Media and Interactive Technologies
Guest Editors: Timothy K. Shih, Lin Hui, Somchoke Ruengittinun, and Qing Li
Cite this article
Sun, SW., Lan, YS. Augmented reality displaying scheme in a smart glass based on relative object positions and orientation sensors. World Wide Web 22, 1221–1239 (2019). https://doi.org/10.1007/s11280-018-0592-z