
Obstacle Avoidance through Gesture Recognition: Business Advancement Potential in Robot Navigation Socio-Technology

Published online by Cambridge University Press: 20 March 2019

Xuan Liu
Affiliation:
Zhejiang Gongshang University, SBA, Hangzhou, Zhejiang, P. R. China
Kashif Nazar Khan
Affiliation:
COMSATS Institute of Information Technology, Islamabad, Pakistan
Qamar Farooq*
Affiliation:
Zhejiang Gongshang University, SBA, Hangzhou, Zhejiang, P. R. China; Air University, Multan Campus, Multan, Pakistan
Yunhong Hao
Affiliation:
Zhejiang Gongshang University, SBA, Hangzhou, Zhejiang, P. R. China
Muhammad Shoaib Arshad
Affiliation:
COMSATS Institute of Information Technology, Islamabad, Pakistan
*Corresponding author. E-mail: f4farooq@outlook.com

Summary

In the modern age, robots work alongside humans and are controlled so that their movements do not hinder human activity. This capability rests on gesture capture and gesture recognition. This article describes developments in obstacle-avoidance algorithms for robot navigation that can open new horizons for business advancement. To this end, our study focuses on gesture recognition and its socio-technological implications. A review of the literature reveals that robot movement can be made more efficient by introducing gesture-based collision-avoidance techniques. Experimental results demonstrate a high level of robustness and usability of the gesture recognition (GR) system, with an overall error rate of roughly 10%. In our subjective judgment, the GR system is well suited to instructing a mobile service robot to change its path at a human's direction.
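Although the article does not publish its implementation, the pipeline it summarizes (recognize a human gesture, then alter the robot's path) can be illustrated compactly. Below is a minimal Python sketch, assuming a skeleton-based classifier that matches an observed joint trajectory against prerecorded gesture templates using dynamic time warping (DTW); the template names, the rejection threshold, and the final print stand-in for a path planner are hypothetical illustrations, not the GR system evaluated in the article.

import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping between trajectories of shape (T, D)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify_gesture(trajectory, templates, reject_threshold=5.0):
    """Label of the nearest template, or None if every match is too poor.
    The threshold is an assumed tuning parameter, not a value from the paper."""
    best_label, best_dist = None, float("inf")
    for label, template in templates.items():
        dist = dtw_distance(np.asarray(trajectory), np.asarray(template))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist < reject_threshold else None

# Hypothetical usage: templates are prerecorded hand trajectories of shape (T, 3).
templates = {
    "yield_left": np.linspace([0.0, 0.0, 1.0], [-1.0, 0.0, 1.0], 20),
    "stop":       np.tile([0.0, 1.0, 1.0], (20, 1)),
}
observed = np.linspace([0.05, 0.0, 1.0], [-0.9, 0.0, 1.0], 25)  # a noisy "yield_left"
command = classify_gesture(observed, templates)
if command == "yield_left":
    print("replanning path to the robot's left")  # stand-in for a planner call

DTW tolerates differences in speed between the observed gesture and the template, which is why it is a common baseline for skeleton-based gesture recognition; the rejection threshold keeps incidental human movement from being read as a navigation command.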

Type: Articles
Copyright: © Cambridge University Press 2019
