
An Underwater Human–Robot Interaction Using Hand Gestures for Fuzzy Control

Published in: International Journal of Fuzzy Systems

Abstract

Autonomous underwater vehicles (AUVs) play an important role in ocean research and exploration. The underwater environment strongly affects AUV control and human–robot interaction, since it is highly dynamic, with unpredictable fluctuations in water flow, high pressure, and light attenuation. Traditional control models contain a large number of parameters, which makes them ineffective and error-prone. Fuzzy control addresses this issue to a certain extent: it applies fuzzy variables to the controller, replacing crisp values over an interval. Beyond the controller, underwater human–robot interaction is itself difficult. Divers cannot speak or show facial expressions underwater, and physical buttons on an AUV must withstand great water pressure. In this paper, we propose a method to recognize gesture instructions and apply it to the fuzzy control of an AUV. Our contribution is a gesture recognition framework for human–robot interaction, comprising a gesture detection network and an algorithm for AUV control. Experimental results demonstrate the effectiveness of the proposed method.
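To make the fuzzy-control idea in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' implementation, whose rule base and variables are not given here): a Mamdani-style controller with triangular membership functions that maps a heading error to a rudder command, illustrating how fuzzy labels over an interval replace a single crisp gain.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Illustrative fuzzy sets over heading error in [-90, 90] degrees
# (labels and breakpoints are assumptions for this sketch).
ERROR_SETS = {
    "neg":  (-90.0, -45.0,  0.0),
    "zero": (-45.0,   0.0, 45.0),
    "pos":  (  0.0,  45.0, 90.0),
}

# Rule consequents as crisp rudder angles (degrees): steer against the error.
RUDDER_OUT = {"neg": 30.0, "zero": 0.0, "pos": -30.0}

def fuzzy_rudder(error_deg):
    """Fire all rules, then defuzzify by weighted average of singletons."""
    num = den = 0.0
    for label, (a, b, c) in ERROR_SETS.items():
        mu = tri(error_deg, a, b, c)   # degree of membership for this label
        num += mu * RUDDER_OUT[label]
        den += mu
    return num / den if den > 0 else 0.0
```

A recognized gesture would select or parameterize such a controller (e.g. a "turn left" gesture setting the heading setpoint), so the pipeline is detection network, then command mapping, then fuzzy control loop.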




Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant 51679105, Grant 51809112, and Grant 51939003.

Author information

Corresponding author

Correspondence to Hong Qi.


About this article


Cite this article

Jiang, Y., Peng, X., Xue, M. et al. An Underwater Human–Robot Interaction Using Hand Gestures for Fuzzy Control. Int. J. Fuzzy Syst. 23, 1879–1889 (2021). https://doi.org/10.1007/s40815-020-00946-2

