
A Real-Time AGV Gesture Control Method Based on Body Part Detection

  • Conference paper
Intelligent Robotics and Applications (ICIRA 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14273)


Abstract

The intelligent control of Automated Guided Vehicles (AGVs) has significant research value and wide application in logistics loading, unmanned driving, and emergency rescue. Gesture, as a natural human-computer interaction method, offers great expressive power, which makes gesture-based AGV control the mainstream approach. However, in complex environments, noise interference can degrade precise, real-time AGV control. To address this problem, a real-time AGV gesture control method based on human body part detection is proposed. We design a simple gesture-based AGV control scheme that uses the relative spatial relationships between human body parts. We extend the Fully Convolutional One-Stage Object Detection (FCOS) network with a new branch that constrains the detection range of human parts. This branch associates the detected parts with the human body, which greatly improves the anti-interference capability of gesture recognition. We train the network end-to-end on the COCO Human Parts dataset and achieve a human-part detection accuracy of 35.4%. In addition, we collect a small dataset to validate the gesture recognition method designed in this paper and achieve an accuracy of 96.1% at a detection speed of 17.23 FPS. Our method thus enables precise and convenient control of AGVs.
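
As a rough illustration of the control idea described in the abstract, the sketch below maps detected body-part bounding boxes to AGV commands using their relative positions. The Box class, the thresholds, and the command set are assumptions made here for illustration only; the paper's actual FCOS-based part detector and its gesture-to-command mapping are not reproduced.

    # Minimal sketch: derive an AGV command from the position of a detected
    # hand box relative to the detected person box. The part detector itself
    # (the paper's FCOS-based network) is assumed to exist upstream and is
    # not shown; thresholds and command names below are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Box:
        x1: float
        y1: float
        x2: float
        y2: float

        @property
        def cx(self) -> float:
            return (self.x1 + self.x2) / 2

        @property
        def cy(self) -> float:
            return (self.y1 + self.y2) / 2

    def gesture_to_command(person: Box, hand: Box) -> str:
        """Classify a simple gesture from the hand's position relative to the body."""
        body_h = person.y2 - person.y1
        # Hand raised above shoulder height -> move forward
        if hand.cy < person.y1 + 0.25 * body_h:
            return "FORWARD"
        # Hand extended beyond the left/right edge of the torso -> turn
        if hand.cx < person.x1:
            return "TURN_LEFT"
        if hand.cx > person.x2:
            return "TURN_RIGHT"
        # Hand lowered near the body -> stop
        return "STOP"

    if __name__ == "__main__":
        # Example: a hand box raised above shoulder height of the person box
        person = Box(100, 50, 200, 350)
        hand = Box(210, 60, 240, 90)
        print(gesture_to_command(person, hand))  # prints "FORWARD"

Because the detector constrains each part to its owning person, only the parts of the commanding operator feed this mapping, which is what gives the method its robustness to bystanders and background noise.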



Acknowledgment

This work was supported in part by the National Natural Science Foundation of China (62006204, 62103407), the Guangdong Basic and Applied Basic Research Foundation (2022A1515011431), and Shenzhen Science and Technology Program (RCBS20210609104516043, JSGG20210802154004014).

Author information


Corresponding author

Correspondence to Qing Gao.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Xu, Y., Gao, Q., Yu, X., Zhang, X. (2023). A Real-Time AGV Gesture Control Method Based on Body Part Detection. In: Yang, H., et al. (eds.) Intelligent Robotics and Applications. ICIRA 2023. Lecture Notes in Computer Science, vol. 14273. Springer, Singapore. https://doi.org/10.1007/978-981-99-6498-7_17


  • DOI: https://doi.org/10.1007/978-981-99-6498-7_17

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-6497-0

  • Online ISBN: 978-981-99-6498-7

  • eBook Packages: Computer Science, Computer Science (R0)
