
An Action Evaluation and Scaling Algorithm for Robot Motion Planning

  • Conference paper
Intelligent Robotics and Applications (ICIRA 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14270)

Abstract

The evaluation and manipulation of robot motion are essential for effective motion planning. As with audio and video data, applying supervised and semi-supervised algorithms to motion data requires both transform operations and a merit function. However, designing an appropriate evaluation method remains a challenge, and motion sequences cannot simply be sped up or slowed down the way audio can be resampled. This paper proposes a method for computing robot posture similarity and action-sequence similarity, together with a method for scaling action sequences. First, a posture and action-sequence similarity measure based on visual angle is introduced. Then, an action-sequence scaling method is designed, drawing on resampling techniques from the audio field. In a series of experiments, the evaluation method was consistent with human judgments, and the distortion rate of action sequences at different scales was below 1%. Applying the proposed method to robot motion planning enables the use of supervised algorithms, resulting in more intelligent motion planning.
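
To make the pipeline concrete, the sketch below illustrates in Python how a frame-wise posture similarity, an action-sequence similarity, and audio-style resampling of an action sequence could fit together. It is not the paper's algorithm: the joint-angle representation, the cosine-similarity posture score, and the linear-interpolation resampler are illustrative assumptions standing in for the visual-angle measure and the scaling method the paper actually proposes.

# A minimal sketch (not the authors' implementation) of the two ideas in the
# abstract: (1) a posture / action-sequence similarity score and (2) scaling an
# action sequence by resampling, by analogy with audio resampling. The joint-angle
# representation, cosine-similarity score, and linear-interpolation resampler are
# illustrative assumptions.

import numpy as np


def posture_similarity(pose_a: np.ndarray, pose_b: np.ndarray) -> float:
    """Similarity of two postures given as joint-angle vectors, mapped to [0, 1]."""
    a, b = pose_a.ravel(), pose_b.ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 1.0 if np.allclose(a, b) else 0.0
    return 0.5 * (1.0 + float(np.dot(a, b) / denom))  # cosine similarity rescaled to [0, 1]


def scale_sequence(seq: np.ndarray, new_len: int) -> np.ndarray:
    """Resample a (T, J) action sequence to new_len frames by per-joint linear interpolation."""
    old_len, n_joints = seq.shape
    old_t = np.linspace(0.0, 1.0, old_len)
    new_t = np.linspace(0.0, 1.0, new_len)
    return np.stack([np.interp(new_t, old_t, seq[:, j]) for j in range(n_joints)], axis=1)


def sequence_similarity(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
    """Mean frame-wise posture similarity; the shorter sequence is rescaled to match the longer."""
    length = max(len(seq_a), len(seq_b))
    seq_a = scale_sequence(seq_a, length)
    seq_b = scale_sequence(seq_b, length)
    return float(np.mean([posture_similarity(pa, pb) for pa, pb in zip(seq_a, seq_b)]))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    motion = np.cumsum(rng.normal(scale=0.05, size=(60, 6)), axis=0)  # smooth 6-joint motion
    slowed = scale_sequence(motion, 120)                              # stretch to 2x duration
    print("similarity to own stretched copy:", sequence_similarity(motion, slowed))

In this toy setting, a sequence compared against a time-stretched copy of itself scores close to 1, which mirrors the paper's requirement that scaling introduce little distortion.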

This work was supported by the Open Project of Scientific Research Platform of Grain Information Processing Center of Henan University of Technology (KFJJ-2021-107, KFJJ-2021-108), the Key Scientific Research Projects of Higher Education Institutions in Henan Province (No. 23B520001), the Science and Technology Research Project of Henan Province under Grant 222102210108, the Science and Technology Research and Development Plan Joint fund (application research) project of Henan Province (222103810042), and the Henan Science and technology research project (222102210309).

Author information

Corresponding author

Correspondence to Ruiqi Wu.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Wu, R., et al. (2023). An Action Evaluation and Scaling Algorithm for Robot Motion Planning. In: Yang, H., et al. (eds.) Intelligent Robotics and Applications. ICIRA 2023. Lecture Notes in Computer Science, vol. 14270. Springer, Singapore. https://doi.org/10.1007/978-981-99-6492-5_34

  • DOI: https://doi.org/10.1007/978-981-99-6492-5_34

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-6491-8

  • Online ISBN: 978-981-99-6492-5

  • eBook Packages: Computer Science, Computer Science (R0)
