Abstract
Robot arms are widely used in production factories, where they complete tasks such as picking and placing with good repeatability. However, robots cannot fully replace human workers, who perform delicate tasks more effectively with their skillful hands. Robots can instead assist human workers by picking and placing items, delivering items, lifting heavy loads, and so on. The risk of harming human workers, however, increases greatly as robots move closer to them, and researchers have therefore begun to develop advanced technologies for human–robot collaboration. In this paper, a novel system is presented. A spatial-temporal graph network is used to identify human motions, and a random forest model is used to evaluate the danger factor between the human and the robot along the robot's moving path. Lagrangian minimization is then used to determine a new robot trajectory that keeps a safe distance from the human. This safety distance can be adaptively shortened as the robot approaches the human for specific human–robot collaboration tasks.
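As a rough illustration of the trajectory-adjustment step, the following Python sketch pushes planned waypoints away from tracked human joints wherever they violate an adaptive safety distance, using a penalty (soft-Lagrangian) form of the constrained minimization described above. The function name, the quadratic penalty, the linear mapping from danger factor to safety distance, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def adjust_waypoints(path, human_points, danger, d_base=0.5,
                     lam=10.0, steps=50, lr=0.05):
    """Gradient descent on a penalty form of the constrained problem:

        L(p) = ||p - path||^2 + lam * sum_i max(0, d_safe - d_i(p))^2,

    where d_i(p) is the distance from waypoint i to the nearest human
    joint. All names and constants here are assumptions for illustration.
    """
    d_safe = d_base * danger          # adaptive margin: lower danger, shorter distance
    p = path.astype(float).copy()
    for _ in range(steps):
        diff = p[:, None, :] - human_points[None, :, :]   # (N, J, 3) offsets
        dists = np.linalg.norm(diff, axis=-1)             # (N, J) distances
        j = np.argmin(dists, axis=1)                      # nearest joint per waypoint
        idx = np.arange(len(p))
        d = dists[idx, j]
        away = diff[idx, j] / np.maximum(d, 1e-9)[:, None]  # unit vectors away from human
        grad = 2.0 * (p - path)                           # stay close to the planned path
        viol = d < d_safe                                 # waypoints inside the margin
        grad[viol] -= 2.0 * lam * (d_safe - d)[viol, None] * away[viol]
        p -= lr * grad                                    # penalty term pushes p outward
    return p

# Toy usage: a straight-line path passing near a single tracked point.
path = np.linspace([0.0, 0.0, 0.5], [1.0, 0.0, 0.5], 20)
human = np.array([[0.5, 0.05, 0.5]])
safe_path = adjust_waypoints(path, human, danger=0.8)   # danger from the random forest
```

In the full system, the danger factor passed in here would come from the random forest acting on features such as the ST-GCN motion class and the human–robot distance; a hard-constrained Lagrangian solve could replace this soft penalty without changing the interface.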

Funding
This research was supported by the National Science and Technology Council (formerly the Ministry of Science and Technology), Taiwan (grant numbers NSTC 111-2218-E-011-017, MOST 111-2811-E-011-007-MY3, MOST 111-2221-E-011-102, and MOST 110-2218-E-002-040), and by the Intelligent Manufacturing Innovation Center (IMIC), formerly the Center for Cyber-Physical System Innovation (CPSi), at National Taiwan University of Science and Technology (NTUST), Taiwan, a Featured Areas Research Center under the Higher Education Sprout Project of the Ministry of Education (MOE), Taiwan, since 2018.
Ethics declarations
The authors declare that they have no conflicts of interest.
About this article
Cite this article
Patel, B., Lin, Y.C., Tong, H.J. et al. Robot Arm Path Planning with Adaptive Obstacle Avoidance for Man–Robot Collaboration. Aut. Control Comp. Sci. 57, 423–438 (2023). https://doi.org/10.3103/S0146411623050097