Abstract
Dual arm robots have been attracting attention from the viewpoint of factory automation. These robots are basically required to reach both hands toward their respective target objects simultaneously. We therefore focus on motion planning with vision-based deep neural networks. Given an RGB-D camera mounted on a robot, object images are fed as inputs to a reaching motion planner based on a convolutional neural network (CNN). For multiple objects, the depth of each object in the image is useful information for determining a reaching target. If the objects are close to each other, however, their depths become similar. To address this challenge, we propose to generate the target object image through instance segmentation and an order classifier. In experiments with multiple objects, we show that the robot is able to reach both hands toward the target objects by using the target object images.
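The target-selection idea described above can be illustrated with a minimal sketch (hypothetical function and variable names, not the authors' implementation): given instance masks from a segmentation network and the aligned depth image, each instance's mean depth orders the candidates so the nearest object can be assigned as a reaching target.

```python
import numpy as np

def order_targets_by_depth(masks, depth):
    """Order segmented instances by mean depth (nearest first).

    masks : list of HxW boolean arrays, one per detected object
    depth : HxW depth image aligned with the RGB input
    """
    mean_depths = [float(depth[m].mean()) for m in masks]
    # Indices of instances sorted from nearest to farthest
    return sorted(range(len(masks)), key=lambda i: mean_depths[i])

# Toy example: two 4x4 instances at different depths
depth = np.full((4, 4), 2.0)
m1 = np.zeros((4, 4), bool); m1[:2, :2] = True   # farther object
m2 = np.zeros((4, 4), bool); m2[2:, 2:] = True   # nearer object
depth[m2] = 0.5
print(order_targets_by_depth([m1, m2], depth))   # nearest instance first
```

When objects are close together, the mean depths converge, which is exactly the case the proposed order classifier is meant to handle.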
Notes
- 1.
Since these objects are assigned the same label and are not separated by connected-component labeling, they form a single cluster in the image.
- 2.
In the CNN architecture without the branches, the amounts of movement of both hands, \(\Delta \)s, were derived from the six output units through the same fully-connected layers composed of 19200, 4000, and 1000 units.
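The branch-less head in Note 2 can be sketched as a minimal forward pass under the layer sizes stated in the note (random weights for illustration only, not the trained model): flattened features of size 19200 pass through fully-connected layers of 4000 and 1000 units, and six output units give the movement \(\Delta \)s of both hands.

```python
import numpy as np

rng = np.random.default_rng(0)

def fc(x, n_out):
    """One fully-connected layer with ReLU (random weights for illustration)."""
    w = rng.standard_normal((x.shape[-1], n_out)) * 0.01
    return np.maximum(x @ w, 0.0)

features = rng.standard_normal(19200)   # flattened CNN feature map
h = fc(fc(features, 4000), 1000)        # 19200 -> 4000 -> 1000
w_out = rng.standard_normal((1000, 6)) * 0.01
delta = h @ w_out                       # six outputs, e.g. (dx, dy, dz) per hand
print(delta.shape)                      # (6,)
```

The branched variant would instead split after the shared layers into one output head per hand; the sketch above shows only the shared-head case described in the note.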
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Hoshino, S., Oikawa, R. (2023). Reaching Motion Planning with Vision-Based Deep Neural Networks for Dual Arm Robots. In: Petrovic, I., Menegatti, E., Marković, I. (eds) Intelligent Autonomous Systems 17. IAS 2022. Lecture Notes in Networks and Systems, vol 577. Springer, Cham. https://doi.org/10.1007/978-3-031-22216-0_31
Print ISBN: 978-3-031-22215-3
Online ISBN: 978-3-031-22216-0