Abstract
This paper proposes an imitation learning approach to vision-guided reaching for robotic precision manipulation, which enables a robot to adapt its end-effector's nonlinear motion while remaining aware of collision avoidance. The reaching skill model takes raw images of the objects as input and generates incremental motion commands that guide a lower-level vision-based controller. The needle tip is detected in image space, and the obstacle region is extracted by image segmentation. A neighborhood-sampling method, which includes a neural-network-based attention module, is designed to perceive potential collisions of the needle. A neural-network policy module then infers the desired motion in image space from the neighborhood-sampling result and the goal and current positions of the needle tip. A refinement module is developed to further improve the policy module's output: since three-dimensional (3D) manipulation tasks typically use two cameras for image-based visual control, the relative movements in the two camera views are refined by an optimization that enforces the epipolar constraint. Experiments are conducted to validate the effectiveness of the proposed methods.
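The epipolar-constraint refinement mentioned above can be illustrated with a minimal sketch. The following is an assumption about one plausible formulation, not the paper's actual optimization: given the needle-tip positions in the two views and the policy's proposed image-space increments, we compute the minimal-norm correction that makes the moved points satisfy the epipolar constraint `x2^T F x1 = 0` (linearized about the proposed motion). The function name `refine_increments` and the least-squares form are hypothetical.

```python
import numpy as np

def refine_increments(p1, p2, d1, d2, F):
    """Minimally correct proposed image-space increments (d1, d2) for the
    two camera views so that the moved points satisfy the epipolar
    constraint (p2 + d2)^T F (p1 + d1) = 0 (linearized form).

    p1, p2 : (x, y) needle-tip positions in view 1 and view 2
    d1, d2 : proposed 2D increments from the policy module
    F      : 3x3 fundamental matrix relating the two views
    """
    # Lift the moved points to homogeneous coordinates.
    h1 = np.array([p1[0] + d1[0], p1[1] + d1[1], 1.0])
    h2 = np.array([p2[0] + d2[0], p2[1] + d2[1], 1.0])
    r = h2 @ F @ h1                  # residual of the epipolar constraint
    a1 = (F.T @ h2)[:2]              # gradient of r w.r.t. d1
    a2 = (F @ h1)[:2]                # gradient of r w.r.t. d2
    a = np.concatenate([a1, a2])
    # Minimal-norm correction driving the linearized residual to zero.
    corr = -r * a / (a @ a + 1e-12)
    return d1 + corr[:2], d2 + corr[2:]
```

For a rectified stereo pair with horizontal epipolar lines, this correction simply averages the vertical components of the two increments so the tip stays on the same epipolar line in both views; for a general `F` it projects the stacked increment onto the constraint surface.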
Acknowledgements
This work was supported by National Key Research and Development Program of China (2018AAA0103005), National Natural Science Foundation of China (61873266) and State Key Laboratory of Smart Manufacturing for Special Vehicles and Transmission System (GZ2019KF008).
Cite this article
Li, Y., Qin, F., Du, S. et al. Vision-Based Imitation Learning of Needle Reaching Skill for Robotic Precision Manipulation. J Intell Robot Syst 101, 22 (2021). https://doi.org/10.1007/s10846-020-01290-1