
Vision-Based Imitation Learning of Needle Reaching Skill for Robotic Precision Manipulation

Journal of Intelligent & Robotic Systems

Abstract

In this paper, an imitation learning approach to vision-guided reaching is proposed for robotic precision manipulation, which enables the robot to adapt its end-effector’s nonlinear motion while remaining aware of collision avoidance. The reaching skill model first takes raw images of the objects as input and generates incremental motion commands that guide a lower-level vision-based controller. The needle tip is detected in image space, and the obstacle region is extracted by image segmentation. A neighborhood-sampling method, which includes a neural-network-based attention module, is designed to perceive potential collisions of the needle components. A neural-network-based policy module infers the desired motion in image space from the neighborhood-sampling result and the goal and current positions of the needle tip. A refinement module is developed to further improve the performance of the policy module: since three-dimensional (3D) manipulation tasks typically use two cameras for image-based visual control, the relative movements in the two cameras’ views are refined by optimization under the epipolar constraint. Experiments are conducted to validate the effectiveness of the proposed methods.
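
The abstract does not spell out the epipolar-constrained refinement, so the following is a minimal sketch of how such an optimization could look, assuming a known fundamental matrix F between the two camera views. The function name refine_increments and the penalty weight are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_increments(p1, p2, d1_pred, d2_pred, F, weight=10.0):
    """Refine the policy's predicted 2D motion increments in two camera
    views so that the updated needle-tip positions approximately satisfy
    the epipolar constraint x2'^T F x1' = 0.

    p1, p2           : current tip pixel positions in view 1 / view 2, shape (2,)
    d1_pred, d2_pred : increments predicted by the policy module, shape (2,)
    F                : 3x3 fundamental matrix relating the two views
    weight           : penalty on the epipolar residual (hypothetical choice)
    """
    def residuals(d):
        d1, d2 = d[:2], d[2:]
        q1 = np.append(p1 + d1, 1.0)  # homogeneous updated position, view 1
        q2 = np.append(p2 + d2, 1.0)  # homogeneous updated position, view 2
        epi = q2 @ F @ q1             # epipolar residual for the updated pair
        # Stay close to the predicted increments while driving epi toward 0.
        return np.concatenate([d1 - d1_pred, d2 - d2_pred, [weight * epi]])

    d0 = np.concatenate([d1_pred, d2_pred])
    sol = least_squares(residuals, d0)
    return sol.x[:2], sol.x[2:]
```

Under this formulation the refinement is a trade-off: the quadratic terms keep the output near the learned policy’s prediction, while the weighted epipolar term enforces geometric consistency between the two views.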



Acknowledgements

This work was supported by the National Key Research and Development Program of China (2018AAA0103005), the National Natural Science Foundation of China (61873266), and the State Key Laboratory of Smart Manufacturing for Special Vehicles and Transmission System (GZ2019KF008).

Author information

Correspondence to Ying Li.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Li, Y., Qin, F., Du, S. et al. Vision-Based Imitation Learning of Needle Reaching Skill for Robotic Precision Manipulation. J Intell Robot Syst 101, 22 (2021). https://doi.org/10.1007/s10846-020-01290-1

