
Self-learning Visual Servoing for Robot Manipulation in Unstructured Environments

  • Conference paper
  • Published in: Intelligent Robotics and Applications (ICIRA 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13014)

Abstract

Current visual servoing methods for robot manipulation require system modeling and known parameters, and therefore work only in structured environments. This paper presents a self-learning visual servoing method for a robot manipulator operating in unstructured environments. A Gaussian-mapping likelihood process is used in Bayesian stochastic state estimation (SSE) for robotic coordination control, in which a Monte Carlo sequential importance sampling (MCSIS) algorithm is developed to estimate the robot's visual-motor mapping. The described Bayesian learning strategy restrains particle deterioration, which keeps the robot's performance robust. Additionally, a servoing controller is derived for robotic coordination directly from visual observation. The proposed visual servoing framework is applied to a manipulator with an eye-in-hand configuration and requires no system parameters. Finally, simulation and experimental results consistently demonstrate that the proposed algorithm outperforms traditional visual servoing approaches.
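The MCSIS estimator described in the abstract belongs to the family of sequential-importance-sampling particle filters with a Gaussian likelihood and resampling to counter weight degeneracy ("particle deterioration"). The sketch below illustrates that general family, not the authors' exact formulation: the scalar visual-motor mapping, the random-walk drift, and all noise parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption (not from the paper): the unknown visual-motor
# mapping is a scalar gain true_j, observed through noisy image-feature
# changes ds = true_j * dq for commanded joint increments dq.
true_j = 2.5
n_particles = 500
obs_std = 0.05    # std of the Gaussian observation (likelihood) model
drift_std = 0.01  # random-walk drift applied to particles each step

particles = rng.uniform(0.0, 5.0, n_particles)  # prior over the mapping
weights = np.full(n_particles, 1.0 / n_particles)

def effective_sample_size(w):
    """N_eff = 1 / sum(w_i^2); a low value signals weight degeneracy."""
    return 1.0 / np.sum(w ** 2)

for step in range(50):
    dq = rng.uniform(-1.0, 1.0)                  # commanded joint increment
    ds = true_j * dq + rng.normal(0.0, obs_std)  # observed feature change

    # Propagate: small random-walk drift keeps the particle set diverse.
    particles += rng.normal(0.0, drift_std, n_particles)

    # Gaussian-mapping likelihood: weight each particle by how well it
    # predicts the observed feature change.
    residual = ds - particles * dq
    weights *= np.exp(-0.5 * (residual / obs_std) ** 2)
    weights /= weights.sum()

    # Resample when the effective sample size collapses -- the standard
    # remedy for particle deterioration in sequential importance sampling.
    if effective_sample_size(weights) < n_particles / 2:
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

estimate = float(np.sum(weights * particles))
print(estimate)
```

In a full eye-in-hand servoing loop, the estimated mapping would in turn drive the control increment toward the desired image features; here the loop is reduced to the estimation step only.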



Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant No. 61703356, in part by the Natural Science Foundation of Fujian Province under Grants No. 2018J05114 and No. 2020J01285, and in part by the Innovation Foundation of Xiamen under Grant No. 3502Z20206071.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhong, X., Shi, C., Lin, J., Zhao, J., Zhong, X. (2021). Self-learning Visual Servoing for Robot Manipulation in Unstructured Environments. In: Liu, XJ., Nie, Z., Yu, J., Xie, F., Song, R. (eds) Intelligent Robotics and Applications. ICIRA 2021. Lecture Notes in Computer Science, vol 13014. Springer, Cham. https://doi.org/10.1007/978-3-030-89098-8_5


  • DOI: https://doi.org/10.1007/978-3-030-89098-8_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-89097-1

  • Online ISBN: 978-3-030-89098-8

  • eBook Packages: Computer Science (R0)
