Abstract
Most existing visual servoing methods for robot manipulation require explicit system models and calibrated parameters, and therefore work only in structured environments. This paper presents a self-learning visual servoing approach for a robot manipulator operating in unstructured environments. A Gaussian-mapping likelihood is used within Bayesian stochastic state estimation (SSE) for robotic coordination control, in which a Monte Carlo sequential importance sampling (MCSIS) algorithm is developed to estimate the robot's visual-motor mapping. The Bayesian learning strategy restrains particle degeneracy, maintaining robust robot performance. In addition, the servoing controller is derived for robotic coordination directly from visual observations. The proposed visual servoing framework is applied to a manipulator in an eye-in-hand configuration without any system parameters. Simulation and experimental results consistently demonstrate that the proposed algorithm outperforms traditional visual servoing approaches.
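The MCSIS step described above is a particle filter: particles representing candidate mapping parameters are weighted by a Gaussian likelihood and resampled when the effective sample size drops, which is the standard guard against particle degeneracy. The following is a minimal, generic sketch of that idea for estimating a static parameter vector from noisy observations; all names, the uniform prior, and the observation model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def particle_estimate(observations, prior_lo, prior_hi,
                      obs_std=0.1, n_particles=1000, seed=0):
    """Sequential importance sampling with resampling (particle filter).

    Estimates a static parameter vector theta from noisy observations
    y_t ~ N(theta, obs_std^2 I), using a Gaussian likelihood for the
    weights and systematic resampling when the effective sample size
    falls, which restrains particle degeneracy. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    dim = len(prior_lo)
    # Draw initial particles from a uniform prior box.
    particles = rng.uniform(prior_lo, prior_hi, size=(n_particles, dim))
    weights = np.full(n_particles, 1.0 / n_particles)
    for y in observations:
        # Gaussian likelihood: reweight each particle by p(y | theta).
        sq_err = np.sum((particles - y) ** 2, axis=1)
        weights *= np.exp(-0.5 * sq_err / obs_std**2)
        weights /= weights.sum()
        # Resample when the effective sample size is low (degeneracy guard),
        # adding small jitter so resampled particles stay diverse.
        ess = 1.0 / np.sum(weights**2)
        if ess < n_particles / 2:
            idx = rng.choice(n_particles, size=n_particles, p=weights)
            particles = particles[idx] + rng.normal(0.0, 0.01,
                                                    (n_particles, dim))
            weights.fill(1.0 / n_particles)
    return weights @ particles  # posterior-mean estimate

# Example: recover theta = (0.3, -0.7) from 50 noisy observations.
rng = np.random.default_rng(1)
true_theta = np.array([0.3, -0.7])
obs = true_theta + rng.normal(0.0, 0.1, size=(50, 2))
est = particle_estimate(obs, [-1.0, -1.0], [1.0, 1.0])
```

In the paper's setting the hidden state would be the visual-motor mapping (e.g. an image-Jacobian parameterization) and the observations would be image features, but the weighting-and-resampling skeleton is the same.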
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China under Grant 61703356, in part by the Natural Science Foundation of Fujian Province under Grants 2018J05114 and 2020J01285, and in part by the Innovation Foundation of Xiamen under Grant 3502Z20206071.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Zhong, X., Shi, C., Lin, J., Zhao, J., Zhong, X. (2021). Self-learning Visual Servoing for Robot Manipulation in Unstructured Environments. In: Liu, XJ., Nie, Z., Yu, J., Xie, F., Song, R. (eds) Intelligent Robotics and Applications. ICIRA 2021. Lecture Notes in Computer Science(), vol 13014. Springer, Cham. https://doi.org/10.1007/978-3-030-89098-8_5
DOI: https://doi.org/10.1007/978-3-030-89098-8_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-89097-1
Online ISBN: 978-3-030-89098-8
eBook Packages: Computer Science; Computer Science (R0)