A nonparametric-learning visual servoing framework for robot manipulator in unstructured environments
Introduction
Current visual feedback schemes are mainly deployed in structured environments and require a calibration process based on a known target model; robotic autonomy is further limited by predefined code that assumes a particular kinematics model. In particular, rigid-soft integrated robots with varying stiffness are bound to invalidate such modeling methods. This raises new challenges for robot perceptual coordination in unstructured environments, and it remains a cutting-edge problem in the field of robotic visual servoing (VS) control [1], [2], [3], [4].
VS is a promising approach to controlling robots that interact with their working environments [5], [6], [7]; it normally deploys monocular or binocular cameras in eye-in-hand or eye-to-hand configurations. Among the various VS methods, position-based visual servoing (PBVS) and image-based visual servoing (IBVS) are the most popular and are widely deployed in manufacturing applications [8].
PBVS retrieves 3D pose information in Cartesian space based on known camera projection parameters; a geometric model of the target is used to estimate the target pose with respect to the robot [9]. PBVS constrains the robot motion toward the desired pose, which makes it well suited to industrial manipulators that accept only Cartesian pose commands. However, a PBVS system inevitably relies on hand-eye calibration and is therefore sensitive to calibration errors [10]; moreover, since the 2D image features are not controlled directly, they may disappear from the camera's field of view (FOV) [11].
In contrast, IBVS regulates the robot's dynamic behavior from its visual sensor alone: 2D image measurements are used to estimate the desired movement [12], [13], [14]. IBVS is better at keeping image features inside the FOV because the feature-point trajectories are controlled directly on the image plane. However, IBVS cannot constrain the robot's 3D Cartesian motion within its workspace, particularly when a large posture displacement is required. Hence, Redwan et al. proposed a modified IBVS method to improve its poor dynamic performance [15], Hajiloo et al. used robust model predictive control to handle the robot's constraints [16], and Eissa et al. applied optimization techniques to jointly minimize the end-effector trajectory in Cartesian space and the feature trajectories on the image plane [17]. These methods, however, still face robust camera calibration or depth estimation problems.
Clearly, system calibration parameters and depth information must be provided for the above VS methods, and such visual feedback schemes are mainly deployed in specific environments that require definitive target modeling. This may not be possible in many real-world applications with unknown system parameters and uncertain, dynamic changes [18], [19], [20].
This work therefore develops a new nonparametric visual servoing scheme, whose crucial problem is to adjust the mapping between the robot's visual and motor spaces online. For instance, the classical Broyden method and the family of Broyden updating formulas can be used to estimate the Jacobian mapping matrix [21], [22]. Azad et al. proposed a statistically robust M-estimator [23] that gradually improves the quality of the Jacobian estimate using visual-motor memory, without system model parameters. Kotsiopoulos proposed a method for local calculation of the image Jacobian through training, without depth estimation [24]. Li et al. developed an adaptive visual servoing scheme that drives a wheeled mobile robot to a desired pose [25], with the unknown depth information identified simultaneously. Qian and Su adopted the Kalman-Bucy filter (KBF) for Jacobian matrix estimation [26]. Such Kalman filtering (KF) based methods assume that the filtering parameters are known and that the constructed system with the observed states is static, an assumption unsuitable for unknown dynamic environments. Thus, Lv and Huang investigated KF in a state-space model with variable noise parameters [27], and Janabi-Sharifi and Marey presented an iterative adaptive extended Kalman filter (EKF) that integrates noise adaptation and iterative measurement linearization for VS tasks [28]. Nevertheless, most existing methods are based on robot kinematics or dynamics modeling and are only suitable for structured environments.
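As a concrete illustration of the Broyden-style Jacobian estimation mentioned above (a minimal NumPy sketch of the classical rank-1 secant update, not this paper's MCSIS-based estimator; the function name is hypothetical):

```python
import numpy as np

def broyden_update(J, dq, ds):
    """Classical rank-1 Broyden update of an estimated image Jacobian.

    J  : (m, n) current estimate mapping motor displacements to feature motion
    dq : (n,)   observed motor-space displacement
    ds : (m,)   observed image-feature displacement
    """
    denom = dq @ dq
    if denom < 1e-12:                      # skip degenerate (near-zero) motions
        return J
    # Correct J so it exactly reproduces the newest visual-motor observation:
    # after the update, J_new @ dq == ds (the secant condition).
    return J + np.outer(ds - J @ dq, dq) / denom
```

Applied after every small motion, such an update can track a slowly varying mapping without camera calibration or depth information, which is the behavior the learning-based alternatives above try to make robust.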
In this paper, since the nonlinear mapping between the robot's visual and motor spaces is difficult to adjust online [29], [30], we treat it as a dynamic state identification problem that requires no hand-eye calibration parameters, no target depth, and no robot kinematics model. A nonparametric visual servoing system is then realized using Bayesian-based Monte Carlo sequential importance sampling (MCSIS) techniques; the new VS system endows a robot manipulator with the core capability of self-adaptive learning for operation in unstructured environments.
More specifically, the idea behind our nonparametric visual servoing is to treat the global visual-motor mapping as a system state estimation (SSE) problem in the Markov sense. The Bayesian-based MCSIS algorithm then estimates the system state by sampling particles with corresponding weights. Its accuracy relies on maintaining a good approximation to the state's posterior distribution, which normally demands a large number of particles to sample a wide state space sufficiently. In general, MCSIS is sensitive to the number of random particles, which makes robust robotic coordination difficult to guarantee in practice. Therefore, the particle count of MCSIS is constrained to a low number of hypotheses, and the resulting particle deterioration is remedied by incorporating a learning estimator into the MCSIS process. In this way, the estimator restores the MCSIS precision for SSE even when the sampling count remains low. The new nonparametric visual servoing control framework is then constructed around this learning-remedied MCSIS scheme.
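The prediction/updating recursion can be sketched as a generic sequential importance sampling step with resampling (a minimal NumPy illustration under Gaussian-noise assumptions, not the paper's full learning-remedied algorithm; all helper names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def sis_step(particles, weights, transition, likelihood, z):
    """One predict/update cycle of sequential importance sampling.

    particles : (N, d) state hypotheses
    weights   : (N,)   normalized importance weights
    z         :        current measurement
    """
    # Predict: propagate each hypothesis through the (noisy) state model
    particles = transition(particles)
    # Update: reweight each hypothesis by the measurement likelihood
    weights = weights * likelihood(particles, z)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses (particle degeneracy)
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```

The effective-sample-size test makes the degeneracy problem discussed above concrete: with few particles, n_eff drops quickly and resampling alone cannot recover lost state hypotheses, which is what motivates the learning remedy.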
In summary, unlike traditional PBVS and IBVS [8], the new VS framework is highly flexible for hand-eye feedback systems without parameters, and the method can be extended to other field robotic tasks, including future stiffness-varying robots. The paper makes the following contributions:
- (1)
A global mapping is defined between the robot's 2D visual and 3D motor spaces, and this mapping is treated as an SSE problem in the Markov sense. A Bayesian-based MCSIS prediction and updating recursion is used to solve the online mapping identification problem.
- (2)
A learning estimator is incorporated into MCSIS to restrain deteriorated particles and to preserve the robust stability of robot coordination at a low particle sampling rate, which also benefits the real-time performance of the system.
- (3)
A servoing controller is developed using the Lyapunov stability criterion. The new visual servoing framework is implemented on a 6-DOF robot with an eye-in-hand configuration, allowing the robot to adapt its motion to image feature changes without system calibration, kinematics modeling, or depth information.
The rest of the paper is organized as follows. Section 2 outlines the background of visual servoing and the Bayesian-based MCSIS theory for SSE. Section 3 presents the mapping identification problem with the learning estimator and MCSIS techniques. Section 4 proposes the new nonparametric visual servoing framework based on learning-remedied MCSIS. Section 5 presents results demonstrating the feasibility and performance of the proposed approach. Finally, Section 6 gives a brief conclusion and future work.
Section snippets
Descriptions on visual servoing
Fig. 1 shows the experimental setup, in which a 6-DOF manipulator carries an on-board camera and the target is assumed stationary with respect to the robot base frame. The objective of VS is to derive the end-effector motion from a set of observed image features so as to minimize the feature error, defined as es(t) = S(t) − Sd, where S(t) is the observed feature vector and Sd the desired one.
Robust MCSIS with learning estimator
Given the high dimensionality of SSE, MCSIS faces a serious real-time performance challenge. Valid solutions include GPU-accelerated Bayesian filtering for 3D visual tracking [33], reducing the dimensionality of the problem [34], and minimizing the number of sampling particles to reduce the computational cost [35].
In this paper, however, constraining the particle count to low-level hypotheses inevitably brings another problem,
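One way to picture such a remedy (purely illustrative; this toy ridge regressor on recent residuals is an assumption of ours, not the paper's learning estimator) is to learn a correction to the low-particle MCSIS estimate from visual-motor memory:

```python
import numpy as np

class LearningRemedy:
    """Toy remedy: ridge regression on recent (input, residual) pairs,
    used to correct a state estimate produced with few particles."""

    def __init__(self, lam=1e-2, memory=50):
        self.X, self.Y = [], []          # sliding visual-motor memory
        self.lam, self.memory = lam, memory

    def record(self, x, residual):
        """Store one observed estimation residual for input x."""
        self.X.append(np.asarray(x, dtype=float))
        self.Y.append(np.asarray(residual, dtype=float))
        self.X, self.Y = self.X[-self.memory:], self.Y[-self.memory:]

    def correct(self, x, estimate):
        """Return the estimate plus the learned residual prediction."""
        if len(self.X) < 2:
            return estimate
        X, Y = np.stack(self.X), np.stack(self.Y)
        # Ridge solution W = (X^T X + lam I)^{-1} X^T Y
        W = np.linalg.solve(X.T @ X + self.lam * np.eye(X.shape[1]), X.T @ Y)
        return estimate + np.asarray(x, dtype=float) @ W
```

The point of any such estimator is that the systematic part of the low-particle error is learnable from memory, so accuracy can be recovered without raising the particle count.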
Nonparametric visual servoing control framework
Fig. 4 shows the framework of the proposed nonparametric VS method based on the learning-remedied MCSIS scheme, where the control law derives the robot motion Ue(t) from image feedback S(t). Since the desired feature Sd is constant for a fixed goal pose, the time derivative of the image error es(t) in (1) is
According to (2), and with the global mapping, we have
Substituting (21) into (20), we obtain
Results and discussions
Four feature points are used as the image features for testing the robot. The feature vector S(t) is obtained at each time instant through:
The desired feature vector Sd does not change over time and can therefore be calculated before the main control loop of the experiment.
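As an illustration (the helper name and stacking order are our assumptions), the four tracked pixel points can be stacked into S(t), and Sd computed the same way once from the goal image:

```python
import numpy as np

def feature_vector(points):
    """Stack four tracked image points (u_i, v_i) into S(t) in R^8."""
    pts = np.asarray(points, dtype=float)
    assert pts.shape == (4, 2), "expected four (u, v) feature points"
    return pts.reshape(-1)   # [u1, v1, u2, v2, u3, v3, u4, v4]
```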
The robot motion command is Ue(t) = [v(t), ω(t)]ᵀ, where v(t) and ω(t) denote the instantaneous linear and angular velocities, respectively.
The size of
Conclusion
Current visual servoing methods for robot manipulation depend on system calibration, target modeling, and robot kinematics or dynamics. They normally operate well in structured environments such as manufacturing, but cannot operate reliably in environments with dynamic parameters. In this paper, a Bayesian-based sequential importance sampling algorithm is proposed for the visual-motor space mapping and identification of robotic manipulators.
The proposed approach does not require
CRediT authorship contribution statement
Xungao Zhong: Conceptualization, Methodology, Software, Investigation, Writing - original draft. Xunyu Zhong: Validation, Formal analysis, Visualization, Software. Huosheng Hu: Writing - review & editing, Supervision. Xiafu Peng: Supervision, Data curation.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
References (36)
- Robotic assembly of smartphone back shells with eye-in-hand visual servoing, Robot. Comput. Integr. Manuf., 2018.
- An image-based trajectory planning approach for robust robot programming by demonstration, Robot. Auton. Syst., 2017.
- Robust Jacobian matrix estimation for image-based visual servoing, Robot. Comput. Integr. Manuf., 2011.
- Smart particle filtering for high-dimensional tracking, Comput. Vision Image Understanding, 2007.
- Learning potential functions from human demonstrations with encapsulated dynamic and compliant behaviors, Auton. Robots, 2017.
- Vision-based online learning kinematic control for soft robots using local Gaussian process regression, IEEE Robot. Autom. Lett., 2019.
- Adaptive fuzzy neural network control for a constrained robot using impedance learning, IEEE Trans. Neural Netw. Learn. Syst., 2018.
- Adaptive fuzzy control for coordinated multiple robots with constraint using impedance learning, IEEE Trans. Cybern., 2019.
- Robust precision manipulation with simple process models using visual servoing techniques with disturbance rejection, IEEE Trans. Autom. Sci. Eng., 2019.
- A multirobot cooperation framework for sewing personalized stent grafts, IEEE Trans. Ind. Inf., 2018.
- Admittance-based controller design for physical human-robot interaction in the constrained task space, IEEE Trans. Autom. Sci. Eng.
- Visual servo control. Part I: Basic approaches, IEEE Robot. Autom. Mag.
- Unfalsified visual servoing for simultaneous object recognition and pose tracking, IEEE Trans. Cybern.
- Novel position-based visual servoing approach to robust global stability under field-of-view constraint, IEEE Trans. Ind. Electron.
- Comparison of basic visual servoing methods, IEEE/ASME Trans. Mechatron.
- Image-based visual servoing using an optimized trajectory planning technique, IEEE/ASME Trans. Mechatron.
- Dynamic visual servoing from sequential regions of interest acquisition, Int. J. Robot. Res.
- Robust online model predictive control for a constrained image-based visual servoing, IEEE Trans. Ind. Electron.
Cited by (5)
- Investigation of Multi-Stage Visual Servoing in the context of autonomous assembly, Measurement: Journal of the International Measurement Confederation, 2024.
- Design of thick panels origami-inspired flexible grasper with anti-interference ability, Mechanism and Machine Theory, 2023.
- Investigation of IBVS control method utilizing vanishing vector subject to spatial constraint, Measurement: Journal of the International Measurement Confederation, 2023.
- MCS: a metric confidence selection framework for few shot image classification, Multimedia Tools and Applications, 2024.
Xungao Zhong received the B.E. degree in electronic information engineering from Nanchang University, Nanchang, China, in 2007, the M.S. degree in electromechanical engineering from Guangdong University of Technology, Guangzhou, China, in 2011 and the Ph.D. degree in control theory and control engineering from Xiamen University, in 2014. He is currently an associate Professor with the School of Electrical Engineering and Automation, Xiamen University of Technology at Xiamen, China. His current research interests include machine learning, robotic visual servoing and application. He is selected as distinguished young scientific research talent of Fujian province, China, in 2018.
Xunyu Zhong received the M.E. degree in mechatronics engineering from Harbin Engineering University, Harbin, China, in 2007, and the Ph.D. degree in control theory and control engineering from Harbin Engineering University, in 2009. He is currently an associate Professor with the Department of Automation, Xiamen University, Xiamen, China. He was an academic visitor at the School of Computer Science and Electronic Engineering, University of Essex, U.K., for one year from Sept. 2017. His current research interests include robot motion planning, visual servoing, and autonomous robots.
Huosheng Hu received the M.Sc. degree in industrial automation from Central South University, Changsha, China, in 1982, and the Ph.D. degree in robotics from the University of Oxford, Oxford, U.K., in 1993. Currently, he is a Professor with the School of Computer Science and Electronic Engineering, University of Essex, Colchester, U.K., leading the Robotics Group. He has authored over 500 research articles published in journals, books, and conference proceedings. His research interests include autonomous robots, human–robot interaction, multi-robot collaboration, embedded systems, pervasive computing, sensor integration, intelligent control, cognitive robotics, and networked robots. Prof. Hu is Fellow of the Institute of Engineering and Technology, Fellow of the Institution of Measurement and Control, and a Chartered Engineer in the U.K. He currently serves as Editor-in-Chief for the International Journal of Automation and Computing, Editor-in-Chief of MDPI Robotics Journal, and an Executive Editor for the International Journal of Mechatronics and Automation.
Xiafu Peng received the M.S. and Ph.D. degrees in control science from the Harbin Engineering University, in 1994 and 2001, respectively. He is currently a Professor with the Department of Automation, Xiamen University at Xiamen. His current research interests include the navigation and motion control of robots. Prof. Peng is a Fellow of the Fujian Association for the advancement of Automation and Power, and a Senior Member of the Chinese Institute of Electronics. He is the recipient of the provincial/ministerial Scientific and Technological Progress Award.