Elsevier

Neurocomputing

Volume 437, 21 May 2021, Pages 206-217

A nonparametric-learning visual servoing framework for robot manipulator in unstructured environments

https://doi.org/10.1016/j.neucom.2021.01.029

Highlights

  • A visual servoing framework is constructed for robots in unstructured environments.

  • A recursive MCSIS prediction-updating process solves the mapping identification problem.

  • A learning-based remedy method improves the robust stability of robotic coordination.

  • The proposed approach allows the robot to adapt its motion without system calibration.

Abstract

Current visual servoing methods used in robot manipulation require system modeling and parameters, and therefore only work in structured environments. This paper presents a nonparametric visual servoing framework for a robot manipulator operating in unstructured environments. A Gaussian-mapping likelihood process is used in Bayesian stochastic state estimation (SSE) for robotic coordination control, in which the Monte Carlo sequential importance sampling (MCSIS) algorithm and a learning-based remedy method are created for estimating the robot's visual-motor mapping. The self-learning strategy remedies particle deterioration so as to maintain robust performance at a low particle-sampling rate, rather than relying, as standard MCSIS does, on enlarging the sampling variance to cover the whole state distribution. Additionally, a servoing controller is deduced for robotic coordination directly from visual observation. The stability of the proposed framework is established by Lyapunov theory, and the framework is applied to a manipulator with an eye-in-hand configuration and no system parameters. Finally, simulation and experimental results consistently demonstrate that the proposed learning-remedied algorithm outperforms traditional visual servoing approaches.

Introduction

Current visual feedback schemes are mainly deployed in structured environments and require a calibration process based on a known target model; robotic autonomy is limited by predefined coding against a fixed kinematics model. In particular, rigid-soft integrated robots with varying stiffness are bound to make such modeling methods invalid. This brings new challenges for robot perceptual coordination in unstructured environments, a cutting-edge problem in the field of robotic visual servoing (VS) control [1], [2], [3], [4].

VS is a promising solution for controlling robots that interact with their working environments [5], [6], [7]; it normally deploys monocular or binocular cameras in eye-in-hand or eye-to-hand configurations. Among the various VS methods, position-based visual servoing (PBVS) and image-based visual servoing (IBVS) are the most popular and are widely deployed in manufacturing applications [8].

PBVS retrieves 3D pose information in Cartesian space based on known camera projection parameters. A geometric model of the target is used to estimate the target pose with respect to the robot [9]. PBVS constrains the robot motion towards the desired pose, making it more suitable for industrial manipulators that accept only pose commands in Cartesian space. However, a PBVS system is inevitably tied to hand-eye calibration. In consequence, it is sensitive to calibration errors [10], and the 2D image features, which are not directly controlled, may disappear from the camera's field-of-view (FOV) [11].

In contrast, IBVS regulates the dynamic behavior of the robot through its visual sensor, with 2D image measurements used to estimate the desired movement [12], [13], [14]. IBVS is better suited to keeping image features within the FOV, since the trajectories of the feature points are controlled directly on the image plane. However, IBVS cannot keep the robot's 3D Cartesian movement inside its workspace, particularly when a large posture-coordination displacement is required. Hence, Redwan et al. proposed a modified IBVS method to improve its poor dynamic performance [15], and Hajiloo et al. used robust model predictive control to handle the robot's constraints [16]. Eissa et al. deployed optimization techniques to minimize both the end-effector trajectory in Cartesian space and the feature trajectories on the image plane [17]. These methods, however, still face camera calibration or depth-information problems.

Clearly, system calibration parameters and depth information must be provided for the above VS methods, and such visual feedback schemes are mainly deployed in specific environments that require definitive target modeling. This may not be possible for many real-world applications with unknown system parameters and uncertain, dynamic changes [18], [19], [20].

Thus, this work develops a new nonparametric visual servoing approach in which the crucial problem is to adjust online the mapping between the robot's visual and motor spaces. For instance, the classical Broyden-based method and the family of Broyden updating formulas can be used to estimate the Jacobian mapping matrix [21], [22]. Azad et al. proposed a statistically robust M-estimator in [23] to gradually increase the quality of the Jacobian estimate using visual-motor memory, without system model parameters. Kotsiopoulos proposed a method for local calculation of the image Jacobian through training, without the need for depth estimation [24]. Li et al. developed an adaptive visual servoing scheme to drive a wheeled mobile robot to a desired pose [25], wherein the unknown depth information is identified simultaneously. Qian and Su adopted the Kalman-Bucy filter (KBF) for Jacobian matrix estimation [26]. Such Kalman filtering (KF) based methods assume that the filtering parameters are known and that the constructed system with the observed states is static; this assumption is unsuitable for unknown dynamic environments. Thus, Lv and Huang investigated the application of KF to state-space models with variable noise parameters [27]. Janabi-Sharifi and Marey presented an iterative adaptive extended Kalman filter (EKF) integrating noise adaptation and iterative-measurement linearization for VS tasks [28]. Still, the existing methods are mostly based on robot kinematics or dynamics modeling and are only suitable for structured environments.
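As context for the Broyden-style estimators cited above, a rank-1 Broyden update of the visual-motor Jacobian can be sketched as follows. This is a generic illustration of the technique, not the paper's method; the function name and the numeric values are hypothetical.

```python
import numpy as np

def broyden_update(J, dq, ds, eps=1e-9):
    """Rank-1 Broyden update of the visual-motor Jacobian estimate J.

    J  : (2n, m) current Jacobian estimate (image features w.r.t. motor space)
    dq : (m,)    motor-space displacement since the last image
    ds : (2n,)   observed image-feature displacement
    """
    denom = float(dq @ dq)
    if denom < eps:  # ignore negligibly small motions
        return J
    # Correct J so that the updated model exactly reproduces the observed ds
    return J + np.outer(ds - J @ dq, dq) / denom

# Usage: refine an initial guess from two consecutive observations
J = np.zeros((8, 6))                        # 4 feature points, 6-DOF motion
dq = np.array([0.01, 0.0, 0.0, 0.0, 0.0, 0.0])
ds = np.full(8, 0.002)                      # measured pixel shifts
J = broyden_update(J, dq, ds)               # now J @ dq reproduces ds
```

The update needs no camera calibration or depth information, which is precisely why such estimators suit the uncalibrated setting discussed here; its weakness is that each correction is only local to the most recent motion direction.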

In this paper, since the nonlinear mapping between the robot's visual and motor spaces is difficult to adjust online [29], [30], we treat it as a dynamic state identification problem without hand-eye calibration parameters, target depth, or robot kinematics modeling. Moreover, a nonparametric visual servoing system is realized by employing Bayesian-based MCSIS techniques; the new VS system endows the robot manipulator with a kernel capability of self-adaptive learning for operation in unstructured environments.

More specifically, the idea behind our nonparametric visual servoing is to treat the visual-motor global mapping as SSE in the Markov sense. The Bayesian-based MCSIS algorithm is then presented to estimate the system state by sampling particles with corresponding weights. Its performance relies heavily on maintaining a good approximation to the state's posterior distribution, and a large number of particles is required to guarantee sufficient sampling of a wide state space. In general, MCSIS is sensitive to the number of random particles, making it difficult to ensure robust robotic coordination in practice. Therefore, the particle count of MCSIS is constrained to low-level hypotheses, and the resulting particle deterioration is remedied by incorporating a learning estimator into the MCSIS process. In this way, the estimator restores MCSIS precision for SSE even when the particle sampling count remains low. The new nonparametric visual servoing control framework is then constructed on this learning-remedied MCSIS scheme.
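The MCSIS recursion described above follows the generic sequential-importance-sampling pattern: propagate each particle, weight it by a Gaussian-mapping likelihood of the observation, and resample when the effective sample size collapses. The sketch below shows that generic pattern only, not the paper's learning-remedied variant; all names, noise levels, and the toy scalar example are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sis_step(particles, weights, observe, obs, proc_std=0.05, obs_std=0.1):
    """One sequential-importance-sampling step with optional resampling.

    particles : (N, d) state hypotheses;  weights : (N,) importance weights.
    observe   : maps one state hypothesis to its predicted observation.
    obs       : the actual measurement (e.g. an image-feature vector).
    """
    # Predict: propagate every hypothesis through a random-walk model
    particles = particles + rng.normal(0.0, proc_std, particles.shape)
    # Update: Gaussian likelihood of the measurement given each hypothesis
    err = np.array([np.linalg.norm(observe(p) - obs) for p in particles])
    weights = weights * np.exp(-0.5 * (err / obs_std) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses (particle deterioration)
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# Toy usage: infer a scalar state observed directly with noise
particles = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
weights = np.full(200, 1.0 / 200)
for _ in range(10):
    particles, weights = sis_step(particles, weights, lambda p: p, np.array([1.0]))
estimate = float((weights[:, None] * particles).sum())
```

The resampling trigger is exactly the failure mode the paper targets: with few particles the effective sample size collapses quickly, which is why the authors add a learning estimator instead of simply enlarging the sampling variance.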

In summary, differing from traditional PBVS and IBVS [8], our new VS framework is a flexible hand-eye feedback system requiring no parameters. The method can be extended to other field robotic tasks, including future stiffness-varying robots. The paper makes the following contributions:

  • (1)

    The global mapping is defined between the robot's 2D visual and 3D motor spaces, and the mapping is treated as SSE in the Markov sense. A Bayesian-based MCSIS prediction and updating recursive process is utilized to solve the online mapping-identification problem.

  • (2)

    A learning estimator is incorporated into MCSIS to restrain the deteriorated particles and to remedy the robust stability of robot coordination at a low particle-sampling rate. This also benefits the real-time performance of the system.

  • (3)

    The servoing controller is developed using the Lyapunov stability criterion. The new visual servoing framework is implemented on a 6-DOF robot with an eye-in-hand configuration, allowing the robot to adapt its motion to image-feature changes without system calibration, kinematics modeling, or depth information.

The rest of the paper is organized as follows. Section 2 outlines the background of visual servoing and the Bayesian-based MCSIS theory for SSE. Section 3 presents the mapping identification problem with the learning estimator and MCSIS techniques. Section 4 proposes the new nonparametric visual servoing framework based on learning-remedied MCSIS. Section 5 presents results showing the feasibility and performance of the proposed approach. Finally, a brief conclusion and future work are given in Section 6.

Section snippets

Descriptions on visual servoing

Fig. 1 shows the facilities, in which the 6-DOF manipulator is mounted with an on-board camera and the target is assumed to be stationary with respect to the base frame of the robot. The objective of VS is to derive the end-effector motion from a set of observed image features so as to minimize the error, defined as:

es(t) = St − Sd = [s1(t) − s1d, s2(t) − s2d, ..., sn(t) − snd]^T ∈ R^(2n×1)     (1)

where St = [s1(t), ..., sn(t)]^T ∈ R^(2n×1) and Sd = [s1d, ..., snd]^T ∈ R^(2n×1); the elements si(t) = [ui(t), vi(t)]^T and sid = [uid, vid]^T are the
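The stacked error of equation (1) is a direct subtraction of feature vectors, as the following minimal sketch shows; the pixel coordinates and the helper name are hypothetical, chosen only to mirror the four-point pattern used later in the experiments.

```python
import numpy as np

def feature_error(S, S_d):
    """Stacked image-feature error e_s(t) = S(t) - S_d, shape (2n,)."""
    return np.asarray(S, dtype=float) - np.asarray(S_d, dtype=float)

# Four feature points (u_i, v_i) stacked into R^(8x1); pixel values made up
S   = np.array([100, 120, 200, 120, 200, 220, 100, 220], dtype=float)
S_d = np.array([110, 110, 210, 110, 210, 210, 110, 210], dtype=float)
e_s = feature_error(S, S_d)   # the servo loop drives this toward zero
```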

Robust MCSIS with learning estimator

Given the high dimensionality of SSE, MCSIS faces a serious real-time performance challenge. There are several valid ways to handle it, such as using a GPU-accelerated Bayesian method for 3D visual tracking [33], reducing the dimensionality of the problem [34], and minimizing the number of sampling particles to cut the computational cost [35].

In this paper, the constrained particle count with low-level hypotheses inevitably brings another problem,

Nonparametric visual servoing control framework

Fig. 4 shows the framework of the proposed nonparametric VS method, which is based on the learning-remedied MCSIS scheme; the control law derives the robot motion Ue(t) from the image feedback S(t). Since the desired feature Sd is constant due to the fixed goal pose, the time derivative of the image error es(t) in (1) is

ės(t) = d/dt (S(t) − Sd) = Ṡ(t)     (20)

According to (2) and the global mapping, we have

Ṡ(t) = G(t)Ue(t)     (21)

Substituting (21) into (20), we have

ės(t) = G(t)Ue(t)
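Given ės(t) = G(t)Ue(t), a common way to obtain a velocity command is the pseudo-inverse law Ue = −λ G⁺ es, which makes the error decay exponentially within the range of G. Note that this specific law and gain λ are an assumption for illustration; the paper derives its own controller via Lyapunov analysis.

```python
import numpy as np

def servo_velocity(G, e_s, lam=0.5):
    """Assumed pseudo-inverse law U_e = -lam * pinv(G) @ e_s; with
    de_s/dt = G @ U_e, this decays the error exponentially in range(G)."""
    return -lam * np.linalg.pinv(G) @ e_s

# Toy check with a full-column-rank mapping (8 image features, 6-DOF motion)
rng = np.random.default_rng(1)
G = rng.standard_normal((8, 6))
e_s = G @ rng.standard_normal(6)       # an error lying in range(G)
U_e = servo_velocity(G, e_s, lam=0.5)  # then G @ U_e == -0.5 * e_s
```

Because G here has more rows (feature coordinates) than columns (degrees of freedom), only the error component in the range of G can be cancelled; this is the usual least-squares behavior of the pseudo-inverse.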

Results and discussions

The image pattern contains four feature points used for testing the robot. The feature vector S(t) is obtained at each time instant as:

S(t) = [u1, v1, u2, v2, u3, v3, u4, v4]^T ∈ R^(8×1)

The desired features vector Sd does not change over time, and therefore can be calculated before the main control loop of the experiment.

The robot motion command is Ue(t) = [ve(t), we(t)]^T ∈ R^(6×1), where ve(t) = [vx(t), vy(t), vz(t)] and we(t) = [wx(t), wy(t), wz(t)] denote the instantaneous linear and angular velocities, respectively.

The size of

Conclusion

Current visual servoing methods used in robot manipulation depend on system calibration, target modeling, and robot kinematics or dynamics. They normally work well in structured environments such as manufacturing, but cannot operate reliably in environments with dynamically changing parameters. In this paper, a Bayesian sequential importance sampling algorithm is proposed for visual-motor space mapping and identification for robotic manipulators.

The proposed approach does not require

CRediT authorship contribution statement

Xungao Zhong: Conceptualization, Methodology, Software, Investigation, Writing - original draft. Xunyu Zhong: Validation, Formal analysis, Visualization, Software. Huosheng Hu: Writing - review & editing, Supervision. Xiafu Peng: Supervision, Data curation.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.


References (36)

  • W. He et al., Admittance-based controller design for physical human-robot interaction in the constrained task space, IEEE Trans. Autom. Sci. Eng. (2020)

  • F. Chaumette et al., Visual servo control. Part I: Basic approaches, IEEE Robot. Autom. Mag. (2006)

  • P. Jiang et al., Unfalsified visual servoing for simultaneous object recognition and pose tracking, IEEE Trans. Cybern. (2016)

  • P. Do-Hwan et al., Novel position-based visual servoing approach to robust global stability under field-of-view constraint, IEEE Trans. Ind. Electr. (2012)

  • J.S. Farrokh et al., Comparison of basic visual servoing methods, IEEE/ASME Trans. Mechatron. (2011)

  • M. Keshmiri et al., Image-based visual servoing using an optimized trajectory planning technique, IEEE/ASME Trans. Mechatron. (2017)

  • D. Redwan et al., Dynamic visual servoing from sequential regions of interest acquisition, Int. J. Robot. Res. (2012)

  • A. Hajiloo et al., Robust online model predictive control for a constrained image-based visual servoing, IEEE Trans. Ind. Electr. (2016)

    Xungao Zhong received the B.E. degree in electronic information engineering from Nanchang University, Nanchang, China, in 2007, the M.S. degree in electromechanical engineering from Guangdong University of Technology, Guangzhou, China, in 2011 and the Ph.D. degree in control theory and control engineering from Xiamen University, in 2014. He is currently an associate Professor with the School of Electrical Engineering and Automation, Xiamen University of Technology at Xiamen, China. His current research interests include machine learning, robotic visual servoing and application. He is selected as distinguished young scientific research talent of Fujian province, China, in 2018.

    Xunyu Zhong received the M.E. degree in mechatronics engineering from Harbin Engineering University, Harbin, China, in 2007, and the Ph.D. degree in control theory and control engineering from Harbin Engineering University, in 2009. He is currently an associate Professor with the Department of Automation, Xiamen University, Xiamen, China. He is an academic visitor of the School of Computer Science and Electronic Engineering, University of Essex, U.K., for one year from Sept. 2017. His current research interests include robot motion planning, visual servo and autonomous robots.

    Huosheng Hu received the M.Sc. degree in industrial automation from Central South University, Changsha, China, in 1982, and the Ph.D. degree in robotics from the University of Oxford, Oxford, U.K., in 1993. Currently, he is a Professor with the School of Computer Science and Electronic Engineering, University of Essex, Colchester, U.K., leading the Robotics Group. He has authored over 500 research articles published in journals, books, and conference proceedings. His research interests include autonomous robots, human–robot interaction, multi-robot collaboration, embedded systems, pervasive computing, sensor integration, intelligent control, cognitive robotics, and networked robots. Prof. Hu is Fellow of the Institute of Engineering and Technology, Fellow of the Institution of Measurement and Control, and a Chartered Engineer in the U.K. He currently serves as Editor-in-Chief for the International Journal of Automation and Computing, Editor-in-Chief of MDPI Robotics Journal, and an Executive Editor for the International Journal of Mechatronics and Automation.

    Xiafu Peng received the M.S. and Ph.D. degrees in control science from the Harbin Engineering University, in 1994 and 2001, respectively. He is currently a Professor with the Department of Automation, Xiamen University at Xiamen. His current research interests include the navigation and motion control of robots. Prof. Peng is a Fellow of the Fujian Association for the advancement of Automation and Power, and a Senior Member of the Chinese Institute of Electronics. He is the recipient of the provincial/ministerial Scientific and Technological Progress Award.
