A new image-based visual servoing method with velocity direction control
Introduction
Nowadays, robot vision is involved in many aspects of daily life. Visual servoing (VS) is a motion control process that uses visual measurements to move image features to desired positions [1], and it can be integrated with many disciplines, such as optimization methods [2], sliding mode [3], fuzzy control [4], robust control [5], switching control [6], adaptive control [7], [8], neural networks [9], [10], [11], reinforcement learning [12] and so on. Generally, VS is divided into position-based visual servoing (PBVS), image-based visual servoing (IBVS) and hybrid visual servoing. Each method has advantages and disadvantages [13], [14].
PBVS needs the geometric shape of the target, the intrinsic parameters of the camera and the observed image-plane features [15]. From these, the pose of the target relative to the camera is estimated. The main task of PBVS is to reduce the difference between the current pose and the desired one, which allows the camera to execute a straight-line motion in the Cartesian coordinate frame. Consequently, the calibration accuracy of the target's geometric model and the camera model seriously affects and restricts the performance of PBVS [16]. Moreover, since PBVS is controlled in three-dimensional space and requires complex calculation, it is difficult to obtain satisfactory image-plane trajectories, and the image features may leave the field of view (FOV).
IBVS is fundamentally different from PBVS in that it does not estimate the relative pose of the target, which is instead implied in the image features; IBVS controls the features directly in the image plane [17]. It has been widely used in industrial fields because of its robustness, for example in unmanned systems [18], aircraft [19] and fault diagnosis [20]. However, in IBVS the features may leave the FOV, and the method can produce trajectories with redundant motion that exceed the robot's motion limits, especially when a large rotation around the camera's optical axis is required. To address this, a trajectory planning method was developed in [21] to eliminate the limitation imposed by the robot's FOV, and a low-complexity IBVS control scheme that rigorously respects the FOV constraints was derived in [22]. IBVS requires only one piece of 3D information, namely the depth of the feature points, and it is robust to depth-estimation bias and to calibration errors in the model parameters. Therefore, in recent decades many different control methods have been combined with IBVS. The proportional controller is the most basic method, driving the error to zero exponentially, and numerically elegant extensions have been developed over the past decades. A partially decoupled control law, designed separately for rotation and translation, can be found in [23]. In [24], [25], an online IBVS controller based on robust model predictive control (RMPC) is proposed, and another method, augmented IBVS (AIBVS) [26], uses acceleration as the command. In general, improvements to monocular VS control are based either on exploring new visual features or on designing new controllers.
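The classical proportional IBVS law mentioned above can be sketched as follows. This is a minimal illustration, not the controller proposed in this paper: the feature coordinates, depths and gain below are hypothetical, and the interaction matrix is the standard one for a normalized image point.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    # Image Jacobian (interaction matrix) for one normalized image
    # point (x, y) observed at depth Z.
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    # Classical proportional law: v = -lam * L^+ * e, where L stacks
    # the interaction matrices of all points and e is the feature error.
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -lam * np.linalg.pinv(L) @ e
```

With this law the feature error decreases exponentially, but nothing constrains the direction of the resulting camera velocity, which is exactly the redundant-motion issue discussed above.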
Beyond novel control laws, many researchers have sought other suitable image features to enhance tracking performance, including pixel luminance [27], points in a new virtual visual space [28] and image moments [29]. Nonetheless, a more flexible controller is still desirable.
Hybrid visual servoing, also called 2-1/2-D visual servoing, combines PBVS and IBVS in order to avoid the shortcomings of both. By decomposing the homography matrix, the translational and rotational motions of the robot's end-effector are controlled separately in [30], so as to simultaneously optimize the motion trajectories in the image plane and in Cartesian space. Another scheme in [31] is a modified optimization-based controller that includes FOV and actuator constraints. However, hybrid visual servoing has poor robustness and is especially sensitive to image noise. Moreover, it must compute and decompose the homography matrix at every step of the control process, which increases the computational burden and reduces the real-time performance of the system.
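The homography used in 2-1/2-D schemes relates the projections of a planar target between two camera views. As a small sketch (with purely illustrative rotation, translation and plane parameters), the Euclidean homography H = R + t nᵀ/d maps the normalized projection of a plane point in the first view to its projection in the second:

```python
import numpy as np

def planar_homography(R, t, n, d):
    # Euclidean homography for a plane with normal n at distance d
    # from the first camera: H = R + (t n^T) / d.
    return R + np.outer(t, n) / d

# Illustrative pose: small rotation about the optical axis plus a translation.
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.05, 0.0, 0.02])
n = np.array([0.0, 0.0, 1.0])   # plane normal in the first camera frame
d = 1.0                          # distance from the first camera to the plane

H = planar_homography(R, t, n, d)

# A point on the plane, seen in the first view (normalized coordinates),
# maps through H to its projection in the second view.
P1 = np.array([0.2, -0.1, 1.0])   # lies on the plane n^T P = d
p1 = P1 / P1[2]
P2 = R @ P1 + t                   # the same point in the second camera frame
p2_h = H @ p1
assert np.allclose(P2 / P2[2], p2_h / p2_h[2])
```

Recovering R and t back from an estimated H (the decomposition step criticized above for its computational cost and noise sensitivity) requires a dedicated decomposition algorithm and is not shown here.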
In this paper, a new IBVS control method with velocity direction control is proposed. A multi-objective optimization framework, different from that of [1], is designed to solve for the optimal parameters, minimizing the gaps between the velocity direction and its boundaries. Compared with the classical control methods, it achieves equally good performance for long-distance translational motion. For rotation tasks with large angles, it effectively avoids redundant motion and completes the task with trajectories closer to straight lines. A stability proof of the controller is given in the paper. Finally, experimental results demonstrate the effectiveness of the proposed method.
The remainder of the paper is organized as follows. Section II presents the details of IBVS and the system description. Section III contains the core contribution, in which the novel control law based on velocity direction control for the IBVS system is derived. In Section IV, several experimental results are presented to verify the effectiveness of the proposed control strategy. Finally, conclusions are given in the last section.
IBVS
The formation of images in VS follows the central-projection model, which is similar to a pinhole camera [15]. For any point in the world coordinate frame, after transformation into the camera frame it has coordinates P = (X, Y, Z); its coordinates in the image plane are x = fX/Z and y = fY/Z, and its pixel coordinates are u = x/ρu + u0 and v = y/ρv + v0, where f is the focal length, ρu and ρv are the length and width of each pixel, respectively, and (u0, v0) is the principal point of the image plane.
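The two projection steps can be written out directly. The focal length, pixel dimensions and principal point below are illustrative values, not the calibration of the camera used in the experiments:

```python
def project(P, f, rho_u, rho_v, u0, v0):
    # Central-projection model: camera-frame point -> image plane -> pixels.
    X, Y, Z = P
    x = f * X / Z            # image-plane coordinates: x = f X / Z
    y = f * Y / Z            #                          y = f Y / Z
    u = x / rho_u + u0       # pixel coordinates: u = x / rho_u + u0
    v = y / rho_v + v0       #                    v = y / rho_v + v0
    return (x, y), (u, v)
```

By construction, a point on the optical axis (X = Y = 0) projects exactly to the principal point (u0, v0).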
By taking the
Classical method with optimization framework
As mentioned in [1] and [2], the control law in Eq. (9) can be recast as an optimization problem by solving a least-squares multi-objective function, such as a weighted sum of squared task errors, where w1 ≥ 0 and w2 ≥ 0 are the adjustable weights of the tasks. In this way, the computation of the controller is transformed into an unconstrained optimization, similar in form to Eq. (10), to which further tasks can conveniently be added.
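One common instantiation of such a weighted least-squares multi-task objective is min over v of Σᵢ wᵢ ‖Lᵢ v + λ eᵢ‖², whose single-task solution reduces to the proportional law v = -λ L⁺ e. The following sketch assumes this form (the exact objective of Eqs. (9)-(10) is not reproduced in this excerpt) and solves it by stacking the weighted tasks into one linear least-squares problem:

```python
import numpy as np

def multiobjective_velocity(tasks, weights, lam=0.5):
    # Solve  min_v  sum_i w_i * || L_i v + lam * e_i ||^2
    # by stacking sqrt(w_i) * L_i and sqrt(w_i) * (-lam * e_i)
    # into a single least-squares system A v = b.
    A = np.vstack([np.sqrt(w) * L for (L, e), w in zip(tasks, weights)])
    b = np.concatenate([np.sqrt(w) * (-lam * e)
                        for (L, e), w in zip(tasks, weights)])
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v
```

The appeal of this formulation, as the text notes, is extensibility: adding another task is just another (Lᵢ, eᵢ, wᵢ) triple stacked into the same system.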
In [1], three feature points are used to verify the control strategy. A
Experimental Results
In this section, several experiments are shown, and the corresponding results verify the effectiveness and practicability of the proposed method compared with the traditional IBVS controller.
The experiments are done on a 6-DOF manipulator system, whose experimental setup consists of two subsystems. One is a VP-6242M Denso robot and the other is a Logitech C310 web camera mounted on the end-effector of the robot, with its resolution set to 374 × 240, as seen in Fig. 2. The camera works at a sampling
Conclusion
Throughout the paper, a new control scheme for IBVS has been proposed, aiming to limit the motion direction of the end-effector within a range of predefined directions. This is achieved by developing a parameterized multi-objective function. Moreover, the optimized parameters yield a satisfactory motion trajectory and a distinct reduction in redundant motion. Experimental results on the 6-DOF manipulator show the improvement over the classical IBVS control scheme. Under the novel
Acknowledgment
This work was supported in part by the National Natural Science Foundation of China under Grant 61873056, Grant 61473068, Grant 61621004 and Grant 61420106016, the Fundamental Research Funds for the Central Universities in China under Grant N170405004, N182608004 and the Research Fund of State Key Laboratory of Synthetical Automation for Process Industries in China under Grant 2013ZCX01.
References (32)
Robust Jacobian matrix estimation for image-based visual servoing, Robot. Comput.-Integr. Manuf. (2011).
Adaptive visual servoing using common image features with unknown geometric parameters, Automatica (2013).
Intelligent visual servoing with extreme learning machine and fuzzy logic, Expert Syst. Appl. (2017).
Robots visual servo control with features constraint employing Kalman-neural-network filtering scheme, Neurocomputing (2015).
Neural network reinforcement learning for visual control of robot manipulators, Expert Syst. Appl. (2013).
Guaranteeing field of view constraints in visual servoing tasks under uncertain dynamics, IFAC-PapersOnLine (2017).
Active vision for pose estimation applied to singularity avoidance in visual servoing, 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2017).
Visual servoing in an optimization framework for the whole-body control of humanoid robots, IEEE Robot. Autom. Lett. (2017).
Enhanced IBVS controller for a 6DOF manipulator using hybrid PD-SMC method, IECON 2017 - 43rd Annual Conference of the IEEE Industrial Electronics Society (2017).
Image-based visual servoing of a 7-DOF robot manipulator using an adaptive distributed fuzzy PD controller, IEEE/ASME Trans. Mechatron. (2014).
Stable visual servoing through hybrid switched-system control, IEEE Trans. Robot.
Adaptive visual servoing of contour features, IEEE/ASME Trans. Mechatron.
Robust Kalman filtering cooperated Elman neural network learning for vision-sensing-based robotic manipulation with global stability, Sensors.
Visual servo control. I. Basic approaches, IEEE Robot. & Autom. Mag.
Visual servo control, part II: Advanced approaches, IEEE Robot. Autom. Mag.
Robotics, Vision and Control: Fundamental Algorithms in MATLAB®, second, completely revised edition.