A new image-based visual servoing method with velocity direction control

https://doi.org/10.1016/j.jfranklin.2020.01.012

Abstract

In image-based visual servoing (IBVS), the image features are ideally driven along straight lines to their desired positions, which yields more satisfactory trajectories. In this paper, a new IBVS controller for trajectory adjustment is obtained by optimizing a multi-objective function that constrains the velocity direction in the image plane. By applying the control law based on the optimized parameters, less redundant motion is produced than with traditional methods. At the same time, this reduces the risk of the target leaving the field of view (FOV), while robustness to uncertainties in depth and camera intrinsic parameters is retained. Experimental results on a 6-degree-of-freedom (DOF) robot with an eye-in-hand configuration demonstrate the effectiveness and practicability of the proposed method.

Introduction

Nowadays, robot vision is involved in many aspects of daily life. Visual servoing (VS) is a motion control process that uses imaging technology to move marked features to desired positions [1], and it can be integrated with many disciplines, such as optimization methods [2], sliding mode [3], fuzzy control [4], robust control [5], switching control [6], adaptive control [7], [8], neural networks [9], [10], [11], reinforcement learning [12] and so on. Generally, VS is divided into position-based visual servoing (PBVS), image-based visual servoing (IBVS) and hybrid visual servoing. Each method has its advantages and disadvantages [13], [14].

PBVS requires the geometric shape of the target, the intrinsic parameters of the camera and the observed image-plane features [15]. The pose of the target relative to the camera must then be estimated. The main task of PBVS is to reduce the difference between the current pose and the desired one, which allows linear camera motion to be executed effectively in the Cartesian coordinate frame. Consequently, the calibration accuracy of the target's geometric model and of the camera model seriously affects and restricts the performance of PBVS [16]. Moreover, PBVS is controlled in three-dimensional space and requires complex calculation, so it is difficult to obtain satisfactory image-plane trajectories, and the image features may leave the FOV.

IBVS is fundamentally different from PBVS in that it does not estimate the relative pose of the target, which is instead implied in the image features; IBVS controls the features directly in the image plane [17]. It has been widely used in industrial fields because of its robustness, for example in unmanned systems [18], aircraft [19], fault diagnosis [20] and so on. However, IBVS must contend with features leaving the FOV, and it can produce trajectories with redundant motion that exceed the robot's motion limits, especially when a large rotation around the camera's optical axis is required. To address this, a trajectory planning method is developed in [21] to eliminate the limitation imposed by the robot's FOV, and a low-complexity IBVS control scheme that rigorously respects the FOV constraints is derived in [22]. IBVS requires only one piece of 3D information, namely the depth of the feature points, and it is robust to depth estimation bias and to calibration errors in the model parameters. Consequently, many control methods have been combined with IBVS in recent decades. The proportional controller is the most basic approach, decreasing the error states exponentially, and numerically elegant variants have been developed over the past decades. A control law that partially decouples rotation and translation is designed in [23]. In [24], [25], an online IBVS controller based on robust model predictive control (RMPC) is proposed, and [26] presents another method, augmented IBVS (AIBVS), which uses acceleration as the command. In general, improvements to monocular VS control come either from exploring new visual features or from designing new controllers.
Beyond novel control laws, many researchers have sought other image features that enhance tracking performance, including pixel luminance [27], points in a new virtual visual space [28] and image moments [29]. Nonetheless, a more flexible controller is still to be expected.
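As a reference point, the classical proportional IBVS law mentioned above can be sketched as follows. This is a minimal illustration, not the paper's proposed controller: it assumes normalized point features with known depths and uses the standard point-feature interaction matrix; all function and variable names are ours.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard 2x6 interaction (image Jacobian) matrix for one normalized
    point feature (x, y) with depth Z, relating image velocity to camera twist."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Classical proportional IBVS: V_c = -lam * pinv(L) @ e, where e is the
    stacked feature error and L stacks one 2x6 block per feature point."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```

The pseudoinverse makes the error decrease exponentially for well-conditioned configurations, but it places no constraint on the resulting velocity direction, which is exactly the redundant-motion issue the paper targets.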

Hybrid visual servoing, also called 2-1/2-D visual servoing, combines PBVS and IBVS in order to avoid their respective shortcomings. By decomposing the homography matrix, the translational and rotational motion of the robot's end-effector can be controlled separately [30], so as to simultaneously optimize the motion trajectories in the image plane and in Cartesian space. Another scheme [31] is a modified optimization-based controller that includes FOV and actuator constraints. However, it has poor robustness and is especially sensitive to image noise. Moreover, it must compute and decompose the homography matrix throughout the control process, which increases the computational burden and reduces the real-time performance of the system.

In this paper, a new IBVS control method with velocity direction control is proposed. A multi-objective optimization framework, different from that of [1], is designed to solve for the optimal parameters, in order to minimize the gaps between the velocity direction and its boundaries. Compared with classical control methods, it achieves equally good performance for long-distance translational motion; for rotation tasks with large angles, it effectively avoids redundant motion and completes the task with trajectories closer to straight lines. The stability proof of the controller is given in the paper. Finally, experimental results demonstrate the effectiveness of the proposed method.

The remainder of the paper is organized as follows. Section II presents the details of IBVS and the system description. Section III, the core of the paper, derives the novel control law based on velocity direction control for the IBVS system. Section IV presents several experimental results that verify the effectiveness of the proposed control strategy. Finally, conclusions are given in the last section.


IBVS

The formation of images in VS follows the central-projection model, which is that of a pinhole camera [15]. For any point P = (X, Y, Z) ∈ R^{1×3} in the world coordinate frame, the coordinates p = (x, y) ∈ R^{1×2} of the point in the image plane are

x = fX/Z,  y = fY/Z,

and the pixel coordinates s = (u, v) ∈ R^{1×2} are

u = x/ρ_u + u_0,  v = y/ρ_v + v_0,

where f is the focal length, ρ_u and ρ_v are the width and height of each pixel respectively, and (u_0, v_0) are the principal-point coordinates of the image plane.
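The projection above can be sketched directly in code. The intrinsic values used as defaults here are purely illustrative (they are not taken from the paper); the function simply chains the two equations: metric image-plane coordinates first, then pixel coordinates.

```python
def project(P, f=0.004, rho_u=2.8e-6, rho_v=2.8e-6, u0=187.0, v0=120.0):
    """Central (pinhole) projection of a 3D point P = (X, Y, Z) expressed
    in the camera frame. Returns image-plane and pixel coordinates."""
    X, Y, Z = P
    x, y = f * X / Z, f * Y / Z              # image-plane coordinates
    u, v = x / rho_u + u0, y / rho_v + v0    # pixel coordinates
    return (x, y), (u, v)
```

A quick sanity check of the model: a point on the optical axis (X = Y = 0) projects to the principal point (u_0, v_0) regardless of its depth.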

By taking the

Classical method with optimization framework

As mentioned in [1] and [2], the control law in Eq. (9) can be recast as an optimization problem by solving a least-squares multi-objective function, such as

V_c* = argmin_{V_c} ( w_1 ‖L̂_e V_c + λ e(t)‖² + w_2 ‖V_c‖² + ... ),

where w_1 ≥ 0 and w_2 ≥ 0 are adjustable task weights. In this way, the computation of the controller is transformed into an unconstrained optimization, similar to Eq. (10), to which more tasks can conveniently be added.
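When only the two quadratic terms shown above are kept (the paper's full objective adds further direction-adjustment terms), the minimizer has a closed form: setting the gradient to zero gives (w_1 L̂_eᵀL̂_e + w_2 I) V_c = -λ w_1 L̂_eᵀ e. A minimal sketch, with names of our own choosing:

```python
import numpy as np

def optimal_velocity(L, e, lam=0.5, w1=1.0, w2=0.1):
    """Closed-form minimizer of  w1*||L @ V + lam*e||^2 + w2*||V||^2,
    i.e. a damped (Tikhonov-regularized) least-squares IBVS step."""
    n = L.shape[1]
    A = w1 * L.T @ L + w2 * np.eye(n)   # normal-equation matrix
    b = -lam * w1 * L.T @ e             # right-hand side from the gradient
    return np.linalg.solve(A, b)
```

With w_2 = 0 and a full-rank L this reduces to the classical pseudoinverse law; increasing w_2 trades tracking speed for smaller commanded velocities, which is what makes the weighted formulation convenient for stacking additional tasks.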

In [1], three feature points are used to verify the control strategy. A

Experimental Results

In this section, several experiments are shown, and the corresponding results verify the effectiveness and practicability of the proposed method compared with the traditional IBVS controller.

The experiments are conducted on a 6-DOF manipulator system whose experimental setup consists of two subsystems: a VP-6242M Denso robot and a Logitech C310 web camera mounted on the robot's end-effector, with resolution set to 374 × 240, as seen in Fig. 2. The camera works at a sampling

Conclusion

Throughout the paper, a new control scheme for IBVS has been proposed. It aims to limit the motion direction of the end-effector to a range of predefined directions, which is achieved by developing a parameterized multi-objective function. Moreover, the optimized parameters yield satisfactory motion trajectories and a distinct reduction in redundant motion. Experimental results on the 6-DOF manipulator show the improvement over the classical IBVS control scheme. Under the novel

Acknowledgment

This work was supported in part by the National Natural Science Foundation of China under Grants 61873056, 61473068, 61621004 and 61420106016, by the Fundamental Research Funds for the Central Universities in China under Grants N170405004 and N182608004, and by the Research Fund of the State Key Laboratory of Synthetical Automation for Process Industries in China under Grant 2013ZCX01.

References (32)

  • N.R. Gans et al., Stable visual servoing through hybrid switched-system control, IEEE Trans. Robot. (2007)
  • H. Wang et al., Adaptive visual servoing of contour features, IEEE/ASME Trans. Mechatron. (2018)
  • X. Zhong et al., Robust Kalman filtering cooperated Elman neural network learning for vision-sensing-based robotic manipulation with global stability, Sensors (2013)
  • F. Chaumette et al., Visual servo control, Part I: Basic approaches, IEEE Robot. Autom. Mag. (2006)
  • S. Hutchinson et al., Visual servo control, Part II: Advanced approaches, IEEE Robot. Autom. Mag. (2007)
  • P. Corke, Robotics, Vision and Control: Fundamental Algorithms in MATLAB®, 2nd ed. (2017)