Elsevier

Automatica

Volume 49, Issue 8, August 2013, Pages 2453-2460

Brief paper
Adaptive visual servoing using common image features with unknown geometric parameters

https://doi.org/10.1016/j.automatica.2013.04.018

Abstract

This paper generalizes the concept of the depth-independent interaction matrix, developed for point and line features in our earlier work, to generalized image features. We derive the conditions under which the depth-independent interaction matrix can be linearly parameterized by the geometric parameters of the generalized image features, and propose an adaptive visual servo controller for robot manipulators using generalized image features whose geometric parameters are unknown. To estimate the unknown parameters on-line, we propose new error functions that are linear in the estimation errors of the parameters, and an algorithm that minimizes these error functions using multiple images. Lyapunov theory is used to prove asymptotic stability of the proposed controller based on the nonlinear dynamics of the manipulator. It is also shown that, in addition to points and lines, other common image features such as distances, angles, areas, and centroids all satisfy the conditions for linear parameterization. Experiments have been conducted to validate the proposed control method.

Introduction

Vision is an important sensory channel for humans to move and act. Visual servo controllers can be categorized as kinematics-based visual servoing and dynamic visual servoing. In kinematics-based methods, one designs the velocity command of a robot manipulator using visual feedback without considering the nonlinear dynamics of the robot, assuming that the manipulator can control its velocity accurately. There is a large body of work on kinematics-based visual servoing (e.g., Fang, Liu, and Zhang (2012), Gans, Hu, Shen, Zhang, and Dixon (2012), and Hu et al. (2009, 2010)). To estimate the structure of a stationary object, some researchers (e.g., Azarbayejani and Pentland (1995), Dani, Fischer, Kan, and Dixon (2012), Dixon, Fang, Dawson, and Flynn (2003), Jankovic and Ghosh (1995), Karagiannis and Astolfi (2005), and Zhang, Fang, and Liu (2011)) developed nonlinear observers for real-time structure estimation. When nonlinear forces have dominant effects, as in high-speed motion of robot manipulators, kinematics-based methods cannot guarantee stability and satisfactory performance.

By directly incorporating visual feedback in the dynamic control loop, it is possible to enhance system stability and control performance. Dynamic visual servoing designs the joint inputs of robot manipulators directly from visual feedback, taking the nonlinear dynamics of the manipulator into account. Hashimoto, Kimoto, Ebine, and Kimura (1991) were among the earliest researchers to study dynamic visual servoing. However, their controllers need to know the 3-D structure of the features. Kelly, Reyes, Moreno, and Hutchinson (1999) carried out important work on dynamic visual servoing with 2D motion. Pomares and Torres (2005) proposed a movement-flow-based method for visual servoing and force control. Piepmeier, McMurray, and Lipkin (2004) developed a quasi-Newton method for uncalibrated dynamic visual servoing. Cheah, Liu, and Slotine (2010) developed an adaptive approach for vision-based control of robots. In addition to points and lines, other image features have been used in visual servoing (e.g., Fomena and Chaumette (2007) and Iwatsuki and Okiyama (2002)). The authors (e.g., Liu, Wang, Wang, and Lam (2006), Wang, Liu, and Zhou (2007), and Wang, Liu, and Zhou (2008)) proposed the concept of the depth-independent interaction (or image Jacobian) matrix for point and line features.

This paper extends the concept of the depth-independent interaction matrix to generalized image features and proposes an adaptive controller for visually servoing a robot manipulator using generalized image features with unknown 3-D geometric information (e.g., Liu, Wang, and Wang (2008)). We derive the conditions on generalized image features under which the depth-independent interaction matrix can be linearly parameterized by the unknown 3-D geometric parameters of the features. We prove that common image features including points, lines, distances, angles, areas, and centroids all satisfy these conditions. On the basis of the nonlinear robot dynamics, we prove asymptotic convergence of the image errors using Lyapunov theory. We have implemented the controller on a 3-DOF manipulator and validated its performance by experiments.

The contribution of this paper can be summarized as follows. First, it generalizes the concept of the depth-independent interaction matrix to generalized image features. Second, it derives the conditions under which the depth-independent interaction matrix can be linearly parameterized by the unknown parameters and proves that the common image features satisfy these conditions. Third, new error functions are proposed for on-line parameter estimation.

Section snippets

Generalized image features

This section introduces the concept of the generalized image coordinates and generalized depth.
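The snippet does not reproduce the paper's definitions of generalized image coordinates and generalized depth, but the flavor can be sketched with an ordinary pinhole model: a generalized feature such as a centroid is a function of projected points, each with its own depth. The focal length `f` and principal point `c` below are illustrative values, not from the paper.

```python
import numpy as np

def project(points_3d, f=500.0, c=(320.0, 240.0)):
    """Pinhole projection of Nx3 camera-frame points to pixel coordinates."""
    X, Y, Z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    u = f * X / Z + c[0]   # each point's own depth Z scales the projection
    v = f * Y / Z + c[1]
    return np.stack([u, v], axis=1)

def centroid_feature(points_3d):
    """Centroid of the projected points: one generalized image feature."""
    return project(points_3d).mean(axis=0)

pts = np.array([[0.1, 0.0, 1.0],
                [-0.1, 0.0, 1.0],
                [0.0, 0.1, 1.0]])
print(centroid_feature(pts))  # pixel coordinates of the centroid feature
```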

Error functions for parameter estimation

One of the key ideas in designing an adaptive visual servo controller is to estimate on-line the unknown 3-D structure of the features. For this purpose, we define a proper error function that depends on the estimation error of the parameters and minimize this error function on-line. To facilitate the design of a parameter-estimation algorithm, we impose the following two conditions on the error function:

  • (1) The error function can be represented as a linear function of the estimated parameters; and
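Condition (1) above, linearity of the error function in the estimated parameters, is what makes a simple gradient-type on-line estimator work. A minimal least-squares sketch, using a synthetic random regressor rather than the paper's image-based one (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([0.5, -1.2, 0.8])   # unknown geometric parameters (made up)

# Stack regressors collected from several "images": each row gives one linear
# constraint W @ theta = b, so e = W @ theta_hat - b is linear in theta_hat.
W = rng.normal(size=(30, 3))
b = W @ theta_true

theta_hat = np.zeros(3)
gamma = 0.05                               # adaptation gain (role of Gamma)
for _ in range(2000):
    e = W @ theta_hat - b                  # error, linear in the estimation error
    theta_hat -= gamma * W.T @ e / len(W)  # gradient step minimizing ||e||^2

print(np.round(theta_hat, 3))              # converges toward theta_true
```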

Controller design

This section shows the details of the image-based visual servo controller.
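The full control law is not reproduced in this snippet. As a rough, hypothetical illustration of torque-level (dynamic) visual servoing on a single joint, with the image error fed back through the interaction term, here is a sketch; the inertia, focal scaling, and gains are all assumed, and this is not the paper's adaptive multi-DOF law:

```python
# Minimal 1-DOF sketch of a torque-level image-based servo loop.
m, f = 1.0, 500.0        # link inertia and an assumed focal scaling
kp, kd = 0.05, 30.0      # illustrative feedback and joint-damping gains
y_d = 100.0              # desired image feature value

q, dq, dt = 0.0, 0.0, 0.001
for _ in range(20000):                    # 20 s of simulated time
    y = f * q                             # linearized image feature of the joint angle
    J = f                                 # scalar interaction (image Jacobian) term
    tau = -kp * J * (y - y_d) - kd * dq   # visual feedback plus joint damping
    dq += (tau / m) * dt                  # semi-implicit Euler integration
    q += dq * dt

print(abs(f * q - y_d))  # image error after 20 s (should be near zero)
```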

Implementation and experiments

We have implemented the proposed controller on a 3-DOF robot manipulator (Fig. 5). We conducted visual servoing experiments using the centroid as an example. The control gains used in the experiment were K1 = 50, B = 0.00017, K3 = 0.001, Γ = 1000000. The sampling time of the controller is 22 ms. Fig. 6(a) and 12(b) show the position errors of the centroid and its trajectory on the image plane, respectively. The results confirmed the convergence of the image error to zero under control of the proposed controller.

Conclusions

In this paper, we generalized the concept of the depth-independent interaction matrix, defined for point and line features, to generalized image features. The conditions for linear parameterization of the depth-independent interaction matrix and the design of an adaptive visual servo controller with unknown feature geometry have been derived. We have also demonstrated that, in addition to points and lines, the common image features including angle, distance, centroid, and area satisfy the required conditions.

Acknowledgments

This work was supported in part by the Hong Kong Research Grants Council under Grants 415110 and 414912, in part by the Specialized Research Fund for the Doctoral Program of Higher Education of China under Grants 20100073120020 and 20100073110018, in part by the Shanghai Municipal Natural Science Foundation under Grant 11ZR1418400, and in part by the Natural Science Foundation of China under Projects 61105095, 60334010, and 60475029.

References (24)

  • G. Hu et al., "Adaptive homography-based visual servo tracking control via a quaternion formulation," IEEE Transactions on Control Systems Technology, 2010.

  • G. Hu et al., "Homography-based visual servo control with imperfect camera calibration," IEEE Transactions on Automatic Control, 2009.

    Yun-Hui Liu (S’90-M’92-SM’98-F’09) received the B.Eng. degree in applied dynamics from Beijing Institute of Technology, Beijing, China, in 1985, the M.Eng. degree in mechanical engineering from Osaka University, Osaka, Japan, in 1989, and the Ph.D. degree in mathematical engineering and information physics from the University of Tokyo, Tokyo, Japan, in 1992.

    He worked with the Electrotechnical Laboratory, MITI, Japan, from 1992 to 1995. Since February 1995, he has been with the Chinese University of Hong Kong and is currently a Professor with the Department of Mechanical and Automation Engineering. He is also a Visiting Professor of the Harbin Institute of Technology. He has published over 200 papers in refereed journals and refereed conference proceedings. His research interests include visual servoing, medical robotics, multifingered robot hands, mobile robots, sensor networks, and machine intelligence. Dr. Liu has received numerous research awards from international journals and international conferences in robotics and automation and government agencies. He was an Associate Editor of the IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION and the general chair of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

Hesheng Wang received the B.Eng. degree in Electrical Engineering from the Harbin Institute of Technology, Harbin, China, in 2002, and the M.Phil. and Ph.D. degrees in Automation & Computer-Aided Engineering from the Chinese University of Hong Kong, Hong Kong, in 2004 and 2007, respectively. From 2007 to 2009, he was a Post-doctoral Fellow and Research Assistant in the Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong.

Currently, he is an Associate Professor in the Department of Automation, Shanghai Jiao Tong University, China. He worked as a visiting researcher at the University of Zurich in Switzerland. His research interests include visual servoing, service robots, adaptive robot control, and computer vision. He received the Best Student Conference Paper award at the IEEE International Conference on Integration Technology in 2007, and the SUPCON Best Paper award at the 8th World Congress on Intelligent Control and Automation in 2010.

Weidong Chen received his B.S. and M.S. degrees in Control Engineering in 1990 and 1993, respectively, and his Ph.D. degree in Mechatronics in 1996, all from the Harbin Institute of Technology, Harbin, China. Since 2005 he has been a Professor in the Department of Automation at Shanghai Jiao Tong University, Shanghai, China, and Director of the Institute of Robotics and Intelligent Information Processing. He is the founder of the Autonomous Robot Laboratory. He worked as a visiting professor at The Ohio State University in the US and at the University of Zurich in Switzerland. Dr. Chen's current research interests include autonomous robotics, assistive robotics, collective robotics, and control of mechatronic systems.

Dongxiang Zhou received the B.Eng. and M.Eng. degrees in physical electronics and optoelectronics from Southeast University, Nanjing, China, in 1989 and 1992, respectively, and the Ph.D. degree in information and communication engineering from the National University of Defense Technology, Changsha, China, in 2000.

    He is currently an Associate Professor with the School of Electronic Science and Engineering, National University of Defense Technology. From 2004 to 2006, he was a Post-Doctoral Fellow and Research Associate at University of Alberta, Canada. His research interests include image processing, computer vision, and integrated intelligent systems.

    The material in this paper was partially presented at the 2008 IEEE International Conference on Robotics and Automation (ICRA 2008), May 19–23, 2008, Pasadena, California, USA. This paper was recommended for publication in revised form by Associate Editor Warren E. Dixon under the direction of Editor Andrew R. Teel.

