Precise hand–eye calibration method based on spatial distance and epipolar constraints
Introduction
In recent years, robotic systems consisting of multiple cameras and manipulators have played an increasingly important role in automatic production lines and surgical operations. Such systems enhance the flexibility, automation, and intelligence of production, and visual servoing is a key component of them. Since the end-effector (hand) frame and the camera (eye) frame are not coincident, the hand–eye calibration problem must be solved. Hand–eye calibration requires image data and manipulator pose data, and because both are corrupted by various sources of noise, achieving high-precision hand–eye calibration is difficult.
Since the goal of hand–eye calibration is to find the transformation between the camera frame and the end-effector frame in 3D space, it is more appropriate to leverage 3D information for calibration. However, most existing methods operate at the 2D level, which creates an inconsistency between the calibration goal and the actual requirement.
In this paper, a precise and robust hand–eye calibration method is proposed. The basic idea is to optimize the hand–eye parameters by minimizing a comprehensive error that combines a spatial distance error and an epipolar error. First, initial hand–eye parameters are calculated using the Kronecker product. These initial parameters are used to find the transformation of the camera pose after the camera moves. From this transformation, the epipolar error and the 3D coordinates of the feature points can be calculated. The spatial distance error is then obtained by comparing the distances between the reconstructed feature points with the ground truth. An adaptive weight coefficient is set according to the initial values of the two error types, and the hand–eye parameters are iteratively optimized by minimizing the weighted sum of the errors.
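The spatial distance term described above can be sketched as follows. This is a minimal illustration, not the paper's code; the function and variable names are our own, and we assume the ground truth is a known feature spacing such as the chessboard square size.

```python
import numpy as np

def spatial_distance_error(points_3d, pairs, gt_dist):
    """Mean absolute deviation of reconstructed point spacing from ground truth.

    points_3d : (N, 3) array of triangulated feature points.
    pairs     : index pairs of points with a known true separation.
    gt_dist   : ground-truth distance (e.g. the chessboard square size).
    """
    errs = [abs(np.linalg.norm(points_3d[i] - points_3d[j]) - gt_dist)
            for i, j in pairs]
    return float(np.mean(errs))
```

If the hand–eye parameters are exact, the triangulated points reproduce the known spacing and this error vanishes; any miscalibration distorts the reconstruction and inflates it.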
The main contributions of the proposed method are summarized as follows.
(1) The hand–eye parameters are applied to 3D reconstruction and the 3D constraint is introduced into the optimization of the hand–eye parameters.
(2) The inherent epipolar constraint is utilized to improve the speed and accuracy of the nonlinear optimization.
(3) The adaptive weight coefficient is applied to the objective function to unify the spatial distance error and the epipolar error to the same order of magnitude. The adaptive weight coefficient is set according to the initial value of two types of errors, which avoids the situation that only the constraint with the larger initial error plays a role in the optimization process.
(4) The number of parameters to be optimized is reduced to 6, and the error propagations from base-to-world transform and world-to-eye transform are eliminated.
The rest of this paper is organized as follows. Section 2 reviews related work on hand–eye calibration. Section 3 mathematically formulates the problem and introduces the closed-form solution based on Kronecker product. Section 4 describes the epipolar constraint and the spatial distance constraint. Section 5 presents the parameter optimization approach. Both simulation and real experiments are provided in Section 6. Section 7 shows the application example using precise hand–eye parameters. Section 8 concludes this paper. The symbols in this paper are shown in the nomenclature.
Related work
Generally, existing hand–eye calibration methods can be divided into two categories. The first category consists of methods that solve the hand–eye calibration problem based on the formula $\mathbf{A}\mathbf{X}=\mathbf{Z}\mathbf{B}$. In this formulation, $\mathbf{A}$, calculated by camera calibration, is the transformation from the world coordinate frame to the camera coordinate frame, and $\mathbf{B}$, obtained from the robot controller, is the transformation from the robot-base frame to the hand coordinate frame. $\mathbf{Z}$ represents the
Initialization of the hand–eye parameter
A good initialization helps the optimization procedure of hand–eye calibration converge rapidly. As shown in Fig. 1, solving the hand–eye calibration problem amounts to solving the equation $\mathbf{A}\mathbf{X}=\mathbf{X}\mathbf{B}$, where $\mathbf{A}$ and $\mathbf{B}$ represent the relative movements of the camera (eye) and end-effector (hand), respectively, and $\mathbf{X}$ is the unknown transformation from hand to eye.
Denote $\mathbf{R}_A$, $\mathbf{R}_B$ and $\mathbf{R}_X$ as the rotation matrices of $\mathbf{A}$, $\mathbf{B}$ and $\mathbf{X}$, respectively, and $\mathbf{t}_A$, $\mathbf{t}_B$ and $\mathbf{t}_X$ as the related translation vectors.
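A minimal numpy sketch of a Kronecker-product closed-form initialization for $\mathbf{A}\mathbf{X}=\mathbf{X}\mathbf{B}$ is given below. It follows the standard construction (vectorize the rotation equation, take the SVD null vector, project onto SO(3), then solve the translation by least squares); the function and variable names are ours, and this is a sketch of the technique rather than the paper's implementation.

```python
import numpy as np

def random_rotation(rng):
    # Rodrigues formula from a random axis-angle (used only for testing)
    axis = rng.normal(size=3); axis /= np.linalg.norm(axis)
    angle = rng.uniform(0.5, 1.5)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

def solve_ax_xb(As, Bs):
    """Closed-form AX = XB initialization via the Kronecker product.

    As, Bs: lists of 4x4 homogeneous relative motions of camera and hand.
    Returns a 4x4 estimate of the hand-eye transform X.
    """
    I3 = np.eye(3)
    # Rotation: R_A R_X = R_X R_B  =>  (I (x) R_A - R_B^T (x) I) vec(R_X) = 0
    M = np.vstack([np.kron(I3, A[:3, :3]) - np.kron(B[:3, :3].T, I3)
                   for A, B in zip(As, Bs)])
    _, _, Vt = np.linalg.svd(M)
    Mx = Vt[-1].reshape(3, 3, order='F')   # column-stacked vec convention
    if np.linalg.det(Mx) < 0:              # resolve the sign ambiguity
        Mx = -Mx
    U, _, Vt2 = np.linalg.svd(Mx)          # project onto SO(3)
    Rx = U @ np.diag([1, 1, np.linalg.det(U @ Vt2)]) @ Vt2
    # Translation: (R_A - I) t_X = R_X t_B - t_A, stacked least squares
    C = np.vstack([A[:3, :3] - I3 for A in As])
    d = np.concatenate([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tx = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4); X[:3, :3] = Rx; X[:3, 3] = tx
    return X
```

At least two motion pairs with non-parallel rotation axes are needed for a unique solution; with noisy data this closed form serves only as the starting point for the subsequent iterative optimization.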
Constraints for hand–eye parameters
The initial values of the hand–eye parameters are sensitive to noise. To improve the accuracy and stability of the calibration, the parameters are further optimized, subject to the epipolar constraint and the spatial distance constraint during the optimization process.
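The epipolar constraint can be sketched as follows: a hand–eye guess $\mathbf{X}$ and a measured hand motion $\mathbf{B}$ imply a camera motion $\mathbf{A}=\mathbf{X}\mathbf{B}\mathbf{X}^{-1}$, whose essential matrix must annihilate matched image points. This is our own illustration under the convention that the motion maps points from the first camera frame to the second; names are not the paper's.

```python
import numpy as np

def skew(t):
    """3x3 skew-symmetric matrix such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def epipolar_residuals(X, B, pts1, pts2):
    """Epipolar residuals induced by a hand-eye guess X.

    B          : 4x4 relative hand motion; implied camera motion A = X B X^-1.
    pts1, pts2 : (N, 3) matched points in normalized homogeneous coordinates.
    Returns x2^T E x1 per correspondence (zero for a perfect X, noiseless data).
    """
    A = X @ B @ np.linalg.inv(X)       # predicted relative camera motion
    R, t = A[:3, :3], A[:3, 3]
    E = skew(t) @ R                    # essential matrix of that motion
    return np.einsum('ni,ij,nj->n', pts2, E, pts1)
```

The residual vector is what the optimizer drives toward zero; an inaccurate $\mathbf{X}$ predicts the wrong camera motion and leaves systematic epipolar residuals.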
Hand–eye parameter optimization
As the spatial distance error and the epipolar error have different dimensions, and their initial values are often affected by the environment, a weight coefficient is needed to balance them. Denote the initial values of the spatial distance error and the epipolar error by $e_d^0$ and $e_p^0$, respectively. By joining the two errors with an adaptive weight, the objective function is established as a weighted sum of the spatial distance error and the epipolar error.
Actually, the initial values of the hand–eye parameters
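One plausible reading of the 6-parameter encoding and the adaptive weighting is sketched below: the hand–eye transform is packed into a rotation vector plus translation (6 unknowns, matching contribution (4)), and the weight is derived from the initial errors so that both terms start at the same order of magnitude. Symbol and function names are ours; the paper's exact objective may differ.

```python
import numpy as np

def pack(X):
    """4x4 homogeneous transform -> 6-vector (rotation vector, translation)."""
    R, t = X[:3, :3], X[:3, 3]
    angle = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if angle < 1e-12:
        rvec = np.zeros(3)
    else:
        rvec = angle / (2 * np.sin(angle)) * np.array(
            [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return np.concatenate([rvec, t])

def unpack(p):
    """6-vector -> 4x4 homogeneous transform (Rodrigues formula)."""
    rvec, t = p[:3], p[3:]
    angle = np.linalg.norm(rvec)
    if angle < 1e-12:
        R = np.eye(3)
    else:
        k = rvec / angle
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K
    X = np.eye(4); X[:3, :3] = R; X[:3, 3] = t
    return X

def combined_error(e_dist, e_epi, e_dist0, e_epi0):
    # Adaptive weight from the initial errors keeps both terms comparable,
    # so neither constraint dominates the optimization from the start.
    w = e_dist0 / max(e_epi0, 1e-12)
    return e_dist + w * e_epi
```

With this encoding, a generic nonlinear least-squares solver can iterate over the 6-vector directly, which avoids re-estimating the base-to-world and world-to-eye transforms at each step.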
Experimental environment description
Both simulation and real experiments are designed to validate the proposed iterative calibration method. Two open-source libraries are employed to implement the algorithm: OpenCV 2.4.9 is used for image processing and Eigen 3.0 for matrix operations.
In order to comprehensively evaluate the effect of the proposed method, the comparative experiments include the initial solution of the proposed method (initial guess), the optimization method
Application
In common industrial applications, a high-precision hand–eye transform is often required so that the target pose detected by the camera can be accurately transformed into the robot-arm frame, enabling the robot to perform subsequent operations on the target. We built the platform shown in Fig. 9 to evaluate the practical effect of the proposed hand–eye calibration method. The setup consists of a UR5e robot, a RealSense D435i camera, a pneumatic sucker, and chessboard
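Mapping a detected target pose into the robot base frame reduces to chaining homogeneous transforms, as sketched below. The frame-naming convention (base→hand pose from the controller, hand→camera from calibration, camera→target from detection) is our own illustration.

```python
import numpy as np

def target_pose_in_base(T_base_hand, X_hand_cam, T_cam_target):
    """Map a target pose detected in the camera frame into the robot base frame.

    T_base_hand  : 4x4 current hand pose reported by the robot controller.
    X_hand_cam   : 4x4 hand-eye transform obtained from calibration.
    T_cam_target : 4x4 target pose detected by the camera.
    """
    return T_base_hand @ X_hand_cam @ T_cam_target
```

Because the three transforms multiply, any hand–eye error propagates directly into the commanded grasp pose, which is why calibration precision matters for the pick-up task.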
Conclusion
This paper proposes a method to find the hand–eye transform by jointly employing the spatial distance constraint and the epipolar constraint, which improves the accuracy of vision-guided robot-arm systems operating in 3D space. Since the iterative optimization of the hand–eye parameters is independent of the base-to-world and world-to-eye transforms, the proposed method reduces the number of parameters to be optimized to 6 and eliminates the error propagations from
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments
This work was supported in part by the National Key Research and Development Program of China (grant No. 2019YFB1312600), in part by the National Natural Science Foundation of China (grant No. 52075480), in part by the Key Research and Development Program of Zhejiang Province, China (grant No. 2021C01008), in part by the High-level Talent Special Support Plan of Zhejiang Province, China (grant No. 2020R52004) and in part by the Natural Science Foundation of Zhejiang Province, China under Grant
Zhenyu Liu received the B.S. and Ph.D. degrees from the Department of Mechanical Engineering, Zhejiang University, Zhejiang, China, in 1996 and 2002, respectively. He was a Visiting Scholar with Ritsumeikan University, Kyoto, Japan, in 2010. He is currently a Professor with the Department of Mechanical Engineering and State Key Laboratory of CAD&CG, Zhejiang University. His current research interests include virtual prototyping, virtual-reality-based simulation, and robotics.
Xia Liu received the B.S. degree from the Department of Mechanical Engineering, Ocean University of China, Shandong, China, in 2016. He is currently a doctoral candidate with the Institute of Design Engineering, Zhejiang University, Zhejiang, China. His research interests include visual measurement and robot vision.
Guifang Duan received the Ph.D. degree in integrated science and engineering from Ritsumeikan University, Kyoto, Japan, in 2009. From 2009 to 2012, he was a Postdoctoral Fellow with the Institute of Science and Technology, Ritsumeikan University. He is currently an associate professor with the Institute of Design Engineering, Zhejiang University, Zhejiang, China. His research interests include machine learning and robot vision.
Jianrong Tan is a distinguished professor at Zhejiang University, an academician of the Chinese Academy of Engineering, chief scientist of the National 973 Program, dean of the Institute of Robotics at Zhejiang University, Chairman of the China Big Data Technology and Application Association, Vice Chairman of the China Society of Mechanical Engineering, Vice Chairman of the Chinese Society of Graphics, and Director of the Engineering Graphics Teaching Steering Committee of the Ministry of Education. He has won 7 national awards, including 4 second prizes of the National Science and Technology Progress Award, and 1 first prize and 2 second prizes of the National Outstanding Teaching Achievement Award. He has published 8 books and over 150 papers. His current research interests include mechanical design, virtual-reality-based simulation, and robotics.