Hand–Eye Parameter Estimation Based on 3-D Observation of a Single Marker

IEEE Journals & Magazine | IEEE Xplore


Abstract:

Hand–eye calibration (HEC) aims to determine the geometric transformation between the camera and the robot, which is essential for vision-guided robotic (VGR) systems. On the one hand, the classic pose-based and the recent pixel-based HEC methods mainly rely on multimarker targets for initialization or solving. However, in these multimarker cases, the additional connector assembly and the wide coverage area required make practical data collection inconvenient. Furthermore, both accuracy and efficiency suffer from the intermediate computation of multimarker poses. On the other hand, with the development of 3-D sensing technology, 3-D position data can be obtained in an increasing number of scenarios; yet HEC research based on 3-D observation remains relatively scarce, and previous position-based formulations ignore the relationships among the parameters. Motivated by these shortcomings, this article thoroughly investigates hand–eye parameter estimation based on 3-D observation of a single marker. First, a uniform single-marker formulation is proposed. This formulation is unique in the optimization sense, free of factor-extraction variants, and covers both the eye-in-hand and eye-to-hand configurations. Then, an analytical solution and an iterative solution are derived through different treatments of the rotation. Notably, these solutions take a consistent and compact form thanks to precalculated variables and an equivalent transformation. Meanwhile, the solvability, estimation accuracy, and computational efficiency are discussed. Finally, comprehensive simulations and real-world experiments demonstrate the advantages of the proposed method over previous methods in terms of accuracy, computational efficiency, and operational efficiency. The code and datasets are open source at https://github.com/MatthewJin001/Single3D and https://github.com/MatthewJin001/3Ddata, respectively.
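The abstract does not give the paper's actual formulation or solvers, but the underlying single-marker, 3-D-observation problem can be illustrated generically. In the eye-in-hand case, each robot pose (R_i, t_i) and the marker position m_i measured in the camera frame must map to one fixed marker position p_b in the base frame: R_i (R_x m_i + t_x) + t_i = p_b, where (R_x, t_x) is the unknown hand–eye transform. The sketch below (not the paper's method; the function name `solve_hand_eye_3d` and the linear-relaxation-plus-SO(3)-projection strategy are this note's assumptions) solves that constraint by ordinary least squares over vec(R_x), t_x, and p_b, then projects the rotation block onto SO(3):

```python
import numpy as np

def solve_hand_eye_3d(robot_poses, marker_obs):
    """Estimate the eye-in-hand transform (R_x, t_x) and the fixed marker
    position p_b in the robot base frame from 3-D marker observations.

    Constraint per pose i:  R_i @ (R_x @ m_i + t_x) + t_i = p_b
    Unknowns stacked as x = [vec(R_x); t_x; p_b]  (15 values), so each
    pose contributes three linear equations; >= 5 well-spread poses are
    needed for a unique solution.
    """
    A_rows, b_rows = [], []
    for (R_i, t_i), m_i in zip(robot_poses, marker_obs):
        # R_i @ R_x @ m_i == kron(m_i^T, R_i) @ vec(R_x)  (column-major vec)
        block = np.hstack([np.kron(m_i.reshape(1, 3), R_i), R_i, -np.eye(3)])
        A_rows.append(block)
        b_rows.append(-t_i)
    A = np.vstack(A_rows)
    b = np.hstack(b_rows)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    R_raw = x[:9].reshape(3, 3, order="F")  # undo the column-major vec
    # Project the unconstrained 3x3 block onto SO(3) (nearest rotation via SVD)
    U, _, Vt = np.linalg.svd(R_raw)
    R_x = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    # Re-solve the remaining linear unknowns with the rotation fixed
    b2 = b - A[:, :9] @ R_x.flatten(order="F")
    y, *_ = np.linalg.lstsq(A[:, 9:], b2, rcond=None)
    return R_x, y[:3], y[3:]
```

With noise-free synthetic data this linear relaxation recovers the exact parameters; with measurement noise, a formulation like the paper's (which exploits the relationship among the parameters) would typically refine such an estimate iteratively.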
Article Sequence Number: 3518114
Date of Publication: 04 April 2024
