Homography estimation from circular motion for use in visual control
Introduction
Camera calibration is one of the fundamental tasks in computer vision. The results of many computer vision algorithms depend on good camera calibration, e.g. image rectification, 3D reconstruction, visual servoing, etc. The problem of camera calibration has been addressed by many authors, and the topic is covered in virtually every computer vision book, e.g. [1]. Conventional calibration approaches normally require special preparation of the working environment, usually by inserting an object with known dimensions into the environment, such as a chessboard pattern [2], [3], [4], a pattern of circular markers [5], or other conic patterns [6], [7], [8], [9], [10]. Different calibration procedures use different camera models that give different accuracies. Normally it suffices to use only a basic pinhole camera model, but this model gives inaccurate results in the presence of large lens distortions, so a more general model must then be considered [4], [11]. The applicability of a particular method depends on the environment in which the method is to be used.
The conventional calibration techniques [2] are based on estimation of the camera model from a set of non-coplanar world points and their images using least squares or some other minimization algorithm, and on decomposition of the estimated model (normally in matrix form) into the camera intrinsic and extrinsic parameters. Zhang [3] proposed a similar approach, also based on decomposition, but starting from a set of homography matrices. This work focuses on estimation of the homography, a transformation that relates two planes under perspective projection. In computer vision, a homography is normally used to describe the relation between two image planes, between an image plane and a world plane, or between two world planes. There exist well-known algorithms for extraction of the camera intrinsic and extrinsic parameters from multiple homographies that all relate to the same image plane [3]. In this paper we investigate several homography estimation techniques that can easily be deployed in visually guided robotic systems. The basic idea is to use robot motion in order to eliminate the need for special preparation of the environment solely for the purpose of camera calibration.
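For concreteness, the standard way to estimate a homography from known point correspondences is the direct linear transformation (DLT). The sketch below is the textbook least-squares method, not the paper's circular-motion method:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H mapping src -> dst (dst ~ H @ src)
    with the standard DLT algorithm from >= 4 point correspondences."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence gives two linear equations in the
        # entries of H (stacked as a 9-vector h).
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector associated with the
    # smallest singular value of the stacked coefficient matrix.
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

In practice the coordinates are usually normalized (Hartley normalization) before building the system, which improves numerical conditioning; that step is omitted here for brevity.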
Recently, many approaches to camera calibration based on the invariance of conics under perspective transformation have been proposed. Sugimoto [6] presented estimation from seven corresponding conics by solving an over-constrained algebraic set of equations. It has been shown that at minimum a pair of conic correspondences is needed for estimation [7]. Many authors have studied different configurations of conics: coplanar circles [9], [10], lines intersecting the circle centre [12], concentric circles [13], etc.
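The invariance property these methods exploit is that a point conic with symmetric matrix C transforms as C' ~ H⁻ᵀ C H⁻¹ under the point homography p' = H p. A minimal numerical check, with an arbitrary illustrative homography:

```python
import numpy as np

# Unit circle x^2 + y^2 - 1 = 0 written as a symmetric conic matrix C;
# points on the conic satisfy p^T C p = 0 in homogeneous coordinates.
C = np.diag([1.0, 1.0, -1.0])

# An arbitrary (illustrative) homography, not taken from the paper.
H = np.array([[1.1, 0.2, 0.5],
              [-0.1, 0.9, 1.0],
              [0.01, 0.02, 1.0]])

# Under p' = H p, the conic transforms as C' ~ H^{-T} C H^{-1}.
Hinv = np.linalg.inv(H)
C_prime = Hinv.T @ C @ Hinv

# Check: images of circle points lie on the transformed conic.
for t in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False):
    p = np.array([np.cos(t), np.sin(t), 1.0])
    p2 = H @ p
    assert abs(p2 @ C_prime @ p2) < 1e-9
```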
Wong [14] studied self-calibration from axially symmetric objects (surfaces of revolution), objects that are frequently found in man-made environments and have a special structure, although their exact geometry is not known. In this paper we present a calibration procedure for estimation of the homography up to similarity that is based on the circular motion of a world point around a single axis. Methods for camera calibration and 3D reconstruction from rotation of an object about a single axis were presented in [15], [16]. The approach of [15] requires image tracking of several points on the rotating object. The method presented in this paper requires image tracking of only a single point, but it requires constant angular velocity, which [15] does not.
Provided that the camera intrinsic parameters are known, the orientation and position of the camera with respect to the ground plane can be determined [17]. In visual control, the pose of the camera (object) with respect to some ground plane is a valuable piece of information that can be used to improve the autonomy of many robotic tasks, like automatic landing of airplanes [18], hovering and navigation of helicopters or quadrocopters [19], etc. Once the transformation between the image and the ground plane (a homography) is established, a view from a virtual camera looking perpendicularly at the ground plane can be obtained. The homography can also be used directly in visual control of mobile robots [20], [21], [22].
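As a sketch of that last use, once an image-to-ground homography is available (here a hypothetical calibration result), pixel measurements can be mapped to metric ground-plane coordinates for a controller:

```python
import numpy as np

def image_to_ground(H, pts_img):
    """Map pixel coordinates to ground-plane coordinates through an
    image-to-ground homography H. Both H and the coordinates are
    illustrative; in practice H comes from a calibration procedure."""
    pts = np.atleast_2d(np.asarray(pts_img, dtype=float))
    ph = np.hstack([pts, np.ones((pts.shape[0], 1))])  # homogeneous
    g = ph @ H.T
    return g[:, :2] / g[:, 2:3]  # dehomogenize
```

Warping a whole image through the same homography yields the virtual top-down (bird's-eye) view mentioned above; mapping only tracked feature points, as here, is usually cheaper for feedback control.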
This paper is structured as follows. In Section 2 an overview of the basic projective transformation relations is given and conventional methods for camera calibration are presented. Section 3 presents three different methods for homography estimation from circular motion. In Section 4 the results of the experimental validation are presented and comparison of the methods is given. Afterwards, a discussion of the results is given in Section 5 and in the final section, Section 6, some conclusions are drawn.
Section snippets
Points, lines and conics under perspective projection
If not specified differently, lowercase bold-face letters are used to denote column vectors (e.g. homogeneous points) and uppercase bold-face letters are used to denote matrices. Distinct subscripts are used to denote the world coordinate frame, the picture (image) frame and the camera frame.
The transformation between the point in the world frame and the corresponding point in the image frame can be described by a pinhole camera model [23]:
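In standard notation (the symbols here follow common textbook convention and are not necessarily the paper's), the pinhole model relates a homogeneous world point to its image up to a scale factor:

```latex
\lambda\,\mathbf{x}_p = \mathbf{K}\,[\mathbf{R}\ \ \mathbf{t}]\,\mathbf{x}_w,
\qquad
\mathbf{K} =
\begin{bmatrix}
 f_x & s & c_x \\
 0   & f_y & c_y \\
 0   & 0   & 1
\end{bmatrix}
```

where $\lambda$ is the projective scale, $\mathbf{K}$ holds the intrinsic parameters (focal lengths, skew, principal point) and $[\mathbf{R}\ \ \mathbf{t}]$ the extrinsic ones.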
Self calibration from circular motion
The camera calibration procedure described in Section 2.2 requires knowledge of world points lying on a common plane and of the corresponding image points. In this section we present a calibration procedure for estimation of the homography up to similarity that is based on the circular motion of a point around a single axis.
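The geometric setting can be sketched numerically: a point rotating at constant angular velocity traces a circle in the world plane, and its projection through a homography is an ellipse in the image. All parameter values below are illustrative, not the paper's:

```python
import numpy as np

def circle_point(center, radius, omega, t, phase=0.0):
    """Homogeneous world-plane coordinates of a point rotating about a
    fixed axis at constant angular velocity omega."""
    x = center[0] + radius * np.cos(omega * t + phase)
    y = center[1] + radius * np.sin(omega * t + phase)
    return np.stack([x, y, np.ones_like(x)], axis=-1)

# Projecting the circle through a (hypothetical) world-to-image
# homography turns it into an ellipse in the image.
H = np.array([[800.0, 10.0, 320.0],
              [5.0, 750.0, 240.0],
              [0.001, 0.002, 1.0]])
t = np.linspace(0.0, 2.0 * np.pi, 100)
pts_w = circle_point((0.0, 0.0), 1.0, 1.0, t)
pts_i = pts_w @ H.T
pts_i = pts_i[:, :2] / pts_i[:, 2:3]  # image points on an ellipse
```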
Without loss of generality, suppose a camera is observing a point that is rotating in a plane around a single axis. The motion of the point in the world plane can be
Synthetic data
We validated all three methods for estimation of the homography up to similarity on synthetic data. We simulated the motion of two points rotating in the same plane, which resulted in two circular trajectories: one circle of a given radius rotating at a given angular velocity around the ground-frame origin, and a second circle of another radius rotating at another angular velocity around a point in the ground plane. The configuration of the projection plane
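A building block such experiments rely on is fitting a conic to the imaged trajectory points. The sketch below is the simplest algebraic least-squares fit, in the spirit of the conic-fitting literature cited in the references, not the paper's exact estimator:

```python
import numpy as np

def fit_conic(pts):
    """Least-squares algebraic fit of a conic
    a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to 2-D points."""
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # Minimize ||D v|| subject to ||v|| = 1: the right singular vector
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]
```

Fitting a conic to (noisy) samples of the projected circle yields the image ellipse from which the compared methods then recover the image of the circle centre.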
Discussion
Both cost functions in Fig. 4 and Fig. 5c have a strong minimum at the image of the circle centre. As can be observed in Fig. 5, the cost function of the proposed method warps down near the ellipse boundary when only a smaller part of the full circle is available. This can lead to a wrong estimate of the circle centre. The problem is avoided when a large-enough part of the circle is available. Another point that applies to both Lourakis' method and the proposed method is that the sensitivity of the method used
Conclusion
Several methods for estimation of the homography up to similarity from circular motion were presented. The study focused on the case where the camera is observing and tracking a single point on an object that rotates at constant angular velocity around a fixed axis. Since the direct method and the approach proposed by Lourakis may not give satisfactory results for the considered case in the presence of disturbances and non-ideal conditions, a new method for estimation of the homography
References (29)
- et al., Planar rectification by solving the intersection of two circles under 2D homography, Pattern Recognit. (2005)
- et al., Coplanar circles, quasi-affine invariance and calibration, Image Vis. Comput. (2006)
- et al., A new easy camera calibration technique based on circular points, Pattern Recognit. (2003)
- et al., Robust and efficient vision system for group of cooperating mobile robots with application to soccer robots, ISA Trans. (2004)
- Parameter estimation techniques: a tutorial with application to conic fitting, Image Vis. Comput. (1997)
- et al., Multiple View Geometry (2004)
- A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses, IEEE J. Robot. Autom. (1987)
- A flexible new technique for camera calibration, IEEE Trans. Pattern Anal. Mach. Intell. (2000)
- et al., Equidistant fish-eye calibration and rectification by vanishing point extraction, IEEE Trans. Pattern Anal. Mach. Intell. (2010)
- Geometric camera calibration using circular control points, IEEE Trans. Pattern Anal. Mach. Intell. (2000)
- A linear algorithm for computing the homography from conics in correspondence, J. Math. Imaging Vision
- Geometric structure computation from conics
- Plane metric rectification from a single view of multiple coplanar circles
- Close-range camera calibration, Photogramm. Eng.
Cited by (2)
- Modeling and control with neural networks for a magnetic levitation system, Neurocomputing (2017)
- Segmentation of Stereo-Camera Depth Image into Planar Regions based on Evolving Principal Component Clustering, IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (2021)
Andrej Zdešar received the B.Sc. degree in 2010 from the Faculty of Electrical Engineering, University of Ljubljana, Slovenia, where he is currently employed in the framework of the national young researchers scheme. His research interests are in the area of visual control, machine vision and autonomous mobile systems.
Igor Škrjanc received the B.Sc., the M.Sc. and the Ph.D. degrees in Electrical Engineering, from the Faculty of Electrical and Computer Engineering, University of Ljubljana, Slovenia, in 1988, 1991 and 1996, respectively. His main research interests are intelligent, predictive control systems and autonomous mobile systems. In 2007 he received the highest research award of the University of Ljubljana, Faculty of Electrical Engineering, and in 2008, the highest award of the Republic of Slovenia for Scientific and Research Achievements, Zois award for outstanding research results in the field of intelligent control. He also received the Humboldt Research Fellowship for Experienced Researchers for the period between 2009 and 2011. Currently, he is a professor for Automatic Control at the Faculty of Electrical Engineering and the head of the research program Modelling, Simulation and Control.
Gregor Klančar received the B.Sc. and Ph.D. degrees in 1999, and 2003, respectively, from the Faculty of Electrical Engineering, University of Ljubljana, Slovenia. His research interests are in the area of fault diagnosis methods, multiple vehicle coordinated control and mobile robotics. Currently, he is an associate professor at the Faculty of Electrical Engineering.