Homography estimation from circular motion for use in visual control

https://doi.org/10.1016/j.robot.2014.05.012

Highlights

  • Only one point rotating at constant velocity around a single axis is required.

  • The proposed method estimates the homography and the angular velocity of the rotating point.

  • All the measurements required in the estimation can be obtained in the image.

Abstract

This paper presents a new method for estimating the homography up to similarity from observations of a single point that is rotating at constant velocity around a single axis. The benefit of the proposed estimation approach is that it does not require measurement of the points in the world frame. The homography is estimated based on the known shape of the motion and in-image tracking of a single rotating point. The proposed method is compared with two known methods: the direct approach based on point correspondences and a more recently proposed method based on conic properties. The main advantages of the proposed method are that it also estimates the angular velocity and that it requires only a single circle. The estimation is made directly from measurements in the image. These advantages make the proposed method simple to implement for the calibration of visually guided robotic systems.

All the approaches were compared in a simulation environment under non-ideal conditions and in the presence of disturbances, and a real experiment was performed on a mobile robot. The experimental results confirm that the presented approach gives accurate results, even under non-ideal conditions.

Introduction

Camera calibration is one of the fundamental tasks in computer vision. The results of many computer vision algorithms depend on good camera calibration, e.g. image rectification, 3D reconstruction, visual servoing, etc. The problem of camera calibration has been addressed by many authors, and the topic is covered in virtually every computer vision book, e.g. [1]. Conventional calibration approaches normally require special preparation of the working environment, usually by inserting an object with known dimensions into the scene, such as a chessboard pattern [2], [3], [4], a pattern of circular markers [5], or other conic patterns [6], [7], [8], [9], [10]. Different calibration procedures use different camera models that give different accuracies. Normally it suffices to use a basic pinhole camera model, but this model gives inaccurate results in the presence of large lens distortions, so a more general model must then be considered [4], [11]. The applicability of a particular method depends on the environment in which it is to be used.

The conventional calibration techniques [2] are based on estimating the camera model from a set of non-coplanar world points and their images using least squares or some other minimization algorithm, and on decomposing the estimated model (normally in matrix form) into the camera intrinsic and extrinsic parameters. Zhang [3] proposed a similar approach that is also based on decomposition, but from a set of homography matrices. This work focuses on estimation of the homography, a transformation that relates two planes under perspective projection. In computer vision, a homography is normally used to describe the relation between two image planes, between an image plane and a world plane, or between two world planes. Well-known algorithms exist for extracting the camera intrinsic and extrinsic parameters from multiple homographies that all relate to the same image plane [3]. In this paper we investigate several homography estimation techniques that can easily be deployed in visually guided robotic systems. The basic idea is to use robot motion in order to eliminate the need for special preparation of the environment solely for the purpose of camera calibration.
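The direct approach based on point correspondences can be sketched with the standard DLT (Direct Linear Transform) algorithm. The following is a minimal sketch, not the paper's implementation; the function name and test values are ours, and at least four point correspondences between the two planes are assumed:

```python
import numpy as np

def estimate_homography_dlt(src, dst):
    """Direct Linear Transform: estimate H such that dst ~ H @ [src, 1].

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    Returns the 3x3 homography normalized so that H[2, 2] = 1.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear equations in the
        # nine entries of H (stacked as a vector h, with A @ h = 0).
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    # The solution is the right singular vector of A with the smallest
    # singular value (the null space of A for noise-free data).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

In practice the point coordinates are usually normalized first and the linear estimate refined by nonlinear minimization when the measurements are noisy.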

Recently, many approaches to camera calibration based on the invariance of conics under perspective transformation have been proposed. Sugimoto [6] presented estimation from seven corresponding conics by solving an overconstrained algebraic set of equations. It has been shown that at minimum a pair of conic correspondences is needed for the estimation [7]. Many authors have studied different configurations of conics: coplanar circles [9], [10], lines intersecting the circle centre [12], concentric circles [13], etc.

Wong [14] studied self-calibration from axially symmetric objects (surfaces of revolution), objects that are frequently found in man-made environments and have some special structure, although their exact geometry is not known. In this paper we present a calibration procedure for estimating the homography up to similarity that is based on circular motion of some world point around a single axis. A method for camera calibration and 3D reconstruction from rotation of an object about a single axis was shown in [15], [16]. The approach of [15] requires image tracking of several points on the rotating object. The method presented in this paper requires image tracking of only a single point, but it requires constant angular velocity, which [15] does not.

Provided that the camera intrinsic parameters are known, the orientation and position of the camera with respect to the ground plane can be determined [17]. In visual control, the pose of the camera (object) with respect to some ground plane is a valuable piece of information that can be used to improve the autonomy of many robotic tasks, such as automatic landing of airplanes [18] and hovering and navigation of helicopters or quadrocopters [19]. Once the transformation between the image and ground plane (homography) is established, the view from a virtual camera looking perpendicularly at the ground plane can be obtained. The homography can also be used directly in visual control of mobile robots [20], [21], [22].
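As a small illustration of how an estimated homography is used in such visual-control setups, mapping an image point back to ground-plane coordinates amounts to applying the inverse homography and dehomogenizing. The matrix values below are made up for the example, not taken from the paper:

```python
import numpy as np

def image_to_ground(H, p_img):
    """Map a pixel (u, v) to ground-plane coordinates, given the
    ground-to-image homography H (so the inverse is applied here)."""
    q = np.linalg.inv(H) @ np.array([p_img[0], p_img[1], 1.0])
    return q[:2] / q[2]  # dehomogenize

# Hypothetical ground-to-image homography (a similarity, for illustration)
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 2.0, 20.0],
              [0.0, 0.0, 1.0]])
```

Warping every pixel this way yields exactly the virtual top-down view mentioned above.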

This paper is structured as follows. In Section 2 an overview of the basic projective-transformation relations is given and conventional methods for camera calibration are presented. Section 3 presents three different methods for homography estimation from circular motion. In Section 4 the results of the experimental validation are presented and the methods are compared. A discussion of the results is given in Section 5, and some conclusions are drawn in Section 6.

Section snippets

Points, lines and conics under perspective projection

Unless specified differently, small bold-face letters (e.g. x) are used to denote column vectors (e.g. homogeneous points) and capital bold-face letters (e.g. X) to denote matrices. The subscript (·)w is used to denote the world coordinate frame, (·)p the picture (image) frame and (·)c the camera frame.

The transformation between a point p_w^T = [x_w  y_w  z_w  1] in the world frame and the corresponding point p_p^T = [x_p  y_p  1] in the image frame can be described by a pinhole camera model [23]:

w p_p = S [R  t] p_w,    S = [α  γ  x_0;  0  β  y_0;  0  0  1],

where w is the projective scale factor, R and t are the extrinsic rotation and translation of the camera, and S is the intrinsic calibration matrix with the focal parameters α and β, the skew γ and the principal point (x_0, y_0).
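The projection above can be sketched numerically as follows. The intrinsic values are arbitrary illustrative numbers, not values from the paper:

```python
import numpy as np

def project_point(p_w, S, R, t):
    """Project a homogeneous world point p_w = [x, y, z, 1] to pixel
    coordinates via w * p_p = S @ [R | t] @ p_w."""
    p = S @ np.hstack([R, t.reshape(3, 1)]) @ p_w
    return p[:2] / p[2]  # divide out the projective scale w

# Example intrinsics: focal parameters alpha, beta; skew gamma;
# principal point (x0, y0) -- arbitrary illustrative values.
alpha, beta, gamma, x0, y0 = 800.0, 800.0, 0.0, 320.0, 240.0
S = np.array([[alpha, gamma, x0],
              [0.0,   beta,  y0],
              [0.0,   0.0,   1.0]])
```

A world point on the optical axis projects to the principal point, which is a quick sanity check for the model.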

Self calibration from circular motion

The camera calibration procedure described in Section 2.2 requires knowledge of the world points lying on a common plane and the corresponding image points. In this section we present a calibration procedure for estimating the homography up to similarity that is based on circular motion of a point around a single axis.

Without loss of generality, suppose a camera is observing a point that is rotating in a plane around a single axis. The motion of the point (xw,yw) in the world plane can be
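The constant-angular-velocity circular motion assumed here can be written out explicitly; a minimal sketch (the function name and the phase parameter are ours):

```python
import numpy as np

def circle_point(t, cx, cy, r, omega, phi=0.0):
    """World-plane position at time t of a point rotating at constant
    angular velocity omega on a circle of radius r centred at (cx, cy)."""
    return np.array([cx + r * np.cos(omega * t + phi),
                     cy + r * np.sin(omega * t + phi)])
```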

Synthetic data

We validated all three methods for estimation of the homography up to similarity HaHp on synthetic data. We simulated the motion of two points rotating in the same plane. This resulted in two circular trajectories: a circle with radius r1 = 0.2 m rotating at the angular velocity ω1 = 0.5 rad/s around the ground-frame origin, and a circle with radius r2 = 0.4 m rotating at the angular velocity ω2 = 0.5 rad/s around the point (0.2 m, 0.2 m) in the ground plane. The configuration of the projection plane
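The two synthetic trajectories described above can be generated directly from the stated parameters; a sketch of the setup (the sampling choices are ours):

```python
import numpy as np

t = np.linspace(0.0, 4.0 * np.pi, 500)  # a couple of revolutions at 0.5 rad/s
# circle 1: r1 = 0.2 m, omega1 = 0.5 rad/s, centred at the ground-frame origin
x1 = 0.2 * np.cos(0.5 * t)
y1 = 0.2 * np.sin(0.5 * t)
# circle 2: r2 = 0.4 m, omega2 = 0.5 rad/s, centred at (0.2 m, 0.2 m)
x2 = 0.2 + 0.4 * np.cos(0.5 * t)
y2 = 0.2 + 0.4 * np.sin(0.5 * t)
```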

Discussion

Both cost functions in Fig. 4 and Fig. 5c have a strong minimum at the image of the circle centre. As can be observed in Fig. 5, the cost function of the proposed method warps down near the ellipse boundary when only a smaller part of the full circle is available. This can lead to a wrong estimate of the circle centre. The problem is avoided when a large-enough part of the circle is available. Another point, which applies to both Lourakis' method and the proposed method, is that the sensitivity of the method used

Conclusion

Several methods for estimating the homography up to similarity from circular motion were presented. The study focused on the case when the camera is observing and tracking a single point on an object that is rotating at constant angular velocity around a fixed axis. Since the direct method and the approach proposed by Lourakis may not give satisfactory results for the considered case in the presence of disturbances and non-ideal conditions, a new method for estimation of the homography

Andrej Zdešar received the B.Sc. degree in 2010 from the Faculty of Electrical Engineering, University of Ljubljana, Slovenia, where he is currently employed in the framework of the national young researchers scheme. His research interests are in the area of visual control, machine vision and autonomous mobile systems.

References (29)

  • A. Sugimoto

    A linear algorithm for computing the homography from conics in correspondence

    J. Math. Imaging Vision

    (2000)
  • P.K. Mudigonda et al.

    Geometric structure computation from conics

  • M.I. Lourakis

    Plane metric rectification from a single view of multiple coplanar circles

  • D.C. Brown

    Close-range camera calibration

    Photogramm. Eng.

    (1971)

Igor Škrjanc received the B.Sc., M.Sc. and Ph.D. degrees in Electrical Engineering from the Faculty of Electrical and Computer Engineering, University of Ljubljana, Slovenia, in 1988, 1991 and 1996, respectively. His main research interests are intelligent, predictive control systems and autonomous mobile systems. In 2007 he received the highest research award of the University of Ljubljana, Faculty of Electrical Engineering, and in 2008 the highest award of the Republic of Slovenia for Scientific and Research Achievements, the Zois award, for outstanding research results in the field of intelligent control. He also received the Humboldt Research Fellowship for Experienced Researchers for the period between 2009 and 2011. Currently, he is a professor of Automatic Control at the Faculty of Electrical Engineering and the head of the research program Modelling, Simulation and Control.

Gregor Klančar received the B.Sc. and Ph.D. degrees in 1999 and 2003, respectively, from the Faculty of Electrical Engineering, University of Ljubljana, Slovenia. His research interests are in the area of fault diagnosis methods, multiple-vehicle coordinated control and mobile robotics. Currently, he is an associate professor at the Faculty of Electrical Engineering.
