Pattern Recognition

Volume 41, Issue 2, February 2008, Pages 607-615

A new calibration model of camera lens distortion

https://doi.org/10.1016/j.patcog.2007.06.012

Abstract

Lens distortion is one of the main factors affecting camera calibration. In this paper, a new model of camera lens distortion is presented, according to which lens distortion is governed by the coefficients of radial distortion together with a transform from the ideal image plane to the real sensor array plane. The transform is determined by two angular parameters describing the pose of the real sensor array plane with respect to the ideal image plane and two linear parameters locating the real sensor array with respect to the optical axis. Experiments show that the new model corrects lens distortion about as well as the conventional model, which includes radial distortion, decentering distortion and prism distortion. Compared with the conventional model, the new model has fewer parameters to calibrate and a more explicit physical meaning.

Introduction

Camera calibration has always been an important issue in photogrammetry and computer vision. Up to now, a variety of methods, see Refs. [1], [2], [3], [4], [5], [6], [7], [8], [9], [10] to cite a few, have been developed to accommodate various applications. In theory these methods can solve almost all camera calibration problems. However, in our practice of designing a vision system for a surgical robot, we found that camera lens distortion and image noise are the two main factors that prevent accurate calibration. Here we focus on lens distortion.

The research on camera lens distortion can be traced back to 1919, when A. Conrady first introduced the decentering distortion model. Based on Conrady's work, in 1966 Brown presented the famous Brown–Conrady model [1], [8]. In this model, Brown classified lens distortion into radial and tangential distortion, and proposed the well-known plumb line method to calibrate them. Since then the Brown–Conrady model has been widely used, see Refs. [3], [5], [9], [11] to cite a few. Some modifications to the model have been reported [12], [13], [14], [15], [16], [17], but they mainly concern mathematical treatments and lack a physical analysis of the distortion sources and of the relations among the distortion components. In addition, a general model has been presented in Ref. [18], but it is so far a conceptual one without quantitative evaluation. Recently, a nonparametric radial distortion model has been proposed in Ref. [19], but it considers only radial distortion. Although the radial component of lens distortion is predominant, it is coupled with the tangential component, so modeling radial distortion alone is not enough. The basic formula expressing lens distortion as a sum of radial distortion, decentering distortion and thin prism distortion thus remains the mainstream of distortion models; it is called the conventional model in this paper.

According to the conventional model, decentering distortion results from various decentering effects and has both radial and tangential components [1], [2], [5]. Thin prism distortion arises from a slight tilt of the lens or of the image sensor array and also introduces additional radial and tangential distortion [1], [2], [5]. Thus the radial, decentering and thin prism distortions are coupled with one another, because both decentering distortion and thin prism distortion contribute to the radial component. Can we find another way to unify all three types of distortion?

Since decentering distortion and thin prism distortion come from decentering and tilt, and since decentering and tilt can be described mathematically by a translation vector and a rotation matrix, we can express both in a single transform consisting of a rotation and a translation. Inspired by this idea, we present a new model of lens distortion in this paper.
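To make the idea concrete, here is a minimal sketch of how two tilt angles and two offsets can carry a point from the ideal image plane z = f to a tilted, shifted sensor plane (our own illustration, not the parameterization of Section 4; the names rot_xy, ideal_to_sensor and the parameters alpha, beta, tx, ty are hypothetical):

    import numpy as np

    def rot_xy(alpha, beta):
        """Rotation by alpha about the x-axis followed by beta about the y-axis
        (two angular parameters describing the sensor-plane pose)."""
        Rx = np.array([[1, 0, 0],
                       [0, np.cos(alpha), -np.sin(alpha)],
                       [0, np.sin(alpha),  np.cos(alpha)]])
        Ry = np.array([[ np.cos(beta), 0, np.sin(beta)],
                       [0, 1, 0],
                       [-np.sin(beta), 0, np.cos(beta)]])
        return Ry @ Rx

    def ideal_to_sensor(x, y, f, alpha, beta, tx, ty):
        """Map an ideal image-plane point (x, y) to a sensor plane obtained by
        rotating the plane z = f by (alpha, beta) and shifting it by (tx, ty):
        intersect the viewing ray through (x, y, f) with that plane and express
        the intersection in the plane's own coordinates."""
        R = rot_xy(alpha, beta)
        n = R @ np.array([0.0, 0.0, 1.0])        # normal of the sensor plane
        p0 = np.array([tx, ty, f], dtype=float)  # a point on the sensor plane
        d = np.array([x, y, f], dtype=float)     # viewing-ray direction
        lam = np.dot(n, p0) / np.dot(n, d)       # ray/plane intersection
        p = lam * d
        uv = R.T @ (p - p0)                      # coordinates on the sensor plane
        return uv[0], uv[1]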

We start with a brief review of previous work in Section 2. Our work is then presented in detail: the analysis of lens distortion in Section 3, the new model of lens distortion in Section 4, the calibration method of the new model in Section 5, and the experimental results and discussion in Section 6. Finally, a conclusion is drawn in Section 7.

Section snippets

Previous work

Lens distortion can usually be expressed as

  u_d = u + δu(u, v),    v_d = v + δv(u, v),

where u and v are the unobservable distortion-free image coordinates; u_d and v_d are the corresponding image coordinates with distortion; δu(u, v) and δv(u, v) are the distortion in the u and v directions respectively, which can be classified into three types: radial distortion, decentering distortion and thin prism distortion.

Radial distortion is caused by flawed radial curvature of a lens and is governed by the following equation [5]:

  δu_r(u, v) = k1·u·(u² + v²) + k2·u·(u² + v²)²,
  δv_r(u, v) = k1·v·(u² + v²) + k2·v·(u² + v²)²,

where k1 and k2 are the radial distortion coefficients.
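A compact sketch of the conventional model (radial plus the standard decentering and thin prism terms in the form of Weng et al. [5]; the decentering and prism expressions are the usual textbook ones, consistent with formula (10) below, and the code is our own illustration rather than the paper's):

    def conventional_distortion(u, v, k1, k2, p1, p2, s1, s2):
        """Conventional model: radial + decentering + thin prism distortion."""
        r2 = u * u + v * v
        du_r = k1 * u * r2 + k2 * u * r2 ** 2              # radial terms
        dv_r = k1 * v * r2 + k2 * v * r2 ** 2
        du_d = p1 * (3 * u * u + v * v) + 2 * p2 * u * v   # decentering terms
        dv_d = p2 * (u * u + 3 * v * v) + 2 * p1 * u * v
        du_p = s1 * r2                                     # thin prism terms
        dv_p = s2 * r2
        return u + du_r + du_d + du_p, v + dv_r + dv_d + dv_p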

Analysis of lens distortion

Formula (4) can be rewritten as

  δu_d(u, v) = p1·(u² + v²) + u·(2p1·u + 2p2·v),
  δv_d(u, v) = p2·(u² + v²) + v·(2p1·u + 2p2·v),

and then formula (6) can be rewritten as

  δu(u, v) = k1·u·(u² + v²) + k2·u·(u² + v²)² + (p1 + s1)·(u² + v²) + u·(2p1·u + 2p2·v),
  δv(u, v) = k1·v·(u² + v²) + k2·v·(u² + v²)² + (p2 + s2)·(u² + v²) + v·(2p1·u + 2p2·v).    (10)
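The regrouping can be checked symbolically. Assuming formula (4) is the standard decentering model δu_d = p1·(3u² + v²) + 2p2·u·v, δv_d = p2·(u² + 3v²) + 2p1·u·v (the usual Brown–Conrady form, which is what the rewrite above implies), the identity is, for example:

    import sympy as sp

    u, v, p1, p2 = sp.symbols('u v p1 p2')

    # Assumed formula (4): standard decentering distortion.
    du_d = p1 * (3 * u**2 + v**2) + 2 * p2 * u * v
    dv_d = p2 * (u**2 + 3 * v**2) + 2 * p1 * u * v

    # Regrouped forms appearing in formula (10).
    du_d_regrouped = p1 * (u**2 + v**2) + u * (2 * p1 * u + 2 * p2 * v)
    dv_d_regrouped = p2 * (u**2 + v**2) + v * (2 * p1 * u + 2 * p2 * v)

    # Each difference simplifies to zero, so the expressions are identical.
    assert sp.simplify(du_d - du_d_regrouped) == 0
    assert sp.simplify(dv_d - dv_d_regrouped) == 0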

It can be seen from formula (10) that the coefficients of thin prism distortion s1 and s2 are coupled with the coefficients of decentering distortion p1 and p2 in formula (6). To decouple them, let

  q1 = s1 + p1,    q2 = s2 + p2

and replace 2p1 and 2p2 with
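The snippet stops before the replacement coefficients are named; purely to illustrate the decoupled structure that results, the sketch below uses hypothetical names q3 and q4 for whatever symbols the paper substitutes for 2p1 and 2p2 (q1 and q2 are as defined above; the code is ours, not the paper's):

    def decoupled_distortion(u, v, k1, k2, q1, q2, q3, q4):
        """Formula (10) with q1 = s1 + p1, q2 = s2 + p2, and the hypothetical
        q3, q4 standing in for 2*p1 and 2*p2."""
        r2 = u * u + v * v
        du = k1 * u * r2 + k2 * u * r2 ** 2 + q1 * r2 + u * (q3 * u + q4 * v)
        dv = k1 * v * r2 + k2 * v * r2 ** 2 + q2 * r2 + v * (q3 * u + q4 * v)
        return u + du, v + dv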

The new model of lens distortion

Because we focus on lens distortion, without loss of generality we can suppose a point P_c lies in the camera coordinate system O_c X_c Y_c Z_c. According to the new model, the point is imaged onto the real image sensor array in four stages, as shown in Fig. 2.

(1) The point P_c = [X_c  Y_c  Z_c]^T is imaged onto the ideal image plane according to a pinhole camera model without any lens distortion. This stage is a perspective projection, which can be expressed as

  [x  y  f]^T = (f/Z_c)·[X_c  Y_c  Z_c]^T,

where f is the focal length of the
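A minimal sketch of this first stage (plain pinhole projection onto the plane z = f, exactly as in the formula above; the helper name pinhole_project is ours):

    import numpy as np

    def pinhole_project(Pc, f):
        """Stage 1: project a camera-frame point Pc = [Xc, Yc, Zc] onto the
        ideal image plane z = f, giving [x, y, f]."""
        Xc, Yc, Zc = Pc
        return np.array([Xc, Yc, Zc], dtype=float) * (f / Zc)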

Calibration method of the new model

Methods to calibrate the parameters of a distortion model fall into two classes: total calibration [3], [5], [9] and nonmetric calibration [11], [21], [22]. Total calibration methods use a calibration object whose geometrical structure is accurately known, and the distortion parameters are obtained together with the other intrinsic and extrinsic parameters of the camera. Because of the coupling between the distortion parameters and the other intrinsic and extrinsic parameters,

Experiment results and analysis

To verify the new model, we experiment with WAT 902B CCD cameras and four types of standard CCTV lenses, with focal lengths of 4, 6, 8, and 16 mm, respectively. According to the manufacturer's specification, the camera has a resolution of 768 × 576 pixels, with a CCD unit cell size of 8.6 μm × 8.3 μm.

For each lens, 16 images of a checkerboard at different poses and positions are taken, as shown in Fig. 3 for the 4 mm lens. Corner points in each image are
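As an illustration of this kind of corner-extraction step (a generic OpenCV sketch, not the authors' pipeline; the board size PATTERN and the function name detect_corners are assumptions for the example):

    import cv2

    # Hypothetical inner-corner count of the checkerboard; the snippet does not
    # state the board dimensions.
    PATTERN = (9, 6)

    def detect_corners(image_path):
        """Detect and sub-pixel-refine checkerboard corners in one image."""
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if not found:
            return None
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
        return cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)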

Conclusion and discussion

In this paper, a new calibration model of camera lens distortion is proposed. The new model expresses lens distortion as radial distortion plus a transform from the ideal image plane to the real sensor array plane. The transform is determined by two angular parameters describing the pose of the real sensor array plane with respect to the ideal image plane and two linear parameters locating the real sensor array with respect to the optical axis. Experiments show that the new model has about the same correcting effect upon lens distortion as the conventional model.

Acknowledgements

This research is supported by the National Natural Science Foundation of China (No. 60675017) and the National Basic Research Program (973) of China (No. 2006CB303103).

About the Author: JIANHUA WANG received the BS degree in automatic control from Beijing University of Aeronautics & Astronautics, China, and the MS degree in pattern recognition and intelligent systems from Chongqing University, China, and is now a PhD candidate in pattern recognition and intelligent systems at Shanghai Jiao Tong University, China. His research interests include 3D computer vision, intelligent systems and robotics.

References (22)

  • A. Basu et al., Alternative models for fish-eye lenses, Pattern Recognition Lett. (1995)
  • B. Prescott et al., Line-based correction of radial lens distortion, Graphical Models Image Process. (1997)
  • D.C. Brown, Close-range camera calibration, Photogramm. Eng. (1971)
  • W. Faig, Calibration of close-range photogrammetry systems: mathematical formulation, Photogramm. Eng. Remote Sensing (1975)
  • R.Y. Tsai, A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses, IEEE J. Robotics Automat. (1987)
  • B. Caprile et al., Using vanishing points for camera calibration, Int. J. Comput. Vision (1990)
  • J. Weng et al., Camera calibration with distortion models and accuracy evaluation, IEEE Trans. Pattern Anal. Mach. Intell. (1992)
  • G.Q. Wei et al., A complete two-plane camera calibration method and experimental comparisons
  • R.G. Willson et al., What is the center of the image? Technical Report CMU-CS-93-122 (1993)
  • T.A. Clarke et al., The development of camera calibration methods and models, Photogramm. Rec. (1998)
  • Z. Zhang, A flexible new technique for camera calibration, IEEE Trans. Pattern Anal. Mach. Intell. (2000)

About the Author: YUNCAI LIU received the PhD degree from the University of Illinois at Urbana-Champaign, Department of Electrical and Computer Engineering, in 1990, and worked as an associate researcher at the Beckman Institute of Science and Technology from 1990 to 1991. From 1991 to 2000 he was a system consultant and then a chief consultant of research at Sumitomo Electric Industries, Ltd., Japan. In October 2000, he joined Shanghai Jiao Tong University as a distinguished professor. His research interests are in image processing and computer vision, especially motion estimation, feature detection and matching, and image registration. He has also made much progress in research on intelligent transportation systems.
