Abstract
This paper deals with plane detection from a monocular image sequence, without camera calibration or a priori knowledge of the egomotion. Within the framework of driver-assistance applications, the 3D scene is assumed to be a set of 3D planes: the vision process considers obstacles, roads and buildings as planar structures. These planes are detected by exploiting iso-velocity curves after optical flow estimation. A Hough-transform-like framework called c-velocity was designed. This paper explains how this c-velocity, defined by analogy to the v-disparity in stereovision, can represent planes regardless of their orientation, and how this representation facilitates plane extraction. Under a translational camera motion, planar surfaces are transformed into specific parabolas of the c-velocity space. The error and robustness analysis of the proposed technique confirms that this cumulative approach is very efficient at making the detection more robust and at coping with optical-flow imprecision. Moreover, the results suggest that the concept could be generalized to the detection of parameterized surfaces other than planes.
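To make the cumulative idea concrete, the sketch below illustrates a Hough-style one-parameter vote in Python. It is not the authors' implementation: the parabolic model w = K·c², the way the plane-family coordinate c_coord is built, and the function name detect_plane_parabola are assumptions chosen for illustration only.

```python
import numpy as np

def detect_plane_parabola(flow_mag, c_coord, n_bins=128, k_max=None):
    """Illustrative one-parameter Hough-style vote over parabolas w = K * c**2.

    flow_mag : 2D array of optical-flow magnitudes per pixel.
    c_coord  : 2D array giving each pixel's plane-family coordinate c
               (assumed built from image coordinates; one map per family).
    Returns the dominant parabola parameter K and the vote histogram.
    """
    w = flow_mag.ravel()
    c = c_coord.ravel()
    valid = (c > 1e-6) & np.isfinite(w) & (w > 0)
    k = w[valid] / c[valid] ** 2            # each pixel votes for one K
    if k_max is None:
        k_max = np.percentile(k, 99)        # clip outliers from noisy flow
    votes, edges = np.histogram(k, bins=n_bins, range=(0.0, k_max))
    peak = int(np.argmax(votes))            # a histogram peak = one candidate plane
    k_peak = 0.5 * (edges[peak] + edges[peak + 1])
    return k_peak, votes, edges
```

In such a sketch, the flow magnitudes would come from a dense optical-flow estimator, and a separate c_coord map would be voted for each plane family (road, building, frontal obstacle); the cumulative histogram is what gives the method its tolerance to flow imprecision.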
Cite this article
Bouchafa, S., Zavidovique, B. c-Velocity: A Flow-Cumulating Uncalibrated Approach for 3D Plane Detection. Int J Comput Vis 97, 148–166 (2012). https://doi.org/10.1007/s11263-011-0475-6