
Image-Based Ego-Motion Estimation Using On-Vehicle Omnidirectional Camera

  • Published in: International Journal of Intelligent Transportation Systems Research

Abstract

Estimating the motion of a sensor, together with the 3D shape of the scene, has been extensively researched, especially for Virtual Reality (VR) and robotics systems. To achieve such estimation, systems consisting of a laser range sensor, a Global Positioning System (GPS) receiver, and a gyro sensor have been proposed, constructed, and used. However, it is usually difficult to obtain a precise and detailed estimate of the 3D shape because of the limited capability of each sensor. The Structure from Motion (SfM) method is widely used for this purpose and can estimate these parameters at pixel-level accuracy. However, SfM is frequently unstable because of its dependency on initial parameters and its sensitivity to noise. In this paper, we propose an SfM method for omnidirectional image sequences that combines factorization with bundle adjustment to achieve high accuracy and robustness.
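The factorization step mentioned above follows the classical Tomasi-Kanade idea: stack the tracked 2D feature positions into a measurement matrix, register it by subtracting per-row centroids, and take a rank-3 truncated SVD. The following is a minimal numpy sketch of that idea, not the paper's actual implementation; the function name, interface, and the orthographic-camera assumption are illustrative:

```python
import numpy as np

def factorize(tracks):
    """Tomasi-Kanade-style factorization sketch.

    tracks: array of shape (F, P, 2) -- P feature points tracked over F frames.
    Returns (motion, structure): motion is (2F, 3), structure is (3, P),
    recovered only up to a 3x3 affine ambiguity (orthographic camera assumed).
    """
    F, P, _ = tracks.shape
    # Stack all x-rows, then all y-rows, into the 2F x P measurement matrix.
    W = np.vstack([tracks[:, :, 0], tracks[:, :, 1]])
    # Register: subtract each row's centroid so translation drops out.
    W = W - W.mean(axis=1, keepdims=True)
    # A rank-3 truncated SVD splits W into motion and shape factors.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    sqrt_s = np.sqrt(s[:3])
    motion = U[:, :3] * sqrt_s            # 2F x 3 camera rows
    structure = sqrt_s[:, None] * Vt[:3]  # 3 x P point coordinates
    return motion, structure
```

The recovered factors are determined only up to a 3x3 affine ambiguity; resolving it (the metric upgrade) and then refining all parameters with bundle adjustment, as the paper proposes, would follow this step.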




Acknowledgement

This work was supported in part by "Development of Energy-saving ITS Technologies" (P08018), New Energy and Industrial Technology Development Organization (NEDO).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Shintaro Ono.


About this article

Cite this article

Matsuhisa, R., Ono, S., Kawasaki, H. et al. Image-Based Ego-Motion Estimation Using On-Vehicle Omnidirectional Camera. Int. J. ITS Res. 8, 106–117 (2010). https://doi.org/10.1007/s13177-010-0011-z


Keywords: Navigation