Abstract
This paper proposes a robust visual odometry algorithm that uses a Kinect-style RGB-D sensor and an inertial measurement unit (IMU) in a highly dynamic environment. Based on the SURF (Speeded-Up Robust Features) descriptor, the proposed algorithm generates 3-D feature points by combining depth information with RGB color information. Using the IMU, the generated 3-D feature points are rotated so that two consecutive images share the same rigid-body rotation component. Before the rigid-body transformation matrix between successive RGB-D images is calculated, the 3-D feature points are classified as dynamic or static based on their motion vectors. From the static feature points, the rigid-body transformation matrix is finally computed by the RANSAC (RANdom SAmple Consensus) algorithm. Experiments demonstrate that the proposed algorithm successfully obtains visual odometry for a subject and a mobile robot in a highly dynamic environment. A comparative study between the proposed method and a conventional visual odometry algorithm clearly shows the reliability of the approach for computing visual odometry in a highly dynamic environment.
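To make the pipeline described above concrete, the following is a minimal sketch in Python, assuming an OpenCV build with the contrib SURF module, pinhole intrinsics (fx, fy, cx, cy), a depth image registered to the RGB frame in meters, and an IMU-derived relative rotation R_imu between the two frames. All function names, thresholds, and parameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the pipeline in the abstract (illustrative, not the authors' code).
# Assumes opencv-contrib (cv2.xfeatures2d), depth registered to RGB in meters, and an
# IMU-derived relative rotation R_imu (3x3) between frames k and k+1.
import numpy as np
import cv2


def backproject(uv, depth, fx, fy, cx, cy):
    """Lift a pixel (u, v) to a 3-D point using the depth image."""
    u, v = uv
    z = float(depth[int(v), int(u)])
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])


def kabsch(P, Q):
    """Least-squares rigid transform (R, t) with Q ~ R @ P + t, via SVD."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp


def estimate_motion(rgb0, depth0, rgb1, depth1, R_imu, intrinsics,
                    motion_thresh=0.05, iters=200, inlier_tol=0.02):
    fx, fy, cx, cy = intrinsics

    # SURF feature detection and matching on the two consecutive RGB images.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    g0 = cv2.cvtColor(rgb0, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(rgb1, cv2.COLOR_BGR2GRAY)
    kp0, des0 = surf.detectAndCompute(g0, None)
    kp1, des1 = surf.detectAndCompute(g1, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des0, des1)

    # 3-D feature points from matched SURF keypoints plus depth.
    P0 = np.array([backproject(kp0[m.queryIdx].pt, depth0, fx, fy, cx, cy)
                   for m in matches])
    P1 = np.array([backproject(kp1[m.trainIdx].pt, depth1, fx, fy, cx, cy)
                   for m in matches])

    # Rotate frame-k points by the IMU rotation so both sets share the same
    # rotation component; large residual motion vectors flag dynamic points.
    motion = np.linalg.norm(P1 - P0 @ R_imu.T, axis=1)
    static = motion < motion_thresh
    P0s, P1s = P0[static], P1[static]
    if len(P0s) < 3:
        raise RuntimeError("not enough static feature points")

    # RANSAC over minimal 3-point samples, scoring with the 3-D alignment error.
    best = (np.eye(3), np.zeros(3), -1)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        idx = rng.choice(len(P0s), size=3, replace=False)
        R, t = kabsch(P0s[idx], P1s[idx])
        err = np.linalg.norm(P1s - (P0s @ R.T + t), axis=1)
        n = int((err < inlier_tol).sum())
        if n > best[2]:
            best = (R, t, n)
    return best[0], best[1]  # rigid-body rotation and translation, frame k -> k+1
```

In this reading of the pipeline, the per-frame transforms would be chained over time to produce the odometry trajectory, and in a static scene the motion-vector filter passes all points through, reducing the method to standard feature-based RGB-D odometry.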
Cite this paper
Kim, D.H., Han, S.B., Kim, J.H. (2015). Visual Odometry Algorithm Using an RGB-D Sensor and IMU in a Highly Dynamic Environment. In: Kim, J.H., Yang, W., Jo, J., Sincak, P., Myung, H. (eds.) Robot Intelligence Technology and Applications 3. Advances in Intelligent Systems and Computing, vol. 345. Springer, Cham. https://doi.org/10.1007/978-3-319-16841-8_2
DOI: https://doi.org/10.1007/978-3-319-16841-8_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-16840-1
Online ISBN: 978-3-319-16841-8