
Determining Location and Detecting Changes Using a Single Training Video

  • Conference paper

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 1144))

Abstract

This paper proposes a new approach to finding a robot’s current location and detecting changes in its path using monocular vision. A single obstacle-free training video is first recorded and saved. A moving robot can then use its camera to find its current location within its path by matching current frames against those from the training video. This frame-to-frame matching is performed using extracted feature points. Once a match is found, the corresponding frames are aligned (registered) using a homography calculated from the matched feature points, which compensates for viewpoint changes between the observed and saved frames. Finally, we compare the regions of interest (ROIs) of the aligned frames using their colour histograms. We carried out seventeen tests using this approach. The videos, for both training and testing, were recorded with an off-the-shelf phone camera while walking down different paths. Four tests were performed in an outdoor environment and thirteen in an indoor environment. Our tests have shown excellent results, with an accuracy above 95% for most of them, both for finding the robot’s location and for detecting obstacles in the robot’s path. The training and testing videos were realistic and very challenging, consisting of a mix of indoor and outdoor environments with cluttered backgrounds, repetitive floor textures and glare.



Author information

Correspondence to Ryan Bluteau.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Bluteau, R., Boufama, B., Habashi, P. (2020). Determining Location and Detecting Changes Using a Single Training Video. In: Djeddi, C., Jamil, A., Siddiqi, I. (eds) Pattern Recognition and Artificial Intelligence. MedPRAI 2019. Communications in Computer and Information Science, vol 1144. Springer, Cham. https://doi.org/10.1007/978-3-030-37548-5_6

  • DOI: https://doi.org/10.1007/978-3-030-37548-5_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-37547-8

  • Online ISBN: 978-3-030-37548-5

  • eBook Packages: Computer Science (R0)
