
Difference-in-level Detection from RGB-D Images

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13599)

Abstract

Most robots implicitly assume that the road surface on which they move is flat and free of differences in level. Detecting differences in level on roads helps robots move safely without getting stuck or falling. Although several studies address detecting differences in level in RGB or RGB-D images, directly finding only the differences in level on roads is difficult because such differences are abundant and varied in type. This paper presents a new method for detecting differences in level from RGB-D images obtained with a modern smartphone equipped with a high-performance depth camera. First, we extract a subset of the differences in level on roads by finding changes of the normal vector along the contour of the detected plane. Then, a deep learning model trained on a dataset created from the extracted image patches is used to detect all differences in level in outdoor images. To evaluate the effectiveness of the proposed method, quantitative and qualitative comparisons with existing methods were conducted, and the results for various inputs were evaluated both qualitatively and quantitatively. We verified that the proposed method can detect all differences in level in an image, even in complex scenes where existing methods fail.
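The abstract outlines a two-stage pipeline: a geometric stage that flags candidate differences in level from changes of the surface normal around a detected road plane, followed by a deep learning model trained on image patches extracted at those candidates. The snippet below is a minimal sketch of the geometric stage only, written for illustration under the assumption that the road plane has already been found (e.g. by RANSAC) and that depth is back-projected with a pinhole camera model; the function names, the fixed angle threshold, and the NumPy-only implementation are illustrative choices, not the authors' code.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters, shape HxW) into an HxWx3 point map."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

def normal_map(points):
    """Per-pixel surface normals from finite differences of the point map."""
    dx = np.gradient(points, axis=1)   # derivative along image columns
    dy = np.gradient(points, axis=0)   # derivative along image rows
    n = np.cross(dx, dy)
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-9
    return n

def level_change_mask(normals, ground_normal, angle_deg=20.0):
    """Flag pixels whose normal deviates from the road-plane normal
    by more than a threshold angle; these are level-difference candidates."""
    cos_t = np.cos(np.deg2rad(angle_deg))
    dots = np.abs(normals @ np.asarray(ground_normal))
    return dots < cos_t

# Example usage (hypothetical intrinsics and a RANSAC-estimated plane normal):
# points  = depth_to_points(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
# normals = normal_map(points)
# mask    = level_change_mask(normals, ground_normal=[0.0, -1.0, 0.0])
```

In the paper, candidates found this way are not the final output; they are used to build a patch dataset on which the deep learning model is trained to detect all differences in level in outdoor images.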



Author information


Corresponding author

Correspondence to Yusuke Nonaka.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Nonaka, Y., Uchiyama, H., Saito, H., Yachida, S., Iwamoto, K. (2022). Difference-in-level Detection from RGB-D Images. In: Bebis, G., et al. Advances in Visual Computing. ISVC 2022. Lecture Notes in Computer Science, vol 13599. Springer, Cham. https://doi.org/10.1007/978-3-031-20716-7_31

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-20716-7_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20715-0

  • Online ISBN: 978-3-031-20716-7

  • eBook Packages: Computer Science, Computer Science (R0)
