A smart obstacle avoiding technology based on depth camera for blind and visually impaired people

  • Regular Paper
  • Published:
CCF Transactions on Pervasive Computing and Interaction

Abstract

It remains challenging to assist blind and visually impaired (BVI) individuals in outdoor travel. In this paper, we propose a set of low-cost wearable obstacle avoidance devices and introduce an obstacle detection algorithm called L-PointPillars, which operates on point cloud data and is suitable for edge devices. We first model the obstacles faced by BVI individuals during outdoor travel and then establish a mapping between the information space and the physical space based on point clouds. We then introduce depthwise separable convolution and attention mechanisms to develop L-PointPillars, a fast neural network for obstacle detection designed specifically for wearable obstacle detection devices. Finally, we implemented a wearable electronic travel aid device (WETAD) based on L-PointPillars on the Jetson Xavier NX. Experiments show that while L-PointPillars reduces the number of parameters in the original PointPillars by 75%, WETAD achieves an average obstacle detection accuracy of 95.3%. It takes an average of 144 milliseconds to process each frame during outdoor travel for BVI individuals, which is more than twice as fast as the SECOND network and a 31% improvement over PointPillars.
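The parameter reduction described above rests largely on swapping standard convolutions for depthwise separable ones in the 2D backbone. The following is a minimal PyTorch sketch of that idea, not the authors' implementation: the block layout, channel sizes, and the pseudo-image resolution are illustrative assumptions, and the attention mechanism is omitted.

```python
# Minimal sketch (not the paper's code): replacing a standard 3x3 convolution in a
# PointPillars-style 2D backbone with a depthwise separable convolution to cut
# parameters. Channel sizes and the 496x432 pseudo-image size are assumptions.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise convolution followed by a 1x1 pointwise convolution."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


if __name__ == "__main__":
    standard = nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False)
    separable = DepthwiseSeparableConv(64, 128)
    n_std = sum(p.numel() for p in standard.parameters())
    n_sep = sum(p.numel() for p in separable.parameters())
    # A pseudo-image as produced by a pillar feature encoder: (batch, C, H, W).
    x = torch.randn(1, 64, 496, 432)
    print(separable(x).shape)           # torch.Size([1, 128, 496, 432])
    print(n_std, n_sep, n_sep / n_std)  # separable block uses roughly 12% of the weights
```

In this toy comparison the standard 3x3 layer has 64 x 128 x 9 = 73,728 weights, while the separable block has 576 depthwise plus 8,192 pointwise weights, which illustrates where a large share of the reported 75% parameter reduction can come from.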



Data availability

The data that support the findings of this study are available in KITTI at https://www.cvlibs.net/datasets/kitti/index.php (Geiger et al. 2012).
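For readers who want to reproduce the evaluation, the sketch below shows one common way to read a KITTI Velodyne scan into the (N, 4) point cloud layout that detectors such as PointPillars consume. It assumes the standard KITTI .bin format (float32 quadruples of x, y, z, reflectance); the file path is a placeholder.

```python
# Minimal sketch (not from the paper): loading one KITTI Velodyne scan.
import numpy as np


def load_kitti_point_cloud(bin_path: str) -> np.ndarray:
    """Return an (N, 4) float32 array of [x, y, z, reflectance] points."""
    return np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)


if __name__ == "__main__":
    points = load_kitti_point_cloud("training/velodyne/000000.bin")  # placeholder path
    print(points.shape)  # (number_of_points, 4)
```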

References

  • Ahlmark, D.I., Fredriksson, H., Hyyppä, K.: Obstacle avoidance using haptics and a laser rangefinder. In: 2013 IEEE Workshop on Advanced Robotics and its Social Impacts, pp. 76–81. IEEE (2013)

  • Aladren, A., López-Nicolás, G., Puig, L., et al.: Navigation assistance for the visually impaired using RGB-D sensor with range expansion. IEEE Syst. J. 10(3), 922–932 (2014)

  • Aymaz, S., Çavdar, T.: Ultrasonic assistive headset for visually impaired people. In: 2016 39th International Conference on Telecommunications and Signal Processing (TSP), pp. 388–391. IEEE (2016)

  • Berenguel-Baeta, B., Guerrero-Viu, M., Nova, A., et al.: Floor extraction and door detection for visually impaired guidance. In: 2020 16th International Conference on Control, Automation, Robotics and Vision (ICARCV), pp. 1222–1229. IEEE (2020)

  • Bouhamed, S.A., Kallel, I.K., Masmoudi, D.S.: New electronic white cane for stair case detection and recognition using ultrasonic sensor. Int. J. Adv. Comput. Sci. Appl. (2013). https://doi.org/10.14569/IJACSA.2013.040633

  • Cheng, R., Wang, K., Yang, K., et al.: A ground and obstacle detection algorithm for the visually impaired (2015)

  • Costa, P., Fernandes, H., Barroso, J., et al.: Obstacle detection and avoidance module for the blind. In: 2016 World Automation Congress (WAC), pp. 1–6. IEEE (2016)

  • Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? The KITTI vision benchmark suite. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3354–3361. IEEE (2012)

  • Gharani, P., Karimi, H.A.: Context-aware obstacle detection for navigation by visually impaired. Image Vis. Comput. 64, 103–115 (2017)

  • He, J., Liu, X.: Ground obstacle detection technology based on fusion of RGB-D and inertial sensors. J. Comput. Aided Des. Comput. Graph. 34(2), 254–263 (2022) (in Chinese)

  • Jiang, B., Yang, J., Lv, Z., et al.: Wearable vision assistance system based on binocular sensors for visually impaired users. IEEE Internet Things J. 6(2), 1375–1383 (2018)

  • Jiao, Y., Gong, J., Shi, Y., et al.: The research on interactive experiences of graphical tactile displays for the visually impaired. J. Comput. Aided Des. Comput. Graph. 28(9), 1571–1576 (2016) (in Chinese)

  • Kumar, K., Champaty, B., Uvanesh, K., et al.: Development of an ultrasonic cane as a navigation aid for the blind people. In: 2014 International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT), pp. 475–479. IEEE (2014)

  • Lang, A.H., Vora, S., Caesar, H., et al.: PointPillars: fast encoders for object detection from point clouds. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12697–12705 (2019)

  • Lee, T.J., Yi, D.H., Cho, D.I.D.: A monocular vision sensor-based obstacle detection algorithm for autonomous robots. Sensors 16(3), 311 (2016)

  • Liu, W., Anguelov, D., Erhan, D., et al.: SSD: single shot multibox detector. In: European Conference on Computer Vision, pp. 21–37. Springer, Cham (2016)

  • Misra, D.: Mish: a self regularized non-monotonic neural activation function. arXiv preprint arXiv:1908.08681 (2019)

  • Orita, K., Takizawa, H., Aoyagi, M., et al.: Obstacle detection by the Kinect cane system for the visually impaired. In: Proceedings of the 2013 IEEE/SICE International Symposium on System Integration, pp. 115–118. IEEE (2013)

  • Pham, H.H., Thi-Lan, L., Vuillerme, N.: Real-time obstacle detection system in indoor environment for the visually impaired using Microsoft Kinect sensor. J. Sens. (2016)

  • Rodríguez, A., Yebes, J.J., Alcantarilla, P.F., et al.: Assisting the visually impaired: obstacle detection and warning system by acoustic feedback. Sensors 12(12), 17476–17496 (2012)

  • Sen, A., Sen, K., Das, J.: Ultrasonic blind stick for completely blind people to avoid any kind of obstacles. In: 2018 IEEE SENSORS, pp. 1–4. IEEE (2018)

  • Tapu, R., Mocanu, B., Zaharia, T.: Real time static/dynamic obstacle detection for visually impaired persons. In: 2014 IEEE International Conference on Consumer Electronics (ICCE), pp. 394–395. IEEE (2014)

  • Wahab, M.H.A., Talib, A.A., Kadir, H.A., et al.: Smart cane: assistive cane for visually-impaired people. arXiv preprint arXiv:1110.5156 (2011)

  • Wang, Z., Zhao, X., She, H., et al.: Obstacle detection and obstacle avoidance of AGV based on binocular vision. Comput. Integr. Manuf. Syst. 24(2), 400–409 (2018) (in Chinese)

  • World Blind Union: China Country Report to World Blind Union Asia Pacific, General Assembly, Ulaanbaatar, Mongolia (2018). http://wbuap.org/archives/1416. Accessed 9 Oct 2018

  • World Health Organization: Blindness and visual impairment (2012). Available online: https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment

  • Zhang, D.: Seeing Eye Phone: a smart phone-based indoor localization and guidance system for the visually impaired. Mach. Vis. Appl. 25, 811–822 (2014)

  • Zhu, A., He, D., Luo, W., Chen, W.: Research on wearable guide robot based on binocular vision method. Mech. Des. Res. 32(5), 31–34 (2016) (in Chinese)

Acknowledgements

We thank all of our participants. We thank the reviewers for their helpful comments. This work is supported by the National Key Research and Development Plan under Grant No. 2020YFB2104402 and the Natural Science Foundation of China under Grant No. 52175493.

Author information

Corresponding author

Correspondence to Zhonghua Xiao.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

He, J., Song, X., Su, Y. et al. A smart obstacle avoiding technology based on depth camera for blind and visually impaired people. CCF Trans. Pervasive Comp. Interact. 5, 382–395 (2023). https://doi.org/10.1007/s42486-023-00136-7

  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s42486-023-00136-7

Keywords

Navigation