
The development of assisted-visually impaired people robot in the indoor environment based on deep learning


Abstract

Indoor positioning for visually impaired people strongly affects their daily lives in unknown indoor environments. This study designs a robot that helps blind users walk safely and navigate indoor environments with a single camera. A scene-classification method based on a proposed convolutional neural network framework is used to position the user in the indoor environment, and semantic segmentation through a depth camera is integrated to find the walkable road surface and guide the user. Compared with the traditional Wi-Fi triangulation positioning method, the proposed vision-based scene classification achieves a better average x-y coordinate position error of (9.25, 3.65). Experiments show that the designed robot can help visually impaired people navigate in unknown indoor environments.
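To make the positioning idea concrete: a CNN classifies a single camera frame into one of several known indoor zones, and each zone maps to a stored coordinate, so classification doubles as coarse localization. The abstract does not specify the paper's network, so the architecture, the number of zones, and the zone-to-coordinate lookup below are illustrative assumptions only, sketched in PyTorch.

```python
# Minimal sketch, NOT the paper's architecture: a generic small CNN
# that maps one RGB camera frame to one of N indoor zones.
import torch
import torch.nn as nn

class SceneClassifier(nn.Module):
    """Classifies a single RGB frame into one of `num_zones` indoor zones."""
    def __init__(self, num_zones: int = 8):  # zone count is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, 128, 1, 1)
        )
        self.head = nn.Linear(128, num_zones)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# Hypothetical lookup: each zone label corresponds to a stored (x, y)
# position, which is how classification yields a coordinate estimate.
zone_to_xy = {z: (float(z) * 2.0, float(z)) for z in range(8)}

model = SceneClassifier(num_zones=8)
logits = model(torch.randn(1, 3, 224, 224))   # one camera frame
zone = int(logits.argmax(dim=1))              # predicted indoor zone
x, y = zone_to_xy[zone]                       # coarse position estimate
```

Classification-as-positioning needs no radio infrastructure at all, which is what makes the head-to-head comparison against Wi-Fi triangulation in the abstract meaningful.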




Data availability

The datasets generated and/or analyzed during the current study are available in the COCO and ADE20K repositories, at http://cocodataset.org/#home and https://groups.csail.mit.edu/vision/datasets/ADE20K/, respectively.
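ADE20K supplies the per-pixel floor/road-surface labels that the segmentation step relies on. Below is a hedged sketch of how a walkable-floor mask and a coarse steering cue could be derived from a segmentation label map; the floor class index (commonly index 3, zero-based, in the 150-class ADE20K scene-parsing label set) and the left/right heuristic are illustrative assumptions, not the paper's method.

```python
# Hedged example: derive a walkable-floor mask from a per-pixel label
# map and turn it into a very coarse heading cue.
import numpy as np

FLOOR_CLASS = 3  # ASSUMED zero-based ADE20K "floor" index; verify locally

def walkable_mask(label_map: np.ndarray) -> np.ndarray:
    """Boolean mask of pixels labeled as floor."""
    return label_map == FLOOR_CLASS

def steering_hint(mask: np.ndarray) -> str:
    """Compare floor area in the left/right image halves (toy heuristic)."""
    h, w = mask.shape
    left, right = mask[:, : w // 2].sum(), mask[:, w // 2 :].sum()
    if left + right == 0:
        return "stop"          # no walkable surface detected
    return "left" if left > right else "right"

labels = np.random.randint(0, 150, size=(240, 320))  # stand-in label map
print(steering_hint(walkable_mask(labels)))
```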

References

  1. ADE20K (n.d.) https://groups.csail.mit.edu/vision/datasets/ADE20K/. Accessed 28 Jun 2019

  2. Badrinarayanan V, Kendall A, Cipolla R (2017) SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans Pattern Anal Mach Intell 39(12):2481–2495

  3. Bharathi S, Ramesh A, Vivek S (2012) Effective navigation for visually impaired by wearable obstacle avoidance system. In: 2012 International Conference on Computing, Electronics and Electrical Technologies (ICCEET), pp 956–958. https://doi.org/10.1109/ICCEET.2012.6203916

  4. Bourbakis N, Kavraki D (2005) A 2D vibration array for sensing dynamic changes and 3D space for blinds’ navigation. In: Fifth IEEE Symposium on Bioinformatics and Bioengineering (BIBE’05), pp 222–226. https://doi.org/10.1109/BIBE.2005.1

  5. Chen L, Papandreou G, Kokkinos I, Murphy K, Yuille AL (2018) DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans Pattern Anal Mach Intell 40(4):834–848

  6. COCO (n.d.) http://cocodataset.org/#home. Accessed 28 Jun 2019

  7. Costilla-Reyes O, Namuduri K (2014) Dynamic Wi-Fi fingerprinting indoor positioning system. In: 2014 International Conference on Indoor Positioning and Indoor Navigation (IPIN), pp 271–280. https://doi.org/10.1109/IPIN.2014.7275493

  8. Dakopoulos D, Boddhu SK, Bourbakis N (2007) A 2D Vibration array as an assistive device for visually impaired. In: 2007 IEEE 7th International Symposium on BioInformatics and BioEngineering, pp 930–937. https://doi.org/10.1109/BIBE.2007.4375670

  9. Du Y, Czarnecki WM, Jayakumar SM, Farajtabar M, Pascanu R, Lakshminarayanan B (2020) Adapting auxiliary losses using gradient similarity. https://doi.org/10.48550/arXiv.1812.02224

  10. El Lahib M, Tekli J, Issa YB (2018) Evaluating Fitts’ law on vibrating touch-screen to improve visual data accessibility for blind users. Int J Human-Comput Stud 112:16–27. https://doi.org/10.1016/j.ijhcs.2018.01.005


  11. Girshick R, Donahue J, Darrell T, Malik J (2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp 580–587. https://doi.org/10.1109/CVPR.2014.81

  12. Hart P, Nilsson NJ, Raphael B (1968) A formal basis for the heuristic determination of minimum cost paths. IEEE Trans Syst Sci Cybern 4:100–107

  13. Hayat S, Kun S, Tengtao Z, Yu Y, Tu T, Du Y (2018) A deep learning framework using convolutional neural network for multi-class object recognition. In: 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC), Chongqing, pp 194–198

  14. He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 770–778. https://doi.org/10.1109/CVPR.2016.90

  15. Hsieh YZ, Lin SS, Xu FX (2020) Development of a wearable guide device based on convolutional neural network for blind or visually impaired persons. Multimed Tools Appl 79:29473–29491. https://doi.org/10.1007/s11042-020-09464-7


  16. Kumar N, Vámossy Z, Szabó-Resch ZM (2016) Heuristic approaches in robot navigation. In: 2016 IEEE 20th Jubilee International Conference on Intelligent Engineering Systems (INES), pp 219–222. https://doi.org/10.1109/INES.2016.7555123

  17. Lee N, Han D (2017) Magnetic indoor positioning system using deep neural network. In: 2017 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Sapporo, pp 1–8

  18. Lin G, Milan A, Shen C, Reid I (2017) RefineNet: multi-path refinement networks for high-resolution semantic segmentation. In: 2017 IEEE conference on computer vision and pattern recognition (CVPR), Honolulu, HI, pp 5168–5177

  19. Naga Srinivasu P, Balas VE (2021) Self-learning network-based segmentation for real-time brain M.R. images through HARIS. PeerJ Comput Sci 7:e654. https://doi.org/10.7717/peerj-cs.654


  20. Redmon J, Farhadi A (2017) YOLO9000: Better, Faster, Stronger. In: 2017 IEEE conference on computer vision and pattern recognition (CVPR), Honolulu, HI, pp 6517–6525

  21. Redmon J, Farhadi A (2018) YOLOv3: an incremental improvement. CoRR abs/1804.02767

  22. Redmon J, Divvala S, Girshick R, Farhadi A (2016) You only look once: unified, real-time object detection. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, pp 779–788

  23. Ren S, He K, Girshick R, Sun J (2017) Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell 39(6):1137–1149

  24. Santos ADPD, Suzuki AHG, Medola FO, Vaezipour A (2021) A systematic review of wearable devices for orientation and mobility of adults with visual impairment and blindness. IEEE Access 9:162306–162324. https://doi.org/10.1109/ACCESS.2021.3132887


  25. Shelhamer E, Long J, Darrell T (2017) Fully convolutional networks for semantic segmentation. IEEE Trans Pattern Anal Mach Intell 39(4):640–651

  26. Sokic E, Ferizbegovic M, Zubaca J, Softic K, Ahic-Djokic M (2015) Design of ultrasound-based sensory system for environment inspection robots. In: 2015 International Symposium ELMAR, Zadar, Croatia, 28–30 September

  27. Sonali KK, Dharmesh HS, Nishant MR (2010) Obstacle avoidance for a mobile exploration robot using a single ultrasonic range sensor. In: INTERACT-2010, pp 8–11. https://doi.org/10.1109/INTERACT.2010.5706156

  28. Srinivasu PN, Bhoi AK, Jhaveri RH et al (2021) Probabilistic deep Q network for real-time path planning in censorious robotic procedures using force sensors. J Real-Time Image Proc 18:1773–1785. https://doi.org/10.1007/s11554-021-01122-x

  29. Swaminathan R, Nischt M, Kuhnel C (2008) Localization based object recognition for smart home environments. In: 2008 IEEE International Conference on Multimedia and Expo, pp 921–924. https://doi.org/10.1109/ICME.2008.4607586

  30. Tekli J, Issa YB, Chbeir R (2018) Evaluating touch-screen vibration modality for blind users to access simple shapes and graphics. Int J Human-Comput Stud 110:115–133. https://doi.org/10.1016/j.ijhcs.2017.10.009

  31. Wang P et al (2018) Understanding convolution for semantic segmentation. In: 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, pp 1451–1460

  32. Xu Y, Wang Y, Ma L (2010) A novel WLAN indoor positioning algorithm based on positioning characteristics extraction. In: 2010 Fourth International Conference on Genetic and Evolutionary Computing, Shenzhen, pp 134–137

  33. Zhao H, Shi J, Qi X, Wang X, Jia J (2017) Pyramid scene parsing network. In: 2017 IEEE conference on computer vision and pattern recognition (CVPR), Honolulu, HI, pp 6230–6239


Acknowledgments

This paper was partly supported by the Ministry of Science and Technology, Taiwan, under grants MOST 110-2221-E-019-051, 109-2221-E-019-057, 110-2634-F-019-001, and 110-2634-F-008-005.

Author information


Corresponding author

Correspondence to Shih-Syun Lin.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Hsieh, YZ., Ku, XL. & Lin, SS. The development of assisted-visually impaired people robot in the indoor environment based on deep learning. Multimed Tools Appl 83, 6555–6578 (2024). https://doi.org/10.1007/s11042-023-15644-y



  • DOI: https://doi.org/10.1007/s11042-023-15644-y

Keywords

Navigation