
Real Time Multi Object Detection for Blind Using Single Shot Multibox Detector

Published in: Wireless Personal Communications

Abstract

According to world health statistics, 285 million of the world's 7.6 billion people suffer from visual impairment; that is, roughly 4 out of every 100 people. Loss of vision severely restricts a person's mobility, so there is a need for a dedicated device that serves as a guiding aid. This paper proposes a prototype that performs real-time object detection using image segmentation and a deep neural network. The detected object, its position with respect to the person, and the detection accuracy are then conveyed to the blind user through speech output. The work combines the Single Shot MultiBox Detector (SSD) framework with the MobileNet architecture to achieve rapid, real-time multi-object detection in a compact, portable device with minimal response time.
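To make the described pipeline concrete, the following minimal Python sketch shows one way such a prototype could be wired together. It assumes the publicly available MobileNet-SSD Caffe model (file names MobileNetSSD_deploy.prototxt and MobileNetSSD_deploy.caffemodel), OpenCV's dnn module, and the pyttsx3 text-to-speech library; these implementation details are illustrative assumptions and are not taken from the paper itself.

    # Minimal sketch of the pipeline described in the abstract: a MobileNet-SSD
    # network detects objects in each camera frame, and the label, a coarse
    # position relative to the user, and the confidence are spoken aloud.
    # Model file names, the class list, thresholds and the pyttsx3 speech
    # engine are assumptions, not details from the paper.

    import cv2
    import pyttsx3

    # Standard 20-class PASCAL VOC labels used by the public MobileNet-SSD Caffe model.
    CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
               "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse",
               "motorbike", "person", "pottedplant", "sheep", "sofa", "train",
               "tvmonitor"]

    # Assumed file names for the pretrained model.
    net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                                   "MobileNetSSD_deploy.caffemodel")
    speaker = pyttsx3.init()

    def horizontal_position(x_center, frame_width):
        """Map the box centre to a coarse position relative to the user."""
        if x_center < frame_width / 3:
            return "on your left"
        if x_center > 2 * frame_width / 3:
            return "on your right"
        return "ahead of you"

    cap = cv2.VideoCapture(0)          # default camera
    while True:
        ok, frame = cap.read()
        if not ok:                     # loop ends when the camera stops delivering frames
            break
        h, w = frame.shape[:2]
        # SSD expects a fixed 300x300 input; the scale and mean values follow
        # the conventions of the public MobileNet-SSD Caffe model.
        blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                     0.007843, (300, 300), 127.5)
        net.setInput(blob)
        detections = net.forward()     # shape: (1, 1, N, 7)

        for i in range(detections.shape[2]):
            confidence = float(detections[0, 0, i, 2])
            if confidence < 0.5:       # assumed confidence threshold
                continue
            class_id = int(detections[0, 0, i, 1])
            x1, _, x2, _ = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
            label = CLASSES[class_id]
            where = horizontal_position((x1 + x2) / 2, w)
            # Speak the object, its rough position, and the detection confidence.
            speaker.say(f"{label} {where}, {int(confidence * 100)} percent sure")
            speaker.runAndWait()

The coarse left/centre/right mapping and the 0.5 confidence threshold are illustrative choices; a deployed aid would tune these to the camera geometry, the user's walking speed, and the acceptable rate of spoken prompts.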




Author information


Corresponding author

Correspondence to S. Sofana Reka.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Arora, A., Grover, A., Chugh, R. et al. Real Time Multi Object Detection for Blind Using Single Shot Multibox Detector. Wireless Pers Commun 107, 651–661 (2019). https://doi.org/10.1007/s11277-019-06294-1


Keywords

Navigation