
A Smart Glove for Visually Impaired People Who Attend to the Elections

A Proof of Concept Study

  • Original Research
  • Published:
SN Computer Science

Abstract

Visually impaired people encounter many difficulties in daily life, and participating in elections is one of them. Voting booths are not designed with visually impaired voters in mind, so an assistant must accompany them, and this assistant is usually a stranger to the voter. This arrangement compromises the secrecy and safety of the vote: the assistant may mark the ballot according to his or her own political view, and the voter's political preference is disclosed to others. In this study, a smart glove is designed to allow visually impaired people to vote in privacy. The glove carries a camera that recognizes political party logos and gives the user feedback as a voice message through a purpose-built phone application. The proposed solution relies on neural networks to recognize party logos while the user is voting. Convolutional Neural Networks (CNN) and Support Vector Machines (SVM) were applied and compared for classifying the logo images. On images captured with the Raspberry Pi Zero's camera, the CNN reached 98% accuracy on both the training and test sets, while the SVM reached 80% on both. Real-world experiments were conducted with the designed robotic hand, which carries a camera on the ring finger, a microcontroller on the back of the hand, and a power bank on the wrist. In these experiments, the camera captured an image of the ballot, the microcontroller running the embedded CNN model processed it, and the feedback was delivered to the user over Wi-Fi through the phone application. The accuracy of the real-time experiments is 96%. The simulation and real test results show that the developed system classifies the images accurately. Furthermore, with some modifications it could serve visually impaired people in applications beyond elections.
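The capture → classify → notify loop the abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function and label names (`capture_image`, `classify_logo`, `PARTY_LABELS`, `dummy_model`) are hypothetical stand-ins. In the real system, the image comes from the Raspberry Pi Zero camera on the ring finger, the classifier is the embedded CNN, and the message travels over Wi-Fi to the phone application, which reads it aloud.

```python
# Hypothetical sketch of the glove's capture -> classify -> notify loop.
# All names here are illustrative; the paper's system uses a camera on
# the ring finger, an embedded CNN on the microcontroller, and a phone
# app that speaks the result to the user.

PARTY_LABELS = ["party_a", "party_b", "party_c"]  # placeholder classes

def capture_image():
    # Stand-in for the ring-finger camera; returns a grayscale
    # image as a 2-D list of pixel values.
    return [[0] * 64 for _ in range(64)]

def classify_logo(image, model):
    # Stand-in for the embedded CNN: 'model' maps an image to one
    # score per class; argmax selects the predicted party label.
    scores = model(image)
    best = max(range(len(scores)), key=scores.__getitem__)
    return PARTY_LABELS[best]

def notify_user(label):
    # In the real system this message is sent over Wi-Fi to the phone
    # app, which reads it aloud; here we only format the message text.
    return f"The ballot shows the logo of {label}."

def vote_assist(model):
    image = capture_image()
    label = classify_logo(image, model)
    return notify_user(label)

# Dummy 'model' that always scores party_b highest, for demonstration.
dummy_model = lambda image: [0.1, 0.8, 0.1]
print(vote_assist(dummy_model))  # -> "The ballot shows the logo of party_b."
```

The key design point the sketch preserves is that classification happens on the device, so the voter's choice never has to be shown to another person; only the synthesized voice message leaves the glove.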






Corresponding author

Correspondence to Pinar Oguz Ekim.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Oguz Ekim, P., Ture, E., Karahan, S. et al. A Smart Glove for Visually Impaired People Who Attend to the Elections. SN COMPUT. SCI. 2, 312 (2021). https://doi.org/10.1007/s42979-021-00709-2


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s42979-021-00709-2
