
Voice Recognition Based System to Adapt Automatically the Readability Parameters of a User Interface

  • Conference paper
  • First Online:
Intelligent Systems and Applications (IntelliSys 2019)

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 1038))


Abstract

When a user interface (UI) is displayed on a screen, several parameters can be set to make it more readable: font size and type, colors, brightness, widgets, etc. The optimal settings are specific to each user; for example, dark backgrounds are better for many visually impaired people who are sensitive to glare. Adjusting the settings manually can be time-consuming and inefficient because of user subjectivity. The proposed approach optimizes them automatically by using a measure of reading performance. After a survey of existing set-ups for optimizing UIs, a new system is proposed, composed of a microphone with voice recognition and an optimization algorithm performing reinforcement learning (RL). The user reads a text aloud as it is displayed through the UI, and the reading performance criteria serve as the feedback adaptation signals. The UI parameters are modified while the user is reading, until an optimum is reached.
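The adaptation loop described above can be sketched as a simple multi-armed bandit: each candidate combination of UI settings is an arm, and the measured reading speed is the reward. The sketch below is purely illustrative and is not the paper's actual implementation; the candidate settings, the `simulated_reading_speed` reward model (a stand-in for the voice-recognition timing), and the epsilon-greedy choice of RL algorithm are all assumptions.

```python
import random

# Hypothetical candidate UI settings (the "arms" of the bandit).
SETTINGS = [
    {"font_size": 12, "dark_background": False},
    {"font_size": 16, "dark_background": False},
    {"font_size": 16, "dark_background": True},
    {"font_size": 20, "dark_background": True},
]

def simulated_reading_speed(setting):
    """Stand-in for the real measurement: in the proposed system, voice
    recognition would time how fast the user reads the displayed text.
    Here we simulate a glare-sensitive user in words per minute."""
    wpm = 120.0
    wpm += 2.0 * (setting["font_size"] - 12)           # larger font helps
    wpm += 15.0 if setting["dark_background"] else 0.0  # dark background helps
    return wpm + random.gauss(0, 2)                     # measurement noise

def adapt_ui(n_trials=200, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit over the candidate settings, using reading
    speed as the reward signal. Means start optimistic so every arm is
    tried at least once before the loop settles."""
    random.seed(seed)
    counts = [0] * len(SETTINGS)
    means = [200.0] * len(SETTINGS)  # optimistic initialization
    for _ in range(n_trials):
        if random.random() < epsilon:
            arm = random.randrange(len(SETTINGS))                        # explore
        else:
            arm = max(range(len(SETTINGS)), key=lambda a: means[a])      # exploit
        reward = simulated_reading_speed(SETTINGS[arm])
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # running mean
    best = max(range(len(SETTINGS)), key=lambda a: means[a])
    return SETTINGS[best]

print(adapt_ui())
```

Under this simulated user, the loop converges on the large-font, dark-background setting; in the real system the same loop would run online while the user reads, with each trial corresponding to a stretch of read-aloud text under one setting.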



Author information

Correspondence to Hélène Soubaras.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Soubaras, H. (2020). Voice Recognition Based System to Adapt Automatically the Readability Parameters of a User Interface. In: Bi, Y., Bhatia, R., Kapoor, S. (eds) Intelligent Systems and Applications. IntelliSys 2019. Advances in Intelligent Systems and Computing, vol 1038. Springer, Cham. https://doi.org/10.1007/978-3-030-29513-4_12
