DOI: 10.1145/3520495.3520527

Gaze-Based Authentication Method Using Graphical Passwords Featuring Keypoints

Published: 15 September 2022

Abstract

We propose a gaze-based authentication method that uses a new type of graphical password. After registering keypoints in an image as a secret, the user can unlock the device by gazing at them in the registered order on a randomly displayed image. We present the design and implementation of our authentication method and report the results of a feasibility study on mobile devices. Our prototype implementation achieved an acceptance rate of 86.0%.
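
The abstract does not spell out how a gaze attempt is compared against the registered secret, so the following is a minimal sketch of one plausible acceptance check, assuming the prototype maps each registered keypoint to its on-screen pixel position in the displayed image and accepts an attempt when every detected gaze fixation falls within a tolerance radius of the corresponding keypoint, in the registered order. The function name, tolerance value, and coordinate format are illustrative assumptions, not details taken from the paper.

# Sketch only: ordered keypoint matching for a gaze-based unlock attempt.
# Assumes keypoints and fixations are (x, y) pixel coordinates on the
# currently displayed image; the tolerance is a made-up example value.
from math import dist

def verify_attempt(registered_keypoints, fixations, tolerance_px=60.0):
    """Accept only if each fixation hits the matching keypoint, in order."""
    if len(fixations) != len(registered_keypoints):
        return False
    return all(
        dist(fixation, keypoint) <= tolerance_px
        for fixation, keypoint in zip(fixations, registered_keypoints)
    )

# Hypothetical usage: a three-keypoint secret and the fixations detected
# during an unlock attempt.
secret = [(120.0, 340.0), (410.0, 95.0), (260.0, 520.0)]
attempt = [(131.5, 352.0), (402.0, 101.0), (255.0, 533.0)]
print(verify_attempt(secret, attempt))  # True when every fixation is close enough

A working system would also need gaze estimation and fixation detection in front of this check; the sketch covers only the final comparison step.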

Published In

OzCHI '21: Proceedings of the 33rd Australian Conference on Human-Computer Interaction
November 2021
361 pages

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Gaze Input
  2. Gaze-Based Authentication
  3. Graphical Passwords
  4. Keypoints

Qualifiers

  • Short-paper
  • Research
  • Refereed limited

Conference

OzCHI '21
OzCHI '21: 33rd Australian Conference on Human-Computer Interaction
November 30 - December 2, 2021
Melbourne, VIC, Australia

Acceptance Rates

Overall Acceptance Rate 362 of 729 submissions, 50%
