
Learning to mimic programmers gaze behavior for program comprehension improvement

  • Original Article
  • Published in Artificial Life and Robotics

Abstract

The gaze behavior of human coders could help improve programmer-aiding tools that rely on program comprehension algorithms, because gaze reveals which subsets of source code programmers focus on to understand its function. When real gaze data are unavailable, algorithmic gaze behavior estimation can be integrated into a programmer-aiding tool pipeline to provide the same benefit. The objective of this paper is to implement and train an algorithmic solution that generates trajectories describing gaze behavior, and to illustrate how it can enhance an automatic program comprehension task. Although our approach has limitations and room for improvement, we successfully implemented an automatic gaze behavior generation algorithm that performs visibly better than a random policy, and demonstrated that a trained gaze model can improve a simple program comprehension task. This implies that artificially generated gaze trajectories could directly improve automatic program comprehension inside programmer-aiding software, without requiring an eye tracking device every time the software is applied to a new code snippet. Workflows similar to ours could enhance programmer-aiding tools, and could also benefit society more broadly by offering new insights toward building AI with attention mechanisms.
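The abstract describes the pipeline only at a high level; the actual model and training procedure are given in the paper itself. Purely as a hypothetical illustration of that kind of workflow, the Python sketch below assumes a trained gaze policy that scores code-token embeddings, samples a fixation trajectory from those scores, and converts fixation counts into per-token weights for a downstream comprehension model. Every name in it (GazePolicy, fixation_weights, the embedding dimensions) is invented for this sketch and is not taken from the paper.

```python
# Hypothetical sketch: generate a gaze trajectory over code tokens and use
# fixation counts to weight token embeddings for a simple comprehension task.
# The names and the toy "policy" are illustrative only; the paper's model is
# trained on recorded eye-tracking data rather than random weights.

import numpy as np

rng = np.random.default_rng(0)


class GazePolicy:
    """Toy stand-in for a trained gaze model: scores each token and samples
    a fixation sequence biased toward high-scoring tokens."""

    def __init__(self, dim: int):
        self.w = rng.normal(size=dim)  # would be learned from human gaze data

    def trajectory(self, token_embeddings: np.ndarray, length: int = 20) -> list[int]:
        scores = token_embeddings @ self.w
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        return rng.choice(len(probs), size=length, p=probs).tolist()


def fixation_weights(trajectory: list[int], n_tokens: int) -> np.ndarray:
    """Turn a fixation sequence into per-token weights (normalized counts)."""
    counts = np.bincount(trajectory, minlength=n_tokens).astype(float)
    return counts / counts.sum()


# Example: 12 code tokens represented by 8-dimensional embeddings.
tokens = rng.normal(size=(12, 8))
policy = GazePolicy(dim=8)
traj = policy.trajectory(tokens)
weights = fixation_weights(traj, n_tokens=len(tokens))

# A gaze-weighted code representation that a downstream comprehension model
# (e.g., a summarizer or classifier) could consume instead of a plain average.
gaze_weighted_repr = weights @ tokens
print(traj)
print(gaze_weighted_repr.shape)  # (8,)
```

In such a setup, the gaze-weighted representation replaces a plain average of token embeddings, which is one simple way generated gaze trajectories could bias a comprehension model toward the tokens human programmers tend to fixate.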



Acknowledgements

We would like to express our great appreciation to Dr. Yoshiharu Ikutani and Dr. Eiji Uchibe for their valuable comments that greatly improved the study and manuscript.

Author information


Corresponding author

Correspondence to Jeanne Barthélemy.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article


Cite this article

Barthélemy, J., Kubo, T., Itoh, T.D. et al. Learning to mimic programmers gaze behavior for program comprehension improvement. Artif Life Robotics 28, 295–306 (2023). https://doi.org/10.1007/s10015-023-00868-w

