Abstract
Deep learning models have demonstrated favorable performance on many medical image classification tasks. However, they rely on expensive hand-labeled datasets that are time-consuming to create. In this work, we explore a new supervision source for training deep learning models: gaze data that is passively and cheaply collected during a clinician’s workflow. We focus on three medical imaging tasks, including classifying chest X-ray scans for pneumothorax and brain MRI slices for metastasis; we curated gaze data for two of these tasks. The gaze data consists of a sequence of fixation locations on the image from an expert trying to identify an abnormality, and thus contains rich information about the image that can serve as a powerful supervision source. We first identify a set of gaze features and show that they indeed carry class-discriminative information. We then propose two methods for incorporating gaze features into deep learning pipelines. When no task labels are available, we combine multiple gaze features to extract weak labels and use them as the sole source of supervision (Gaze-WS). When task labels are available, we use the gaze features as auxiliary task labels in a multi-task learning framework (Gaze-MTL). On three medical image classification tasks, our Gaze-WS method, without any task labels, comes within 5 AUROC points (1.7 precision points) of models trained with task labels. With task labels, our Gaze-MTL method can improve performance by 2.4 AUROC points (4 precision points) over multiple baselines.
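To make the two mechanisms in the abstract concrete, here is a minimal sketch of the Gaze-WS idea: deriving a weak binary label from simple scanpath statistics. The specific features (total fixation time, spatial dispersion) and thresholds below are illustrative assumptions, not the paper's actual labeling functions.

```python
import numpy as np

def gaze_weak_label(fixations, time_thresh=3.0, disp_thresh=0.15):
    """Hypothetical weak labeler: long, spatially concentrated scanpaths
    are treated as evidence of an abnormality. Thresholds are illustrative."""
    fix = np.asarray(fixations, dtype=float)    # rows of (x, y, duration)
    total_time = fix[:, 2].sum()                # overall dwell time on the image
    dispersion = fix[:, :2].std(axis=0).mean()  # spatial spread of fixations
    return int(total_time > time_thresh and dispersion < disp_thresh)
```

For the Gaze-MTL setting, a minimal PyTorch sketch of hard parameter sharing follows, assuming a ResNet backbone and a small set of scalar gaze features; the class name, feature count, and loss weight are hypothetical.

```python
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class GazeMTL(nn.Module):
    """Shared encoder with a main task head and an auxiliary gaze-feature head."""
    def __init__(self, num_gaze_features: int = 4):
        super().__init__()
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()  # expose pooled image features
        self.encoder = backbone
        self.task_head = nn.Linear(feat_dim, 1)                  # abnormality logit
        self.gaze_head = nn.Linear(feat_dim, num_gaze_features)  # predicts gaze features

    def forward(self, x):
        z = self.encoder(x)
        return self.task_head(z), self.gaze_head(z)

def gaze_mtl_loss(task_logit, task_label, gaze_pred, gaze_target, aux_weight=0.1):
    # Main task: binary cross-entropy; auxiliary gaze regression: MSE.
    bce = F.binary_cross_entropy_with_logits(task_logit.squeeze(1), task_label.float())
    mse = F.mse_loss(gaze_pred, gaze_target)
    return bce + aux_weight * mse
```

A design note on this pattern: the auxiliary gaze head is used only during training, so at inference time the model needs only the image and the task head; gaze data is required solely while the model is being trained.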
Notes
1. Our two novel datasets and code are available at https://github.com/HazyResearch/observational.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Saab, K., et al. (2021). Observational Supervision for Medical Image Classification Using Gaze Data. In: de Bruijne, M., et al. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. Lecture Notes in Computer Science, vol 12902. Springer, Cham. https://doi.org/10.1007/978-3-030-87196-3_56
DOI: https://doi.org/10.1007/978-3-030-87196-3_56
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-87195-6
Online ISBN: 978-3-030-87196-3
eBook Packages: Computer Science (R0)