Abstract
Out-of-distribution (OOD) generalization poses a serious challenge for modern deep learning (DL). OOD data is test data that differs significantly from the model's training data. DL models that perform well on in-domain test data can struggle on OOD data. Overcoming this discrepancy is essential to the reliable deployment of DL. Proper model calibration reduces the number of spurious connections made between model features and class outputs. Hence, calibrated DL can improve OOD generalization by learning only features that are truly indicative of their respective classes. Previous work proposed domain-aware model calibration (DOMINO) to improve DL calibration, but it lacks designs for model generalizability to OOD data. In this work, we propose DOMINO++, a dual-guidance, dynamic domain-aware loss regularization focused on OOD generalizability. DOMINO++ integrates expert-guided and data-guided knowledge in its regularization. Unlike DOMINO, which imposes a fixed scaling factor and regularization rate, DOMINO++ uses a dynamic scaling factor and an adaptive regularization rate. Comprehensive evaluations compare DOMINO++ with DOMINO and a baseline model for head tissue segmentation from magnetic resonance images (MRIs) on OOD data. The OOD data consist of synthetic noisy and rotated datasets, as well as real data acquired with a different MRI scanner at a separate site. DOMINO++'s superior performance demonstrates its potential to improve the trustworthy deployment of DL on real clinical data.
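The abstract only sketches the method at a high level; as a rough, non-authoritative illustration, the snippet below shows one way a domain-aware regularization term with dual guidance, a dynamic scaling factor, and an adaptive regularization rate could be added to a segmentation loss in PyTorch. The penalty matrices `W_expert` and `W_data`, the mixing weight `alpha`, and the linear `beta` schedule are hypothetical placeholders, not the authors' exact formulation (see the DOMINO and DOMINO++ papers for the actual design).

```python
import torch
import torch.nn.functional as F

def domain_aware_loss(logits, target, W_expert, W_data,
                      epoch, max_epochs, alpha=0.5, beta_max=1.0):
    """Hypothetical sketch of a DOMINO++-style regularized segmentation loss.

    logits:   (N, C, spatial...) raw network outputs
    target:   (N, spatial...) integer class labels
    W_expert: (C, C) expert-guided (e.g., hierarchy-based) penalty matrix
    W_data:   (C, C) data-guided (e.g., confusion-based) penalty matrix
    """
    # Standard cross-entropy segmentation term.
    ce = F.cross_entropy(logits, target)

    # Dual guidance: blend expert- and data-derived penalty matrices.
    W = alpha * W_expert + (1.0 - alpha) * W_data

    # Adaptive regularization rate: a simple linear ramp over training
    # stands in for whatever schedule the paper actually uses.
    beta = beta_max * (epoch / max_epochs)

    # Domain-aware penalty: weight the average predicted class probabilities
    # by the inter-class penalty matrix.
    probs = torch.softmax(logits, dim=1)             # (N, C, spatial...)
    mean_probs = probs.flatten(2).mean(dim=(0, 2))   # (C,) average class probability
    penalty = (W @ mean_probs).sum() / W.numel()

    # Dynamic scaling: keep the penalty on a magnitude comparable to the CE term.
    scale = ce.detach() / (penalty.detach() + 1e-8)
    return ce + beta * scale * penalty
```

Roughly speaking, DOMINO builds its penalty matrix either from an expert-defined class hierarchy or from a pretrained model's confusion matrix; the blend, schedule, and scaling above only gesture at how DOMINO++ combines and adapts those two sources rather than reproducing its exact equations.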
References
Arjovsky, M., Bottou, L., Gulrajani, I., Lopez-Paz, D.: Invariant risk minimization. arXiv preprint arXiv:1907.02893 (2019). https://doi.org/10.48550/ARXIV.1907.02893
Ashburner, J.: SPM: a history. Neuroimage 62(2), 791–800 (2012)
Bertels, J., et al.: Optimizing the dice score and Jaccard index for medical image segmentation: theory and practice. In: Shen, D., et al. (eds.) MICCAI 2019. LNCS, vol. 11765, pp. 92–100. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32245-8_11
MONAI Consortium: MONAI: Medical Open Network for AI, March 2020. https://doi.org/10.5281/zenodo.6114127
Dinsdale, N.K., Bluemke, E., Sundaresan, V., Jenkinson, M., Smith, S.M., Namburete, A.I.: Challenges for machine learning in clinical translation of big data imaging studies. Neuron 110, 3866–3881 (2022)
Dosovitskiy, A., Djolonga, J.: You only train once: loss-conditional training of deep networks. In: International Conference on Learning Representations (2020)
Dubuisson, M.P., Jain, A.K.: A modified Hausdorff distance for object matching. In: Proceedings of 12th International Conference on Pattern Recognition, vol. 1, pp. 566–568. IEEE (1994)
Golatkar, A.S., Achille, A., Soatto, S.: Time matters in regularizing deep networks: weight decay and data augmentation affect early learning dynamics, matter little near convergence. In: Advances in Neural Information Processing Systems, vol. 32 (2019)
Gudbjartsson, H., Patz, S.: The Rician distribution of noisy MRI data. Magn. Reson. Med. 34(6), 910–914 (1995)
Hatamizadeh, A., et al.: UNETR: transformers for 3D medical image segmentation. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 574–584 (2022)
Huttenlocher, D.P., Klanderman, G.A., Rucklidge, W.J.: Comparing images using the Hausdorff distance. IEEE Trans. Pattern Anal. Mach. Intell. 15(9), 850–863 (1993)
Jadon, S.: A survey of loss functions for semantic segmentation. In: 2020 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), pp. 1–7. IEEE (2020)
Kukačka, J., Golkov, V., Cremers, D.: Regularization for deep learning: a taxonomy. arXiv preprint arXiv:1710.10686 (2017)
Lee, J.H., Lee, C., Kim, C.S.: Learning multiple pixelwise tasks based on loss scale balancing. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5107–5116 (2021)
Runge, V.M., Osborne, M.A., Wood, M.L., Wolpert, S.M., Kwan, E., Kaufman, D.M.: The efficacy of tilted axial MRI of the CNS. Magn. Reson. Imaging 5(6), 421–430 (1987)
Saturnino, G.B., Puonti, O., Nielsen, J.D., Antonenko, D., Madsen, K.H., Thielscher, A.: SimNIBS 2.1: a comprehensive pipeline for individualized electric field modelling for transcranial brain stimulation. In: Makarov, S., Horner, M., Noetscher, G. (eds.) Brain and Human Body Modeling, pp. 3–25. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-21293-3_1
Sobol, W.T.: Recent advances in MRI technology: implications for image quality and patient safety. Saudi J. Ophthalmol. 26(4), 393–399 (2012)
Stolte, S.E., et al.: DOMINO: domain-aware model calibration in medical image segmentation. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds.) Medical Image Computing and Computer Assisted Intervention-MICCAI 2022: 25th International Conference, Singapore, 18–22 September 2022, Proceedings, Part V, pp. 454–463. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-16443-9_44
Tom, G., Hickman, R.J., Zinzuwadia, A., Mohajeri, A., Sanchez-Lengeling, B., Aspuru-Guzik, A.: Calibration and generalizability of probabilistic models on low-data chemical datasets with DIONYSUS. arXiv preprint arXiv:2212.01574 (2022)
Torralba, A., Efros, A.A.: Unbiased look at dataset bias. In: CVPR 2011, pp. 1521–1528 (2011). https://doi.org/10.1109/CVPR.2011.5995347
Wald, Y., Feder, A., Greenfeld, D., Shalit, U.: On calibration and out-of-domain generalization. Adv. Neural. Inf. Process. Syst. 34, 2215–2227 (2021)
Wolf, T., et al.: HuggingFace’s transformers: state-of-the-art natural language processing (2020)
Yang, J., Soltan, A.A., Clifton, D.A.: Machine learning generalizability across healthcare settings: insights from multi-site COVID-19 screening. npj Digit. Med. 5(1), 69 (2022)
Acknowledgements
This work was supported by the National Institutes of Health/National Institute on Aging, USA (NIA RF1AG071469, NIA R01AG054077), the National Science Foundation, USA (1908299), the Air Force Research Laboratory Munitions Directorate, USA (FA8651-08-D-0108 TO48), and NSF-AFRL INTERN Supplement to NSF IIS-1908299, USA.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Stolte, S.E. et al. (2023). DOMINO++: Domain-Aware Loss Regularization for Deep Learning Generalizability. In: Greenspan, H., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, vol 14223. Springer, Cham. https://doi.org/10.1007/978-3-031-43901-8_68
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-43900-1
Online ISBN: 978-3-031-43901-8