
  • Perspective

The importance of resource awareness in artificial intelligence for healthcare

Abstract

Artificial intelligence and machine learning (AI/ML) models have been adopted in a wide range of healthcare applications, from medical image computing and analysis to continuous health monitoring and management. Recent data demonstrate a clear trend: AI/ML model sizes, together with their computational complexity, memory consumption, required training-data scale and training costs, are growing exponentially. Developments in current computing hardware platforms, storage infrastructure, networking and domain expertise cannot keep up with this exponential growth in the resources demanded by AI/ML models. Here, we first analyse this recent trend and highlight the resulting resource sustainability issues in AI/ML for healthcare. We then present algorithm- and system-level innovations that can help address these issues. Finally, we outline future directions for tackling these resource sustainability issues proactively and prospectively.


Fig. 1: Unsustainable energy resource issues caused by the gap between model complexity and efficiency.
Fig. 2: Unsustainable computing power resource issues caused by the gap between model complexity and computing capacity.
Fig. 3: Unsustainable storage issue caused by the increasing data volume and resolution of medical data utilized in AI/ML model development.
Fig. 4: Unsustainable expert load on health data preparation for AI/ML model training.


Acknowledgements

J.C. is funded by the Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF, Germany) under funding reference 161L0272 and supported by the Ministry of Culture and Science of the State of North Rhine-Westphalia (Ministerium für Kultur und Wissenschaft des Landes Nordrhein-Westfalen, MKW NRW).

Author information

Corresponding authors

Correspondence to Xiaowei Xu or Yiyu Shi.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Machine Intelligence thanks Andrey Andreev and Fernando Martínez-Plumed for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Jia, Z., Chen, J., Xu, X. et al. The importance of resource awareness in artificial intelligence for healthcare. Nat Mach Intell 5, 687–698 (2023). https://doi.org/10.1038/s42256-023-00670-0

