
Leverage Supervised and Self-supervised Pretrain Models for Pathological Survival Analysis via a Simple and Low-cost Joint Representation Tuning

  • Conference paper
Resource-Efficient Medical Image Analysis (REMIA 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13543)


Abstract

Large-scale models pretrained on terabyte-scale data are now broadly used for feature extraction, model initialization, and transfer learning in pathological image analysis. Most existing studies have focused on developing ever more powerful pretrained models, an approach that is increasingly unaffordable for academic institutes. Very few, if any, studies have investigated how to take advantage of existing, yet heterogeneous, pretrained models for downstream tasks. For example, our experiments show that a self-supervised model (e.g., contrastive learning on the entire Cancer Genome Atlas (TCGA) dataset) achieved superior performance compared with a supervised model (e.g., ImageNet pretraining) on a classification cohort, yet, surprisingly, performed worse when transferred to a cancer prognosis task. This phenomenon inspired us to explore how to leverage already trained supervised and self-supervised models for pathological survival analysis. In this paper, we present a simple and low-cost joint representation tuning (JRT) approach that aggregates task-agnostic visual representations (supervised ImageNet-pretrained models) and pathology-specific representations (self-supervised TCGA-pretrained models) for downstream tasks. Our contribution is three-fold: (1) we adapt and aggregate classification-based supervised and self-supervised representations for survival prediction via joint representation tuning; (2) we conduct comprehensive analyses of prevalent pretraining strategies; and (3) joint representation tuning offers a simple, computationally efficient way to leverage large-scale pretrained models for both cancer diagnosis and prognosis. The proposed JRT method improved the c-index from 0.705 to 0.731 on the TCGA brain cancer survival dataset. The feature-direct JRT (f-JRT) variant achieved a 60× training speedup while maintaining a c-index of 0.707.
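To make the idea concrete, the following is a minimal PyTorch sketch of the kind of joint representation tuning described above: features from a supervised ImageNet-pretrained encoder and a self-supervised TCGA-pretrained encoder are concatenated and passed to a small Cox-style survival head, and only that head is tuned. The ResNet-18 backbones, the head design, and the checkpoint name mentioned in the comments are illustrative assumptions, not the authors' exact configuration.

    # Minimal sketch of joint representation tuning (JRT), assuming PyTorch/torchvision.
    # Two frozen pretrained encoders (one supervised on ImageNet, one self-supervised
    # on TCGA patches) produce features that are concatenated and fed to a small
    # Cox-style survival head. The TCGA checkpoint mentioned below is a hypothetical placeholder.
    import torch
    import torch.nn as nn
    import torchvision.models as models


    def build_encoder():
        """ResNet-18 backbone with the classifier removed (512-d patch features)."""
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        backbone.fc = nn.Identity()
        return backbone


    class JointRepresentationTuner(nn.Module):
        def __init__(self, dim=512):
            super().__init__()
            self.imagenet_enc = build_encoder()  # task-agnostic, supervised pretraining
            self.tcga_enc = build_encoder()      # in practice, load a self-supervised TCGA
                                                 # checkpoint here, e.g. "tcga_ssl_checkpoint.pt" (assumed)
            for p in list(self.imagenet_enc.parameters()) + list(self.tcga_enc.parameters()):
                p.requires_grad = False          # keep both encoders frozen; only the head is tuned
            self.head = nn.Sequential(nn.Linear(2 * dim, 256), nn.ReLU(), nn.Linear(256, 1))

        def forward(self, x):
            feat = torch.cat([self.imagenet_enc(x), self.tcga_enc(x)], dim=1)
            return self.head(feat).squeeze(-1)   # one log-hazard (risk) score per sample


    def neg_cox_partial_log_likelihood(risk, time, event):
        """Negative Cox partial log-likelihood; event=1 for an observed death, 0 if censored."""
        order = torch.argsort(time, descending=True)       # sort so each prefix is a risk set
        risk, event = risk[order], event[order].float()
        log_cum_hazard = torch.logcumsumexp(risk, dim=0)   # log sum of exp(risk) over the risk set
        return -((risk - log_cum_hazard) * event).sum() / event.sum().clamp(min=1.0)

A feature-level variant in the spirit of f-JRT could pre-extract and cache the two encoders' outputs once, then train only the joint head on the stored vectors, which is plausibly where most of the reported training-time savings come from. Performance is measured with the concordance index (c-index), the standard ranking metric for censored survival data.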



Author information


Corresponding author

Correspondence to Yuankai Huo.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Liu, Q. et al. (2022). Leverage Supervised and Self-supervised Pretrain Models for Pathological Survival Analysis via a Simple and Low-cost Joint Representation Tuning. In: Xu, X., Li, X., Mahapatra, D., Cheng, L., Petitjean, C., Fu, H. (eds) Resource-Efficient Medical Image Analysis. REMIA 2022. Lecture Notes in Computer Science, vol 13543. Springer, Cham. https://doi.org/10.1007/978-3-031-16876-5_8

  • DOI: https://doi.org/10.1007/978-3-031-16876-5_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16875-8

  • Online ISBN: 978-3-031-16876-5

