
Pathological Image Contrastive Self-supervised Learning

Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13543)

Abstract

Self-supervised learning methods have received wide attention in recent years, and contrastive learning has begun to show encouraging performance on many computer vision tasks. Contrastive learning methods build pre-training weights by crafting positive/negative sample pairs and optimizing their distances in the feature space. Although positive/negative samples are easy to construct on natural images, these methods cannot be applied directly to histopathological images because of the images' unique characteristics, such as staining invariance and vertical flip invariance. This paper proposes a general method for constructing clinical-equivalent positive sample pairs on histopathological images, enabling contrastive learning on such data. Results on the PatchCamelyon benchmark show that our method improves model accuracy by up to 6% while reducing training costs and reliance on labeled data.
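To make the idea concrete, below is a minimal sketch of how clinical-equivalent positive pairs might be constructed for histopathology patches and scored with a contrastive objective. The augmentation choices (colour jitter as a rough proxy for stain variation, vertical/horizontal flips for orientation invariance) and the SimCLR-style NT-Xent loss are illustrative assumptions, not the paper's exact recipe; the names pathology_augment, make_positive_pair and nt_xent_loss are hypothetical.

```python
# Hypothetical sketch: positive-pair construction for histopathology patches
# plus a standard NT-Xent contrastive loss. Two random views of the same patch
# form a positive pair; views of other patches in the batch act as negatives.
import torch
import torch.nn.functional as F
from torchvision import transforms

pathology_augment = transforms.Compose([
    transforms.RandomResizedCrop(96, scale=(0.5, 1.0)),
    transforms.RandomVerticalFlip(p=0.5),        # slides have no canonical "up"
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),  # crude stand-in for H&E stain variation
    transforms.ToTensor(),
])

def make_positive_pair(pil_patch):
    """Return two independently augmented views of one histopathology patch."""
    return pathology_augment(pil_patch), pathology_augment(pil_patch)

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style contrastive loss.

    z1, z2: (N, D) projections of the two views of the same N patches.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                         # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))                 # exclude self-similarity
    # The positive for index i is the other view of the same patch at i +/- N.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

In a pre-training loop, each unlabeled patch would pass through make_positive_pair, both views would be encoded and projected, and the batch loss would be computed with nt_xent_loss; the resulting weights then initialise the downstream classifier.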



Acknowledgements

This research was supported in part by the Foundation of Shenzhen Science and Technology Innovation Committee (JCYJ20180507181527806).

Author information


Corresponding author

Correspondence to Lin Luo.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Qin, W., Jiang, S., Luo, L. (2022). Pathological Image Contrastive Self-supervised Learning. In: Xu, X., Li, X., Mahapatra, D., Cheng, L., Petitjean, C., Fu, H. (eds) Resource-Efficient Medical Image Analysis. REMIA 2022. Lecture Notes in Computer Science, vol 13543. Springer, Cham. https://doi.org/10.1007/978-3-031-16876-5_9

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-16876-5_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16875-8

  • Online ISBN: 978-3-031-16876-5

  • eBook Packages: Computer Science, Computer Science (R0)
