A Fast Stain Normalization Network for Cervical Papanicolaou Images

  • Conference paper
Neural Information Processing (ICONIP 2022)

Abstract

The domain shift between different stain styles greatly challenges the generalization of computer-aided diagnosis (CAD) algorithms. To bridge this gap, color normalization is a prerequisite for most CAD algorithms. Existing algorithms with better normalization quality often demand more computation, which hinders their fast application to large medical stain slide images. This paper designs a fast normalization network (FTNC-Net) for cervical Papanicolaou stain images based on learnable bilateral filtering. In our FTNC-Net, explicit three-attribute estimation and spatially adaptive instance normalization are introduced to guide the model to transfer stain color styles accurately in space, and dynamic blocks are adopted to adapt to multiple stain color styles. Our method achieves at least 80 fps on 1024×1024 images on our experimental platform, so it can run in step with the scanner for image acquisition and processing, and it compares favorably with other methods in visual and quantitative evaluation. Moreover, experiments on our cervical staining image dataset demonstrate that FTNC-Net improves the precision of abnormal cell detection.


Notes

  1. https://github.com/xtarx/StainGAN.

References

  1. Abbas, A., Aster, J., Kumar, V.: Robbins Basic Pathology, 9th edn. Elsevier, Amsterdam (2012)

  2. Chen, K., et al.: MMDetection: OpenMMLab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155 (2019)

  3. Chen, X., et al.: An unsupervised style normalization method for cytopathology images. Comput. Struct. Biotechnol. J. 19, 3852–3863 (2021)

  4. Gatys, L.A., Ecker, A.S., Bethge, M.: Image style transfer using convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2414–2423 (2016)

  5. Gharbi, M., Chen, J., Barron, J.T., Hasinoff, S.W., Durand, F.: Deep bilateral learning for real-time image enhancement. ACM Trans. Graph. (TOG) 36(4), 1–12 (2017)

  6. Hong, K., Jeon, S., Yang, H., Fu, J., Byun, H.: Domain-aware universal style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 14609–14617 (2021)

  7. Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017)

  8. Huang, X., Liu, M.Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018)

  9. Kang, H., et al.: StainNet: a fast and robust stain normalization network. Front. Med. 8 (2021). https://doi.org/10.3389/fmed.2021.746307

  10. Li, Y., Fang, C., Yang, J., Wang, Z., Lu, X., Yang, M.H.: Universal style transfer via feature transforms. Adv. Neural Inf. Process. Syst. 30, 1–11 (2017)

  11. Li, Y., Liu, M.Y., Li, X., Yang, M.H., Kautz, J.: A closed-form solution to photorealistic image stylization. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 453–468 (2018)

  12. Lu, C., Gu, C., Wu, K., Xia, S., Wang, H., Guan, X.: Deep transfer neural network using hybrid representations of domain discrepancy. Neurocomputing 409, 60–73 (2020)

  13. Lu, C., Wang, H., Gu, C., Wu, K., Guan, X.: Viewpoint estimation for workpieces with deep transfer learning from cold to hot. In: Cheng, L., Leung, A.C.S., Ozawa, S. (eds.) ICONIP 2018. LNCS, vol. 11301, pp. 21–32. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-04167-0_3

  14. Lu, C., Xia, S., Huang, W., Shao, M., Fu, Y.: Circle detection by arc-support line segments. In: 2017 IEEE International Conference on Image Processing (ICIP), pp. 76–80. IEEE (2017)

  15. Lu, C., Xia, S., Shao, M., Fu, Y.: Arc-support line segments revisited: an efficient high-quality ellipse detection. IEEE Trans. Image Process. 29, 768–781 (2019)

  16. Macenko, M., et al.: A method for normalizing histology slides for quantitative analysis. In: 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 1107–1110. IEEE (2009)

  17. Nadeem, S., Hollmann, T., Tannenbaum, A.: Multimarginal Wasserstein barycenter for stain normalization and augmentation. In: Martel, A.L., et al. (eds.) MICCAI 2020. LNCS, vol. 12265, pp. 362–371. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-59722-1_35

  18. Raju, K.: Evolution of Pap stain. Biomed. Res. Therapy 3(2), 1–11 (2016)

  19. Reinhard, E., Adhikhmin, M., Gooch, B., Shirley, P.: Color transfer between images. IEEE Comput. Graph. Appl. 21(5), 34–41 (2001)

  20. Schulte, E.: Standardization of biological dyes and stains: pitfalls and possibilities. Histochemistry 95(4), 319 (1991)

  21. Shaban, M.T., Baur, C., Navab, N., Albarqouni, S.: StainGAN: stain style transfer for digital histological images. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), pp. 953–956. IEEE (2019)

  22. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)

  23. Tellez, D., et al.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Med. Image Anal. 58, 101544 (2019)

  24. Tian, Z., Shen, C., Chen, H., He, T.: FCOS: fully convolutional one-stage object detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9627–9636 (2019)

  25. Vahadane, A., et al.: Structure-preserving color normalization and sparse stain separation for histological images. IEEE Trans. Med. Imaging 35(8), 1962–1971 (2016)

  26. Wu, X., Lu, C., Gu, C., Wu, K., Zhu, S.: Domain adaptation for viewpoint estimation with image generation. In: 2021 International Conference on Control, Automation and Information Sciences (ICCAIS), pp. 341–346. IEEE (2021)

  27. Xia, X., et al.: Joint bilateral learning for real-time universal photorealistic style transfer. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020. LNCS, vol. 12353, pp. 327–342. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58598-3_20

  28. Yang, Z., Liu, S., Hu, H., Wang, L., Lin, S.: RepPoints: point set representation for object detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9657–9666 (2019)

  29. Yoo, J., Uh, Y., Chun, S., Kang, B., Ha, J.W.: Photorealistic style transfer via wavelet transforms. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9036–9045 (2019)

  30. Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2223–2232 (2017)


Author information

Correspondence to Kaijie Wu.

Appendix

Our network (FTNC-Net) is split into three branches: the low-resolution splatting (Lsp) branch, the autoencoder (Ae) branch, and the full-resolution guided branch (Fgd). The overall process is:

$$\begin{aligned} z&= AE(concat(Encoding(I_{c}),Encoding(I_{s}))) \qquad&\text {(10)}\\ \varGamma&= Lsp(I_{lc},I_{ls},attr_c,attr_s,z)&\text {(11)}\\ m&= Fgd(I_{c},z)&\text {(12)}\\ k,b&= GridSample(m,\varGamma )&\text {(13)}\\ I_o&= k*SpAdaIn(I_c,I_s,attr_c,attr_s)+b&\text {(14)} \end{aligned}$$

The input of Ae is a vector concatenating two stain style encodings, one from the full-resolution content image \(I_c\) and one from the style image \(I_s\). Lsp generates the bilateral grid \(\varGamma \) from the low-resolution content image \(I_{lc}\), the low-resolution style image \(I_{ls}\), the three-attribute estimates (\(attr_c\), \(attr_s\)) of \(I_c\) and \(I_s\), and the latent variable z. Fgd produces the guide map m from \(I_c\) and z. GridSample computes the linear transformation weights k and b by sampling \(\varGamma \) at positions given by the values of m and the pixel locations; this step, Eq. (13), is called slice and connects Lsp and Fgd. The output image \(I_o\) is obtained by applying the linear weights from Eq. (13) to the output of SpAdaIn; this step is called apply, where SpAdaIn reduces the influence of the stain color of \(I_c\).
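
For concreteness, here is a minimal PyTorch sketch of the slice and apply steps (Eqs. (13)–(14)). It is an illustration under stated assumptions, not the paper's exact implementation: we assume the bilateral grid stores two channels (scale k and offset b), a single-channel guide map, and a per-pixel scalar affine transform shared across color channels.

```python
import torch
import torch.nn.functional as F

def slice_and_apply(grid, guide, normalized):
    """Slice a bilateral grid with a guide map, then apply the resulting
    per-pixel affine transform (Eqs. (13)-(14)).

    grid:       (B, 2, D, Hg, Wg) bilateral grid; channel 0 = scale k,
                channel 1 = offset b (assumed layout)
    guide:      (B, 1, H, W) full-resolution guide map m, values in [0, 1]
    normalized: (B, C, H, W) output of SpAdaIn
    """
    B, _, D, Hg, Wg = grid.shape
    _, _, H, W = guide.shape
    # Sampling coordinates for grid_sample, all in [-1, 1]:
    # x, y come from the pixel position, z from the guide-map intensity.
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, H),
        torch.linspace(-1.0, 1.0, W),
        indexing="ij",
    )
    xs = xs.expand(B, H, W)
    ys = ys.expand(B, H, W)
    zs = guide.squeeze(1) * 2.0 - 1.0
    coords = torch.stack([xs, ys, zs], dim=-1).unsqueeze(1)  # (B, 1, H, W, 3)
    kb = F.grid_sample(grid, coords, align_corners=True)     # (B, 2, 1, H, W)
    k = kb[:, 0]                                             # (B, 1, H, W)
    b = kb[:, 1]
    return k * normalized + b                                # Eq. (14)

# Example: a 16-cell-deep 16x16 grid sliced over a 1024x1024 image.
grid = torch.randn(1, 2, 16, 16, 16)
guide = torch.rand(1, 1, 1024, 1024)
normalized = torch.rand(1, 3, 1024, 1024)
out = slice_and_apply(grid, guide, normalized)  # (1, 3, 1024, 1024)
```

Because the heavy network runs only at low resolution and the full-resolution work reduces to one trilinear lookup plus a multiply-add per pixel, this slice-and-apply design is what makes the reported throughput plausible.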

The detailed architecture of FTNC-Net is shown in Tables 4 and 5:

Table 4. Architecture of Lsp and Fgd
Table 5. Architecture of Ae
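
As a reference for the SpAdaIn term in Eq. (14), below is a minimal sketch of classic adaptive instance normalization (AdaIN) [7], which re-standardizes content features with the style features' channel statistics. The paper's spatially adaptive variant additionally conditions on the three-attribute estimates (\(attr_c\), \(attr_s\)); that conditioning is omitted here.

```python
import torch

def adain(content, style, eps=1e-5):
    """Classic AdaIN: shift and scale the content features so their
    channel-wise mean and std match those of the style features.
    content, style: (B, C, H, W)."""
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean
```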

Due to the constraint of GPU memory, StainGAN (Footnote 1) is run on \(320\times 320\) crops of the \(1024\times 1024\) images. As shown in Fig. 5, the discriminator in the GAN guides the generator to synthesize nonexistent neutrophils to match the style image, which changes the structure of the content image. The results of StainGAN also exhibit cross-color artifacts, since there is no color-consistency constraint.

Fig. 5. A sample from StainGAN.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Cao, J., Lu, C., Wu, K., Gu, C. (2023). A Fast Stain Normalization Network for Cervical Papanicolaou Images. In: Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A. (eds) Neural Information Processing. ICONIP 2022. Communications in Computer and Information Science, vol 1793. Springer, Singapore. https://doi.org/10.1007/978-981-99-1645-0_10

  • DOI: https://doi.org/10.1007/978-981-99-1645-0_10

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-1644-3

  • Online ISBN: 978-981-99-1645-0

  • eBook Packages: Computer Science (R0)
