Abstract
The domain shift between different stain-image styles greatly challenges the generalization of computer-aided diagnosis (CAD) algorithms. To bridge the gap, color normalization is a prerequisite for most CAD algorithms. Existing algorithms with better normalization quality often demand more computation, hindering their fast application to large medical stain slide images. This paper presents a fast normalization network (FTNC-Net) for cervical Papanicolaou stain images based on learnable bilateral filtering. In our FTNC-Net, explicit three-attribute estimation and spatially adaptive instance normalization are introduced to guide the model to transfer stain color styles accurately in space, and dynamic blocks are adopted to adapt to multiple stain color styles. Our method achieves at least 80 fps on 1024\(\times \)1024 images on our experimental platform, so it can run synchronously with the scanner during image acquisition and processing, and it compares favorably with other methods in visual and quantitative evaluation. Moreover, experiments on our cervical stain image dataset demonstrate that FTNC-Net improves the precision of abnormal cell detection.
References
Abbas, A., Aster, J., Kumar, V.: Robbins Basic Pathology, 9th edn. Elsevier, Amsterdam (2012)
Chen, K., et al.: MMDetection: OpenMMLab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155 (2019)
Chen, X., et al.: An unsupervised style normalization method for cytopathology images. Comput. Struct. Biotechnol. J. 19, 3852–3863 (2021)
Gatys, L.A., Ecker, A.S., Bethge, M.: Image style transfer using convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2414–2423 (2016)
Gharbi, M., Chen, J., Barron, J.T., Hasinoff, S.W., Durand, F.: Deep bilateral learning for real-time image enhancement. ACM Trans. Graph. (TOG) 36(4), 1–12 (2017)
Hong, K., Jeon, S., Yang, H., Fu, J., Byun, H.: Domain-aware universal style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 14609–14617 (2021)
Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017)
Huang, X., Liu, M.Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018)
Kang, H., et al.: StainNet: a fast and robust stain normalization network. Front. Med. 8 (2021). https://doi.org/10.3389/fmed.2021.746307
Li, Y., Fang, C., Yang, J., Wang, Z., Lu, X., Yang, M.H.: Universal style transfer via feature transforms. Adv. Neural Inf. Process. Syst. 30, 1–11 (2017)
Li, Y., Liu, M.Y., Li, X., Yang, M.H., Kautz, J.: A closed-form solution to photorealistic image stylization. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 453–468 (2018)
Lu, C., Gu, C., Wu, K., Xia, S., Wang, H., Guan, X.: Deep transfer neural network using hybrid representations of domain discrepancy. Neurocomputing 409, 60–73 (2020)
Lu, C., Wang, H., Gu, C., Wu, K., Guan, X.: Viewpoint estimation for workpieces with deep transfer learning from cold to hot. In: Cheng, L., Leung, A.C.S., Ozawa, S. (eds.) ICONIP 2018. LNCS, vol. 11301, pp. 21–32. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-04167-0_3
Lu, C., Xia, S., Huang, W., Shao, M., Fu, Y.: Circle detection by arc-support line segments. In: 2017 IEEE International Conference on Image Processing (ICIP), pp. 76–80. IEEE (2017)
Lu, C., Xia, S., Shao, M., Fu, Y.: Arc-support line segments revisited: an efficient high-quality ellipse detection. IEEE Trans. Image Process. 29, 768–781 (2019)
Macenko, M., et al.: A method for normalizing histology slides for quantitative analysis. In: 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 1107–1110. IEEE (2009)
Nadeem, S., Hollmann, T., Tannenbaum, A.: Multimarginal Wasserstein barycenter for stain normalization and augmentation. In: Martel, A.L., et al. (eds.) MICCAI 2020. LNCS, vol. 12265, pp. 362–371. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-59722-1_35
Raju, K.: Evolution of pap stain. Biomed. Res. Therapy 3(2), 1–11 (2016)
Reinhard, E., Adhikhmin, M., Gooch, B., Shirley, P.: Color transfer between images. IEEE Comput. Graph. Appl. 21(5), 34–41 (2001)
Schulte, E.: Standardization of biological dyes and stains: pitfalls and possibilities. Histochemistry 95(4), 319 (1991)
Shaban, M.T., Baur, C., Navab, N., Albarqouni, S.: StainGAN: stain style transfer for digital histological images. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), pp. 953–956. IEEE (2019)
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
Tellez, D., et al.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Med. Image Anal. 58, 101544 (2019)
Tian, Z., Shen, C., Chen, H., He, T.: FCOS: fully convolutional one-stage object detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9627–9636 (2019)
Vahadane, A., et al.: Structure-preserving color normalization and sparse stain separation for histological images. IEEE Trans. Med. Imaging 35(8), 1962–1971 (2016)
Wu, X., Lu, C., Gu, C., Wu, K., Zhu, S.: Domain adaptation for viewpoint estimation with image generation. In: 2021 International Conference on Control, Automation and Information Sciences (ICCAIS), pp. 341–346. IEEE (2021)
Xia, X., et al.: Joint bilateral learning for real-time universal photorealistic style transfer. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020. LNCS, vol. 12353, pp. 327–342. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58598-3_20
Yang, Z., Liu, S., Hu, H., Wang, L., Lin, S.: RepPoints: point set representation for object detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9657–9666 (2019)
Yoo, J., Uh, Y., Chun, S., Kang, B., Ha, J.W.: Photorealistic style transfer via wavelet transforms. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9036–9045 (2019)
Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2223–2232 (2017)
Appendix
Our network (FTNC-Net) is split into three branches: the low-resolution splatting (Lsp) branch, the autoencoder (Ae) branch, and the full-resolution guided (Fgd) branch. The overall process is as follows.
The input of Ae is a vector formed by concatenating the two stain style encodings of the full-resolution content image \(I_c\) and style image \(I_s\). Lsp generates the bilateral grid \(\varGamma \) from the low-resolution content image \(I_{lc}\), the low-resolution style image \(I_{ls}\), the three-attribute estimates (\(attr_c\), \(attr_s\)) of \(I_c\) and \(I_s\), and the latent variable z. Fgd produces the guide map m from \(I_c\) and z. GrideSample computes the linear transformation weights k and b from \(\varGamma \) using the values of m and the pixel locations; this operation, Eq. (13), is called slice and connects Lsp and Fgd. The output image \(I_o\) is obtained by applying the linear weights computed in Eq. (13) to the output of SpAdaIn, a process called apply, where SpAdaIn reduces the influence of the stain color of \(I_c\).
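The slice and apply steps above follow the learnable bilateral-grid scheme of Gharbi et al. The sketch below is a minimal PyTorch illustration, assuming each grid cell stores a \(3\times 4\) per-pixel affine color transform (weights k plus bias b) and the guide map m lies in [0, 1]; the function name and all tensor shapes are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn.functional as F

def slice_apply(grid, guide, content):
    """Slice a bilateral grid with a guide map, then apply the resulting
    per-pixel affine color transform to the full-resolution image.

    grid:    (B, 12, D, Hg, Wg)  bilateral grid; each cell holds a 3x4
             affine transform (3x3 weight k plus 3x1 bias b)
    guide:   (B, 1, H, W)        guide map m, assumed in [0, 1]
    content: (B, 3, H, W)        full-resolution input image
    """
    B, _, H, W = content.shape
    # Normalized sampling coordinates in [-1, 1]: (x, y) come from the
    # pixel location, z from the guide-map intensity (the "slice" lookup).
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H, device=content.device),
        torch.linspace(-1, 1, W, device=content.device),
        indexing="ij",
    )
    xs = xs.expand(B, 1, H, W)
    ys = ys.expand(B, 1, H, W)
    zs = guide * 2 - 1
    coords = torch.stack([xs, ys, zs], dim=-1)                # (B, 1, H, W, 3)
    # Trilinear interpolation into the grid yields per-pixel coefficients.
    coeffs = F.grid_sample(grid, coords, align_corners=True)  # (B, 12, 1, H, W)
    coeffs = coeffs.squeeze(2).view(B, 3, 4, H, W)
    k, b = coeffs[:, :, :3], coeffs[:, :, 3]       # (B,3,3,H,W), (B,3,H,W)
    # "Apply": out_c = sum_{c'} k[c, c'] * in_{c'} + b[c] at every pixel.
    return torch.einsum("bcdhw,bdhw->bchw", k, content) + b
```

Because the grid is coarse in space and intensity while the transform is applied at full resolution, the heavy learning runs on low-resolution inputs and only a cheap interpolation plus a per-pixel affine touch the full image, which is what makes this family of methods fast.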
The detailed architecture of FTNC-Net is shown in Tables 4 and 5.
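The SpAdaIn operation referenced above extends adaptive instance normalization (Huang and Belongie) with spatially varying statistics. For reference, here is a minimal sketch of the plain AdaIN it builds on; the spatially adaptive variant itself is not reproduced here, and the function name is illustrative:

```python
import torch

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization: re-normalize the content feature
    map so its per-channel mean/std match those of the style features.

    content, style: (B, C, H, W) feature maps.
    """
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    # Whiten the content statistics, then re-color with the style's.
    return s_std * (content - c_mean) / c_std + s_mean
```

Plain AdaIN uses one mean/std pair per channel; a spatially adaptive version instead predicts normalization parameters that vary with pixel location, which is what lets the stain transfer act differently on, e.g., nuclei and background.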
Due to the constraint of GPU memory, StainGAN (Footnote 1) is run on \(320\times 320\) crops of the \(1024\times 1024\) images. As shown in Fig. 5, the discriminator guides the generator to synthesize nonexistent neutrophils to match the style image, which alters the structure of the content image. Moreover, the results of StainGAN exhibit color cross-over, since no color-consistency constraint is imposed.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Cao, J., Lu, C., Wu, K., Gu, C. (2023). A Fast Stain Normalization Network for Cervical Papanicolaou Images. In: Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A. (eds) Neural Information Processing. ICONIP 2022. Communications in Computer and Information Science, vol 1793. Springer, Singapore. https://doi.org/10.1007/978-981-99-1645-0_10
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-1644-3
Online ISBN: 978-981-99-1645-0