E-Patcher: A Patch-Based Efficient Network for Fast Whole Slide Images Segmentation

  • Conference paper
Artificial Neural Networks and Machine Learning – ICANN 2023 (ICANN 2023)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 14255))


Abstract

UNeXt is a leading medical image segmentation method that employs a convolutional and multi-layer perceptron (MLP) architecture in its segmentation network. It outperforms other image recognition algorithms, such as MedT and TransUNet, in computation speed and has shown great potential for clinical applications. However, its reliance on limited pixel-neighborhood information for pixel-level segmentation of large pathological images can produce inaccurate segmentation of the whole image and overlook important features, resulting in suboptimal segmentation results. To this end, we designed a lightweight, universal, patch-level plug-and-play block, named the “Digging” and “Filling” ViT (DF-ViT) block, that considers local and global features simultaneously. Specifically, a “Digging” operation randomly selects sub-blocks from each sub-patch. Multi-Head Attention (MHA) is then applied to integrate global information into these sub-blocks. The resulting sub-blocks, now carrying global semantic features, are reassembled into the original feature map, and feature fusion combines the local and global features. This approach achieves global representation while keeping the model's computational cost low, at 0.1424 GFLOPs. Compared to UNeXt, it improves mIoU by 1.07% while reducing the parameter count by 58.50% and the computation by 68.09%. Extensive experiments on the PAIP 2019 WSI dataset demonstrate that the DF-ViT block significantly enhances computational efficiency while maintaining a high level of accuracy.
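The “dig”/attend/“fill”/fuse pipeline described above can be sketched roughly as follows. This is a hypothetical PyTorch illustration reconstructed only from the abstract; the class name, the keep ratio, the random token selection, and the 1×1-convolution fusion are all assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class DFViTBlock(nn.Module):
    """Illustrative sketch of the "Digging"/"Filling" idea: randomly
    select ("dig") a subset of spatial tokens, mix them globally with
    multi-head attention, write them back ("fill"), then fuse the
    globally-informed map with the local input features."""

    def __init__(self, dim: int, num_heads: int = 4, keep_ratio: float = 0.25):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.keep_ratio = keep_ratio
        # 1x1 conv fusing local (input) and global (filled) features.
        self.fuse = nn.Conv2d(2 * dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        B, C, H, W = x.shape
        tokens = x.flatten(2).transpose(1, 2)            # (B, H*W, C)
        n = tokens.shape[1]
        k = max(1, int(n * self.keep_ratio))
        idx = torch.randperm(n, device=x.device)[:k]     # "dig" a token subset
        dug = tokens[:, idx, :]                          # (B, k, C)
        attended, _ = self.attn(dug, dug, dug)           # global mixing via MHA
        filled = tokens.clone()
        filled[:, idx, :] = attended                     # "fill" tokens back
        g = filled.transpose(1, 2).reshape(B, C, H, W)   # global feature map
        return self.fuse(torch.cat([x, g], dim=1))       # fuse local + global
```

Because attention runs only on the k dug tokens rather than all H·W tokens, the quadratic attention cost shrinks by roughly a factor of (1/keep_ratio)², which is consistent with the low-FLOP claim, though the paper's exact selection and fusion scheme may differ.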


References

  1. Cao, H., et al.: Swin-Unet: Unet-like pure transformer for medical image segmentation. In: Karlinsky, L., Michaeli, T., Nishino, K. (eds.) ECCV 2022, Part III. LNCS, vol. 13803, pp. 205–218. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-25066-8_9

  2. Chen, J., et al.: TransUNet: transformers make strong encoders for medical image segmentation. arXiv preprint arXiv:2102.04306 (2021)

  3. Elfwing, S., Uchibe, E., Doya, K.: Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. Neural Netw. 107, 3–11 (2018)

  4. Guan, Y., et al.: Node-aligned graph convolutional network for whole-slide image representation and classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18813–18823 (2022)

  5. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)

  6. Hou, L., Samaras, D., Kurc, T.M., Gao, Y., Davis, J.E., Saltz, J.H.: Patch-based convolutional neural network for whole slide tissue image classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2424–2433 (2016)

  7. Jha, D., et al.: ResUNet++: an advanced architecture for medical image segmentation. In: 2019 IEEE International Symposium on Multimedia (ISM), pp. 225–2255. IEEE (2019)

  8. Kim, Y.J., et al.: PAIP 2019: liver cancer segmentation challenge. Med. Image Anal. 67, 101854 (2021). https://doi.org/10.1016/j.media.2020.101854. https://www.sciencedirect.com/science/article/pii/S1361841520302188

  9. Kong, B., Wang, X., Li, Z., Song, Q., Zhang, S.: Cancer metastasis detection via spatially structured deep network. In: Niethammer, M., et al. (eds.) IPMI 2017. LNCS, vol. 10265, pp. 236–248. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-59050-9_19

  10. Li, Y., Ping, W.: Cancer metastasis detection with neural conditional random field. arXiv preprint arXiv:1806.07064 (2018)

  11. Mehta, S., Rastegari, M.: MobileViT: light-weight, general-purpose, and mobile-friendly vision transformer. arXiv preprint arXiv:2110.02178 (2021)

  12. Mehta, S., Rastegari, M.: Separable self-attention for mobile vision transformers. arXiv preprint arXiv:2206.02680 (2022)

  13. Moeskops, P., Viergever, M.A., Mendrik, A.M., De Vries, L.S., Benders, M.J., Išgum, I.: Automatic segmentation of MR brain images with a convolutional neural network. IEEE Trans. Med. Imaging 35(5), 1252–1261 (2016)

  14. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015, Part III. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28

  15. Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.C.: MobileNetV2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4510–4520 (2018)

  16. Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 618–626 (2017)

  17. Sirinukunwattana, K., Alham, N.K., Verrill, C., Rittscher, J.: Improving whole slide segmentation through visual context - a systematic study. In: Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G. (eds.) MICCAI 2018, Part II. LNCS, vol. 11071, pp. 192–200. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00934-2_22

  18. Tokunaga, H., Teramoto, Y., Yoshizawa, A., Bise, R.: Adaptive weighting multi-field-of-view CNN for semantic segmentation in pathology. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12597–12606 (2019)

  19. Valanarasu, J.M.J., Oza, P., Hacihaliloglu, I., Patel, V.M.: Medical transformer: gated axial-attention for medical image segmentation. In: de Bruijne, M., et al. (eds.) MICCAI 2021, Part I. LNCS, vol. 12901, pp. 36–46. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-87193-2_4

  20. Valanarasu, J.M.J., Patel, V.M.: UNeXt: MLP-based rapid medical image segmentation network. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds.) MICCAI 2022, Part V. LNCS, vol. 13435, pp. 23–33. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-16443-9_3

  21. Wadekar, S.N., Chaurasia, A.: MobileViTv3: mobile-friendly vision transformer with simple and effective fusion of local, global and input features. arXiv preprint arXiv:2209.15159 (2022)

  22. Wang, D., Khosla, A., Gargeya, R., Irshad, H., Beck, A.H.: Deep learning for identifying metastatic breast cancer. arXiv preprint arXiv:1606.05718 (2016)

  23. Wetteland, R., Engan, K., Eftestøl, T., Kvikstad, V., Janssen, E.A.: A multiscale approach for whole-slide image segmentation of five tissue classes in urothelial carcinoma slides. Technol. Cancer Res. Treat. 19, 1533033820946787 (2020)

  24. Zhang, K., Zhuang, X.: CycleMix: a holistic strategy for medical image segmentation from scribble supervision. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11656–11665 (2022)

  25. Zhao, Z., et al.: Deep neural network based artificial intelligence assisted diagnosis of bone scintigraphy for cancer bone metastasis. Sci. Rep. 10(1), 17046 (2020)

  26. Zhou, Z., Rahman Siddiquee, M.M., Tajbakhsh, N., Liang, J.: UNet++: a nested U-Net architecture for medical image segmentation. In: Stoyanov, D., et al. (eds.) DLMIA/ML-CDS 2018. LNCS, vol. 11045, pp. 3–11. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00889-5_1


Acknowledgments

This work was funded by grants from the National Key Research and Development Program of China (2022YFF0608404, 2022YFF0608401), the Research Project of the National Institute of Metrology (AKYZD2111), and was supported by the high performance computing (HPC) resources at China Agricultural University.

Author information


Correspondence to Xiang Fang.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Huang, X. et al. (2023). E-Patcher: A Patch-Based Efficient Network for Fast Whole Slide Images Segmentation. In: Iliadis, L., Papaleonidas, A., Angelov, P., Jayne, C. (eds) Artificial Neural Networks and Machine Learning – ICANN 2023. ICANN 2023. Lecture Notes in Computer Science, vol 14255. Springer, Cham. https://doi.org/10.1007/978-3-031-44210-0_22

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-44210-0_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44209-4

  • Online ISBN: 978-3-031-44210-0

  • eBook Packages: Computer Science, Computer Science (R0)
