
Segmentation of Intracellular Structures in Fluorescence Microscopy Images by Fusing Low-Level Features

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 13021)

Abstract

Intracellular organelles such as the endoplasmic reticulum, mitochondria, and the cell nucleus exhibit complex structures and diverse morphologies. These intracellular structures (ICSs) play important roles in essential physiological functions of cells, and their accurate segmentation is crucial to many important life science and clinical applications. When using deep learning-based segmentation models to extract ICSs in fluorescence microscopy images (FLMIs), we find that U-Net provides superior performance, while other well-designed models such as DeepLabv3+ and SegNet perform poorly. By investigating the relative importance of the features learned by U-Net, we find that low-level features play a dominant role. Therefore, we develop a simple strategy that modifies general-purpose segmentation models by fusing low-level features via a decoder architecture to improve their performance in segmenting ICSs from FLMIs. For a given segmentation model, we first use a group of convolutions at the original image scale as the input layer to obtain low-level features. We then use a decoder to fuse the multi-scale features, which directly passes information of low-level features to the prediction layer. Experimental results on two custom datasets, ER and MITO, and a public dataset, NUCLEUS, show that the strategy substantially improves the performance of all the general-purpose models tested in segmenting ICSs from FLMIs. Data and code of this study are available at https://github.com/cbmi-group/icss-segmentation.

Y. Guo and J. Huang—Equal contributors.
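The fusion strategy described in the abstract (full-resolution convolutions for low-level features, then a decoder that passes those features directly to the prediction layer) can be sketched in PyTorch as follows. This is a minimal illustrative sketch, not the authors' implementation; the module and variable names (`LowLevelFusionNet`, `low_level`, etc.) are assumptions, and the actual architecture is in the repository linked above.

```python
# Sketch of the low-level feature fusion strategy: convolutions at the
# original image scale extract low-level features, a small encoder captures
# coarser multi-scale context, and a decoder fuses both so low-level
# information reaches the prediction layer directly.
# All names here are illustrative, not taken from the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowLevelFusionNet(nn.Module):
    def __init__(self, in_ch=1, base_ch=16, num_classes=1):
        super().__init__()
        # Low-level features at the original image scale (no downsampling).
        self.low_level = nn.Sequential(
            nn.Conv2d(in_ch, base_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_ch, base_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # A small encoder producing coarser, higher-level features.
        self.encoder = nn.Sequential(
            nn.MaxPool2d(2),
            nn.Conv2d(base_ch, base_ch * 2, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(base_ch * 2, base_ch * 4, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: fuse upsampled high-level features with the low-level
        # features, giving the prediction layer a direct low-level path.
        self.decoder = nn.Sequential(
            nn.Conv2d(base_ch * 4 + base_ch, base_ch, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(base_ch, num_classes, 1)

    def forward(self, x):
        low = self.low_level(x)        # features at the original image scale
        high = self.encoder(low)       # 1/4-resolution, higher-level features
        high = F.interpolate(high, size=low.shape[2:],
                             mode="bilinear", align_corners=False)
        fused = torch.cat([low, high], dim=1)
        return self.classifier(self.decoder(fused))
```

A forward pass on a single-channel image, e.g. `LowLevelFusionNet()(torch.randn(1, 1, 64, 64))`, yields a per-pixel prediction map at the input resolution, which is the property the paper's strategy relies on: the classifier sees low-level features that never passed through a downsampling bottleneck.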



Acknowledgements

This study was supported in part by NSFC grant 91954201 and grant 31971289, the Strategic Priority Research Program of the Chinese Academy of Sciences grant XDB37040402, the Chinese Academy of Sciences grant 292019000056, the University of Chinese Academy of Sciences grant 115200M001, and the Beijing Municipal Science & Technology Commission grant 5202022.

Author information

Corresponding author: Ge Yang.


Electronic supplementary material

Supplementary file 1 (DOCX 127 kb)


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Guo, Y., Huang, J., Zhou, Y., Luo, Y., Li, W., Yang, G. (2021). Segmentation of Intracellular Structures in Fluorescence Microscopy Images by Fusing Low-Level Features. In: Ma, H., et al. (eds.) Pattern Recognition and Computer Vision. PRCV 2021. Lecture Notes in Computer Science, vol. 13021. Springer, Cham. https://doi.org/10.1007/978-3-030-88010-1_32


  • DOI: https://doi.org/10.1007/978-3-030-88010-1_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88009-5

  • Online ISBN: 978-3-030-88010-1

  • eBook Packages: Computer Science, Computer Science (R0)
