
Multiclass U-Net Segmentation of Brain Electron Microscopy Data Using Original and Semi-Synthetic Training Datasets

Programming and Computer Software

Abstract

A manual labeling of 20 layers of the well-known open EPFL dataset is prepared for six classes: (1) mitochondria, including their boundaries; (2) mitochondrial boundaries; (3) cell membranes; (4) postsynaptic densities (PSD); (5) axon sheaths; and (6) vesicles. Software for generating synthetic labeled datasets is developed, and a synthetic dataset balancing the representativeness of the classes is created. Multiclass segmentation of brain electron microscopy (EM) data is investigated for each class in the cases of binary segmentation and segmentation into five and six classes using a modified U-Net model. The model was trained on 256 × 256 fragments at the original EM resolution. In six-class segmentation, mitochondria were segmented with a Dice–Sørensen coefficient of 0.908, which is somewhat lower than in binary (0.911) and five-class (0.91) segmentation. Extending the dataset with synthesized images improved the segmentation results. The extension of the manually labeled dataset (860 images of size 256 × 256) by the synthesized dataset (100 images of size 256 × 256 containing the poorly represented classes, axons and PSD) increased the accuracy of the six-class U-Net model significantly: from 0.228 to 0.790 for axons and from 0.553 to 0.745 for PSD.
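
As a minimal illustration (not the authors' code), the per-class Dice–Sørensen coefficient reported above can be computed from integer label maps as sketched below; the NumPy-based helper, class indexing (0–5), and the synthetic example data are assumptions made for this sketch.

import numpy as np

def dice_per_class(pred, target, num_classes, eps=1e-7):
    # Dice = 2|P ∩ T| / (|P| + |T|) for each class label in the maps.
    scores = {}
    for c in range(num_classes):
        p = pred == c
        t = target == c
        intersection = np.logical_and(p, t).sum()
        scores[c] = (2.0 * intersection + eps) / (p.sum() + t.sum() + eps)
    return scores

# Example on a 256 x 256 label map, matching the patch size used for training.
rng = np.random.default_rng(0)
target = rng.integers(0, 6, size=(256, 256))   # six classes, as in the paper
pred = target.copy()
pred[:16, :16] = 0                             # introduce a small disagreement
print(dice_per_class(pred, target, num_classes=6))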



ACKNOWLEDGMENTS

The study was supported by a grant from the strategic academic leadership program “Priority 2030” (project N‑483-99_2021-2022).

Author information

Corresponding authors

Correspondence to A. A. Getmanskaya, N. A. Sokolov or V. E. Turlapov.

Ethics declarations

The authors declare that they have no conflicts of interest.

Additional information

Translated by A. Klimontovich


About this article


Cite this article

Getmanskaya, A.A., Sokolov, N.A. & Turlapov, V.E. Multiclass U-Net Segmentation of Brain Electron Microscopy Data Using Original and Semi-Synthetic Training Datasets. Program Comput Soft 48, 164–171 (2022). https://doi.org/10.1134/S0361768822030057

