research-article

Semantic Segmentation Model of Fluorescent Neuronal Cells in Mouse Brain Slices Under Few Samples.

Published: 25 August 2022

Abstract

Imaging of mouse neuronal cells is an important tool for studying mice. Examining mouse neuronal cells is an essential step in both pharmacological and toxicological tests, and the morphology of individual neurons, together with the state of connections between multiple neurons, is an essential measure of the physiological state of the mouse. Staining neuronal cell samples and observing them under a microscope remains the mainstay of this field. However, this step is tedious and monotonous, and it demands considerable practical experience from the researcher. In recent years, analyzing cell morphology with computer vision has proven to be an efficient and accurate alternative. This paper presents a deep neural network-based semantic segmentation model; rather than using the popular attention mechanism, it breaks the segmentation process into two steps, achieving satisfactory performance while keeping the parameter count low.
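The abstract describes a two-step segmentation pipeline but gives no implementation details. As a purely illustrative sketch (not the authors' method), a two-stage approach can be pictured as a coarse foreground proposal followed by a boundary-refinement pass; the thresholding logic and function names below are hypothetical stand-ins for the paper's two learned stages:

```python
import numpy as np

def coarse_mask(img, thresh=0.5):
    """Stage 1 (hypothetical): coarse foreground proposal by
    global thresholding of a normalized fluorescence image."""
    return (img > thresh).astype(np.uint8)

def refine_mask(img, mask, margin=0.1):
    """Stage 2 (hypothetical): re-decide only the uncertain pixels
    near the threshold, using a local criterion, leaving
    confidently classified pixels untouched."""
    refined = mask.copy()
    uncertain = np.abs(img - 0.5) < margin   # narrow band around the threshold
    refined[uncertain] = (img[uncertain] > img.mean()).astype(np.uint8)
    return refined

# Toy 2x2 "image": two bright pixels, two dark ones.
img = np.array([[0.9, 0.55],
                [0.45, 0.1]])
final = refine_mask(img, coarse_mask(img))
```

The point of the sketch is the division of labor: a cheap first pass commits on easy pixels, and a second pass spends effort only on ambiguous boundary regions, which is one way a model can avoid attention layers while keeping parameters low.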

Supplemental Material

  • MP4 file: supplemental video
  • PPT file: presentation slides


Published In

cover image ACM Other conferences
ICVARS '22: Proceedings of the 2022 6th International Conference on Virtual and Augmented Reality Simulations
March 2022
119 pages
ISBN:9781450387330
DOI:10.1145/3546607

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Deep Neural Network
  2. Medical Image Processing
  3. Semantic Segmentation

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ICVARS 2022

