Abstract
Liver segmentation is critical for the localization and diagnosis of liver cancer. U-Net variants with skip connections have become popular in medical image segmentation. However, these variants tend to fuse semantically dissimilar feature maps via simple skip connections between the encoder and decoder paths. We argue that the learning task becomes easier when the feature maps fused from the encoder and decoder paths are semantically similar, whereas fusing dissimilar feature maps introduces semantic gaps. Hence, this paper aims to obtain semantically similar feature maps, alleviate the semantic gaps caused by simple skip connections, and improve segmentation accuracy. We propose a new U-Net architecture named Multi-Scale Nested U-Net (MSN-Net), which consists of Res-blocks and MSCF-blocks. The Res-block with a bottleneck layer makes the network deeper while avoiding vanishing gradients. To alleviate the semantic gaps, we redesign the skip connections as a combination of MSCF-blocks and dense connections: the MSCF-block fuses high-level and low-level features with multi-scale semantic information to obtain more representative features, and dense connections are adopted between MSCF-blocks. In addition, we use a weighted loss function that combines cross-entropy loss and Dice loss. The proposed method is evaluated on the MICCAI 2017 LiTS Challenge dataset. Experimental results demonstrate that MSN-Net, with the novel skip connections, effectively alleviates the semantic gaps between the encoder and decoder paths, improves segmentation accuracy, and outperforms other state-of-the-art methods.
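The weighted loss mentioned in the abstract combines pixel-wise cross-entropy with soft Dice loss. A minimal sketch of such a combination is shown below; the weighting parameter `alpha` and the function names are illustrative assumptions, since the paper's exact weighting is not given here.

```python
import math

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary mask."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

def cross_entropy_loss(pred, target, eps=1e-7):
    """Binary cross-entropy, averaged over all pixels."""
    total = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(pred)

def weighted_loss(pred, target, alpha=0.5):
    # alpha balances the two terms; 0.5 is an assumed value, not the paper's.
    return alpha * cross_entropy_loss(pred, target) + (1 - alpha) * dice_loss(pred, target)
```

Dice loss counteracts the foreground/background class imbalance typical of liver CT slices, while cross-entropy provides smooth per-pixel gradients; summing the two is a common compromise in segmentation training.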
Funding
This work was supported by the National Natural Science Foundation of China (61473112); Hebei Provincial Natural Science Fund Key Project (F2017201222); Education Department Science and Technology Research Project (QN2015135); Post-graduate's Innovation Fund Project of Hebei University (hbu2020ss065).
Ethics declarations
Conflict of interest
The authors declare that they have no competing interests.
Cite this article
Fan, T., Wang, G., Wang, X. et al. MSN-Net: a multi-scale context nested U-Net for liver segmentation. SIViP 15, 1089–1097 (2021). https://doi.org/10.1007/s11760-020-01835-9