Dual-Path Adversarial Learning for Fully Convolutional Network (FCN)-Based Medical Image Segmentation

  • Original Article

Abstract

Segmentation of regions of interest (ROIs) in medical images is an important step for image analysis in computer-aided diagnosis systems. In recent years, segmentation methods based on fully convolutional networks (FCNs) have achieved great success on general images. FCN performance stems primarily from leveraging large labeled datasets to hierarchically learn features that capture both the shallow appearance and the deep semantics of images. However, this dependence on large datasets does not translate well to medical images, where annotated training data are scarce, and FCNs then produce coarse ROI detections and poorly defined boundaries. To overcome this limitation, medical image-specific FCN methods have been introduced that use post-processing techniques to refine the segmentation results; however, the performance of these methods relies on careful tuning of a large number of parameters and on data-specific post-processing techniques. In this study, we leverage the generative adversarial network (GAN), a state-of-the-art image feature learning method, for its inherent ability to produce consistent and realistic image features through deep neural networks and adversarial learning. We extend the GAN so that ROI features can be learned at different levels of complexity (simple and complex), in a controlled manner, via our proposed dual-path adversarial learning (DAL). The ROI features learned by our DAL are then augmented into the existing FCN training data, which increases the overall feature diversity. We conducted experiments on three public datasets with a variety of visual characteristics. Our results demonstrate that our DAL improves FCN-based segmentation methods and outperforms, or is competitive with, state-of-the-art methods without using medical image-specific optimizations.
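To make the dual-path idea more concrete, the sketch below shows a minimal two-path adversarial setup in PyTorch: a shallow generator path synthesises simple ROI appearance, a deeper path synthesises more complex ROI structure, a shared discriminator provides the adversarial signal, and the generated samples would then be mixed into the FCN training set. This is a rough illustration only; all module names (e.g. DualPathGenerator), layer sizes, the noise input, and the training loop are assumptions for exposition and are not the architecture or losses reported in the paper.

```python
# Minimal dual-path adversarial learning sketch (illustrative assumptions only).
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # 3x3 conv + batch norm + ReLU, spatial size preserved
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout),
                         nn.ReLU(inplace=True))

class DualPathGenerator(nn.Module):
    """Two paths of different depth: a shallow path for simple ROI appearance,
    a deeper path for more complex ROI structure (hypothetical design)."""
    def __init__(self, z_ch=64, out_ch=1):
        super().__init__()
        self.simple_path = nn.Sequential(conv_block(z_ch, 64),
                                         nn.Conv2d(64, out_ch, 1))
        self.complex_path = nn.Sequential(conv_block(z_ch, 64),
                                          conv_block(64, 128),
                                          conv_block(128, 64),
                                          nn.Conv2d(64, out_ch, 1))
    def forward(self, z):
        return torch.tanh(self.simple_path(z)), torch.tanh(self.complex_path(z))

class Discriminator(nn.Module):
    """Shared discriminator scoring real vs. generated ROI patches."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.net = nn.Sequential(conv_block(in_ch, 64),
                                 conv_block(64, 128),
                                 nn.AdaptiveAvgPool2d(1),
                                 nn.Flatten(),
                                 nn.Linear(128, 1))
    def forward(self, x):
        return self.net(x)

G, D = DualPathGenerator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real_roi = torch.rand(4, 1, 64, 64)   # stand-in for real ROI patches
z = torch.randn(4, 64, 64, 64)        # spatial noise tensor driving both paths

fake_simple, fake_complex = G(z)

# Discriminator step: real patches -> 1, both generated complexity levels -> 0
d_loss = (bce(D(real_roi), torch.ones(4, 1)) +
          bce(D(fake_simple.detach()), torch.zeros(4, 1)) +
          bce(D(fake_complex.detach()), torch.zeros(4, 1)))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: both paths try to be classified as real
g_loss = (bce(D(fake_simple), torch.ones(4, 1)) +
          bce(D(fake_complex), torch.ones(4, 1)))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()

# Samples from both paths would subsequently be added to the FCN training data
# to increase feature diversity.
```

In this sketch the two paths differ only in depth, and the generator is driven by unconditioned noise; the DAL described in the paper controls feature complexity and conditions the generation in its own way, which is not detailed in this abstract.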



Funding

This work was supported in part by Australian Research Council (ARC) grants.

Author information

Corresponding author

Correspondence to Jinman Kim.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

About this article

Cite this article

Bi, L., Feng, D. & Kim, J. Dual-Path Adversarial Learning for Fully Convolutional Network (FCN)-Based Medical Image Segmentation. Vis Comput 34, 1043–1052 (2018). https://doi.org/10.1007/s00371-018-1519-5
