
Deep Multi-Modal Encoder-Decoder Networks for Shape Constrained Segmentation and Joint Representation Learning

  • Conference paper
  • In: Bildverarbeitung für die Medizin 2019

Part of the book series: Informatik aktuell (INFORMAT)

Abstract

Deep learning approaches have been very successful in segmenting cardiac structures from CT and MR volumes. Despite continuous progress, automated segmentation of these structures remains challenging due to highly complex regional characteristics (e.g. homogeneous gray-level transitions) and large anatomical shape variability. To cope with these challenges, incorporating shape priors into neural networks for robust segmentation is an active area of research. We propose a novel approach that leverages shared information across imaging modalities and shape segmentations within a unified multi-modal encoder-decoder network. This jointly end-to-end trainable architecture improves robustness through strong shape constraints and enables further applications thanks to smooth transitions in the learned shape space. Although no skip connections are used and all shape information is encoded in a low-dimensional representation, our approach achieves high-accuracy segmentation and consistent shape interpolation results on the multi-modal whole heart segmentation dataset.
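The shared-latent idea in the abstract can be sketched in a few lines. The sketch below is purely illustrative and is not the authors' network: the untrained random linear layers, the latent size of 8, and the flattened 64-element "volumes" are all assumptions chosen for brevity. It only shows the structural point: modality-specific encoders map CT and MR inputs into one low-dimensional shape space, a single shared decoder (with no skip connections) reconstructs a segmentation from that code, and convex combinations of two codes decode to intermediate shapes.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT = 8  # illustrative low-dimensional shape space


def linear(in_dim, out_dim):
    """Hypothetical helper: weights for a single untrained linear layer."""
    return rng.standard_normal((in_dim, out_dim)) * 0.1


# Modality-specific encoders (separate weights per modality) ...
enc_ct = linear(64, LATENT)
enc_mr = linear(64, LATENT)
# ... but one shared decoder maps the shape code back to a flattened
# segmentation mask; no skip connections carry information around the code.
dec = linear(LATENT, 64)


def encode(x, enc):
    # Bounded shape code in [-1, 1].
    return np.tanh(x @ enc)


def decode(z):
    # Sigmoid produces per-element foreground probabilities.
    return 1.0 / (1.0 + np.exp(-(z @ dec)))


ct = rng.standard_normal(64)  # stand-in for a flattened CT volume
mr = rng.standard_normal(64)  # stand-in for a flattened MR volume

z_ct = encode(ct, enc_ct)
z_mr = encode(mr, enc_mr)

# Smooth shape interpolation: decoding convex combinations of two codes
# yields a gradual transition between the two segmentations.
masks = [decode((1 - t) * z_ct + t * z_mr) for t in (0.0, 0.5, 1.0)]
```

In the paper's trained setting the encoders and decoder operate on 3D volumes and the latent space is shaped by the segmentation loss; here the point is only the wiring of the shared code.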




Author information

Correspondence to Nassim Bouteldja.


Copyright information

© 2019 Springer Fachmedien Wiesbaden GmbH, part of Springer Nature

Cite this paper

Bouteldja, N., Merhof, D., Ehrhardt, J., Heinrich, M.P. (2019). Deep Multi-Modal Encoder-Decoder Networks for Shape Constrained Segmentation and Joint Representation Learning. In: Handels, H., Deserno, T., Maier, A., Maier-Hein, K., Palm, C., Tolxdorff, T. (eds) Bildverarbeitung für die Medizin 2019. Informatik aktuell. Springer Vieweg, Wiesbaden. https://doi.org/10.1007/978-3-658-25326-4_8
