
Multi-view Adaptive Bone Activation from Chest X-Ray with Conditional Adversarial Nets

  • Conference paper
MultiMedia Modeling (MMM 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13834)


Abstract

Activating bone structures from a chest X-ray (CXR) is significant for disease diagnosis and for health equity in under-developed areas, yet the complex overlap of anatomical structures in CXR continually challenges the performance and adaptability of bone activation. Moreover, because data collection and annotation are costly, no large-scale labeled datasets are available, so existing methods commonly activate bone from single-view, annotated CXRs. To address these challenges, we propose an adaptive bone activation framework. The framework leverages Dual-Energy Subtraction (DES) images to form multi-view image pairs with the CXR and uses contrastive learning to construct training samples. Specifically, we first devise a Siamese/Triplet-architecture supervisor; we then build a cGAN-styled activator that, guided by the learned skeletal information, generates the bone image from the CXR. To our knowledge, the proposed method is the first multi-view bone activation framework obtained without manual annotation, and it shows stronger adaptability. The mean Relative Mean Absolute Error (\(\overline{RMAE}\)) is 3.45% and the Fréchet Inception Distance (FID) is 1.12, indicating that the bone images activated by our method retain more skeletal detail with little change in feature distribution. The visualized results show that our method can activate bone images from a single CXR regardless of overlapping anatomy, and bone visibility is drastically improved compared with the original images.
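
The abstract describes a two-part design: a Siamese/Triplet supervisor trained contrastively on DES-derived bone and soft-tissue views, and a cGAN-styled (pix2pix-like) activator that translates the CXR into a bone image under that supervision. The following PyTorch code is a minimal sketch of that general recipe, not the authors' implementation: the module names (Supervisor, Activator, PatchDiscriminator), layer sizes, loss weights, and the rmae helper are all illustrative assumptions.

```python
# Minimal sketch, assuming PyTorch; NOT the authors' implementation.
# (1) a Siamese/Triplet supervisor trained with a contrastive triplet loss on
#     DES-derived bone and soft-tissue views, and
# (2) a pix2pix-style cGAN activator translating a CXR into a bone image,
#     guided by the supervisor's skeletal embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F

def block(cin, cout, down=True):
    """4x4 (transposed) conv halving/doubling resolution, with norm + LeakyReLU."""
    conv = nn.Conv2d(cin, cout, 4, 2, 1) if down else nn.ConvTranspose2d(cin, cout, 4, 2, 1)
    return nn.Sequential(conv, nn.InstanceNorm2d(cout), nn.LeakyReLU(0.2))

class Supervisor(nn.Module):
    """Shared-weight embedding network for the Siamese/Triplet supervisor."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(block(1, 32), block(32, 64), block(64, 128),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, dim))
    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

class Activator(nn.Module):
    """Small encoder-decoder generator standing in for the cGAN-styled activator."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(block(1, 64), block(64, 128), block(128, 256))
        self.dec = nn.Sequential(block(256, 128, down=False), block(128, 64, down=False),
                                 nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh())
    def forward(self, cxr):
        return self.dec(self.enc(cxr))

class PatchDiscriminator(nn.Module):
    """PatchGAN-style critic conditioned on the input CXR (channel-concatenated)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(block(2, 64), block(64, 128), nn.Conv2d(128, 1, 4, 1, 1))
    def forward(self, cxr, bone):
        return self.net(torch.cat([cxr, bone], dim=1))

def rmae(pred, target, eps=1e-6):
    """Relative MAE: mean absolute error normalised by the target's dynamic range."""
    return (pred - target).abs().mean() / (target.max() - target.min() + eps)

supervisor, activator, critic = Supervisor(), Activator(), PatchDiscriminator()
opt_s = torch.optim.Adam(supervisor.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_g = torch.optim.Adam(activator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(critic.parameters(), lr=2e-4, betas=(0.5, 0.999))
triplet = nn.TripletMarginLoss(margin=0.5)

# Dummy DES-derived training views: anchor CXR, positive bone image, negative soft tissue.
cxr = torch.randn(4, 1, 256, 256)
bone = torch.randn(4, 1, 256, 256)
soft = torch.randn(4, 1, 256, 256)

# (1) Train the supervisor contrastively: pull the CXR embedding toward the bone
#     view and push it away from the soft-tissue view.
loss_s = triplet(supervisor(cxr), supervisor(bone), supervisor(soft))
opt_s.zero_grad(); loss_s.backward(); opt_s.step()

# (2) cGAN step. Discriminator: real (CXR, bone) pairs vs. fake (CXR, generated) pairs.
fake = activator(cxr)
real_logits, fake_logits = critic(cxr, bone), critic(cxr, fake.detach())
loss_d = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits)) + \
         F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator: adversarial + L1 reconstruction + embedding guidance from the supervisor,
# pulling the generated image toward the bone embedding and away from soft tissue.
with torch.no_grad():
    pos_emb, neg_emb = supervisor(bone), supervisor(soft)
gan_logits = critic(cxr, fake)
loss_g = F.binary_cross_entropy_with_logits(gan_logits, torch.ones_like(gan_logits)) \
         + 100.0 * F.l1_loss(fake, bone) \
         + triplet(supervisor(fake), pos_emb, neg_emb)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(f"batch RMAE: {rmae(fake.detach(), bone):.4f}")
```

In practice the supervisor would be pre-trained on DES image pairs before guiding the activator, and FID would be computed between the feature distributions of generated and DES bone images with a standard FID implementation; the numbers reported in the abstract come from the authors' full pipeline, not from this sketch.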

This work was supported by the Applied Basic Research Program of the Science and Technology Department of Sichuan Province [2022NSFSC1403].

C. Niu and Y. Li contributed equally to this work.



Author information


Correspondence to Weibo Liang or Jiancheng Lv.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Niu, C. et al. (2023). Multi-view Adaptive Bone Activation from Chest X-Ray with Conditional Adversarial Nets. In: Dang-Nguyen, DT., et al. MultiMedia Modeling. MMM 2023. Lecture Notes in Computer Science, vol 13834. Springer, Cham. https://doi.org/10.1007/978-3-031-27818-1_33


  • DOI: https://doi.org/10.1007/978-3-031-27818-1_33


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-27817-4

  • Online ISBN: 978-3-031-27818-1

  • eBook Packages: Computer Science, Computer Science (R0)
