
Transforming view of medical images using deep learning

  • Original Article
  • Published:
Neural Computing and Applications

Abstract

Over the last decade, the procedures of medical diagnosis and treatment have changed significantly. In particular, when internal tissues and organs such as the heart, lungs, brain, kidneys, and bones are the target regions, a doctor recommends a computed tomography (CT) scan and/or magnetic resonance imaging (MRI) to obtain a clear picture of the damaged portion of an organ or a bone. These images are essential for the correct examination of medical deformities such as bone fractures, arthritis, and brain tumors, and thus for prescribing the best possible treatment. However, a CT scan exposes a patient to high doses of ionizing radiation, which makes a person more prone to cancer. MRI requires a strong magnetic field and is therefore impractical for patients with implants in their bodies. Moreover, the high cost makes both techniques unaffordable for the economically weaker sections of society. These challenges of CT and MRI motivate researchers to develop techniques for converting a 2-dimensional view of a medical image into its corresponding multiple views. In this manuscript, the authors design and develop a deep learning model that makes effective use of a conditional generative adversarial network, an extension of the generative adversarial network, to transform 2-dimensional views of a human bone into the corresponding views at multiple angles. The model will prove useful for both doctors and patients.
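To make the conditional-GAN idea concrete, the following is a minimal numpy sketch of one loss evaluation for a cGAN, not the authors' actual architecture: a generator maps (noise, condition) to a synthetic image and a discriminator scores (image, condition) pairs, where the condition here stands in for a target viewing angle. All dimensions, weights, and the one-hot angle encoding are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes: a "2-D view" flattened to 16 pixels, conditioned on
# a target viewing angle encoded as a one-hot vector of length 4.
IMG, COND, NOISE = 16, 4, 8

# Hypothetical single-layer generator and discriminator weights.
Wg = rng.normal(scale=0.1, size=(NOISE + COND, IMG))
Wd = rng.normal(scale=0.1, size=(IMG + COND, 1))

def generator(z, c):
    # G maps (noise, condition) -> synthetic image via one linear layer.
    return np.tanh(np.concatenate([z, c], axis=1) @ Wg)

def discriminator(x, c):
    # D scores (image, condition) pairs: ~1 for real, ~0 for fake.
    return sigmoid(np.concatenate([x, c], axis=1) @ Wd)

# One toy batch: "real" images with their angle conditions, plus fakes
# generated under the same conditions.
batch = 32
real = rng.normal(size=(batch, IMG))
cond = np.eye(COND)[rng.integers(0, COND, size=batch)]
z = rng.normal(size=(batch, NOISE))
fake = generator(z, cond)

# Conditional GAN objective: D maximises
#   log D(x|c) + log(1 - D(G(z|c)|c)),
# while G minimises log(1 - D(G(z|c)|c)) (or maximises log D(G(z|c)|c)).
d_loss = -np.mean(np.log(discriminator(real, cond)) +
                  np.log(1.0 - discriminator(fake, cond)))
g_loss = -np.mean(np.log(discriminator(fake, cond)))
```

In a full model the linear layers would be deep convolutional networks and the losses would be minimised by gradient descent; the sketch only shows how the angle condition enters both the generator and the discriminator.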



Acknowledgements

We would like to thank Dr. Nand Kishore Poonia, Managing Director, Sir Chhotu Ram Dana Shivam Hospital, Jaipur, for providing the dataset.

Author information


Corresponding author

Correspondence to Geeta Rani.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Pradhan, N., Dhaka, V.S., Rani, G. et al. Transforming view of medical images using deep learning. Neural Comput & Applic 32, 15043–15054 (2020). https://doi.org/10.1007/s00521-020-04857-z

