Generative Adversarial Network Using Multi-modal Guidance for Ultrasound Images Inpainting

  • Conference paper
Neural Information Processing (ICONIP 2020)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12532)


Abstract

Medical image inpainting not only helps computer-aided diagnosis systems eliminate the interference of irrelevant information in medical images, but also helps doctors with prognosis and surgical evaluation by masking and inpainting the lesion area. However, existing diffusion-based and patch-based methods perform poorly on complex images with non-repeating structures, while generation-based methods lack sufficient prior knowledge and therefore cannot produce inpainted content with reasonable structure and visual realism. This paper proposes a generative adversarial network with multi-modal guidance (MMG-GAN), composed of a multi-modal guided network and a fine inpainting network. The multi-modal guided network obtains the low-frequency structure, high-frequency texture, and high-order semantics of the original image through a structure reconstruction generator, a texture refinement generator, and a semantic guidance generator. Exploiting the implicit attention mechanism of the convolution operation, the fine inpainting network adaptively fuses these features to achieve realistic inpainting. With the multi-modal guided network, MMG-GAN produces inpainted content with reasonable structure, reliable texture, and consistent semantics. Experimental results on the Thyroid Ultrasound Image (TUI) dataset and the TN-SCUI2020 dataset show that our method outperforms other state-of-the-art methods in terms of PSNR, SSIM, and relative l1 measures. The code and the TUI dataset will be made publicly available.
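Two ideas from the abstract can be made concrete in a few lines of NumPy: the split of an image into the low-frequency structure and high-frequency texture that the guided network's generators target, and the PSNR and relative l1 measures used for evaluation. The sketch below is a toy illustration under our own assumptions (a box-filter decomposition; the function names are ours), not the paper's implementation; SSIM is omitted for brevity.

```python
import numpy as np

def decompose(img, k=5):
    """Toy structure/texture split: a k x k box blur gives the
    low-frequency structure map; the residual is the high-frequency
    texture. The two components sum back to the image exactly."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    h, w = img.shape
    low = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            low[i, j] = padded[i:i + k, j:j + k].mean()
    return low, img.astype(np.float64) - low

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB for images in [0, peak]."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def relative_l1(x, y):
    """Sum of absolute errors, normalised by the ground-truth mass."""
    x = x.astype(np.float64)
    return np.abs(x - y.astype(np.float64)).sum() / np.abs(x).sum()
```

Higher PSNR and lower relative l1 indicate a closer match to the ground truth. Note that the paper's structure and texture generators must predict these components from a corrupted input, whereas this sketch simply computes them from a clean image.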



Acknowledgment

This work is supported by the National Natural Science Foundation of China (Grant No. 61976155) and the Key Project for Science and Technology Support of the Key R&D Program of Tianjin (Grant No. 18YFZCGX00960).

Author information

Correspondence to Xuewei Li.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Yu, R. et al. (2020). Generative Adversarial Network Using Multi-modal Guidance for Ultrasound Images Inpainting. In: Yang, H., Pasupa, K., Leung, A.C.S., Kwok, J.T., Chan, J.H., King, I. (eds) Neural Information Processing. ICONIP 2020. Lecture Notes in Computer Science, vol 12532. Springer, Cham. https://doi.org/10.1007/978-3-030-63830-6_29

  • DOI: https://doi.org/10.1007/978-3-030-63830-6_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-63829-0

  • Online ISBN: 978-3-030-63830-6

  • eBook Packages: Computer Science (R0)
