A GAN-based with expert-validated data augmentation method for wireless capsule endoscopy images of small intestine polyp

The Journal of Supercomputing

Abstract

Wireless capsule endoscopy (WCE) is a noninvasive method for examining the entire small intestine. An automatic polyp segmentation system can assist physicians in diagnosing polyps and accurately assessing lesions, improving clinical performance and reducing the time experts spend reviewing images. However, collecting enough polyp images with WCE to train a deep learning model is challenging. Traditional data augmentation techniques are commonly used to address insufficient data, but they may not introduce enough diversity and generality into the training dataset. We introduce an expert-validated seamless cloning algorithm and a GAN-based refinement method to generate synthetic WCE polyp images. We then built an efficient small intestine polyp segmentation model using these synthetic data and transfer learning with weights pretrained on a colon polyp dataset. Our synthetic data closely resemble real polyps; experts have difficulty distinguishing between the real and synthetic images. The proposed small intestine polyp segmentation model for WCE images achieved a pixel-level Dice coefficient of 0.89 and, at the polyp level, a precision of 0.90 and a recall of 0.88. In this paper, we introduce a feasible method to expand a small dataset by generating synthetic data, which increases data quantity and diversity, thereby improving the polyp segmentation model's performance and generalization.
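
The seamless cloning step described above composites a polyp region into a normal small-intestine frame before GAN-based refinement. The snippet below is a minimal sketch of such a compositing step using OpenCV's seamlessClone implementation of Poisson image editing (Pérez et al., 2003); the file names, mask convention, and paste location are illustrative assumptions, not details taken from the paper.

    import cv2

    # Hypothetical inputs: a patch containing a polyp, its binary mask
    # (255 inside the polyp, 0 elsewhere), and a polyp-free WCE frame.
    polyp_patch = cv2.imread("polyp_patch.png")
    polyp_mask = cv2.imread("polyp_mask.png", cv2.IMREAD_GRAYSCALE)
    normal_frame = cv2.imread("normal_wce_frame.png")

    # Place the patch at the frame centre; the paper's placement strategy may differ.
    height, width = normal_frame.shape[:2]
    center = (width // 2, height // 2)

    # Poisson image editing blends gradients so the pasted polyp matches the
    # surrounding mucosa's colour and illumination, avoiding visible seams.
    synthetic = cv2.seamlessClone(polyp_patch, normal_frame, polyp_mask,
                                  center, cv2.NORMAL_CLONE)

    cv2.imwrite("synthetic_polyp_frame.png", synthetic)

Composites produced this way would then be refined with an image-to-image GAN (for example, the contrastive unpaired translation listed in the abbreviations below) and screened by experts before being added to the training set.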


Data availability

The data and code supporting this study’s findings are available from the corresponding author upon reasonable request.

Abbreviations

GAN: Generative adversarial network
WCE: Wireless capsule endoscopy
CNN: Convolutional neural network
DCGAN: Deep convolutional GAN
SE: Squeeze and excitation
IRB: Institutional review board
NCKUH: National Cheng Kung University Hospital
PIE: Poisson image editing
CUT: Contrastive unpaired translation
MLP: Multilayer perceptron
FPR: False positive rate (see the formulas after this list)
TP: Number of true positives
FP: Number of false positives
FN: Number of false negatives
TN: Number of true negatives
TAD: Traditional augmentation data
SD: Synthetic data
QSD: Qualified synthetic data
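
The polyp-level precision and recall reported in the abstract, and the false positive rate, are assumed here to follow the standard definitions in terms of the counts above; the pixel-level Dice coefficient compares the predicted mask A with the ground-truth mask B. These are the conventional formulas, stated as an assumption about the paper's usage rather than quoted from it:

    Precision = TP / (TP + FP)
    Recall    = TP / (TP + FN)
    FPR       = FP / (FP + TN)
    Dice      = 2|A ∩ B| / (|A| + |B|), which for binary masks equals 2·TP / (2·TP + FP + FN)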


Acknowledgements

We would like to thank Wallace Academic Editing for English language editing.

Funding

This work was partially funded by Grants MOST 111-2221-E-260-008-MY2 and NSTC 113-2221-E-260-011.

Author information

Contributions

YTC, SYH, PCL, and HHC developed the study design and drafted the manuscript. YTC, SYH, and HHC analyzed the datasets and performed the programming. PCL and HYK collected the clinical information and drafted the disease background. HYK selected and labeled the images. HHC managed the project. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Hsin-Yu Kuo or Hsin-Hung Chou.

Ethics declarations

Conflict of interest

The authors report no competing financial interests or conflict of interest.

Ethical approval and consent to participate

This study received institutional review board (IRB) approval (IRB number: A-ER-111-145) from the National Cheng Kung University Hospital for a retrospective review of WCE images from November 2013 to December 2022, with the IRB waiving the need for informed consent.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Chou, YT., Hsieh, SY., Lin, PC. et al. A GAN-based with expert-validated data augmentation method for wireless capsule endoscopy images of small intestine polyp. J Supercomput 81, 653 (2025). https://doi.org/10.1007/s11227-025-07146-5


  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s11227-025-07146-5

Keywords