Jigsaw training-based background reverse attention transformer network for guidewire segmentation

  • Original Article
  • Published in International Journal of Computer Assisted Radiology and Surgery

Abstract

Purpose

Guidewire segmentation plays a crucial role in percutaneous coronary intervention. However, it is challenging due to the low signal-to-noise ratio of X-ray sequences and the severe imbalance between the numbers of foreground and background pixels. Moreover, most existing guidewire segmentation methods are designed for a single guidewire only. This paper addresses both single and dual guidewire segmentation in X-ray fluoroscopy sequences.

Methods

A jigsaw training-based background reverse attention (BRA) transformer network is proposed. A jigsaw training strategy is used to train the guidewire segmentation network, and a BRA module is designed to reduce the influence of background information. First, robust principal component analysis (RPCA) is performed to generate background maps for the guidewire sequences. Then, BRA is computed on the basis of the background features.
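The abstract gives no implementation details, so the two sketches below are illustrative assumptions, not the authors' code. The first shows how background maps can be obtained from a stack of fluoroscopy frames with standard RPCA (principal component pursuit solved by inexact ALM): the frame stack is decomposed into a low-rank part, which captures the static background, and a sparse part, which captures the moving guidewire.

    import numpy as np

    def rpca_background(frames, lam=None, mu=None, tol=1e-7, max_iter=200):
        """Split a frame stack (T, H, W) into low-rank background + sparse
        foreground via principal component pursuit (inexact ALM sketch)."""
        T, H, W = frames.shape
        D = frames.reshape(T, -1).T.astype(np.float64)     # (pixels, frames)
        m, n = D.shape
        lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
        mu = mu if mu is not None else 0.25 * m * n / (np.abs(D).sum() + 1e-12)
        L = np.zeros_like(D)
        S = np.zeros_like(D)
        Y = np.zeros_like(D)
        for _ in range(max_iter):
            # Singular-value thresholding: update the low-rank background
            U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
            L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
            # Soft thresholding: update the sparse (guidewire) component
            R = D - L + Y / mu
            S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
            Y += mu * (D - L - S)                          # dual ascent step
            if np.linalg.norm(D - L - S) <= tol * np.linalg.norm(D):
                break
        return L.T.reshape(T, H, W)                        # background maps

The second sketch shows one common way to realize reverse attention, offered only as a plausible reading of the BRA module: the background map is embedded by a small convolutional encoder, and an inverted sigmoid response suppresses the image features wherever the background evidence is strong. The class name and layer layout are assumptions.

    import torch
    import torch.nn as nn

    class BackgroundReverseAttention(nn.Module):
        """Hypothetical BRA block: background features gate image features
        through a reverse (1 - sigmoid) mask."""

        def __init__(self, channels):
            super().__init__()
            self.bg_encoder = nn.Sequential(   # embed the RPCA background map
                nn.Conv2d(1, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )

        def forward(self, feat, bg_map):
            # feat: (B, C, H, W) image features; bg_map: (B, 1, H, W)
            bg_feat = self.bg_encoder(bg_map)
            reverse = 1.0 - torch.sigmoid(bg_feat)  # low where background is strong
            return feat * reverse + feat            # reverse-attended + residual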

Results

Experimental results on a dataset collected from three hospitals show that the proposed method can segment both single and dual guidewires in X-ray fluoroscopy sequences. In most cases, it obtains a higher F1 score and precision than state-of-the-art guidewire segmentation methods.

Conclusion

The jigsaw training strategy reduces the need for dual guidewire data and improves the performance of the network. The BRA module reduces the influence of background information and helps distinguish the guidewire. The proposed method achieves higher performance than state-of-the-art guidewire segmentation methods.
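The abstract does not specify how the jigsaw strategy composes training samples; one plausible reading, stated here purely as an assumption, is that tiles from two single-guidewire frames are stitched into one image so the network sees multi-wire layouts without requiring real dual-guidewire acquisitions:

    import numpy as np

    def jigsaw_pair(img_a, lbl_a, img_b, lbl_b, grid=2, rng=None):
        """Hypothetical jigsaw composition: randomly swap tiles between two
        single-guidewire frames (and their masks) to synthesize a sample
        that may contain more than one guidewire. Inputs are (H, W)."""
        rng = rng or np.random.default_rng()
        H, W = img_a.shape
        h, w = H // grid, W // grid
        img, lbl = img_a.copy(), lbl_a.copy()
        for i in range(grid):
            for j in range(grid):
                if rng.random() < 0.5:   # take this tile from frame B instead
                    ys = slice(i * h, (i + 1) * h)
                    xs = slice(j * w, (j + 1) * w)
                    img[ys, xs] = img_b[ys, xs]
                    lbl[ys, xs] = lbl_b[ys, xs]
        return img, lbl

Under this reading, the composed pairs would simply be mixed into the training batches alongside real single-guidewire samples.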


Acknowledgements

This work was supported in part by the National Key Research and Development Program of China (2018YFA0704100, 2018YFA0704104), the Science and Technology Development Fund of Macao SAR under grant 0016/2019/A1, the National Natural Science Foundation of China (81827805), and the China Postdoctoral Science Foundation (2021M700772). The funding sources had no role in the writing of the report or the decision to submit the paper for publication.

Author information

Correspondence to Hon-Cheng Wong or Jianjun Zhu.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by the authors.

Informed consent

Informed consent was obtained in the study.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhang, G., Wong, HC., Zhu, J. et al. Jigsaw training-based background reverse attention transformer network for guidewire segmentation. Int J CARS 18, 653–661 (2023). https://doi.org/10.1007/s11548-022-02803-z
