Semi-supervised Surgical Tool Detection Based on Highly Confident Pseudo Labeling and Strong Augmentation Driven Consistency

  • Conference paper
  • First Online:
Deep Generative Models, and Data Augmentation, Labelling, and Imperfections (DGM4MICCAI 2021, DALI 2021)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 13003)

Abstract

Surgical tool detection in computer-assisted intervention systems aims to provide surgeons with specific supportive information. Existing supervised methods rely heavily on large amounts of labeled data, yet manually annotating tool locations in surgical videos is time-consuming. To overcome this problem, we propose a semi-supervised pipeline for surgical tool detection that combines two strategies: highly confident pseudo labeling and strong augmentation driven consistency. To evaluate the proposed pipeline, we introduce a surgical tool detection dataset, the Cataract Dataset for Tool Detection (CaDTD). Compared with the supervised baseline, our semi-supervised method improves mean average precision (mAP) by 4.3%. In addition, we conducted an ablation study to validate the effectiveness of the two strategies, which yielded mAP improvements of 1.9% and 3.9%, respectively. The proposed dataset, CaDTD, is publicly available at https://github.com/evangel-jiang/CaDTD.
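The two strategies named above can be made concrete with a brief sketch. The following Python/PyTorch code is not the authors' implementation: the `detector.predict`, `detector.loss`, and `strong_augment` interfaces are hypothetical, the 0.9 confidence threshold is an assumed value, and only photometric strong augmentations (which leave box coordinates unchanged) are considered. It merely illustrates how confidence-filtered pseudo labels from unlabeled images can drive a consistency loss on strongly augmented views alongside the ordinary supervised loss.

```python
import torch

CONF_THRESHOLD = 0.9  # assumed cutoff; only highly confident detections become pseudo labels


def pseudo_label(detector, images):
    """Run the detector on (weakly augmented) unlabeled images and keep confident boxes."""
    with torch.no_grad():
        boxes, labels, scores = detector.predict(images)  # hypothetical detector interface
    keep = scores > CONF_THRESHOLD
    return boxes[keep], labels[keep]


def semi_supervised_step(detector, labeled_batch, unlabeled_images, strong_augment, lam=1.0):
    """One training step combining the supervised loss with strong-augmentation consistency."""
    images, gt_boxes, gt_labels = labeled_batch
    sup_loss = detector.loss(images, gt_boxes, gt_labels)

    # Pseudo labels are generated on the unperturbed/weak view of the unlabeled images.
    pl_boxes, pl_labels = pseudo_label(detector, unlabeled_images)

    # The same pseudo labels then supervise a strongly augmented (photometric) view,
    # enforcing consistency between the two views of the same images.
    strong_images = strong_augment(unlabeled_images)
    unsup_loss = detector.loss(strong_images, pl_boxes, pl_labels)

    return sup_loss + lam * unsup_loss  # lam weights the unsupervised term
```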

Acknowledgments

This work was supported in part by the Guangdong Key Area Research and Development Program (2020B010165004), the Shenzhen Key Basic Science Program (JCYJ20180507182437217), the National Key Research and Development Program (2019YFC0118100 and 2017YFC0110903), the National Natural Science Foundation of China (12026602), and the Shenzhen Key Laboratory Program (ZDSYS201707271637577).

Author information

Corresponding author

Correspondence to Fucang Jia.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Jiang, W., Xia, T., Wang, Z., Jia, F. (2021). Semi-supervised Surgical Tool Detection Based on Highly Confident Pseudo Labeling and Strong Augmentation Driven Consistency. In: Engelhardt, S., et al. (eds.) Deep Generative Models, and Data Augmentation, Labelling, and Imperfections. DGM4MICCAI/DALI 2021. Lecture Notes in Computer Science, vol. 13003. Springer, Cham. https://doi.org/10.1007/978-3-030-88210-5_14

  • DOI: https://doi.org/10.1007/978-3-030-88210-5_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88209-9

  • Online ISBN: 978-3-030-88210-5

  • eBook Packages: Computer Science, Computer Science (R0)
