Semi-supervised Learning for Instrument Detection with a Class Imbalanced Dataset

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 12446)

Abstract

The automated recognition of surgical instruments in surgical videos is essential for the evaluation and analysis of surgery: instrument localization information supports surgical skill assessment and intraoperative decision making. To localize surgical instruments, we train an object detector on bounding-box labels for the tools shown in surgical video. In this study, we propose a semi-supervised training method that addresses the class imbalance among surgical instruments, which makes instrument detectors difficult to train. First, we labeled videos of robotic gastrectomy for gastric cancer from 24 cases to obtain initial bounding boxes for the surgical instruments. Next, the trained instrument detector was run on unlabeled videos, and new labels were added for the under-represented tools, guided by class statistics collected from the labeled videos. We also generated labels in the spatio-temporal domain via object tracking to obtain accurate label information from the unlabeled videos automatically. Bidirectional tracking with a single-object tracker produced dense labels for the instruments lacking annotations, improving instrument detection in a fully or semi-automated manner.
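The pipeline the abstract describes — pseudo-labeling restricted to under-represented instrument classes, then densifying each seed label by bidirectional tracking — can be sketched as below. This is a minimal illustration, not the authors' implementation: the function names, the box format, the `ratio` and `score_thresh` values are assumptions, and the single-object tracker is abstracted as a `track_step` callable so any tracker can be plugged in.

```python
def underrepresented_classes(label_counts, ratio=0.5):
    """Classes whose labeled-instance count falls below `ratio` times the
    mean count are treated as imbalanced and targeted for pseudo-labeling."""
    mean = sum(label_counts.values()) / len(label_counts)
    return {c for c, n in label_counts.items() if n < ratio * mean}

def select_pseudo_labels(detections, rare_classes, score_thresh=0.8):
    """Keep only confident detections of under-represented classes as seeds
    for new labels on the unlabeled videos."""
    return [d for d in detections
            if d["cls"] in rare_classes and d["score"] >= score_thresh]

def propagate_bidirectional(frames, seed_idx, seed_box, track_step):
    """Densify one seed label by tracking it forward and backward in time.
    `track_step(prev_frame, next_frame, box)` returns the box in `next_frame`
    or None once the track is lost; returns {frame_index: box}."""
    labels = {seed_idx: seed_box}
    box = seed_box
    for i in range(seed_idx + 1, len(frames)):      # forward in time
        box = track_step(frames[i - 1], frames[i], box)
        if box is None:
            break
        labels[i] = box
    box = seed_box
    for i in range(seed_idx - 1, -1, -1):           # backward in time
        box = track_step(frames[i + 1], frames[i], box)
        if box is None:
            break
        labels[i] = box
    return labels
```

In practice `track_step` would wrap a real single-object tracker initialized on the seed box; the bidirectional pass matters because a detection may first fire in the middle of an instrument's appearance, so frames both before and after the seed need labels.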



Author information


Corresponding author

Correspondence to Min-Kook Choi.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Yoon, J., Lee, J., Park, S., Hyung, W.J., Choi, M.-K. (2020). Semi-supervised Learning for Instrument Detection with a Class Imbalanced Dataset. In: Cardoso, J., et al. (eds.) Interpretable and Annotation-Efficient Learning for Medical Image Computing. IMIMIC/MIL3ID/LABELS 2020. Lecture Notes in Computer Science, vol. 12446. Springer, Cham. https://doi.org/10.1007/978-3-030-61166-8_28


  • DOI: https://doi.org/10.1007/978-3-030-61166-8_28

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-61165-1

  • Online ISBN: 978-3-030-61166-8

  • eBook Packages: Computer Science (R0)
