Using artificial intelligence to quantify dynamic retraction of brain tissue and the manipulation of instruments in neurosurgery

  • Original Article
International Journal of Computer Assisted Radiology and Surgery

Abstract

Purpose

There is currently no objective way to measure the surgeon’s manipulation and retraction of neural tissue. Our goal is to develop metrics that quantify dynamic retraction and instrument manipulation during neurosurgery.

Methods

We trained a convolutional neural network (CNN) to analyze microscopic footage of neurosurgical procedures and generate metrics evaluating the surgeon’s dynamic retraction of brain tissue and, via an object-tracking process, the surgeon’s manipulation of the instruments themselves. U-Net image segmentation outputs bounding polygons around the cerebral parenchyma of interest, as well as vascular structures and cranial nerves. A channel and spatial reliability tracker (CSRT) framework is used in conjunction with our CNN to track the surgical instruments of interest.
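To make the tracking step concrete, below is a minimal sketch of channel and spatial reliability tracking using OpenCV’s CSRT implementation (cv2.TrackerCSRT_create, available in opencv-contrib-python). The video file name and the initial bounding box are hypothetical placeholders; in the pipeline described here, the seed box would come from the CNN’s instrument segmentation rather than being set by hand.

```python
# Minimal CSRT instrument-tracking sketch (requires opencv-contrib-python).
# "procedure.mp4" and the seed box are hypothetical placeholders.
import cv2

cap = cv2.VideoCapture("procedure.mp4")   # microscope footage (placeholder path)
ok, frame = cap.read()
assert ok, "could not read first frame"

tracker = cv2.TrackerCSRT_create()        # channel and spatial reliability tracker
init_box = (100, 100, 80, 40)             # (x, y, w, h); would come from segmentation
tracker.init(frame, init_box)

centers = []                              # per-frame instrument positions
while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, (x, y, w, h) = tracker.update(frame)
    if found:
        centers.append((x + w / 2, y + h / 2))  # track the box centroid over time

cap.release()
# `centers` now holds a trajectory from which motion metrics can be derived.
```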

Results

Our network achieved a state-of-the-art intersection over union (IoU) of 72.64% for biological tissue segmentation. Multivariate statistical analysis was used to evaluate dynamic retraction, tissue handling, and instrument manipulation.
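For reference, intersection over union compares a predicted mask with a ground-truth mask as the ratio of their overlap to their combined area. A minimal sketch on binary NumPy masks follows; the toy masks are illustrative, not data from the study.

```python
# Intersection over union (IoU) for binary segmentation masks.
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU between two binary masks of the same shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(intersection) / union if union > 0 else 1.0  # two empty masks agree

# Toy example: two overlapping 4x4 squares on an 8x8 grid.
a = np.zeros((8, 8)); a[0:4, 0:4] = 1
b = np.zeros((8, 8)); b[2:6, 2:6] = 1
print(iou(a, b))  # 4 / 28 ≈ 0.143
```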

Conclusion

Our model enables evaluation of the dynamic retraction of soft tissue and the manipulation of instruments during a surgical procedure, while accounting for movement of the operative microscope. It can potentially provide the surgeon with objective feedback on the movement of instruments and its effect on brain tissue.



Funding

The authors did not receive support from any organization for the submitted work.

Author information

Corresponding author

Correspondence to Michel W. Bojanowski.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed consent

For this type of study, formal consent is not required.


Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (mov 113393 KB)

Supplementary file 2 (mov 81508 KB)


About this article

Cite this article

Martin, T., El Hage, G., Shedid, D. et al. Using artificial intelligence to quantify dynamic retraction of brain tissue and the manipulation of instruments in neurosurgery. Int J CARS 18, 1469–1478 (2023). https://doi.org/10.1007/s11548-022-02824-8