Abstract
Purpose
There is currently no objective way to measure how much a surgeon manipulates and retracts neural tissue. Our goal is to develop metrics that quantify dynamic retraction of brain tissue and manipulation of instruments during neurosurgery.
Methods
We trained a convolutional neural network (CNN) to analyze operative-microscope footage of neurosurgical procedures and generate metrics evaluating the surgeon’s dynamic retraction of brain tissue, and paired it with an object-tracking process to evaluate the surgeon’s manipulation of the instruments themselves. U-Net image segmentation outputs bounding polygons around the cerebral parenchyma of interest, as well as vascular structures and cranial nerves. A channel and spatial reliability tracker (CSRT) framework is used in conjunction with the CNN to track the surgical instruments of interest.
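As a hedged illustration of the tracking step only, the sketch below runs OpenCV’s CSRT tracker over a video; the file name and seed bounding box are placeholders, since in the pipeline described above the box would be provided by the CNN’s instrument segmentation rather than hand-specified.

```python
import cv2

# Hypothetical file name and seed box; in the described pipeline the
# initial box would come from the CNN's instrument segmentation.
cap = cv2.VideoCapture("microscope_footage.mov")
ok, frame = cap.read()
assert ok, "could not read the first frame"

# Depending on the OpenCV build, the tracker may live under cv2.legacy.
tracker = cv2.TrackerCSRT_create()
tracker.init(frame, (420, 310, 80, 160))  # (x, y, width, height)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)  # per-frame instrument position
    if found:
        x, y, w, h = map(int, box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("CSRT tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```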
Results
Our network achieved a state-of-the-art intersection over union (72.64%) for biological tissue segmentation. Multivariate statistical analysis was used to evaluate dynamic retraction, tissue handling, and instrument manipulation.
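For readers unfamiliar with the metric, the following minimal sketch computes mask-based intersection over union as conventionally defined for segmentation; it shows the standard definition, not the authors’ evaluation code.

```python
import numpy as np

def intersection_over_union(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU between two binary segmentation masks of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union > 0 else 1.0  # both empty: agreement

# Toy example: two overlapping 40x40 square masks on a 100x100 grid.
a = np.zeros((100, 100)); a[20:60, 20:60] = 1
b = np.zeros((100, 100)); b[30:70, 30:70] = 1
print(f"IoU = {intersection_over_union(a, b):.4f}")  # 900/2300 ≈ 0.3913
```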
Conclusion
Our model makes it possible to evaluate dynamic retraction of soft tissue and manipulation of instruments during a surgical procedure while accounting for movement of the operative microscope. It can potentially provide the surgeon with objective feedback on instrument movement and its effect on brain tissue.
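The article’s abstract does not detail how microscope movement is compensated. One common approach, sketched below purely as an assumption, estimates a frame-to-frame homography from matched background features (ORB + RANSAC) and factors that global motion out of the instrument trajectories; this is an illustrative technique, not the authors’ method.

```python
import cv2
import numpy as np

def estimate_camera_motion(prev_gray: np.ndarray, curr_gray: np.ndarray) -> np.ndarray:
    """Estimate global frame-to-frame motion as a 3x3 homography."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects matches on moving instruments, so H approximates
    # the microscope's motion alone.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

# To compensate: map the previous instrument position through H and treat
# only the residual displacement as true instrument movement.
```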









Funding
The authors did not receive support from any organization for the submitted work.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Ethical approval
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Informed consent
For this type of study, formal consent is not required.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Below is the link to the electronic supplementary material.
Supplementary file 1 (mov 113393 KB)
Supplementary file 2 (mov 81508 KB)
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Martin, T., El Hage, G., Shedid, D. et al. Using artificial intelligence to quantify dynamic retraction of brain tissue and the manipulation of instruments in neurosurgery. Int J CARS 18, 1469–1478 (2023). https://doi.org/10.1007/s11548-022-02824-8