3D augmentation of the surgical video stream: Toward a modular approach
Introduction
In recent decades, surgical techniques aimed at minimizing invasiveness have grown in popularity. Among these procedures, collectively referred to as minimally invasive surgery (MIS), robot-assisted surgery was developed to help the surgeon perform more complex and precise tasks. This led to an increased need for visual feedback from the operative environment: the surgeon, operating the robotic system through a console and a visor, experiences reduced awareness of the operative scene. Augmented reality (AR) was introduced as an answer to this drawback, with varying degrees of success across the disciplines in which it has been applied. In this paper, we extend the work in [1], where we presented our progress in augmenting the endoscope video during robot-assisted radical prostatectomies by overlaying the 3D virtual prostate model of the patient undergoing the procedure on its real counterpart, using different real-time tracking techniques. Here we present in detail the technical aspects that enable our framework, which were addressed only briefly in the above-mentioned works. In particular, we present the modular approach we developed to solve the central problem of virtual-over-real registration. Instead of using a single registration method for the whole procedure, as is common practice in the literature (see Section 2), we built a stack of different solutions, one for each stage of the surgical procedure, since each stage presents specific visual features that can be exploited differently to guide the virtual-over-real overlay. We believe our approach is a solid addition to existing ones because it allows programmers to update only those parts of the whole application that require improvement.
In the literature, there are numerous research works on laparoscopic AR, all proposing different custom-tailored solutions that do not always go beyond the stage of proof-of-concept software. This means there is not yet a standard for the development of such applications, and trial-and-error situations commonly occur. A modular methodology therefore helps developers focus their intervention on specific parts of the project, or reuse an already developed solution for a different kind of surgical procedure with sufficiently similar visual features.
The applications developed according to our approach are currently being used during in-vivo surgery, for extensive testing, by the Urology unit of the San Luigi Hospital in Orbassano (TO), Italy, and the augmented video stream can be displayed directly in the TilePro visualization system of the da Vinci surgical console. In other papers, such as [2], [3], [4], we have presented, from a medical perspective, the results obtained with our application at different stages of its development, as new features were introduced and tested.
The proposed modular approach is presented using a formal model that describes the system of solutions, each applied to a different stage of the surgical procedure. After summarizing our extensive literature research in Section 2, in Section 3 we formally describe the proposed modular framework, focusing on the five main stages that characterize a prostatectomy procedure, as well as the main features each stage offers and the challenges it poses to their detection. The goal in each stage is to apply the detection strategy that maximizes robustness (e.g., a minimum number of false feature detections and a correct position for the virtual model) while keeping the demand for computational resources under control; controlling these resources is mandatory to allow real-time delivery of the resulting augmented stream. In Section 3 we also introduce the different computer vision tools we investigated as candidates for each of the stages. Three phases of the prostatectomy procedure were selected as those potentially benefiting from augmentation, and we developed a different software solution for each of them; in Section 5 we describe the algorithms and the technological aspects of these software tools. At the present stage of development of the application stack, switching between one phase and the next is human-assisted, but in the future our framework implementation will provide an autonomous, machine-learning-based switching system. In Section 6 we discuss the proposed method and the results achieved through its application, in terms of the benefits experienced during in-vivo tests. Future lines of work are discussed in Section 7.
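The modular idea described above, one tracking solution per surgical stage, behind a single interface, can be sketched as a stage-to-strategy dispatch table. The class and method names below are illustrative assumptions, not the framework's actual API:

```python
from abc import ABC, abstractmethod


class TrackingStrategy(ABC):
    """One registration/tracking solution, tied to a surgical stage."""

    @abstractmethod
    def register(self, frame):
        """Return the pose of the virtual model for this video frame."""


class ModularAugmenter:
    """Dispatches each incoming frame to the strategy of the current stage."""

    def __init__(self):
        self._strategies = {}
        self.current_stage = None

    def add_stage(self, stage_name, strategy):
        # Swapping one entry updates a single stage without
        # touching the rest of the stack.
        self._strategies[stage_name] = strategy

    def set_stage(self, stage_name):
        # Human-assisted for now; could later be driven by a
        # machine-learning stage classifier.
        self.current_stage = stage_name

    def process(self, frame):
        return self._strategies[self.current_stage].register(frame)
```

The design mirrors the paper's maintainability argument: improving the tracking of one stage means replacing a single `TrackingStrategy` implementation, while `set_stage` is the hook where an autonomous switching system could later replace the human operator.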
Related works
In order to reduce access wound trauma and decrease the incidence of post-operative complications due to infections or incisional hernias, thus shortening hospital stays and reducing general disfigurement, minimally invasive surgical (MIS) technologies have been increasingly adopted in recent years [5]. This growing adoption increased the demand for greater surgical precision, leading to the birth of robotic surgery. Minimally
Methods
In order to improve the surgeon’s spatial perception during robot-assisted minimally invasive procedures, we intend to provide him or her with a solid, automatic software system that positions, rotates, and scales in real time the 3D virtual model of a patient’s organ, aligned over its image captured by the endoscope. Since the accuracy of the overlay is of the utmost importance, such a system needs to account for tissue elasticity: as the real organ’s shape is modified during the procedure we need to
The robot-assisted radical prostatectomy (RARP) procedure
As a case study for the proposed framework, in this section we present its application to robot-assisted radical prostatectomy (RARP). We briefly address the phases of this surgical procedure to the extent required to introduce this paper’s framework. According to Huynh and Ahlering [38], the procedure’s steps are highly standardized, and we aggregate them into five subsequent stages based, as previously stated, on similar visual characteristics as well as on similar levels of benefit from AR use.
Augmentation strategies
In Section 4 we introduced three augmentation strategies that we are currently testing in our ongoing research. In this section, we present them in detail from an implementation perspective. The three strategies are conceived as three stand-alone software applications, each used during a specific set of steps of the medical procedure that we call a stage. At the present state of development, the decision about which stage the system is currently in is made by a human operator. In Fig. 4 we show the general
Discussion
The proposed framework has been developed to provide a modular structure supporting the design of AR applications for minimally invasive surgery. The framework is not limited in its potential applications to any particular surgical specialty. At present, the development process for this kind of application is expensive from many perspectives, such as validation and testing, not counting the man-hours required for extensive methods research and programming. Moreover, AR applications in the
Conclusions
In this paper we propose a modular approach to the tracking problem during in-vivo robotic surgery. The segmentation of the whole procedure into a set of stages allows associating the best tracking strategy with each of them, as well as reusing implemented software mechanisms in stages with similar features belonging to different urological specialties. At the current stage of development, the stack of applications developed according to the presented framework is used in-vivo robot-assisted
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Acknowledgments
We thank all the men and women working in the Urology unit of the San Luigi Hospital in Orbassano (TO), Italy, as well as the institution itself, for the support given to our research and testing.
References
- et al., Three-dimensional elastic augmented-reality robot-assisted radical prostatectomy using hyperaccuracy three-dimensional reconstruction technology: a step further in the identification of capsular involvement, Eur. Urol. (2019)
- et al., The evolution of robotic surgery: surgical and anaesthetic aspects, Br. J. Anaesth. (2017)
- et al., The age of robotic surgery – is laparoscopy dead?, Arab J. Urol. (2018)
- et al., The status of augmented reality in laparoscopic surgery as of 2016, Med. Image Anal. (2017)
- et al., Augmented reality-assisted bypass surgery: embracing minimal invasiveness, World Neurosurg. (2015)
- et al., A paradigm shift in orthognathic surgery? A comparison of navigation, computer-aided designed/computer-aided manufactured splints, and “classic” intermaxillary splints to surgical transfer of virtual orthognathic planning, J. Oral. Maxillofac. Surg. (2013)
- et al., Augmented reality partial nephrectomy: examining the current status and future perspectives, Urology (2014)
- et al., Enhancing spatial navigation in robot-assisted surgery: an application, Lecture Notes in Mechanical Engineering (2019)
- et al., 3D mixed reality holograms for preoperative surgical planning of nephron-sparing surgery: evaluation of surgeons’ perception, Minerva Urol. Nefrol. (2019)
- et al., Augmented-reality robot-assisted radical prostatectomy using hyper-accuracy three-dimensional reconstruction (HA3dTM) technology: a radiological and pathological study, BJU Int. (2018)
- Review of emerging surgical robotic technology, Surg. Endosc.
- Current progress on augmented reality visualization in endoscopic surgery, Curr. Opin. Urol.
- Augmented reality in minimally invasive surgery, Lecture Notes in Electrical Engineering
- Recent advances in augmented reality, IEEE Comput. Graph. Appl.
- Advanced medical displays: a literature review of augmented reality, J. Disp. Technol.
- Introduction to augmented reality, J. Inst. Image Inform. Telev. Eng.
- Augmented reality technologies, systems and applications, Multimed. Tools Appl.
- Interactive virtual technologies in engineering education: why not 360 videos?, Int. J. Interact. Des. Manuf. (IJIDeM)
- Augmented reality in medicine, Hanyang Med. Rev.
- Image-guidance for surgical procedures, Phys. Med. Biol.
- Comparative effectiveness and safety of image guidance systems in neurosurgery: a preclinical randomized study, J. Neurosurg.