3D augmentation of the surgical video stream: Toward a modular approach

https://doi.org/10.1016/j.cmpb.2020.105505

Highlights

  • Augmented Reality systems for Robot-Assisted Surgery typically use a single tracking strategy.

  • Each stage of the surgical procedure potentially presents different visual features.

  • These visual features can be exploited by different, more efficient tracking methods.

  • We combined different tracking methods into a single integrated navigation aid.

  • We provide a formal model to generalize our approach to any surgical specialty.

Abstract

Background and Objective. We present an original approach to the development of augmented reality (AR) real-time solutions for robotic surgery navigation. The surgeon operating the robotic system through a console and a visor experiences reduced awareness of the operative scene. In order to improve the surgeon’s spatial perception during robot-assisted minimally invasive procedures, we provide him/her with a robust automatic software system that positions, rotates, and scales in real time the 3D virtual model of the patient’s organ, keeping it aligned over its image captured by the endoscope.

Methods. We observed that the surgeon may benefit differently from the 3D augmentation during each stage of the surgical procedure; moreover, each stage may present different visual elements that pose specific challenges and opportunities for implementing organ detection strategies. Hence we integrate different solutions, each dedicated to a specific stage of the surgical procedure, into a single software system.

Results. We present a formal model that generalizes our approach, describing a system composed of integrated solutions for AR in robot-assisted surgery. Following the proposed framework, an application has been developed that is currently used during in vivo surgery, for extensive testing, by the Urology unit of the San Luigi Hospital in Orbassano (Turin), Italy.

Conclusions. The main contribution of this paper is in presenting a modular approach to the tracking problem during in-vivo robotic surgery, whose efficacy from a medical point of view has been assessed in cited works. The segmentation of the whole procedure into a set of stages allows the best tracking strategy to be associated with each of them, and allows implemented software mechanisms to be reused in stages with similar features.

Introduction

In recent decades, surgical techniques aimed at minimizing invasiveness have grown in popularity. Among these procedures, collectively referred to as minimally invasive surgery (MIS), robot-assisted surgery was developed to assist the surgeon in performing more complex and precise tasks. This led to an increased need for visual feedback from the operative environment: the surgeon operating the robotic system through a console and a visor experiences reduced awareness of the operative scene. Augmented reality (AR) was introduced as an answer to this drawback, more or less successfully depending on the discipline of application. In this paper, we extend the work in [1], where we presented our progress in augmenting the endoscope’s video during robot-assisted radical prostatectomies by overlaying the 3D virtual prostate model of the patient undergoing the procedure onto its real counterpart, using different real-time tracking techniques. Here we present in detail the technical aspects that enable our framework and that were addressed only briefly in the above-mentioned works. In particular, we present the modular approach we developed to solve the central problem of virtual-over-real registration. Instead of using a single registration method for the whole procedure, as is common practice in the literature (see Section 2), we opted to build a stack of different solutions, one for each stage of the surgical procedure, since each stage presents specific visual features that can be exploited differently to guide the virtual-over-real overlay. We believe our approach to be a solid addition to the existing ones because it allows programmers to update only those parts of the whole application that require improvement. In the literature, there are numerous research works on laparoscopic AR, all proposing different custom-tailored solutions, not always going beyond the stage of proof-of-concept software applications. This means that there is not yet a standard for the development of such applications, and trial-and-error situations commonly occur. Hence, the benefits of a modular methodology are twofold: developers can focus their intervention on specific parts of the whole project, or reuse an already developed solution for a different kind of surgical procedure with similar visual features.

The applications developed according to our approach are currently being used during in-vivo surgery, for extensive testing, by the Urology unit of the San Luigi Hospital in Orbassano (Turin), Italy, and the augmented video stream can be accessed directly within the Tile-Pro visualization system of the Da Vinci surgical console. In other papers, such as [2], [3], [4], we have presented, from a medical perspective, the results obtained by the use of our application at different stages of its development, as new features were introduced and tested.

The proposed modular approach is presented using a formal model that describes the system of solutions, each applied to a different stage of the surgical procedure. After summarizing our extensive literature review in Section 2, in Section 3 we formally describe the proposed modular framework, focusing on the five main stages that characterize a prostatectomy procedure, as well as the main visual features of each stage and the challenges they pose to detection. The goal in each stage is to apply the detection strategy that maximizes robustness (e.g., a minimal number of false feature detections and a correct position for the virtual model) while keeping the demand for computational resources under control. Controlling these resources is mandatory to allow real-time use of the resulting augmented stream. In Section 3 we also introduce the different computer vision tools we investigated as candidates for each of the stages. Three stages of the prostatectomy procedure were selected as those potentially benefiting from augmentation, and we developed a different software solution for each of them. In Section 5 we describe the algorithms and the technological aspects of these software tools. At the present stage of development of the application stack, switching between one stage and the next is human-assisted; in the future, our framework implementation will provide an autonomous, machine learning-based switching system, and this future line of work is discussed in Section 7. In Section 6, we discuss the proposed method and the results our research achieved through its application, in terms of the benefits experienced during in-vivo tests.
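To make the modular structure concrete before the formal treatment, the following minimal Python sketch models the mapping between surgical stages and dedicated tracking strategies, with the human-assisted stage switch described above. All names here are our own illustrative choices, not the authors’ implementation.

```python
from abc import ABC, abstractmethod

class TrackingStrategy(ABC):
    """One registration solution, dedicated to a single surgical stage."""

    @abstractmethod
    def register(self, frame):
        """Estimate the virtual-over-real overlay pose for this frame."""

class StageRouter:
    """Binds each stage to its tracking strategy and dispatches frames.

    set_stage() models the human-assisted switch; an autonomous,
    machine learning-based switch could replace it later.
    """

    def __init__(self):
        self._strategies = {}
        self._stage = None

    def bind(self, stage, strategy):
        self._strategies[stage] = strategy

    def set_stage(self, stage):
        if stage not in self._strategies:
            raise KeyError(f"no strategy bound to stage {stage!r}")
        self._stage = stage

    def process(self, frame):
        # Delegate registration to the strategy of the current stage.
        return self._strategies[self._stage].register(frame)
```

Under this scheme, updating the solution for one stage means replacing a single bound strategy, which is the maintainability benefit the modular approach aims for.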

Section snippets

Related works

In order to reduce access wound trauma, decrease the incidence of post-operative complications due to infections or incisional hernias (thus shortening hospital stays), and reduce general disfigurement, recent years have seen an increasing adoption of minimally invasive surgical (MIS) technologies [5]. This increasing adoption raised the demand for greater surgical precision, leading to the birth of robotic surgery. Minimally

Methods

In order to improve the surgeon’s spatial perception during robot-assisted minimally invasive procedures, we intend to provide him or her with a robust automatic software system that positions, rotates, and scales in real time the 3D virtual model of the patient’s organ, keeping it aligned over its image captured by the endoscope. Since the accuracy of the overlay is of the utmost importance, such a system needs to account for tissue elasticity: as the real organ’s shape is modified during the procedure we need to
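As a minimal numerical sketch of the per-frame overlay update just described (our own formulation with hypothetical names; the paper’s actual code is not reproduced here), the position, rotation, and scale estimated for each frame can be composed into a single 4x4 model matrix applied to the virtual organ:

```python
import numpy as np

def overlay_matrix(translation, rotation, scale):
    """Compose a 4x4 model matrix: uniform scale, then rotation,
    then translation, so the virtual organ tracks its real counterpart.

    translation: (3,) array, rotation: (3, 3) orthonormal matrix,
    scale: positive scalar, all estimated by the active tracking strategy.
    """
    m = np.eye(4)
    m[:3, :3] = scale * np.asarray(rotation)
    m[:3, 3] = np.asarray(translation)
    return m

# Example: model enlarged by 10% and shifted 5 units along the x axis.
M = overlay_matrix([5.0, 0.0, 0.0], np.eye(3), 1.1)
```

Note that a rigid transform like this cannot express tissue deformation; accounting for elasticity, as required above, means updating the model geometry itself rather than only its pose.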

The robot-assisted radical prostatectomy (RARP) procedure

As a case study for the proposed framework, in this Section we present its application to robot-assisted radical prostatectomies (RARP). We briefly address the phases of this surgical procedure to the extent required to introduce this paper’s framework. According to Huynh and Ahlering [38], the steps of this procedure are highly standardized, and we aggregate them into five subsequent stages based, as previously stated, on similar visual characteristics as well as on similar levels of benefit from AR use.
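Purely as an illustration of this aggregation (the labels below are hypothetical placeholders, not the authors’ taxonomy, since the five stages are not enumerated in this excerpt), the grouping of standardized procedure steps into stages can be encoded as a simple mapping:

```python
from enum import Enum, auto

class Stage(Enum):
    """Five aggregated RARP stages; names are illustrative placeholders."""
    STAGE_1 = auto()
    STAGE_2 = auto()
    STAGE_3 = auto()
    STAGE_4 = auto()
    STAGE_5 = auto()

# Hypothetical example: standardized steps that share visual
# characteristics (and a similar benefit from AR) map to one stage.
STAGE_OF_STEP = {
    "step_a": Stage.STAGE_1,
    "step_b": Stage.STAGE_1,  # visually similar to step_a
    "step_c": Stage.STAGE_2,
}
```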

Augmentation strategies

In Section 4 we introduced three augmentation strategies that we are currently testing in our ongoing research. In this section, we present them in detail from an implementation perspective. The three strategies are conceived as three stand-alone software applications, each used during a specific set of the surgical procedure steps that we call a stage. At the present state of development, the decision about which stage the system is currently in is human-made. In Fig. 4 we show the general
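The following self-contained sketch illustrates how three stand-alone augmentation applications could be driven by a human-made stage selection. OpenCV is assumed here only for frame capture and display; in the real system the augmented stream is delivered to the Tile-Pro display of the surgical console, and all names below are hypothetical.

```python
import cv2  # assumed only for capture and display of the video stream

def run(applications, stage, source=0):
    """Route each endoscopic frame to the augmentation application bound
    to the currently selected stage; keys '1'-'3' emulate the
    human-assisted stage switch, 'q' quits."""
    cap = cv2.VideoCapture(source)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Each application overlays the 3D model with its own strategy.
        augmented = applications[stage].augment(frame)
        cv2.imshow("Augmented stream", augmented)
        key = cv2.waitKey(1) & 0xFF
        if key in (ord("1"), ord("2"), ord("3")):
            stage = int(chr(key))  # human-made stage decision
        elif key == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```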

Discussion

The proposed framework has been developed to provide a modular structure to support the design of AR applications for minimally-invasive surgery. The framework is not limited in its potential applications to any particular surgical specialty. At present, the development process for this kind of application is expensive from many perspectives, such as validation and testing, not counting the man-hours required for extensive research on methods and for programming. Moreover, AR applications in the

Conclusions

In this paper we propose a modular approach to the tracking problem during in-vivo robotic surgery. The segmentation of the whole procedure into a set of stages allows the best tracking strategy to be associated with each of them, and allows implemented software mechanisms to be reused in stages with similar features belonging to different urological specialties. At the current stage of development, the stack of applications developed according to the presented framework is used in in-vivo robot-assisted

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Acknowledgments

We thank all the men and women working in the Urology unit of the San Luigi Hospital in Orbassano (Turin), Italy, as well as the Institution itself, for the support given to our research and testing.

References (45)

  • B.S. Peters et al., Review of emerging surgical robotic technology, Surg. Endosc. (2018)
  • J. Fischer, M. Neff, D. Freudenstein, D. Bartz, Medical augmented reality based on commercial image guided surgery, ...
  • M. Nakamoto et al., Current progress on augmented reality visualization in endoscopic surgery, Curr. Opin. Urol. (2012)
  • L.T.D. Paolis et al., Augmented reality in minimally invasive surgery, Lecture Notes in Electrical Engineering (2010)
  • R. Azuma et al., Recent advances in augmented reality, IEEE Comput. Graph. Appl. (2001)
  • T. Sielhorst et al., Advanced medical displays: a literature review of augmented reality, J. Disp. Technol. (2008)
  • H. Kato, Introduction to augmented reality, J. Inst. Image Inform. Telev. Eng. (2012)
  • J. Carmigniani et al., Augmented reality technologies, systems and applications, Multimed. Tools Appl. (2010)
  • M.G. Violante et al., Interactive virtual technologies in engineering education: why not 360 videos?, Int. J. Interact. Des. Manuf. (IJIDeM) (2019)
  • H.-G. Ha et al., Augmented reality in medicine, Hanyang Med. Rev. (2016)
  • T.M. Peters, Image-guidance for surgical procedures, Phys. Med. Biol. (2006)
  • H.J. Marcus et al., Comparative effectiveness and safety of image guidance systems in neurosurgery: a preclinical randomized study, J. Neurosurg. (2015)