
Interactive Endoscopy: A Next-Generation, Streamlined User Interface for Lung Surgery Navigation

  • Conference paper
Medical Image Computing and Computer Assisted Intervention – MICCAI 2019 (MICCAI 2019)

Abstract

In the classical scenario of augmented reality for minimally invasive surgery, computer-generated graphics are superimposed onto live video from an endoscope, offering the surgeon visual information that is hidden in the native scene. Over the past few decades, research efforts have pressed considerably against the challenges of infusing a priori knowledge into endoscopic streams. As framed, these contributions emulate perception at the level of the expert surgeon, perpetuating debates on the technical, clinical, and societal viability of the proposition.

We herein introduce interactive endoscopy, which transforms passive visualization into an interface that allows the surgeon to label noteworthy anatomical features in the endoscopic video and have the virtual annotations remember their tissue locations during surgical manipulation. The streamlined interface combines vision-based tool tracking and speech recognition to enable interactive selection and labeling, followed by tissue tracking and optical flow for label persistence. These discrete capabilities have matured rapidly in recent years, promising technical viability of the system; it supports clinical viability by helping clinicians offload the cognitive demands of visually deciphering soft tissues; and it supports societal viability by engaging, rather than emulating, surgeon expertise. Through a video-assisted thoracoscopic surgery use case, we develop a proof of concept that improves workflow by tracking surgical tools and visualizing tissue, while serving as a bridge to the classical promise of augmented reality in surgery.
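As a rough illustration of the label-persistence step only, the sketch below re-localizes a surgeon-placed label in each new frame by tracking the underlying tissue point with OpenCV's pyramidal Lucas-Kanade optical flow. This is a minimal sketch under stated assumptions, not the authors' implementation: the video path, the single hand-placed label, and the track_labels helper are illustrative, and the paper's pipeline additionally involves tool tracking and speech-triggered label placement.

```python
# Minimal sketch (hypothetical, not the authors' implementation) of label persistence:
# a surgeon-placed annotation is propagated frame to frame with sparse optical flow.
import cv2
import numpy as np

def track_labels(prev_gray, curr_gray, label_points):
    """Propagate labeled tissue points from the previous frame to the current one.

    label_points: float32 array of shape (N, 1, 2) holding (x, y) label positions.
    Returns the updated positions and a boolean mask of labels tracked successfully.
    """
    new_points, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, label_points, None,
        winSize=(31, 31), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    return new_points, status.reshape(-1).astype(bool)

# Usage: read endoscopic frames and redraw each label at its tracked tissue location.
cap = cv2.VideoCapture("endoscopy.mp4")                    # hypothetical recording
ret, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
labels = np.array([[[320.0, 240.0]]], dtype=np.float32)    # one illustrative label

while True:
    ret, frame = cap.read()
    if not ret:
        break
    curr_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    labels, ok = track_labels(prev_gray, curr_gray, labels)
    for (x, y), visible in zip(labels.reshape(-1, 2), ok):
        if visible:
            cv2.circle(frame, (int(x), int(y)), 6, (0, 255, 0), 2)  # persistent overlay
    cv2.imshow("interactive endoscopy (sketch)", frame)
    if cv2.waitKey(1) == 27:                               # Esc to quit
        break
    prev_gray = curr_gray
```

In a full system, the tracked points would be seeded by the interactive selection step (tool tip plus voice command) rather than hard-coded, and a more robust tissue tracker would handle occlusion and drift.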



Author information


Corresponding author

Correspondence to Paul Thienphrapa.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Thienphrapa, P., et al. (2019). Interactive Endoscopy: A Next-Generation, Streamlined User Interface for Lung Surgery Navigation. In: Shen, D., et al. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2019. Lecture Notes in Computer Science, vol. 11768. Springer, Cham. https://doi.org/10.1007/978-3-030-32254-0_10


  • DOI: https://doi.org/10.1007/978-3-030-32254-0_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-32253-3

  • Online ISBN: 978-3-030-32254-0

  • eBook Packages: Computer Science, Computer Science (R0)
