
Enhanced visualisation for minimally invasive surgery

  • Original Article
International Journal of Computer Assisted Radiology and Surgery

Abstract

Purpose

Endoscopes used in minimally invasive surgery provide a limited field of view, thus requiring a high degree of spatial awareness and orientation. Attempts to expand this small, restricted view with previously observed imagery have been made by researchers and are generally known as image mosaicing or dynamic view expansion. For minimally invasive endoscopy, SLAM-based methods have shown potential but have yet to address effective visualisation techniques.

Methods

The live endoscopic video feed is expanded with previously observed footage. To this end, a method is proposed that highlights the difference between the actual camera image and historic data observed earlier. Old video data is faded to grey scale to mimic human peripheral vision, and specular highlights are removed with the help of texture synthesis to avoid distracting visual cues. The method is evaluated on in vivo and phantom sequences through a detailed user study examining the user's ability to discern temporal motion trajectories while visualising the expanded field of view, a feature of practical value for enhancing spatial awareness and orientation.
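The compositing step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, array layout (H × W × 3 floats in [0, 1]) and the use of BT.601 luminance weights are assumptions.

```python
import numpy as np

def expand_view(mosaic_rgb, live_rgb, live_mask):
    """Composite the live endoscope frame over the historic mosaic.

    Historic pixels are faded to grey scale to mimic human peripheral
    vision, so only the live field of view retains full colour.
    """
    # Luminance of the historic mosaic (ITU-R BT.601 weights).
    grey = mosaic_rgb @ np.array([0.299, 0.587, 0.114])
    faded = np.repeat(grey[..., None], 3, axis=-1)
    # Inside the mask the live frame replaces the faded historic imagery.
    return np.where(live_mask[..., None], live_rgb, faded)
```

A smooth transition between the colour and grey regions (e.g. a feathered alpha mask instead of the hard boolean one used here) would be closer to the peripheral-vision effect the paper describes.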

Results

The difference between historic data and live video is integrated effectively. The use of a single texture domain generated by planar parameterisation is demonstrated for view expansion. Specular highlights can be removed through texture synthesis without introducing noticeable artefacts. The motion trajectory of the endoscopic camera, implicitly encoded by the proposed visualisation, conveys both the global context and the temporal evolution of the scene.
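As a rough illustration of the specular-highlight step, the sketch below masks near-saturated pixels and fills them by diffusing in colour from surrounding tissue. The paper uses example-based texture synthesis; this simple diffusion fill is only a stand-in, and the function name, threshold and iteration count are assumptions.

```python
import numpy as np

def fill_specular(img, thresh=0.95, max_iters=100):
    """Mask near-saturated highlights, then fill them by repeatedly
    averaging the already-known 4-neighbours of each masked pixel."""
    mask = img.min(axis=-1) > thresh          # bright in every channel
    out = img.astype(float).copy()
    known = ~mask                             # pixels with trusted colour
    for _ in range(max_iters):
        if known.all():
            break
        acc = np.zeros_like(out)              # sum of known neighbours
        cnt = np.zeros(mask.shape)            # count of known neighbours
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            k = np.roll(known, (dy, dx), axis=(0, 1))
            acc += np.roll(out, (dy, dx), axis=(0, 1)) * k[..., None]
            cnt += k
        frontier = ~known & (cnt > 0)         # masked pixels touching known ones
        out[frontier] = acc[frontier] / cnt[frontier][:, None]
        known |= frontier
    return out, mask
```

Note that `np.roll` wraps at the image border; a production fill would pad instead, and texture synthesis would additionally reproduce tissue texture rather than a smooth colour gradient.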

Conclusions

Dynamic view expansion provides more context for navigation and orientation by establishing reference points beyond the camera’s field of view. Effective integration of visual cues is paramount for concise visualisation.



Corresponding author

Correspondence to Johannes Totz.


Cite this article

Totz, J., Fujii, K., Mountney, P. et al. Enhanced visualisation for minimally invasive surgery. Int J CARS 7, 423–432 (2012). https://doi.org/10.1007/s11548-011-0631-z


Keywords

Navigation