
LightNeuS: Neural Surface Reconstruction in Endoscopy Using Illumination Decline

  • Conference paper
  • First Online:
Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 (MICCAI 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14229)

Abstract

We propose a new approach to 3D reconstruction from sequences of images acquired by monocular endoscopes. It is based on two key insights. First, endoluminal cavities are watertight, a property naturally enforced by modeling them in terms of a signed distance function. Second, the scene illumination is variable. It comes from the endoscope’s light sources and decays with the inverse of the squared distance to the surface. To exploit these insights, we build on NeuS [25], a neural implicit surface reconstruction technique with an outstanding capability to learn appearance and an SDF surface model from multiple views, but currently limited to scenes with static illumination. To remove this limitation and exploit the relation between pixel brightness and depth, we modify the NeuS architecture to explicitly account for it and introduce a calibrated photometric model of the endoscope’s camera and light source.

Our method is the first to produce watertight reconstructions of whole colon sections. We demonstrate excellent accuracy on phantom imagery. Remarkably, the watertight prior combined with illumination decline makes it possible to complete the reconstruction of unseen portions of the surface with acceptable accuracy, paving the way to automatic quality assessment of cancer-screening explorations by measuring the global percentage of observed mucosa.
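The illumination-decline cue the abstract relies on can be sketched in a few lines. The following is a toy model, not the paper's calibrated photometric model (which characterizes the actual endoscope light sources and camera response); the function name and the `sigma0` and `gamma` parameters are illustrative assumptions.

```python
def expected_brightness(depth, cos_angle, sigma0=1.0, gamma=2.2):
    """Toy photometric model for a light co-located with the camera.

    Surface irradiance falls off with the inverse square of the
    camera-to-surface distance, is scaled by the cosine of the incidence
    angle, and is mapped to pixel brightness by a simple gamma curve.
    """
    irradiance = sigma0 * cos_angle / depth ** 2   # inverse-square decline
    irradiance = max(0.0, min(1.0, irradiance))    # clamp to sensor range
    return irradiance ** (1.0 / gamma)             # simple camera response

# Nearer surfaces render brighter: this brightness-depth relation is the
# cue that lets pixel intensity constrain the SDF geometry.
near = expected_brightness(depth=2.0, cos_angle=1.0)
far = expected_brightness(depth=4.0, cos_angle=1.0)
```

Under this model, halving the distance quadruples the irradiance before gamma mapping, which is why brightness is such a strong monocular depth signal inside an endoluminal cavity.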


References

  1. Azagra, P., et al.: EndoMapper dataset of complete calibrated endoscopy procedures. arXiv:2204.14240 (2022)

  2. Bae, G., Budvytis, I., Yeung, C.-K., Cipolla, R.: Deep multi-view stereo for dense 3D reconstruction from monocular endoscopic video. In: Martel, A.L., et al. (eds.) MICCAI 2020. LNCS, vol. 12263, pp. 774–783. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-59716-0_74

  3. Batlle, V.M., Montiel, J.M.M., Tardós, J.D.: Photometric single-view dense 3D reconstruction in endoscopy. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4904–4910 (2022)

  4. Bobrow, T.L., Golhar, M., Vijayan, R., Akshintala, V.S., Garcia, J.R., Durr, N.J.: Colonoscopy 3D video dataset with paired depth from 2D–3D registration. arXiv:2206.08903 (2022)

  5. Campos, C., Elvira, R., Gómez-Rodríguez, J.J., Montiel, J.M.M., Tardós, J.D.: ORB-SLAM3: an accurate open-source library for visual, visual-inertial, and multimap SLAM. IEEE Trans. Rob. 37(6), 1874–1890 (2021)

  6. Engel, J., Koltun, V., Cremers, D.: Direct sparse odometry. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 611–625 (2018)

  7. Engel, J., Schöps, T., Cremers, D.: LSD-SLAM: large-scale direct monocular SLAM. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8690, pp. 834–849. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10605-2_54

  8. Gómez-Rodríguez, J.J., Lamarca, J., Morlana, J., Tardós, J.D., Montiel, J.M.M.: SD-DefSLAM: semi-direct monocular SLAM for deformable and intracorporeal scenes. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 5170–5177 (2021)

  9. Kajiya, J.T., Von Herzen, B.P.: Ray tracing volume densities. SIGGRAPH Comput. Graph. 18(3), 165–174 (1984)

  10. Kannala, J., Brandt, S.: A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses. IEEE Trans. Pattern Anal. Mach. Intell. 28(8), 1335–1340 (2006)

  11. Liu, X., Li, Z., Ishii, M., Hager, G.D., Taylor, R.H., Unberath, M.: SAGE: SLAM with appearance and geometry prior for endoscopy. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 5587–5593 (2022)

  12. Ma, R., Wang, R., Pizer, S., Rosenman, J., McGill, S.K., Frahm, J.M.: Real-time 3D reconstruction of colonoscopic surfaces for determining missing regions. In: International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), pp. 573–582 (2019)

  13. Ma, R., Wang, R., Zhang, Y., Pizer, S., McGill, S.K., Rosenman, J., Frahm, J.M.: RNNSLAM: reconstructing the 3D colon to visualize missing regions during a colonoscopy. Med. Image Anal. 72, 102100 (2021)

  14. Mahmoud, N., Collins, T., Hostettler, A., Soler, L., Doignon, C., Montiel, J.M.M.: Live tracking and dense reconstruction for handheld monocular endoscopy. IEEE Trans. Med. Imaging 38(1), 79–89 (2019)

  15. Mildenhall, B., Srinivasan, P.P., Tancik, M., Barron, J.T., Ramamoorthi, R., Ng, R.: NeRF: representing scenes as neural radiance fields for view synthesis. Commun. ACM 65(1), 99–106 (2021)

  16. Modrzejewski, R., Collins, T., Hostettler, A., Marescaux, J., Bartoli, A.: Light modelling and calibration in laparoscopy. Int. J. Comput. Assist. Radiol. Surg. 15(5), 859–866 (2020)

  17. Newcombe, R.A., Lovegrove, S.J., Davison, A.J.: DTAM: dense tracking and mapping in real-time. In: IEEE International Conference on Computer Vision (ICCV), pp. 2320–2327 (2011)

  18. Park, K., et al.: Nerfies: deformable neural radiance fields. In: IEEE/CVF International Conference on Computer Vision (ICCV), pp. 5865–5874 (2021)

  19. Scaramuzza, D., Martinelli, A., Siegwart, R.: A toolbox for easily calibrating omnidirectional cameras. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5695–5701 (2006)

  20. Schönberger, J.L., Zheng, E., Frahm, J.-M., Pollefeys, M.: Pixelwise view selection for unstructured multi-view stereo. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9907, pp. 501–518. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46487-9_31

  21. Schönberger, J.L., Frahm, J.M.: Structure-from-motion revisited. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)

  22. Sengupta, A., Bartoli, A.: Colonoscopic 3D reconstruction by tubular non-rigid structure-from-motion. Int. J. Comput. Assist. Radiol. Surg. 16(7), 1237–1241 (2021)

  23. Sung, H., et al.: Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 71(3), 209–249 (2021)

  24. Tokgozoglu, H.N., Meisner, E.M., Kazhdan, M., Hager, G.D.: Color-based hybrid reconstruction for endoscopy. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pp. 8–15 (2012)

  25. Wang, P., Liu, L., Liu, Y., Theobalt, C., Komura, T., Wang, W.: NeuS: learning neural implicit surfaces by volume rendering for multi-view reconstruction. In: Advances in Neural Information Processing Systems, vol. 34, pp. 27171–27183 (2021)

  26. Wang, Y., Han, Q., Habermann, M., Daniilidis, K., Theobalt, C., Liu, L.: NeuS2: fast learning of neural implicit surfaces for multi-view reconstruction. arXiv:2212.05231 (2022)

  27. Wang, Y., Long, Y., Fan, S.H., Dou, Q.: Neural rendering for stereo 3D reconstruction of deformable tissues in robotic surgery. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds.) MICCAI 2022. LNCS, vol. 13437, pp. 431–441. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-16449-1_41

  28. Zhao, Q., Price, T., Pizer, S., Niethammer, M., Alterovitz, R., Rosenman, J.: The Endoscopogram: a 3D model reconstructed from endoscopic video frames. In: Ourselin, S., Joskowicz, L., Sabuncu, M.R., Unal, G., Wells, W. (eds.) MICCAI 2016. LNCS, vol. 9900, pp. 439–447. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46720-7_51


Acknowledgement

This work was supported by EU-H2020 grant 863146: ENDOMAPPER, Spanish government grants PID2021-127685NB-I00 and FPU20/06782 and by Aragón government grant DGA_T45-17R.

Author information

Correspondence to Víctor M. Batlle.

Electronic supplementary material

Below are the links to the electronic supplementary material.

Supplementary material 1 (pdf 4551 KB)

Supplementary material 2 (mp4 4315 KB)


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG


Cite this paper

Batlle, V.M., Montiel, J.M.M., Fua, P., Tardós, J.D. (2023). LightNeuS: Neural Surface Reconstruction in Endoscopy Using Illumination Decline. In: Greenspan, H., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, vol 14229. Springer, Cham. https://doi.org/10.1007/978-3-031-43999-5_48


  • DOI: https://doi.org/10.1007/978-3-031-43999-5_48

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43998-8

  • Online ISBN: 978-3-031-43999-5

  • eBook Packages: Computer Science, Computer Science (R0)
