Tracking Adaptation to Improve SuperPoint for 3D Reconstruction in Endoscopy

  • Conference paper
  • Published in: Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 (MICCAI 2023)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14220)

Abstract

Endoscopy is the gold standard procedure for early detection and treatment of numerous diseases. Obtaining 3D reconstructions from real endoscopic videos would facilitate the development of assistive tools for practitioners, but it is a challenging problem for current Structure From Motion (SfM) methods. Feature extraction and matching are key steps in SfM approaches, and these are particularly difficult in the endoscopy domain due to deformations, poor texture, and numerous artifacts in the images. This work presents a novel learned model for feature extraction in endoscopy, called SuperPoint-E, which improves upon existing work using recordings from real medical practice. SuperPoint-E is based on the SuperPoint architecture but it is trained with a novel supervision strategy. The supervisory signal used in our work comes from features extracted with existing detectors (SIFT and SuperPoint) that can be successfully tracked and triangulated in short endoscopy clips (building a 3D model using COLMAP). In our experiments, SuperPoint-E obtains more and better features than any of the baseline detectors used as supervision. We validate the effectiveness of our model for 3D reconstruction in real endoscopy data. Code and model: https://github.com/LeonBP/SuperPointTrackingAdaptation.
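The supervision strategy described in the abstract keeps only those SIFT/SuperPoint detections that COLMAP could track and triangulate across a short clip, and uses them as training labels. A minimal sketch of that label-harvesting step is shown below. This is not the authors' released code; the `(x, y, track_length)` observation format, the function name, and the threshold value are illustrative assumptions standing in for COLMAP's actual reconstruction output.

```python
import numpy as np

def supervision_labels(image_shape, observations, min_track_length=3):
    """Build a binary keypoint label map for one frame.

    `observations` holds (x, y, track_length) tuples: 2D detections
    (e.g. from SIFT or SuperPoint) that were triangulated into 3D points,
    together with the number of frames in which each 3D point was observed.
    Only points tracked across enough frames are kept as positives,
    so the labels favor stable, triangulable features.
    """
    h, w = image_shape
    labels = np.zeros((h, w), dtype=np.uint8)
    for x, y, track_length in observations:
        if track_length >= min_track_length:
            col, row = int(round(x)), int(round(y))
            if 0 <= row < h and 0 <= col < w:
                labels[row, col] = 1
    return labels

# Toy example: three detections, two with long enough tracks.
obs = [(10.2, 5.7, 5), (3.0, 3.0, 2), (20.9, 14.1, 4)]
labels = supervision_labels((16, 24), obs)
print(int(labels.sum()))  # → 2
```

In practice such a label map would replace the synthetic-homography pseudo-labels of the original SuperPoint training loop, which is the core idea of the tracking-based adaptation.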

Notes

  1. https://www.cs.ubc.ca/research/image-matching-challenge/2021/leaderboard/

  2. https://github.com/magicleap/SuperGluePretrainedNetwork

References

  1. Azagra, P., et al.: EndoMapper dataset of complete calibrated endoscopy procedures. arXiv preprint arXiv:2204.14240 (2022)

  2. Barbed, O.L., Chadebecq, F., Morlana, J., Montiel, J.M., Murillo, A.C.: SuperPoint features in endoscopy. In: Imaging Systems for GI Endoscopy, and Graphs in Biomedical Image Analysis: First MICCAI Workshop, ISGIE 2022, and Fourth MICCAI Workshop, GRAIL 2022, Held in Conjunction with MICCAI 2022, Singapore, 18 September 2022, Proceedings, pp. 45–55. Springer, Heidelberg (2022). https://doi.org/10.1007/978-3-031-21083-9_5

  3. Bobrow, T.L., Golhar, M., Vijayan, R., Akshintala, V.S., Garcia, J.R., Durr, N.J.: Colonoscopy 3D video dataset with paired depth from 2D–3D registration. arXiv preprint arXiv:2206.08903 (2022)

  4. DeTone, D., Malisiewicz, T., Rabinovich, A.: Self-improving visual odometry. arXiv preprint arXiv:1812.03245 (2018)

  5. DeTone, D., Malisiewicz, T., Rabinovich, A.: SuperPoint: self-supervised interest point detection and description. In: Conference on Computer Vision and Pattern Recognition Workshops. IEEE (2018)

  6. Dusmanu, M., et al.: D2-Net: a trainable CNN for joint description and detection of local features. In: Conference on Computer Vision and Pattern Recognition. IEEE (2019)

  7. Grasa, O.G., Bernal, E., Casado, S., Gil, I., Montiel, J.: Visual SLAM for handheld monocular endoscope. IEEE Trans. Med. Imaging 33(1), 135–146 (2013)

  8. Horn, B.K.: Closed-form solution of absolute orientation using unit quaternions. J. Opt. Soc. Am. A 4(4), 629–642 (1987)

  9. Jau, Y.Y., Zhu, R., Su, H., Chandraker, M.: Deep keypoint-based camera pose estimation with geometric constraints. In: International Conference on Intelligent Robots and Systems. IEEE (2020). https://github.com/eric-yyjau/pytorch-superpoint

  10. Jin, Y., et al.: Image matching across wide baselines: from paper to practice. Int. J. Comput. Vision 129(2), 517–547 (2021)

  11. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vision 60(2), 91–110 (2004)

  12. Ma, J., Jiang, X., Fan, A., Jiang, J., Yan, J.: Image matching from handcrafted to deep features: a survey. Int. J. Comput. Vision 129, 1–57 (2020)

  13. Mahmoud, N., Collins, T., Hostettler, A., Soler, L., Doignon, C., Montiel, J.M.M.: Live tracking and dense reconstruction for handheld monocular endoscopy. IEEE Trans. Med. Imaging 38(1), 79–89 (2018)

  14. Maier-Hein, L., et al.: Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery. Med. Image Anal. 17(8), 974–996 (2013)

  15. Mur-Artal, R., Montiel, J.M.M., Tardos, J.D.: ORB-SLAM: a versatile and accurate monocular SLAM system. IEEE Trans. Rob. 31(5), 1147–1163 (2015)

  16. Ozyoruk, K.B., et al.: EndoSLAM dataset and an unsupervised monocular visual odometry and depth estimation approach for endoscopic videos. Med. Image Anal. 71, 102058 (2021)

  17. Revaud, J., Weinzaepfel, P., de Souza, C.R., Humenberger, M.: R2D2: repeatable and reliable detector and descriptor. In: International Conference on Neural Information Processing Systems (2019)

  18. Rodríguez, J.J.G., Tardós, J.D.: Tracking monocular camera pose and deformation for SLAM inside the human body. In: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5278–5285. IEEE (2022)

  19. Rublee, E., Rabaud, V., Konolige, K., Bradski, G.: ORB: an efficient alternative to SIFT or SURF. In: International Conference on Computer Vision. IEEE (2011)

  20. Sarlin, P.E., DeTone, D., Malisiewicz, T., Rabinovich, A.: SuperGlue: learning feature matching with graph neural networks. In: Conference on Computer Vision and Pattern Recognition. IEEE (2020)

  21. Schönberger, J.L., Frahm, J.M.: Structure-from-motion revisited. In: Conference on Computer Vision and Pattern Recognition (CVPR) (2016)

  22. Schönberger, J.L., Zheng, E., Frahm, J.-M., Pollefeys, M.: Pixelwise view selection for unstructured multi-view stereo. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9907, pp. 501–518. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46487-9_31

  23. Sun, J., Shen, Z., Wang, Y., Bao, H., Zhou, X.: LoFTR: detector-free local feature matching with transformers. In: CVPR. IEEE (2021)

  24. Tyszkiewicz, M., Fua, P., Trulls, E.: DISK: learning local features with policy gradient. Adv. Neural Inf. Process. Syst. 33, 14254–14265 (2020)

Acknowledgements

This project has been funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 863146 and Aragón Government project T45_23R.

Author information

Corresponding author: O. León Barbed.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 6967 KB)

Supplementary material 2 (mp4 14557 KB)

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Barbed, O.L., Montiel, J.M.M., Fua, P., Murillo, A.C. (2023). Tracking Adaptation to Improve SuperPoint for 3D Reconstruction in Endoscopy. In: Greenspan, H., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, vol 14220. Springer, Cham. https://doi.org/10.1007/978-3-031-43907-0_56

  • DOI: https://doi.org/10.1007/978-3-031-43907-0_56

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43906-3

  • Online ISBN: 978-3-031-43907-0

  • eBook Packages: Computer Science (R0)
