Abstract
Endoscopy is the gold-standard procedure for the early detection and treatment of numerous diseases. Obtaining 3D reconstructions from real endoscopic videos would facilitate the development of assistive tools for practitioners, but it remains a challenging problem for current Structure from Motion (SfM) methods. Feature extraction and matching are key steps in SfM pipelines, and they are particularly difficult in the endoscopy domain due to deformations, poor texture, and numerous image artifacts. This work presents SuperPoint-E, a novel learned model for feature extraction in endoscopy that improves upon existing work by leveraging recordings from real medical practice. SuperPoint-E is based on the SuperPoint architecture but is trained with a novel supervision strategy: the supervisory signal comes from features extracted with existing detectors (SIFT and SuperPoint) that can be successfully tracked and triangulated when building a 3D model of short endoscopy clips with COLMAP. In our experiments, SuperPoint-E extracts more and higher-quality features than any of the baseline detectors used as supervision. We validate the effectiveness of our model for 3D reconstruction on real endoscopy data. Code and model: https://github.com/LeonBP/SuperPointTrackingAdaptation.
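To make the supervision strategy concrete, the sketch below shows one way to obtain "tracked and triangulated" keypoints from a short clip: run COLMAP on the clip's frames and keep only the 2D keypoints that end up associated with a 3D point. This is a minimal illustration, not the authors' released pipeline: it assumes the COLMAP CLI is on PATH, the pycolmap bindings are installed, and the placeholder directory clip_frames/ holds the extracted video frames; it also uses COLMAP's default SIFT detector only, whereas the paper additionally uses SuperPoint features.

```python
"""Sketch: collect triangulated keypoints from a short clip with COLMAP.

Assumptions (not taken from the paper's code): `colmap` CLI on PATH,
`pycolmap` installed, and `clip_frames/` holding the frames of one clip.
"""
import subprocess
from pathlib import Path

import pycolmap  # Python bindings for COLMAP

clip_dir = Path("clip_frames")        # placeholder: frames of one short clip
work_dir = Path("colmap_workspace")
work_dir.mkdir(exist_ok=True)
database = work_dir / "database.db"
sparse_dir = work_dir / "sparse"
sparse_dir.mkdir(exist_ok=True)

# 1) Detect and describe features (COLMAP's default detector is SIFT).
subprocess.run(["colmap", "feature_extractor",
                "--database_path", str(database),
                "--image_path", str(clip_dir)], check=True)

# 2) Match consecutive frames; sequential matching suits video clips.
subprocess.run(["colmap", "sequential_matcher",
                "--database_path", str(database)], check=True)

# 3) Run incremental SfM to track and triangulate the features.
subprocess.run(["colmap", "mapper",
                "--database_path", str(database),
                "--image_path", str(clip_dir),
                "--output_path", str(sparse_dir)], check=True)

# 4) Keep only keypoints linked to a 3D point: these survived tracking and
#    triangulation and can serve as a per-image supervisory signal.
reconstruction = pycolmap.Reconstruction(str(sparse_dir / "0"))
supervision = {}  # image name -> list of (x, y) keypoints kept as labels
for image in reconstruction.images.values():
    kept = [tuple(p.xy) for p in image.points2D if p.has_point3D()]
    supervision[image.name] = kept
    print(f"{image.name}: {len(kept)} triangulated keypoints")
```

Using SuperPoint detections as supervision would additionally require importing those custom keypoints and their matches into the COLMAP database before running the mapper; the repository linked above provides the authors' actual training and supervision code.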
References
Azagra, P., et al.: EndoMapper dataset of complete calibrated endoscopy procedures. arXiv preprint arXiv:2204.14240 (2022)
Barbed, O.L., Chadebecq, F., Morlana, J., Montiel, J.M., Murillo, A.C.: SuperPoint features in endoscopy. In: Imaging Systems for GI Endoscopy, and Graphs in Biomedical Image Analysis: First MICCAI Workshop, ISGIE 2022, and Fourth MICCAI Workshop, GRAIL 2022, Held in Conjunction with MICCAI 2022, Singapore, 18 September 2022, Proceedings, pp. 45–55. Springer, Heidelberg (2022). https://doi.org/10.1007/978-3-031-21083-9_5
Bobrow, T.L., Golhar, M., Vijayan, R., Akshintala, V.S., Garcia, J.R., Durr, N.J.: Colonoscopy 3D video dataset with paired depth from 2D–3D registration. arXiv preprint arXiv:2206.08903 (2022)
DeTone, D., Malisiewicz, T., Rabinovich, A.: Self-improving visual odometry. arXiv preprint arXiv:1812.03245 (2018)
DeTone, D., Malisiewicz, T., Rabinovich, A.: SuperPoint: self-supervised interest point detection and description. In: Conference on Computer Vision and Pattern Recognition Workshops. IEEE (2018)
Dusmanu, M., et al.: D2-Net: a trainable CNN for joint description and detection of local features. In: Conference on Computer Vision and Pattern Recognition. IEEE (2019)
Grasa, O.G., Bernal, E., Casado, S., Gil, I., Montiel, J.: Visual SLAM for handheld monocular endoscope. IEEE Trans. Med. Imaging 33(1), 135–146 (2013)
Horn, B.K.: Closed-form solution of absolute orientation using unit quaternions. J. Opt. Soc. Am. A 4(4), 629–642 (1987)
Jau, Y.Y., Zhu, R., Su, H., Chandraker, M.: Deep keypoint-based camera pose estimation with geometric constraints. In: International Conference on Intelligent Robots and Systems. IEEE (2020). https://github.com/eric-yyjau/pytorch-superpoint
Jin, Y., et al.: Image matching across wide baselines: from paper to practice. Int. J. Comput. Vision 129(2), 517–547 (2021)
Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vision 60(2), 91–110 (2004)
Ma, J., Jiang, X., Fan, A., Jiang, J., Yan, J.: Image matching from handcrafted to deep features: a survey. Int. J. Comput. Vision 129, 1–57 (2020)
Mahmoud, N., Collins, T., Hostettler, A., Soler, L., Doignon, C., Montiel, J.M.M.: Live tracking and dense reconstruction for handheld monocular endoscopy. IEEE Trans. Med. Imaging 38(1), 79–89 (2018)
Maier-Hein, L., et al.: Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery. Med. Image Anal. 17(8), 974–996 (2013)
Mur-Artal, R., Montiel, J.M.M., Tardos, J.D.: ORB-SLAM: a versatile and accurate monocular SLAM system. IEEE Trans. Rob. 31(5), 1147–1163 (2015)
Ozyoruk, K.B., et al.: EndoSLAM dataset and an unsupervised monocular visual odometry and depth estimation approach for endoscopic videos. Med. Image Anal. 71, 102058 (2021)
Revaud, J., Weinzaepfel, P., de Souza, C.R., Humenberger, M.: R2D2: repeatable and reliable detector and descriptor. In: International Conference on Neural Information Processing Systems (2019)
Rodríguez, J.J.G., Tardós, J.D.: Tracking monocular camera pose and deformation for SLAM inside the human body. In: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5278–5285. IEEE (2022)
Rublee, E., Rabaud, V., Konolige, K., Bradski, G.: ORB: an efficient alternative to SIFT or SURF. In: International Conference on Computer Vision. IEEE (2011)
Sarlin, P.E., DeTone, D., Malisiewicz, T., Rabinovich, A.: SuperGlue: learning feature matching with graph neural networks. In: Conference on Computer Vision and Pattern Recognition. IEEE (2020)
Schönberger, J.L., Frahm, J.M.: Structure-from-motion revisited. In: Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
Schönberger, J.L., Zheng, E., Frahm, J.-M., Pollefeys, M.: Pixelwise view selection for unstructured multi-view stereo. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9907, pp. 501–518. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46487-9_31
Sun, J., Shen, Z., Wang, Y., Bao, H., Zhou, X.: LoFTR: detector-free local feature matching with transformers. In: CVPR. IEEE (2021)
Tyszkiewicz, M., Fua, P., Trulls, E.: DISK: learning local features with policy gradient. Adv. Neural Inf. Process. Syst. 33, 14254–14265 (2020)
Acknowledgements
This project has been funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 863146 and Aragón Government project T45_23R.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Barbed, O.L., Montiel, J.M.M., Fua, P., Murillo, A.C. (2023). Tracking Adaptation to Improve SuperPoint for 3D Reconstruction in Endoscopy. In: Greenspan, H., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, vol 14220. Springer, Cham. https://doi.org/10.1007/978-3-031-43907-0_56
DOI: https://doi.org/10.1007/978-3-031-43907-0_56
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-43906-3
Online ISBN: 978-3-031-43907-0
eBook Packages: Computer Science, Computer Science (R0)