
Augmented reality navigation with real-time tracking for facial repair surgery

  • Original Article
  • Published in: International Journal of Computer Assisted Radiology and Surgery

Abstract

Purpose

Facial repair surgery (FRS) requires high accuracy to navigate critical anatomy safely and quickly. The purpose of this paper is to develop a method that directly tracks the patient's position using video data acquired from a single camera, achieving noninvasive, real-time, high-accuracy positioning in FRS.

Methods

Our method first performs camera calibration and registers the surface segmented from computed tomography to the patient. Then, a two-step constraint algorithm, comprising a feature local constraint and a distance standard deviation constraint, quickly finds optimal feature-matching pairs. Finally, the camera and patient movements decomposed from the image motion matrix are used to track the camera and the patient, respectively.
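The abstract does not spell out the two-step constraint in detail, so the following Python sketch only illustrates the general pipeline with OpenCV: calibrated intrinsics K (e.g., obtained beforehand with Zhang's method via cv2.calibrateCamera), ORB feature matching, simple stand-ins for the two constraint steps, and decomposition of the image motion into rotation and translation via the essential matrix. Function names and thresholds here are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def estimate_motion(img1, img2, K, n_std=2.0):
    """Estimate inter-frame motion between two video frames of the patient."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Rough stand-in for step 1 (feature local constraint):
    # cross-checked Hamming matching keeps only mutually best pairs.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Rough stand-in for step 2 (distance standard deviation constraint):
    # discard pairs whose displacement length deviates from the mean
    # by more than n_std standard deviations.
    disp = np.linalg.norm(pts2 - pts1, axis=1)
    keep = np.abs(disp - disp.mean()) < n_std * disp.std()
    pts1, pts2 = pts1[keep], pts2[keep]

    # Decompose the image motion into rotation R and translation t
    # (up to scale) through the essential matrix of the calibrated camera.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```

In a full system, R and t would then be combined with the CT-to-patient registration to update the augmented reality overlay on each frame.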

Results

The proposed method achieved RMS fusion errors of 1.44 ± 0.35 mm, 1.50 ± 0.15 mm, and 1.63 ± 0.03 mm in skull phantom, cadaver mandible, and human experiments, respectively; these errors were lower than those of the optical tracking system-based method. Additionally, the proposed method processed video streams at up to 24 frames per second, which meets the real-time requirements of FRS.
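For context, an RMS fusion error of this kind is typically computed from the Euclidean distances between overlaid (fused) landmarks and their ground-truth positions; the abstract does not define the metric, so the snippet below is only a plausible reading, not the paper's exact evaluation code.

```python
import numpy as np

def rms_fusion_error(fused_pts, truth_pts):
    """RMS of per-landmark Euclidean distances (inputs in mm, output in mm)."""
    d = np.linalg.norm(np.asarray(fused_pts) - np.asarray(truth_pts), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# e.g. four landmarks each displaced by about 1.5 mm yield an RMS near 1.5 mm
```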

Conclusions

The proposed method does not rely on tracking markers attached to the patient; it can be executed automatically to maintain a correct augmented reality scene and to overcome the loss of positioning accuracy caused by patient movement during surgery.



Acknowledgments

The authors gratefully acknowledge the support provided by the National Key R&D Program of China (2019YFC0119300), the National Natural Science Foundation of China (62025104, 61901031), and the Beijing Nova Program (Z201100006820004) from the Beijing Municipal Science & Technology Commission.

Author information


Correspondence to Tianyu Fu or Jian Yang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional review board and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Shao, L., Fu, T., Zheng, Z. et al. Augmented reality navigation with real-time tracking for facial repair surgery. Int J CARS 17, 981–991 (2022). https://doi.org/10.1007/s11548-022-02589-0


Keywords

Navigation