
PIPsUS: Self-supervised Point Tracking in Ultrasound

Conference paper

Simplifying Medical Ultrasound (ASMUS 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15186)


Abstract

Finding point-level correspondences is a fundamental problem in ultrasound (US), enabling US landmark tracking for intraoperative image guidance and motion estimation. Most US tracking methods are based on optical flow or feature matching originally designed for RGB images, so domain shift can degrade their performance. Ground-truth correspondences could supervise training, but they are expensive to acquire. To solve these problems, we propose a self-supervised point-tracking model called PIPsUS. Our model can track an arbitrary number of points at the pixel level in one forward pass and exploits temporal information by considering multiple frames rather than only consecutive ones. We develop a new self-supervised training strategy that uses a long-term point-tracking model trained on RGB images as a teacher to guide the model toward realistic motions, and uses data augmentation to enforce tracking from US appearance. We evaluate our method on neck and oral US and on echocardiography, showing higher point-tracking accuracy than fast normalized cross-correlation and tuned optical flow. Code is available at https://github.com/aliciachenw/PIPsUS.
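The fast normalized cross-correlation baseline mentioned in the abstract can be illustrated with a minimal patch-matching point tracker. This is a simplified sketch in NumPy, not the paper's implementation; the function name and the patch and search-window sizes are illustrative choices:

```python
import numpy as np

def ncc_track_point(prev, curr, pt, patch=7, search=5):
    """Track a single point from `prev` to `curr` by matching a patch
    around `pt` against a local search window, scored with normalized
    cross-correlation (NCC). Border handling is omitted for brevity."""
    r = patch // 2
    y, x = pt
    # Zero-mean template from the previous frame
    tmpl = prev[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
    tmpl -= tmpl.mean()
    tnorm = np.sqrt((tmpl ** 2).sum()) + 1e-8
    best_score, best_pt = -np.inf, pt
    # Exhaustively score every offset in the search window
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            win = curr[yy - r:yy + r + 1, xx - r:xx + r + 1].astype(np.float64)
            if win.shape != tmpl.shape:  # window fell off the image
                continue
            win = win - win.mean()
            score = (tmpl * win).sum() / (tnorm * (np.sqrt((win ** 2).sum()) + 1e-8))
            if score > best_score:
                best_score, best_pt = score, (yy, xx)
    return best_pt

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.random((40, 40))
    curr = np.roll(prev, (2, 3), axis=(0, 1))  # content shifted down 2, right 3
    print(ncc_track_point(prev, curr, (20, 20)))  # the tracked point moves to (22, 23)
```

Because NCC matches raw intensity patches frame-to-frame, it drifts under appearance change and deformation, which is the limitation the learned, multi-frame PIPsUS tracker targets.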

Supported by NSERC Discovery Grant and Charles Laszlo Chair in Biomedical Engineering held by Dr. Salcudean, VCHRI Innovation and Translational Research Awards, and the University of British Columbia Department of Surgery Seed Grant held by Dr. Prisman. This work was completed when Dr. Schmidt was at the University of British Columbia.



Author information


Correspondence to Wanwen Chen.


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Chen, W., Schmidt, A., Prisman, E., Salcudean, S.E. (2025). PIPsUS: Self-supervised Point Tracking in Ultrasound. In: Gomez, A., Khanal, B., King, A., Namburete, A. (eds) Simplifying Medical Ultrasound. ASMUS 2024. Lecture Notes in Computer Science, vol 15186. Springer, Cham. https://doi.org/10.1007/978-3-031-73647-6_5


  • DOI: https://doi.org/10.1007/978-3-031-73647-6_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-73646-9

  • Online ISBN: 978-3-031-73647-6

  • eBook Packages: Computer Science, Computer Science (R0)
