
Reshaping the Online Data Buffering and Organizing Mechanism for Continual Test-Time Adaptation

  • Conference paper
  • First Online:
Computer Vision – ECCV 2024 (ECCV 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15140)


Abstract

Continual Test-Time Adaptation (CTTA) involves adapting a pre-trained source model to continually changing unsupervised target domains. In this paper, we systematically analyze the challenges of this task: online environment, unsupervised nature, and the risks of error accumulation and catastrophic forgetting under continual domain shifts. To address these challenges, we reshape the online data buffering and organizing mechanism for CTTA. We propose an uncertainty-aware buffering approach to identify and aggregate significant samples with high certainty from the unsupervised, single-pass data stream. Based on this, we propose a graph-based class relation preservation constraint to overcome catastrophic forgetting. Furthermore, a pseudo-target replay objective is used to mitigate error accumulation. Extensive experiments demonstrate the superiority of our method in both segmentation and classification CTTA tasks. Code is available at https://github.com/z1358/OBAO.
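To make the buffering idea concrete, here is a minimal, hypothetical sketch of what an uncertainty-aware buffer for a single-pass stream could look like. This is an illustration only, not the authors' implementation (see the linked repository for that): it admits a sample only when the predictive entropy of its class distribution is below a threshold, keeps the pseudo-label of the winning class, and evicts the least confident entry once the buffer is full. The class `UncertaintyAwareBuffer`, its parameters, and the threshold value are all assumptions made for this sketch.

```python
import math


def entropy(probs):
    """Shannon entropy (nats) of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)


class UncertaintyAwareBuffer:
    """Illustrative buffer that keeps only confident (low-entropy) samples
    from a single-pass, unlabeled data stream."""

    def __init__(self, capacity, entropy_threshold):
        self.capacity = capacity
        self.entropy_threshold = entropy_threshold
        self.items = []  # list of (entropy, sample, pseudo_label), sorted by entropy

    def maybe_add(self, sample, probs):
        """Admit the sample if its prediction is confident enough.

        Returns True if the sample was buffered, False if discarded."""
        h = entropy(probs)
        if h > self.entropy_threshold:
            return False  # too uncertain: never revisit (single-pass stream)
        pseudo_label = max(range(len(probs)), key=probs.__getitem__)
        self.items.append((h, sample, pseudo_label))
        self.items.sort(key=lambda t: t[0])
        if len(self.items) > self.capacity:
            self.items.pop()  # evict the least confident buffered entry
        return True
```

A buffer like this could then feed both the replay objective (re-adapting on buffered pseudo-labeled samples) and the class-relation constraint (estimating per-class statistics from the buffered pseudo-labels); again, how the paper actually wires these pieces together is described in the chapter itself.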




Acknowledgement

This work was funded in part by the National Natural Science Foundation of China (62076195, 62376070, 62206271, U20B2052), in part by the Fundamental Research Funds for the Central Universities (AUGA5710011522), and by the Pengcheng Laboratory Research Project No. PCL2023A08.

Author information

Corresponding author: Xiaopeng Hong.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 535 KB)


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Zhu, Z. et al. (2025). Reshaping the Online Data Buffering and Organizing Mechanism for Continual Test-Time Adaptation. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15140. Springer, Cham. https://doi.org/10.1007/978-3-031-73007-8_24


  • DOI: https://doi.org/10.1007/978-3-031-73007-8_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-73006-1

  • Online ISBN: 978-3-031-73007-8

  • eBook Packages: Computer Science, Computer Science (R0)
