DOI: 10.1145/3581783.3612071
Research Article

Noise-Robust Continual Test-Time Domain Adaptation

Published: 27 October 2023

Abstract

Continual test-time domain adaptation (TTA) is a challenging topic in source-free domain adaptation that addresses cross-domain multimedia data at inference time under a continuously changing data distribution. Previous methods have been found to lack noise robustness, and their errors increase significantly under strong noise. In this paper, we address the noise-robustness problem in continual TTA with three effective recipes. At the category level, we employ the Taylor cross-entropy loss to alleviate the low-confidence category bias commonly associated with cross-entropy. At the sample level, we reweight target samples by uncertainty to prevent the model from overfitting to noisy samples. Finally, to reduce pseudo-label noise, we propose a soft ensemble negative learning mechanism that guides model optimization with ensemble complementary pseudo labels. Our method achieves state-of-the-art performance on three widely used continual TTA datasets, particularly in the strong-noise setting that we introduce.
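
To make the three recipes concrete, the sketch below gives one plausible reading of them; it is not the authors' released implementation. It assumes a PyTorch student classifier that outputs raw logits and an ensembled teacher distribution (for example, averaged teacher predictions over augmentations). The function names, the Taylor expansion order, the entropy-based uncertainty weights, and the number of complementary labels per sample are all assumptions chosen for illustration.

```python
# Illustrative sketch of the three noise-robustness recipes (not the authors'
# code). Assumes student `logits` and an ensembled teacher distribution
# `teacher_probs`, both of shape (batch, num_classes).
import torch
import torch.nn.functional as F


def taylor_cross_entropy(logits, pseudo_labels, order=2):
    # Taylor expansion of -log(p) around p = 1: sum_{k=1..order} (1 - p)^k / k.
    p_y = F.softmax(logits, dim=1).gather(1, pseudo_labels.unsqueeze(1)).squeeze(1)
    loss = torch.zeros_like(p_y)
    for k in range(1, order + 1):
        loss = loss + (1.0 - p_y) ** k / k
    return loss  # per-sample loss, shape (batch,)


def uncertainty_weights(logits):
    # Down-weight uncertain target samples; normalized prediction entropy is
    # one plausible (assumed) uncertainty measure.
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    max_entropy = torch.log(torch.tensor(float(logits.size(1)), device=logits.device))
    return (1.0 - entropy / max_entropy).detach()  # in [0, 1], higher = more confident


def soft_ensemble_negative_learning(logits, teacher_probs, num_complementary=3):
    # Negative learning on complementary labels: push probability mass away
    # from the classes the teacher ensemble ranks least likely.
    probs = F.softmax(logits, dim=1)
    comp_idx = teacher_probs.topk(num_complementary, dim=1, largest=False).indices
    comp_probs = probs.gather(1, comp_idx)
    return -(1.0 - comp_probs).clamp_min(1e-8).log().mean(dim=1)


def adaptation_loss(logits, teacher_probs):
    # Combine the category-level, pseudo-label-level, and sample-level recipes.
    pseudo_labels = teacher_probs.argmax(dim=1)
    per_sample = (taylor_cross_entropy(logits, pseudo_labels)
                  + soft_ensemble_negative_learning(logits, teacher_probs))
    return (uncertainty_weights(logits) * per_sample).mean()
```

With order=1 the Taylor term reduces to 1 - p_y, a mean-absolute-error-like loss that tolerates label noise better than standard cross-entropy, which is the intuition behind the category-level recipe; the per-sample weights then keep uncertain, likely noisy targets from dominating the update.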

    Published In

    MM '23: Proceedings of the 31st ACM International Conference on Multimedia
    October 2023, 9913 pages
    ISBN: 9798400701085
    DOI: 10.1145/3581783

    Publisher

    Association for Computing Machinery, New York, NY, United States

    Author Tags

    1. domain adaptation
    2. robust
    3. test-time
    4. transfer learning

    Conference

    MM '23: The 31st ACM International Conference on Multimedia
    October 29 - November 3, 2023
    Ottawa, ON, Canada

    Acceptance Rates

    Overall Acceptance Rate 2,145 of 8,556 submissions, 25%

    Cited By

    • (2024) DPO: Dual-Perturbation Optimization for Test-time Adaptation in 3D Object Detection. Proceedings of the 32nd ACM International Conference on Multimedia, 4138-4147. DOI: 10.1145/3664647.3681040
    • (2024) Generalized Source-Free Domain-adaptive Segmentation via Reliable Knowledge Propagation. Proceedings of the 32nd ACM International Conference on Multimedia, 5967-5976. DOI: 10.1145/3664647.3680567
    • (2024) A Comprehensive Survey on Source-Free Domain Adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence 46, 8, 5743-5762. DOI: 10.1109/TPAMI.2024.3370978
    • (2024) Towards Test Time Domain Adaptation via Negative Label Smoothing. Neurocomputing 600. DOI: 10.1016/j.neucom.2024.128182
