Abstract:
Deep supervised learning has demonstrated strong capabilities; however, such progress relies on massive and expensive data annotation. Active Learning (AL) has been introduced to selectively annotate samples, thus reducing the human labeling effort. Previous AL research has focused on employing recently trained models to design sampling strategies based on uncertainty or representativeness. Drawing inspiration from the phenomenon of model forgetting, we propose a novel AL framework called Temporal Inconsistency-Based Active Learning (TIR-AL). In this framework, multiple snapshots of the model across consecutive cycles are jointly utilized to select samples with higher temporal inconsistency, scored by the proposed self-weighted nuclear norm metric. Furthermore, we introduce a consistency regularization term to mitigate the forgetting issue. Together, these components fully exploit the available data and facilitate effective interaction within the AL loop. To demonstrate the efficacy of TIR-AL, we conducted a set of experiments illustrating how our approach outperforms state-of-the-art methods without incurring any additional training costs.
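The core idea of the scoring step can be illustrated with a minimal sketch: stack one sample's softmax predictions from several model snapshots into a matrix and take its nuclear norm (sum of singular values), which is larger when the snapshots disagree. This is only an illustration of a plain nuclear-norm score; the paper's actual metric is *self-weighted*, and that weighting, the function name, and the toy data below are assumptions not taken from the abstract.

```python
import numpy as np

def temporal_inconsistency(snapshot_preds):
    """Plain nuclear-norm inconsistency score (illustrative only).

    snapshot_preds: array of shape (T, C) holding softmax outputs of one
    sample from T consecutive model snapshots. If all snapshots agree, the
    matrix is (near) rank-1 and the nuclear norm is small relative to a
    matrix whose rows point at different classes.
    """
    return float(np.linalg.norm(np.asarray(snapshot_preds), ord="nuc"))

# A stable sample: identical predictions across 3 snapshots (rank-1 matrix).
stable = np.tile([0.9, 0.05, 0.05], (3, 1))
# An unstable sample: the predicted class flips between snapshots.
unstable = np.array([[0.9, 0.05, 0.05],
                     [0.1, 0.80, 0.10],
                     [0.2, 0.10, 0.70]])

# Higher score -> more temporal inconsistency -> better annotation candidate.
assert temporal_inconsistency(unstable) > temporal_inconsistency(stable)
```

In an AL loop, such a score would be computed for every unlabeled sample and the top-scoring ones sent for annotation; the self-weighting in TIR-AL presumably adjusts each snapshot's contribution, which this sketch omits.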
Published in: ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Date of Conference: 14-19 April 2024
Date Added to IEEE Xplore: 18 March 2024