Meta-Learning Without Data via Unconditional Diffusion Models | IEEE Journals & Magazine | IEEE Xplore


Abstract:

Although few-shot learning aims to address data scarcity, it still requires large annotated datasets for training, which are often unavailable due to cost and privacy concerns. Previous studies have used pre-trained diffusion models either to synthesize auxiliary data alongside the limited labeled samples, or to serve directly as zero-shot classifiers. However, these approaches rely on conditional diffusion models that require class prior information (e.g., carefully crafted text prompts) about unseen tasks. To overcome this, we leverage unconditional diffusion models, which need no class information, to train a meta-model capable of generalizing to unseen tasks. The framework contains (1) a meta-learning-without-data approach that uses synthetic data during training; and (2) a diffusion model-based data augmentation that calibrates the distribution shift during testing. During meta-training, we implement a self-taught class-learner that gradually captures class concepts, guiding the unconditional diffusion model to generate a labeled pseudo dataset. This pseudo dataset is then used to jointly train the class-learner and the meta-model, allowing iterative refinement and clear differentiation between classes. During meta-testing, we introduce a data augmentation that employs the same diffusion models used in meta-training to narrow the gap between the meta-training and meta-testing task distributions. This enables the meta-model trained on synthetic images to effectively classify real images in unseen tasks. Comprehensive experiments showcase the superiority and adaptability of our approach in four real-world scenarios. Code available at https://github.com/WalkerWorldPeace/MLWDUDM.
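The meta-training loop described above can be sketched in a few lines. This is not the authors' implementation: the unconditional sampler below is a stub standing in for a real diffusion model, and the self-taught class-learner is reduced to a nearest-centroid pseudo-labeler with iterative refinement; the meta-model update is elided. The sketch only illustrates the sample → pseudo-label → refine cycle.

```python
# Hedged sketch of the meta-training cycle from the abstract.
# All names (unconditional_sample, ClassLearner, meta_train) are illustrative,
# not from the paper's codebase.
import numpy as np

rng = np.random.default_rng(0)

def unconditional_sample(n, dim=16):
    """Stub for an unconditional diffusion model: returns n synthetic samples.

    A real system would call something like diffusion_model.sample(n) and
    return images; here we draw feature vectors from a two-mode mixture.
    """
    centers = np.array([[-2.0] * dim, [2.0] * dim])
    idx = rng.integers(0, 2, size=n)
    return centers[idx] + rng.normal(scale=0.5, size=(n, dim))

class ClassLearner:
    """Toy self-taught class-learner: centroids that assign pseudo-labels."""

    def __init__(self, n_classes, dim):
        self.centroids = rng.normal(size=(n_classes, dim))

    def pseudo_label(self, x):
        # Label each synthetic sample by its nearest centroid.
        d = ((x[:, None, :] - self.centroids[None, :, :]) ** 2).sum(-1)
        return d.argmin(axis=1)

    def refine(self, x, y):
        # Iterative refinement: move each centroid toward its pseudo-class mean.
        for c in range(len(self.centroids)):
            if (y == c).any():
                self.centroids[c] = x[y == c].mean(axis=0)

def meta_train(steps=10, n=64, n_classes=2, dim=16):
    learner = ClassLearner(n_classes, dim)
    for _ in range(steps):
        x = unconditional_sample(n, dim)   # labeled pseudo dataset, step 1: sample
        y = learner.pseudo_label(x)        # step 2: assign class concepts
        learner.refine(x, y)               # step 3: refine the class-learner
        # ... a joint meta-model update on (x, y) would go here ...
    return learner

learner = meta_train()
```

In the actual framework the class-learner and meta-model are trained jointly on the generated pseudo dataset, and the test-time augmentation step (reusing the same diffusion model on real query images) is what bridges the synthetic-to-real gap.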
Page(s): 11874 - 11885
Date of Publication: 08 July 2024
