Cited By
Zhang P, Huang Z, Xu X, Bai G (2024). "Effective and Robust Adversarial Training Against Data and Label Corruptions." IEEE Transactions on Multimedia, 26, 9477–9488. DOI: 10.1109/TMM.2024.3394677. Online publication date: 2 May 2024.
Despite their great success, deep learning methods are vulnerable to noise in the training dataset, including adversarial perturbations and annotation noise. These harmful factors significantly influence the learning process of deep models, leading ...
Partial multi-label learning (PML) models the scenario where each training sample is annotated with a candidate label set, of which only a subset corresponds to the ground-truth labels. Existing PML approaches generally assume that there are ...
In practice, most deep neural networks (DNNs) are trained on data containing large amounts of noisy labels. Because DNNs have the capacity to fit any noisy labels, training them robustly under label noise is known to be difficult. These ...
Association for Computing Machinery
New York, NY, United States