
Decision Boundary-Aware Data Augmentation for Adversarial Training


Abstract:

Adversarial training (AT) is a standard method for learning adversarially robust deep neural networks by training on the adversarial variants generated from natural examples. However, as training progresses, the training data become less attackable, which may undermine the improvement of model robustness. A straightforward remedy is to incorporate more training data, but this may incur an unaffordable cost. To mitigate this issue, we propose a deCisiOn bounDary-aware data Augmentation framework (CODA): in each epoch, CODA directly employs meta information from the previous epoch to guide the augmentation process and generate more data close to the decision boundary, i.e., attackable data. Compared with vanilla mixup, CODA provides a higher ratio of attackable data, which benefits model robustness; it also mitigates the model's linear behavior between classes, a behavior that favors standard training for generalization but not adversarial training for robustness. As a result, CODA encourages the model to predict invariantly within the cluster of each class. Experiments demonstrate that CODA enhances adversarial robustness across various adversarial training methods and multiple datasets.
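
The abstract does not spell out how CODA uses the previous epoch's meta information, so the sketch below is only one plausible reading: a mixup-style augmentation, written in PyTorch, that biases each mixing weight using the previous epoch's prediction margins so that more mixed samples land near the decision boundary (i.e., are attackable). The function name boundary_aware_mixup, the margin heuristic, and all parameters are illustrative assumptions, not the paper's actual algorithm.

    # Illustrative sketch only: the margin computation and the way it biases
    # the mixing weight are assumptions, not CODA's published update rule.
    import torch
    import torch.nn.functional as F

    def boundary_aware_mixup(x, y, prev_probs, num_classes, alpha=1.0):
        """Mix pairs of natural examples, shifting each mixing weight toward the
        sample whose previous-epoch prediction was least confident (closer to the
        decision boundary), so more augmented points fall in attackable regions.

        x          : (B, ...) batch of natural examples
        y          : (B,)     integer class labels
        prev_probs : (B, C)   softmax outputs recorded in the previous epoch
        """
        perm = torch.randperm(x.size(0))
        # Margin = top-1 prob minus runner-up prob; small margin => near the boundary.
        top2 = prev_probs.topk(2, dim=1).values
        margin = top2[:, 0] - top2[:, 1]                          # (B,)
        # Base mixup coefficient drawn from Beta(alpha, alpha), as in vanilla mixup.
        lam = torch.distributions.Beta(alpha, alpha).sample((x.size(0),)).to(x.device)
        # Assumed heuristic: pull lambda toward the less-confident member of each pair.
        w = margin[perm] / (margin + margin[perm] + 1e-8)
        lam = 0.5 * lam + 0.5 * w
        lam_x = lam.view(-1, *([1] * (x.dim() - 1)))
        x_mix = lam_x * x + (1.0 - lam_x) * x[perm]
        y_onehot = F.one_hot(y, num_classes).float()
        y_mix = lam.unsqueeze(1) * y_onehot + (1.0 - lam.unsqueeze(1)) * y_onehot[perm]
        return x_mix, y_mix

In an adversarial training loop, such a routine would be called on each clean batch before the attack step, with prev_probs populated from the model's outputs cached during the previous epoch.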
Published in: IEEE Transactions on Dependable and Secure Computing (Volume: 20, Issue: 3, May-June 2023)
Page(s): 1882 - 1894
Date of Publication: 08 April 2022
