Abstract: Recent studies have shown that adversarial training is an effective method of defending against adversarial sample attacks. However, existing adversarial training strategies improve model robustness at the price of lowered generalization ability. Mainstream adversarial training methods currently treat each training sample independently and ignore inter-sample relationships, which prevents the model from fully exploiting the geometric relationships among samples to learn a more robust model for better defense against adversarial attacks. This paper therefore focuses on maintaining the stability of the geometric structure among samples during adversarial training to improve model robustness. Specifically, a new geometric structure constraint is designed for adversarial training, with the aim of keeping the feature-space distributions of natural samples and adversarial samples consistent. Furthermore, a dual-label supervised learning method is proposed, which leverages the labels of both natural samples and adversarial samples for joint supervised training of the model. Lastly, the characteristics of the dual-label supervised learning method are analyzed, and the working mechanism of adversarial samples is explained theoretically. Extensive experiments on benchmark datasets show that the proposed approach effectively improves the robustness of the model while maintaining good generalization accuracy. The related code has been open-sourced: https://github.com/SkyKuang/DGCAT
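The two losses named in the abstract can be sketched as follows. This is a minimal, speculative reading of the idea, not the paper's actual implementation (see the linked repository for that): the geometric structure constraint is rendered here as a penalty on the drift between pairwise feature similarities of natural and adversarial batches, and the dual-label loss as a weighted joint cross-entropy; the function names and the weighting `alpha` are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    # Shift by the row max for numerical stability.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the true labels.
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def pairwise_cosine(feats):
    # Row-normalize, then all-pairs cosine similarity (batch x batch).
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
    return f @ f.T

def geometric_consistency_loss(feat_nat, feat_adv):
    # One way to keep the inter-sample geometric structure stable:
    # penalize differences between the similarity matrices of the
    # natural-sample and adversarial-sample feature batches.
    return np.mean((pairwise_cosine(feat_nat) - pairwise_cosine(feat_adv)) ** 2)

def dual_label_loss(logits_adv, y_nat, y_adv, alpha=0.5):
    # Joint supervision with both the natural and the adversarial label;
    # alpha is a hypothetical mixing weight, not from the paper.
    return (alpha * cross_entropy(logits_adv, y_nat)
            + (1 - alpha) * cross_entropy(logits_adv, y_adv))
```

In this sketch the geometric term is zero exactly when the adversarial batch preserves the natural batch's pairwise similarity structure, which matches the stated goal of keeping the feature-space distributions of the two sample types consistent.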