Abstract:
Existing adversarial example generation algorithms mainly consider the success rate of spoofing the target model but pay little attention to their own security. In this paper, we propose the concept of adversarial example security, defined as how unlikely the examples themselves are to be detected. A two-step test is proposed to deal with adversarial attacks of different strengths. Game theory is introduced to model the interplay between the attacker and the investigator. By solving the Nash equilibrium, the optimal strategies of both parties are obtained and the security of the attacks is evaluated. Five typical attacks are compared on ImageNet. The results show that a rational attacker tends to use a relatively weak attack strength. A comparison of the ROC curves under the Nash equilibrium shows that constrained-perturbation attacks are more secure than optimized-perturbation attacks in the face of the two-step test. The proposed framework can be used to evaluate the security of various potential attacks and to advance research on adversarial example generation and detection.
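To make the game-theoretic step concrete: if the attacker's choices (attack strengths) and the investigator's choices (detection thresholds) are discretized into a finite payoff matrix, the resulting two-player zero-sum game can be solved for a Nash equilibrium by linear programming. The sketch below is a minimal illustration of this standard technique, not the paper's actual model; the payoff matrix values, the `solve_zero_sum` helper, and the interpretation of rows and columns are all assumptions for demonstration.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical payoff matrix: rows = attacker strengths, columns = investigator
# thresholds. Entry A[i, j] is the attacker's expected payoff (e.g., fooling
# rate minus a detection penalty). The numbers are illustrative only.
A = np.array([
    [0.2, 0.5, 0.7],
    [0.6, 0.4, 0.3],
    [0.8, 0.1, 0.2],
])

def solve_zero_sum(A):
    """Solve a finite two-player zero-sum game by linear programming.

    Returns the row player's (attacker's) equilibrium mixed strategy and the
    game value. Payoffs are shifted to be strictly positive so the classic
    LP normalization (x = u * value, with B^T u >= 1) applies.
    """
    shift = A.min() - 1.0
    B = A - shift                      # strictly positive payoffs
    m, n = B.shape
    # Minimize sum(u) subject to B^T u >= 1, u >= 0.
    # Then value = 1 / sum(u) and the mixed strategy is x = u * value.
    res = linprog(c=np.ones(m), A_ub=-B.T, b_ub=-np.ones(n),
                  bounds=[(0, None)] * m, method="highs")
    value = 1.0 / res.x.sum()
    x = res.x * value
    return x, value + shift            # undo the payoff shift

x, v = solve_zero_sum(A)
print("attacker equilibrium strategy:", np.round(x, 3))
print("game value:", round(v, 3))
```

Under this kind of model, a mixed equilibrium that puts most of its weight on low-strength rows corresponds to the paper's finding that a rational attacker favors relatively weak attack strengths.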
Published in: ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Date of Conference: 23-27 May 2022
Date Added to IEEE Xplore: 27 April 2022