Abstract:
Deep neural networks (DNNs) are widely used in remote sensing but have been shown to be vulnerable to adversarial examples: by introducing carefully designed perturbations into clean images, DNNs can be misled into incorrect predictions. Adversarial patches are a common means of mounting such attacks, but traditional methods optimize patch content and position separately, neglecting the coupling between these two factors. In this paper, we propose PatchGen, a black-box attack framework targeting fine-grained aircraft recognition that simultaneously optimizes both the content and the position of physical adversarial patches. To meet the requirements of physical attacks, we further constrain the patch to the object region and use elaborate criteria to evaluate its naturalness, alleviating distortion when the patch is applied in the real world. We comprehensively validate our method on fine-grained aircraft classification and subsequently extend it to object detection. Extensive experiments demonstrate that the proposed method efficiently achieves superior attack performance on classification and detection tasks in the digital domain. Moreover, we validate the effectiveness of the adversarial patch under diverse conditions in the physical world and show that our method can be applied to different models as well as various domains.
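To make the joint content-and-position setting concrete, the following is a minimal sketch (not the authors' PatchGen method) of how a square patch is pasted onto an image at a chosen location; the function name `apply_patch` and the toy values are illustrative assumptions. In a joint optimization, both the patch pixels and the paste coordinates would be treated as attack variables.

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Paste a square patch onto an image at (top, left).

    image: (H, W, C) float array in [0, 1]
    patch: (P, P, C) float array in [0, 1]
    In a joint content-and-position attack, both the patch values and
    (top, left) would be optimized together; here they are fixed inputs.
    """
    patched = image.copy()
    p = patch.shape[0]
    patched[top:top + p, left:left + p, :] = patch
    return patched

# Toy example: 8x8 gray image, 3x3 white patch placed at row 2, column 3.
img = np.full((8, 8, 3), 0.5)
adv = apply_patch(img, np.ones((3, 3, 3)), 2, 3)
```

Constraining the patch to the object region, as described above, amounts to restricting the feasible set of (top, left) to pixels inside the target aircraft's mask.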
Published in: IEEE Transactions on Information Forensics and Security ( Volume: 20)