Abstract:
Deep learning techniques have been widely adopted in various scenarios as a service. However, they are naturally exposed to adversarial attacks. Such imperceptible-perturbation-based attacks can cause severe damage to today's authentication systems that adopt DNNs at their core, such as fingerprint liveness detection systems, face recognition systems, etc. Rather than improving the model's robustness, this article realizes defense against adversarial attacks through denoising and reconstruction. Our proposed method can be viewed as a two-step defense framework: the first step denoises the input adversarial example, and the second reconstructs the sample so that it approaches the original clean image and helps the target model output the original label. The proposed method is evaluated against six kinds of state-of-the-art adversarial attacks, including adaptive attacks, which are known to be the strongest. We also specifically demonstrate the effectiveness of our proposed work in Finance Authentication systems as a real-life case study. Experimental results reveal that our method is more robust than the previous super-resolution-only defense, attaining a higher average accuracy over clean and distorted samples. To the best of our knowledge, this is the first work that presents a comprehensive defense framework against adversarial attacks on Finance Authentication systems.
Published in: IEEE Transactions on Computers ( Volume: 73, Issue: 2, February 2024)