Cleaning Adversarial Perturbations via Residual Generative Network for Face Verification


Abstract:

Deep neural networks (DNNs) have recently achieved impressive performance on a wide range of applications. However, recent research shows that DNNs are vulnerable to adversarial perturbations injected into input samples. In this paper, we investigate a defense method for face verification: a deep residual generative network (ResGN) is learned to clean adversarial perturbations. We propose a novel training framework composed of the ResGN, a pre-trained VGG-Face network and a FaceNet network. The parameters of the ResGN are optimized by minimizing a joint loss consisting of a pixel loss, a texture loss and a verification loss, which measure content errors, subjective visual perception errors and verification task errors between the cleaned image and the legitimate image, respectively. In particular, the latter two losses are provided by VGG-Face and FaceNet respectively and make essential contributions to improving the verification performance of the cleaned image. Experimental results validate the effectiveness of the proposed defense method on the Labeled Faces in the Wild (LFW) benchmark dataset.
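
The joint loss described above combines three terms computed between the cleaned and legitimate images. Below is a minimal sketch of such a loss, assuming PyTorch; the loss weights, the choice of L1/MSE distances, and the feature-extractor handles (vgg_face_features, facenet_embed) are illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn.functional as F

def joint_loss(cleaned, legitimate, vgg_face_features, facenet_embed,
               w_pixel=1.0, w_texture=1.0, w_verif=1.0):
    # Pixel loss: content error between the cleaned and legitimate images.
    pixel = F.l1_loss(cleaned, legitimate)
    # Texture loss: perceptual error on features from a pre-trained VGG-Face.
    texture = F.mse_loss(vgg_face_features(cleaned),
                         vgg_face_features(legitimate))
    # Verification loss: embedding error from a pre-trained FaceNet.
    verif = F.mse_loss(facenet_embed(cleaned), facenet_embed(legitimate))
    # Weighted sum; the weights here are placeholders.
    return w_pixel * pixel + w_texture * texture + w_verif * verif

In training, the ResGN output would be passed as `cleaned` and the unperturbed face image as `legitimate`, with the two pre-trained networks kept frozen so that only the ResGN parameters are updated.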
Date of Conference: 12-17 May 2019
Date Added to IEEE Xplore: 17 April 2019
Conference Location: Brighton, UK
