Abstract:
A Trojan backdoor is a poisoning attack against Neural Network (NN) classifiers in which adversaries exploit the (highly desirable) model-reuse property to implant Trojans into the model parameters through a poisoned training process, enabling backdoor breaches. Most proposed defenses against Trojan attacks assume a white-box setup, in which the defender either has access to the inner state of the NN or can run back-propagation through it. Moreover, most existing white-box and black-box defenses against Trojan backdoors focus on image data; due to the difference in data structure, these defenses cannot be directly applied to textual data. We propose T-TROJDEF, a more practical but challenging black-box defense method for text data that only needs to run the forward pass of the NN model. T-TROJDEF tries to identify and filter out Trojan inputs (i.e., inputs augmented with the Trojan trigger) by monitoring the changes in prediction confidence when the input is repeatedly perturbed. The intuition is that Trojan inputs are more stable, since the misclassification depends only on the trigger, whereas benign inputs suffer when perturbed because their classification features are disturbed.
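To illustrate the intuition described above, the following minimal Python sketch shows a perturbation-based filter that flags an input as a likely Trojan if its prediction stays stable under repeated perturbation. All names here (model_predict, perturb, the perturbation rate, and the stability threshold) are illustrative assumptions, not the authors' implementation; the only requirement, matching the black-box setting, is forward-pass access to the classifier.

import random
import numpy as np

def perturb(text: str, rate: float = 0.2) -> str:
    # Randomly drop a fraction of the words to disturb the classification features.
    # (Assumed perturbation scheme; the paper may use a different one.)
    words = text.split()
    kept = [w for w in words if random.random() > rate]
    return " ".join(kept) if kept else text

def is_trojan_input(text, model_predict, n_perturb=20, stability_threshold=0.9):
    # model_predict: black-box function mapping a list of texts to an array of
    # class-probability vectors (forward pass only).
    base_probs = model_predict([text])[0]
    base_label = int(np.argmax(base_probs))

    perturbed = [perturb(text) for _ in range(n_perturb)]
    probs = np.asarray(model_predict(perturbed))   # shape: (n_perturb, n_classes)
    same_label = np.mean(np.argmax(probs, axis=1) == base_label)

    # Benign inputs tend to lose confidence or flip labels once their
    # classification features are perturbed; Trojan inputs stay stable
    # because the trigger alone drives the (mis)classification.
    return same_label >= stability_threshold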
Published in: 2021 Eighth International Conference on Social Network Analysis, Management and Security (SNAMS)
Date of Conference: 06-09 December 2021
Date Added to IEEE Xplore: 16 March 2022