Robust Adversarial Attacks on Imperfect Deep Neural Networks in Fault Classification


Abstract:

In recent years, deep neural networks (DNNs) have been widely applied to fault classification tasks. Their adversarial security has received attention, but little consideration has been given to the robustness of adversarial attacks against imperfect DNNs. Owing to the scarcity and quality deficiencies prevalent in industrial data, the performance of DNNs may be severely constrained. In addition, black-box attacks on industrial fault classification models struggle to obtain data that is sufficient and comprehensive enough to construct surrogate models with perfect decision boundaries. To address this gap, this article analyzes the outcomes of adversarial attacks on imperfect DNNs and categorizes their decision scenarios. Building on this analysis, we propose a robust adversarial attack strategy that transforms traditional adversarial attacks into an iterative targeted attack (ITA). The ITA framework begins by evaluating the DNN, during which a classification confidence score (CCS) is designed. Using the CCS and the prediction probabilities of the data, the labels and sequence of the targeted attacks are defined. The adversarial attacks are then carried out by iteratively selecting attack targets and applying gradient optimization. Experimental results on both a benchmark dataset and an industrial case demonstrate the superiority of the proposed method.
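The iterative loop described in the abstract (rank candidate target labels, then attack each target in turn with gradient steps until one succeeds) can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration, not the authors' method: the function name iterative_targeted_attack, the budget eps, the step size alpha, and the ordering of targets by predicted probability are all stand-ins; the paper's CCS-based evaluation and target-sequence definition are not reproduced here.

# Hypothetical sketch of an ITA-style loop over targeted gradient attacks.
# Target labels are ordered by the model's predicted probabilities as a
# stand-in for the paper's CCS-based ordering (an assumption).
import torch
import torch.nn.functional as F

def iterative_targeted_attack(model, x, y_true, eps=0.05, alpha=0.01, steps=20):
    """Try targeted gradient attacks on candidate labels, most likely target first.

    Assumes x is a single sample with a batch dimension of size 1.
    """
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1).squeeze(0)
    # Candidate targets: every label except the true one, ranked by probability.
    targets = [c for c in probs.argsort(descending=True).tolist() if c != int(y_true)]

    for target in targets:
        x_adv = x.clone().detach()
        t = torch.tensor([target], device=x.device)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), t)
            grad, = torch.autograd.grad(loss, x_adv)
            # Targeted step: descend the loss of the chosen target class.
            x_adv = x_adv.detach() - alpha * grad.sign()
            # Project back into the L-infinity ball of radius eps around x.
            x_adv = x.detach() + torch.clamp(x_adv - x, -eps, eps)
        if model(x_adv).argmax(dim=1).item() == target:
            return x_adv  # attack on this target succeeded
    return None  # no candidate target yielded a successful adversarial example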
Published in: IEEE Transactions on Industrial Informatics ( Volume: 20, Issue: 12, December 2024)
Page(s): 14297 - 14307
Date of Publication: 12 September 2024
