
Building a Robust and Efficient Defensive System Using Hybrid Adversarial Attack


Impact Statement:
A hybrid adversarial attack strategy has been proposed to address the existing problem of balancing robustness and accuracy in machine learning (ML) systems based on neural networks. The proposed work deals with adversarial attacks, a type of threat in the context of ML in which the attacker manipulates images to produce adversarial examples that mislead ML models, such as those used in computer vision, into making incorrect predictions. The system design accepts the output of one attacker as the initialization input for its successor, and the obtained adversarial examples are fed back into the neural model to build the defensive system using the adversarial training method. An extensive experiment has been executed on three different datasets. The proposed model achieves much better results than other state-of-the-art methods, with faster convergence and markedly improved robustness and accuracy under adversarial training.

Abstract:

Adversarial attack is a method used to deceive machine learning models; it offers a technique to test the robustness of a given model, and it is vital to balance robustness with accuracy. Artificial intelligence (AI) researchers are constantly trying to find a better balance by developing new techniques and approaches that minimize loss of accuracy while increasing robustness. To address these gaps, this article proposes a hybrid adversarial attack strategy that combines the Fast Gradient Sign Method and Projected Gradient Descent to compute the perturbations that deceive deep neural networks, thus quantifying robustness without compromising accuracy. Three distinct datasets (CelebA, CIFAR-10, and MNIST) were used in the extensive experiment, and six analyses were carried out to assess how well the suggested technique performed against attacks and defense mechanisms. The proposed model yielded confidence values of 99.99% for the MNIST dataset, 99.93% for the CelebA dataset, an...
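The abstract and impact statement describe chaining two attackers, with the output of one used as the initialization of its successor (FGSM seeding PGD), and feeding the resulting adversarial examples back into adversarial training. The sketch below is a minimal, hypothetical illustration of that chaining in PyTorch, assuming standard FGSM and PGD formulations; the function names, step sizes, and epsilon values are illustrative and are not taken from the paper.

```python
# Hypothetical sketch of a chained FGSM -> PGD hybrid attack, assuming the
# FGSM perturbation is used as the starting point for PGD (as the abstract
# suggests). Names and hyperparameters are illustrative, not from the paper.
import torch
import torch.nn.functional as F

def fgsm_init(model, x, y, eps):
    """Single-step FGSM perturbation used to seed the PGD attack."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def hybrid_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """PGD initialized from the FGSM output instead of a random start."""
    x_adv = fgsm_init(model, x, y, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around the clean input.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

In an adversarial training loop of the kind the impact statement mentions, the examples returned by such an attack would simply replace (or be mixed with) the clean batch before computing the training loss.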
Published in: IEEE Transactions on Artificial Intelligence ( Volume: 5, Issue: 9, September 2024)
Page(s): 4470 - 4478
Date of Publication: 02 April 2024
Electronic ISSN: 2691-4581

