On the Interpretable Adversarial Sensitivity of Iterative Optimizers


Abstract:

Adversarial examples are an emerging threat to machine learning (ML) models, allowing adversaries to substantially degrade performance by introducing seemingly unnoticeable perturbations. These attacks are typically considered an ML risk, often associated with the black-box operation of deep neural networks (DNNs) and their sensitivity to features learned from data, and are rarely viewed as a threat to classic non-learned decision rules, such as iterative optimizers. In this work we explore the sensitivity of iterative optimizers to adversarial examples, building upon recent advances in treating these methods as ML models. We identify that many iterative optimizers share the properties of end-to-end differentiability and the existence of impactful small perturbations, which make them amenable to adversarial attacks. The interpretability of iterative optimizers allows us to associate adversarial examples with modifications to the traversed loss surface that notably affect the location of the sought minima. We visualize this effect and demonstrate the vulnerability of iterative optimizers for compressed sensing and hybrid beamforming tasks, showing that different optimizers tackling the same optimization formulation vary in their adversarial sensitivity.
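To make the mechanism described in the abstract concrete, the following minimal sketch (not the authors' implementation) differentiates end to end through an unrolled iterative shrinkage-thresholding algorithm (ISTA) for compressed sensing and crafts an FGSM-style sign perturbation of the measurements. The choice of ISTA, the use of PyTorch autodiff, and all dimensions and hyperparameters below are illustrative assumptions, not details taken from the paper.

# Illustrative sketch: because an iterative optimizer such as ISTA is
# end-to-end differentiable in its input, a small gradient-crafted
# perturbation of that input can notably move the recovered minimizer.
# All names, dimensions, and hyperparameters are assumptions.
import torch

torch.manual_seed(0)

m, n, k = 32, 64, 4                 # measurements, signal dimension, sparsity
A = torch.randn(m, n) / m**0.5      # random sensing matrix
x_true = torch.zeros(n)
x_true[torch.randperm(n)[:k]] = torch.randn(k)
y = A @ x_true                      # clean measurements

def ista(y, A, lam=0.05, step=0.1, iters=100):
    """ISTA for min_x 0.5 * ||Ax - y||^2 + lam * ||x||_1.
    Every iteration is differentiable (soft-threshold, a.e.), so gradients
    flow from the final estimate back to the measurements y."""
    x = torch.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)    # gradient of the data-fit term
        z = x - step * grad         # gradient step
        x = torch.sign(z) * torch.clamp(z.abs() - step * lam, min=0.0)
    return x

# FGSM-style attack on the optimizer's input: differentiate the recovery
# error through all ISTA iterations, then perturb y along the gradient sign.
y_adv = y.clone().requires_grad_(True)
loss = torch.norm(ista(y_adv, A) - x_true) ** 2
loss.backward()
eps = 0.01 * y.norm() / y.numel() ** 0.5   # small per-entry budget (assumption)
y_pert = y + eps * y_adv.grad.sign()

with torch.no_grad():
    err_clean = torch.norm(ista(y, A) - x_true)
    err_adv = torch.norm(ista(y_pert, A) - x_true)
print(f"recovery error: clean {err_clean:.3f} vs perturbed {err_adv:.3f}")

Because each soft-thresholding iteration is differentiable almost everywhere, the gradient of the recovery error with respect to the measurements is available end to end; this is precisely the pair of properties (differentiability plus impactful small perturbations) that the abstract identifies as making iterative optimizers amenable to adversarial attacks.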
Date of Conference: 17-20 September 2023
Date Added to IEEE Xplore: 23 October 2023
Conference Location: Rome, Italy
