Abstract
Sampling methods are a critical component of model-based evolutionary algorithms: their goal is to generate new, promising individuals from the information captured by the model. Adversarial perturbations were originally proposed as a way to create samples that deceive neural networks. In this paper we introduce the idea of crafting adversarial perturbations that correspond to promising solutions in the search space. A surrogate neural network is “fooled” by an adversarial perturbation algorithm until it produces solutions that are likely to have higher fitness than the current ones. Using a benchmark of functions of varying difficulty, we investigate the performance of several adversarial perturbation techniques as sampling methods. The paper also proposes a technique to amplify the effect that adversarial perturbations have on the network. While adversarial perturbations on their own do not yield evolutionary algorithms that compete with state-of-the-art methods, they provide a novel and promising way to combine local optimizers with evolutionary algorithms.
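To make the sampling idea concrete, here is a minimal sketch, assuming PyTorch, a minimization problem, and an FGSM-style signed-gradient step; the names `fgsm_sample`, `surrogate`, and `epsilon` are illustrative and not taken from the paper. A surrogate network trained to predict fitness is differentiated with respect to the candidate solutions, and each candidate is nudged in the direction that improves the surrogate's fitness estimate:

```python
# Hypothetical sketch of adversarial-perturbation sampling on a fitness surrogate.
# Assumes minimization: lower predicted fitness is better.
import torch
import torch.nn as nn

def fgsm_sample(surrogate: nn.Module, x: torch.Tensor, epsilon: float) -> torch.Tensor:
    """Return perturbed candidates that the surrogate predicts to be fitter."""
    x = x.clone().detach().requires_grad_(True)
    predicted_fitness = surrogate(x).sum()  # reduce to a scalar for backprop
    predicted_fitness.backward()
    # FGSM-style step *against* the gradient: lower the predicted fitness.
    return (x - epsilon * x.grad.sign()).detach()

# Toy usage: a small MLP surrogate over 2-D candidates, population of 10.
surrogate = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
population = torch.randn(10, 2)
offspring = fgsm_sample(surrogate, population, epsilon=0.05)
```

In an evolutionary loop, such a step would be applied once per generation (or iterated, as in basic-iterative or momentum variants of the attack), with the true objective function deciding whether the perturbed candidates replace their parents and with the surrogate periodically retrained on the newly evaluated points.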
About this paper
Cite this paper
Garciarena, U., Vadillo, J., Mendiburu, A., Santana, R. (2022). Adversarial Perturbations for Evolutionary Optimization. In: Nicosia, G., et al. (eds.) Machine Learning, Optimization, and Data Science. LOD 2021. Lecture Notes in Computer Science, vol. 13164. Springer, Cham. https://doi.org/10.1007/978-3-030-95470-3_31
Print ISBN: 978-3-030-95469-7
Online ISBN: 978-3-030-95470-3