Abstract
In this work, we propose a novel single-point meta-heuristic for continuous optimization. Our algorithm iteratively improves a solution via a trajectory-based search inspired by the pinball arcade game, operating in an anytime fashion. We evaluate the algorithm against widely employed meta-heuristics on several standard test-bed functions and across various dimensions. It exhibits high precision and superior accuracy compared to the benchmarks, especially when complex configuration spaces are considered.
Notes
1. The sign of the elevation angle is irrelevant, as the square of its sine value is considered.
2. We experimented with trajectories alternating from one direction to another; however, we found that this hurt exploration. Thus, the algorithm does not behave exactly like a pinball; nevertheless, the final trajectories do resemble pinball movement.
3. Crossing of the objective function means that a common point of the objective function and the current trajectory segment has been detected.
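The crossing test described in the last note can be sketched in one dimension: a trajectory segment crosses the objective when the signed vertical distance between the segment and the function changes sign along the segment. The sketch below is a hypothetical illustration under that reading, not the authors' implementation; the function name, the sampling resolution, and the `(x, y)` point representation are all assumptions.

```python
def crosses_objective(f, start, end, steps=100):
    """Illustrative 1-D check: does the segment start->end cross f?

    start and end are (x, y) points. A crossing is detected when the
    signed vertical distance y - f(x) changes sign (or hits zero)
    while stepping along the segment.
    """
    (x0, y0), (x1, y1) = start, end
    prev = y0 - f(x0)
    for i in range(1, steps + 1):
        t = i / steps
        x = x0 + t * (x1 - x0)  # linear interpolation along the segment
        y = y0 + t * (y1 - y0)
        cur = y - f(x)
        if prev == 0 or prev * cur < 0:
            return True  # common point of segment and objective detected
        prev = cur
    return False
```

For example, a segment descending from (0, 1) to (2, -1) crosses the flat objective f(x) = 0, while a segment staying at height 1 does not. A finer `steps` value reduces the chance of missing a narrow double crossing.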
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Lymperakis, V., Panagopoulos, A.A. (2023). Buggy Pinball: A Novel Single-point Meta-heuristic for Global Continuous Optimization. In: Rutkowski, L., Scherer, R., Korytkowski, M., Pedrycz, W., Tadeusiewicz, R., Zurada, J.M. (eds) Artificial Intelligence and Soft Computing. ICAISC 2022. Lecture Notes in Computer Science, vol 13589. Springer, Cham. https://doi.org/10.1007/978-3-031-23480-4_22
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-23479-8
Online ISBN: 978-3-031-23480-4
eBook Packages: Computer Science (R0)