Performance analysis of dynamic optimization algorithms using relative error distance
Introduction
Dynamic optimization problems (DOPs) are challenging since, as time passes, optimal solutions may become sub-optimal while previously inferior regions of the search space may suddenly yield the best solutions. Most optimization algorithms that are specialized for static environments tend to be ineffective in dynamic environments [1]. As a result, many swarm intelligence (SI) and evolutionary computation (EC) meta-heuristics have been developed specifically to solve DOPs [2], [3], [4], [5].
Quantifying the performance of algorithms is not as straightforward in dynamic environments as it is in static environments. Cruz et al. [2] outline how, for static environments, it often suffices to report just the quality of the best-found solution and, optionally, the memory and time cost of achieving that result. For DOPs, these simple assessments are not sufficient, since practitioners are interested in further aspects of algorithm behavior during the optimization process. Such aspects include an algorithm's ability to detect problem changes, to continually explore and exploit different areas of the problem search space, to track existing optima as they move through the problem space, to find new optima as they appear over time, to track the stability of solutions over time, as well as how constraints are handled.
Researchers have defined many specialized performance measures to assess algorithms that solve DOPs. A number of surveys discuss the most popular performance measures used to quantify the ability of algorithms to solve DOPs [2], [4], [5]. The majority of reviewed measures have shortcomings that make fair comparisons of algorithm performance challenging. These shortcomings are discussed at length in this paper. The most often-used measures in the literature, namely the offline error and offline performance [6], [7], and the average best error before change [8], have crippling limitations that can cause observers to misreport findings by up to 60% (as shown in Section 5). Such shortcomings make it hard for existing measures to faithfully represent the underlying reality of an algorithm's performance while solving a DOP. This paper proposes a new performance measure, called the relative error distance (RED), that addresses the limitations of popular performance measures. The paper therefore contributes towards fairer and sounder measures for analyzing and comparing the performance of algorithms aimed at solving DOPs.
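Both of these popular measures are defined in the DOP literature in terms of the error of the best solution found since the most recent environment change. As a point of reference, the sketch below (Python, not taken from the paper; the function names and the per-iteration error bookkeeping are illustrative assumptions) shows how the offline error and the average best error before change are commonly computed for a single run on a minimization problem.

```python
import numpy as np

def offline_error(errors, change_points):
    """Mean, over all iterations, of the best-so-far error since the last
    environment change (the usual offline error; lower is better)."""
    errors = np.asarray(errors, dtype=float)
    best_so_far = np.empty_like(errors)
    start = 0
    for end in list(change_points) + [len(errors)]:
        best_so_far[start:end] = np.minimum.accumulate(errors[start:end])
        start = end
    return float(best_so_far.mean())

def best_error_before_change(errors, change_points):
    """Average of the smallest error attained in each environment,
    sampled just before the problem changes (and at the final iteration)."""
    errors = np.asarray(errors, dtype=float)
    start, bests = 0, []
    for end in list(change_points) + [len(errors)]:
        bests.append(errors[start:end].min())
        start = end
    return float(np.mean(bests))

# Example: three environments of five iterations each, where errors[t] holds
# the error of the best candidate evaluated at iteration t.
errors = [5.0, 3.0, 1.0, 0.5, 0.4,   # environment 1
          4.0, 2.5, 2.0, 1.5, 1.2,   # environment 2 (after a change)
          6.0, 3.0, 0.8, 0.6, 0.3]   # environment 3
print(offline_error(errors, change_points=[5, 10]))
print(best_error_before_change(errors, change_points=[5, 10]))
```

Because both measures average raw error values, they are sensitive to the fitness scale of each environment, which is one of the limitations the RED measure targets.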
The paper outline is as follows. Section 2 provides background information on DOPs, performance measures for algorithms aimed at solving DOPs, and a discussion of the attributes and shortcomings of popular performance measures. Section 3 introduces the RED measure. Section 4 outlines the experimental approach to validate the noteworthy characteristics of the RED measure, and Section 5 presents the results of the empirical investigation. Section 6 concludes the paper.
Section snippets
Dynamic optimization problems
A working definition of a DOP is provided below, followed by an outline of commonly used DOP-focused performance measures, as well as their attributes and shortcomings.
Relative error distance
This paper proposes a new performance measure called the relative error distance (RED) that helps to address the shortcomings of existing performance measures. Consider the vector b, which represents the measured performance values of a single execution (or run) of an algorithm. The components of b consist of different RE values (as defined in either Eq. (10) or Eq. (11)). The exact algorithm iterations that are considered by the RED measure are configurable. That is, if
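The snippet above omits Eqs. (10) and (11), so the exact form of the RE values and of the RED aggregate is not reproduced here. The sketch below is therefore only an illustrative assumption: it treats each RE value as a per-iteration error normalized to [0, 1] and summarizes the vector b by its Euclidean distance from the ideal all-zero point, scaled by the maximum possible distance sqrt(n). The function names and the normalization are hypothetical stand-ins, not the paper's definitions.

```python
import numpy as np

def relative_error(best_fitness, optimum_fitness, worst_fitness):
    """Hypothetical per-iteration relative error in [0, 1] for maximization:
    0 means the optimum was reached, 1 means no better than the worst-case
    reference. The paper's Eqs. (10)/(11) may differ."""
    return (optimum_fitness - best_fitness) / (optimum_fitness - worst_fitness)

def relative_error_distance(re_values):
    """Sketch of a RED-style aggregate: Euclidean distance of the RE vector b
    from the ideal point (all zeros), normalized by sqrt(n) so that the
    result stays in [0, 1]; smaller values indicate better performance."""
    b = np.asarray(re_values, dtype=float)
    return float(np.linalg.norm(b) / np.sqrt(b.size))

# One RE value per considered iteration of a single run:
b = [relative_error(f, optimum_fitness=100.0, worst_fitness=0.0)
     for f in [60.0, 75.0, 90.0, 99.0]]
print(relative_error_distance(b))
```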
Empirical validation of relative error distance
The experiments in this study investigate whether the characteristics observed in actual algorithm error/fitness value data warrant the proposed RED measure. The purpose is to obtain a reasonably large and diverse set of performance scores over many different types of problems. Comparing specific algorithms against state-of-the-art methods is not the goal of the analysis.
The following four questions are answered:
- Normally distributed error/fitness data: Are the
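The first of these questions concerns whether per-iteration error/fitness samples follow a normal distribution. One way such a check could be carried out is sketched below using SciPy's Shapiro-Wilk test; the statistical procedure actually used in the study may differ.

```python
import numpy as np
from scipy import stats

def is_plausibly_normal(samples, alpha=0.05):
    """Shapiro-Wilk test: returns True if normality cannot be rejected at
    significance level alpha. One of several possible checks; illustrative only."""
    _, p_value = stats.shapiro(np.asarray(samples, dtype=float))
    return p_value > alpha

# Example: errors of 30 independent runs at the same iteration of the same DOP.
rng = np.random.default_rng(0)
run_errors = rng.lognormal(mean=0.0, sigma=1.0, size=30)  # skewed, non-Gaussian
print(is_plausibly_normal(run_errors))  # likely False for skewed data
```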
Results
The results of the experiments laid out in Section 4 are presented below.
Conclusion
The time-dependent nature of dynamic optimization problems (DOPs) makes it difficult to objectively capture the performance of algorithms. Empirical investigations in this paper highlighted that, overwhelmingly, the series of performance values yielded by computational intelligence algorithms while solving a DOP is likely to contain significant fitness scale changes over time, is unlikely to follow a Gaussian distribution, and is likely to show significant variance over time. Popular
CRediT authorship contribution statement
Stéfan A.G. van der Stockt: Conceptualization, Formal analysis, Investigation, Methodology, Software, Writing – original draft, Writing – review & editing. Gary Pamparà: Conceptualization, Methodology, Software, Writing – review & editing. Andries P. Engelbrecht: Supervision, Writing – review & editing. Christopher W. Cleghorn: Supervision, Writing – review & editing.
Declaration of Competing Interest
We, the authors, Stefan van der Stockt, Gary Pamparà, Andries Engelbrecht, and Christopher Cleghorn, confirm that there are no conflicts of interest pertaining to the publication of this work. This work is the result of Stefan's and Gary's PhD research, which is being submitted for publication.
References (70)
- et al., A survey of swarm intelligence for dynamic optimization: algorithms and applications, Swarm Evol. Comput. (2017)
- et al., Evolutionary dynamic optimization: a survey of the state of the art, Swarm Evol. Comput. (2012)
- et al., A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms, Swarm Evol. Comput. (2011)
- et al., Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: experimental analysis of power, Inf. Sci. (2010)
- et al., Heuristic space diversity control for improved meta-hyper-heuristic performance, Inf. Sci. (2015)
- et al., Recent advances in differential evolution – an updated survey, Swarm Evol. Comput. (2016)
- et al., A hyper-heuristic based framework for dynamic optimization problems, Appl. Soft Comput. (2014)
- et al., Using competitive population evaluation in a differential evolution algorithm for dynamic environments, Eur. J. Oper. Res. (2012)
- et al., Optimal parameter regions and the time-dependence of control parameter values for the particle swarm optimization algorithm, Swarm Evol. Comput. (2018)
- Computational Intelligence: An Introduction (2007)
- Optimization in dynamic environments: a survey on problems, methods and measures, Soft Comput.
- Evolutionary optimization in uncertain environments – a survey, IEEE Trans. Evol. Comput.
- Designing evolutionary algorithms for dynamic optimization problems, Advances in Evolutionary Computing
- Searching for optima in non-stationary environments, Proceedings of the 1999 IEEE Congress on Evolutionary Computation (CEC 1999)
- Towards a more complete classification system for dynamically changing environments, Proceedings of the IEEE Congress on Evolutionary Computation
- Adaptive multi-population differential evolution for dynamic environments
- Tracking and optimizing dynamic systems with particle swarms, Proceedings of the IEEE Congress on Evolutionary Computation
- Tracking dynamic systems with PSO: where's the cheese?, Proceedings of the Workshop on Particle Swarm Optimization
- Tracking extrema in dynamic environments, Evolutionary Programming VI
- Memory enhanced evolutionary algorithms for changing optimization problems, Proceedings of the IEEE Congress on Evolutionary Computation (CEC 1999), Washington, DC, USA
- On the behavior of evolutionary algorithms in dynamic environments, Proceedings of the 1998 IEEE International Conference on Evolutionary Computation (IEEE World Congress on Computational Intelligence)
- Performance measurement in dynamic environments, GECCO Workshop on Evolutionary Algorithms for Dynamic Optimization Problems
- An analysis of the behavior of a class of genetic adaptive systems
- Benchmark Generator for CEC 2009 Competition on Dynamic Optimization, Technical Report
- Benchmarks for testing evolutionary algorithms, Asia-Pacific Conference on Control and Measurement
- Performance measures for dynamic environments
- Continuous dynamic optimisation using evolutionary algorithms
- When is "nearest neighbor" meaningful?, International Conference on Database Theory
- On the surprising behavior of distance metrics in high dimensional space, International Conference on Database Theory
- Algebra
- Geometric Algebra with Applications in Engineering
- Handbook of Parametric and Nonparametric Statistical Procedures
- Statistical comparisons of classifiers over multiple data sets, J. Mach. Learn. Res.
- An extension on "statistical comparisons of classifiers over multiple data sets" for all pairwise comparisons, J. Mach. Learn. Res.
- A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 special session on real parameter optimization, J. Heuristics