Abstract
This paper is about the numerical solution of multiobjective optimization problems in continuous spaces. The problem is to define a search direction and a dynamic adaptation scheme for sets of vectors that serve as approximation sets. Two algorithmic concepts are compared: a stochastic optimization algorithm based on cooperative particle swarms, and a deterministic optimization algorithm based on set-oriented gradients of the hypervolume indicator. Both concepts are instantiated as algorithms, which are deliberately kept simple so as not to obscure their discussion. It is shown that these algorithms are capable of approximating Pareto fronts iteratively. The numerical studies of the paper are restricted to relatively simple and low-dimensional problems. For these problems a visualization of the convergence dynamics was implemented that shows how the approximation set converges to a diverse cover of the Pareto front and efficient set. The demonstration of the algorithms is implemented in JavaScript and can therefore run from a website in any conventional browser. Besides using it to reproduce the findings of the paper, it is also suitable as an educational tool for demonstrating the idea of set-based convergence in Pareto optimization using stochastic and deterministic search.
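The hypervolume indicator mentioned in the abstract can, in the bicriteria case, be computed by a simple sweep over the sorted point set. The following is a minimal sketch in JavaScript (the language of the demo), assuming minimization in both objectives; function and variable names are illustrative and not taken from the paper's code.

```javascript
// Hypervolume indicator of a point set in 2D (minimization),
// measured with respect to a reference point r: the area of the
// region dominated by the set and bounded by r.
function hypervolume2D(points, r) {
  // Keep only points that strictly dominate the reference point.
  const pts = points.filter(p => p[0] < r[0] && p[1] < r[1]);
  // Sort by the first objective ascending; for mutually nondominated
  // points the second objective is then descending.
  pts.sort((a, b) => a[0] - b[0] || a[1] - b[1]);
  let hv = 0;
  let prevY = r[1];
  for (const [x, y] of pts) {
    if (y < prevY) { // dominated points contribute nothing and are skipped
      hv += (r[0] - x) * (prevY - y);
      prevY = y;
    }
  }
  return hv;
}
```

For example, the set {(1, 3), (2, 2), (3, 1)} with reference point (4, 4) has hypervolume 6, each point adding a 1-by-n strip of the dominated region during the sweep.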
Notes
1. In this paper we restrict ourselves to bicriteria optimization, but the introduced principles are also applicable in higher dimensions.
References
Auger, A.: Benchmarking the (1+1) evolution strategy with one-fifth success rule on the BBOB-2009 function testbed. In: Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, GECCO 2009, pp. 2447–2452. ACM, New York (2009)
Beyer, H.-G., Schwefel, H.-P.: Evolution strategies – a comprehensive introduction. Nat. Comput. 1(1), 3–52 (2002)
Bringmann, K., Friedrich, T.: Approximating the least hypervolume contributor: NP-hard in general, but fast in practice. In: Evolutionary Multi-Criterion Optimization, pp. 6–20. Springer (2009)
Coello Coello, C.A., Lechuga, M.S.: MOPSO: a proposal for multiple objective particle swarm optimization. In: Proceedings of the 2002 Congress on Evolutionary Computation, CEC 2002, vol. 2, pp. 1051–1056. IEEE (2002)
Emmerich, M., Beume, N., Naujoks, B.: An EMO algorithm using the hypervolume measure as selection criterion. In: Evolutionary Multi-Criterion Optimization, pp. 62–76. Springer (2005)
Emmerich, M., Deutz, A.: Time complexity and zeros of the hypervolume indicator gradient field. In: Schuetze, O., Coello Coello, C.A., Tantar, A.-A., Tantar, E., Bouvry, P., Del Moral, P., Legrand, P. (eds.) EVOLVE - A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation III. Studies in Computational Intelligence, vol. 500, pp. 169–193. Springer International Publishing (2014)
Emmerich, M., Deutz, A., Beume, N.: Gradient-based/evolutionary relay hybrid for computing Pareto front approximations maximizing the S-metric. Springer (2007)
Emmerich, M.T.M., Deutz, A.H., Yevseyeva, I.: On reference point free weighted hypervolume indicators based on desirability functions and their probabilistic interpretation. Procedia Technol. 16, 532–541 (2014)
Emmerich, M.T.M., Fonseca, C.M.: Computing hypervolume contributions in low dimensions: asymptotically optimal algorithm and complexity results. In: Evolutionary Multi-Criterion Optimization, pp. 121–135. Springer (2011)
Fleischer, M.: The measure of Pareto optima: applications to multi-objective metaheuristics. In: Evolutionary Multi-Criterion Optimization, pp. 519–533. Springer (2003)
Guerreiro, A.P., Fonseca, C.M., Emmerich, M.T.M.: A fast dimension-sweep algorithm for the hypervolume indicator in four dimensions. In: CCCG, pp. 77–82 (2012)
Hupkens, I., Emmerich, M.: Logarithmic-time updates in SMS-EMOA and hypervolume-based archiving. In: EVOLVE - A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation IV, pp. 155–169. Springer (2013)
Mostaghim, S., Branke, J., Schmeck, H.: Multi-objective particle swarm optimization on computer grids. In: Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, GECCO 2007, pp. 869–875. ACM, New York (2007)
Hernández, V.A.S., Schütze, O., Emmerich, M.: Hypervolume maximization via set based Newton's method. In: Tantar, A.-A., Tantar, E., Sun, J.-Q., Zhang, W., Ding, Q., Schütze, O., Emmerich, M., Legrand, P., Del Moral, P., Coello Coello, C.A. (eds.) EVOLVE - A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation V. Advances in Intelligent Systems and Computing, vol. 288, pp. 15–28. Springer International Publishing (2014)
Verhoef, W.: Interactive demo on multi-objective optimization, LIACS, Leiden University, NL, wilco.verhoef.nu/projects/moo, BSc project (2015)
A Appendices
A.1 Manual of the Application
The application and code are available online [15]. Using the application is straightforward and requires no installation or configuration; it can be started with any modern browser. The application is shown in Fig. 6.
1. The sidebar with the interaction parameters:
   (a) Pressing the Printable button sets the background color to white; pressing the Regular button restores the regular color scheme.
   (b) In the problem section you can choose a test problem.
   (c) In the initialization section you can select the population size and initialize a population of that size by pressing one of the buttons: Initialize randomly positions the particles at random in the decision space, while Initialize uniformly tries to position the particles uniformly in the decision space.
   (d) In the algorithm section you can choose the optimization algorithm by pressing Particle swarm optimization or Gradient based optimization. Deselecting Enable dominated set enables the use of the whole population. Pressing Adaptive mutation makes the MOCOPS algorithm use the \(1/5^{th}\) success rule. The mutation rate of the MOCOPS algorithm and the step size of the MOGO algorithm can each be adjusted with the corresponding slider.
   (e) In the optimization section you can select, using the slider, how many milliseconds of delay each iteration should have; a larger delay can make the dynamics of the algorithm clearer. The Start button starts the selected algorithm, and the Stop button stops it again. The Benchmark button runs some benchmarks and outputs the statistics in the browser console, which in most browsers is accessible by pressing F12.
   (f) The application automatically adapts to the window size. Full-screen mode is available with F11.
2. The objective functions of the chosen test problem are shown here.
3. This part of the screen shows the decision space. Points of the efficient set of the population are displayed as red square dots; particles that are dominated by the particles in the efficient set are displayed as green dots.
4. This part of the screen shows the objective space. Points of the Pareto front approximation of the population are displayed as red square dots; particles dominated by them are shown as green dots.
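The red/green classification described above amounts to a Pareto-dominance test over the population. A minimal sketch in JavaScript, assuming minimization in both objectives (function names are illustrative, not taken from the application's source):

```javascript
// a weakly dominates b in both objectives and is strictly better in one.
function dominates(a, b) {
  return a[0] <= b[0] && a[1] <= b[1] && (a[0] < b[0] || a[1] < b[1]);
}

// Split a population into nondominated points (drawn as red square
// dots) and dominated points (drawn as small green dots).
function classify(population) {
  const nondominated = [];
  const dominated = [];
  for (const p of population) {
    if (population.some(q => q !== p && dominates(q, p))) {
      dominated.push(p);
    } else {
      nondominated.push(p);
    }
  }
  return { nondominated, dominated };
}
```

For instance, in the population {(1, 3), (2, 2), (3, 3)} the point (3, 3) is dominated by (2, 2) and would be drawn green, while the other two points are mutually nondominated and would be drawn red.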
Finally, we show some example screenshots:
- Figure 7 shows a large population in the process of converging towards the Pareto front, starting from a uniformly distributed sample. The MOGO algorithm is applied to Problem 1. Small green points are dominated and red square dots are nondominated with respect to the other points in the population.
- Figure 8 shows a large population in the process of converging towards the Pareto front, starting from a uniformly distributed sample. The MOGO algorithm is applied to Problem 2.
- Figure 9 shows a small population of particles moved by the MOCOPS algorithm on Problem 1. The traces of the recent moves of the particles are also visualized. Note that points on the efficient set move sideways in order to find their optimal position with respect to diversity (hypervolume contribution).
- Figure 10 shows the same population as Fig. 9 at a later stage of the convergence process.
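The hypervolume contribution that drives the sideways movement in Figs. 9 and 10 can be computed cheaply in the bicriteria case: for a set of mutually nondominated points sorted by the first objective, each point's contribution is the rectangle bounded by its neighbors (or the reference point at the ends). A minimal sketch, assuming minimization and illustrative names not taken from the paper's code:

```javascript
// Hypervolume contribution of each point in a mutually nondominated
// 2D set (minimization): the area lost from the dominated region,
// bounded by reference point r, if that point were removed.
function contributions2D(points, r) {
  // Sort by the first objective ascending; the second objective is
  // then descending because the points are mutually nondominated.
  const pts = [...points].sort((a, b) => a[0] - b[0]);
  return pts.map((p, i) => {
    const nextX = i < pts.length - 1 ? pts[i + 1][0] : r[0]; // right neighbor or r
    const prevY = i > 0 ? pts[i - 1][1] : r[1];              // left neighbor or r
    return { point: p, contribution: (nextX - p[0]) * (prevY - p[1]) };
  });
}
```

For the set {(1, 3), (2, 2), (3, 1)} with reference point (4, 4), every point contributes an area of 1; a point maximizing its contribution relative to its neighbors is exactly what a diversity-preserving sideways move aims for.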
Copyright information
© 2018 Springer International Publishing AG
Cite this paper
Verhoef, W., Deutz, A.H., Emmerich, M.T.M. (2018). On Gradient-Based and Swarm-Based Algorithms for Set-Oriented Bicriteria Optimization. In: Tantar, AA., Tantar, E., Emmerich, M., Legrand, P., Alboaie, L., Luchian, H. (eds) EVOLVE - A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation VI. Advances in Intelligent Systems and Computing, vol 674. Springer, Cham. https://doi.org/10.1007/978-3-319-69710-9_2
Print ISBN: 978-3-319-69708-6
Online ISBN: 978-3-319-69710-9