
On Gradient-Based and Swarm-Based Algorithms for Set-Oriented Bicriteria Optimization

Conference paper
In: EVOLVE - A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation VI
Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 674)


Abstract

This paper is about the numerical solution of multiobjective optimization problems in continuous spaces. The problem is to define a search direction and a dynamic adaptation scheme for sets of vectors that serve as approximation sets. Two algorithmic concepts are compared: stochastic optimization based on cooperative particle swarms, and deterministic optimization based on set-oriented gradients of the hypervolume indicator. Both concepts are instantiated as algorithms that are deliberately kept simple so as not to obscure their discussion. It is shown that these algorithms are capable of approximating Pareto fronts iteratively. The numerical studies of the paper are restricted to relatively simple, low-dimensional problems. For these problems a visualization of the convergence dynamics was implemented that shows how the approximation set converges to a diverse cover of the Pareto front and the efficient set. The demonstration of the algorithms is implemented in JavaScript and can therefore be run from a website in any conventional browser. Besides reproducing the findings of the paper, it is also suitable as an educational tool to demonstrate the idea of set-based convergence in Pareto optimization using stochastic and deterministic search.
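To make the set-oriented perspective concrete, the sketch below computes the bicriteria (two-dimensional) hypervolume indicator of a non-dominated set and its partial derivatives with respect to the objective values of each point, which are the basic building blocks of a hypervolume-gradient method. It is only a minimal illustration, written in JavaScript like the accompanying demo; the function names hypervolume2D and hypervolumeGradient2D, the assumption of minimization, and the fixed reference point are choices made here and are not taken from the paper's implementation.

```javascript
// points: array of [f1, f2] pairs, assumed mutually non-dominated and strictly
// dominating the reference point ref = [r1, r2] (both objectives are minimized).
function hypervolume2D(points, ref) {
  // Sort by f1 ascending; for a non-dominated set, f2 is then descending.
  const p = [...points].sort((a, b) => a[0] - b[0]);
  let hv = 0;
  for (let i = 0; i < p.length; i++) {
    const nextF1 = (i + 1 < p.length) ? p[i + 1][0] : ref[0];
    hv += (nextF1 - p[i][0]) * (ref[1] - p[i][1]); // slab between x = f1_i and x = f1_{i+1}
  }
  return hv;
}

// Partial derivatives of the hypervolume with respect to (f1_i, f2_i) of each point.
// They are negative because decreasing an objective value enlarges the dominated region.
function hypervolumeGradient2D(points, ref) {
  const order = points.map((_, i) => i).sort((a, b) => points[a][0] - points[b][0]);
  const grad = points.map(() => [0, 0]);
  for (let k = 0; k < order.length; k++) {
    const i = order[k];
    const nextF1 = (k + 1 < order.length) ? points[order[k + 1]][0] : ref[0];
    const prevF2 = (k > 0) ? points[order[k - 1]][1] : ref[1];
    grad[i][0] = -(prevF2 - points[i][1]); // dHV/df1_i
    grad[i][1] = -(nextF1 - points[i][0]); // dHV/df2_i
  }
  return grad;
}
```

A set-oriented gradient method working in decision space would additionally pull these objective-space derivatives back through the Jacobians of the objective functions, as discussed for the hypervolume indicator gradient field in [6].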


Notes

  1. In this paper we restrict ourselves to bicriteria optimization, but the introduced principles are also applicable in higher dimensions.

References

  1. Auger, A.: Benchmarking the (1+1) evolution strategy with one-fifth success rule on the BBOB-2009 function testbed. In: Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, GECCO 2009, pp. 2447–2452. ACM, New York (2009)

  2. Beyer, H.-G., Schwefel, H.-P.: Evolution strategies – a comprehensive introduction. Nat. Comput. 1(1), 3–52 (2002)

  3. Bringmann, K., Friedrich, T.: Approximating the least hypervolume contributor: NP-hard in general, but fast in practice. In: Evolutionary Multi-Criterion Optimization, pp. 6–20. Springer (2009)

  4. Coello Coello, C.A., Lechuga, M.S.: MOPSO: a proposal for multiple objective particle swarm optimization. In: Proceedings of the 2002 Congress on Evolutionary Computation, CEC 2002, vol. 2, pp. 1051–1056. IEEE (2002)

  5. Emmerich, M., Beume, N., Naujoks, B.: An EMO algorithm using the hypervolume measure as selection criterion. In: Evolutionary Multi-Criterion Optimization, pp. 62–76. Springer (2005)

  6. Emmerich, M., Deutz, A.: Time complexity and zeros of the hypervolume indicator gradient field. In: Schuetze, O., Coello Coello, C.A., Tantar, A.-A., Tantar, E., Bouvry, P., Del Moral, P., Legrand, P. (eds.) EVOLVE - A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation III. Studies in Computational Intelligence, vol. 500, pp. 169–193. Springer International Publishing (2014)

  7. Emmerich, M., Deutz, A., Beume, N.: Gradient-based/evolutionary relay hybrid for computing Pareto front approximations maximizing the S-metric. Springer (2007)

  8. Emmerich, M.T.M., Deutz, A.H., Yevseyeva, I.: On reference point free weighted hypervolume indicators based on desirability functions and their probabilistic interpretation. Procedia Technol. 16, 532–541 (2014)

  9. Emmerich, M.T.M., Fonseca, C.M.: Computing hypervolume contributions in low dimensions: asymptotically optimal algorithm and complexity results. In: Evolutionary Multi-Criterion Optimization, pp. 121–135. Springer (2011)

  10. Fleischer, M.: The measure of Pareto optima: applications to multi-objective metaheuristics. In: Evolutionary Multi-Criterion Optimization, pp. 519–533. Springer (2003)

  11. Guerreiro, A.P., Fonseca, C.M., Emmerich, M.T.M.: A fast dimension-sweep algorithm for the hypervolume indicator in four dimensions. In: CCCG, pp. 77–82 (2012)

  12. Hupkens, I., Emmerich, M.: Logarithmic-time updates in SMS-EMOA and hypervolume-based archiving. In: EVOLVE - A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation IV, pp. 155–169. Springer (2013)

  13. Mostaghim, S., Branke, J., Schmeck, H.: Multi-objective particle swarm optimization on computer grids. In: Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, GECCO 2007, pp. 869–875. ACM, New York (2007)

  14. Hernández, V.A.S., Schütze, O., Emmerich, M.: Hypervolume maximization via set based Newton's method. In: Tantar, A.-A., Tantar, E., Sun, J.-Q., Zhang, W., Ding, Q., Schütze, O., Emmerich, M., Legrand, P., Del Moral, P., Coello Coello, C.A. (eds.) EVOLVE - A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation V. Advances in Intelligent Systems and Computing, vol. 288, pp. 15–28. Springer International Publishing (2014)

  15. Verhoef, W.: Interactive demo on multi-objective optimization. LIACS, Leiden University, NL. wilco.verhoef.nu/projects/moo. BSc project (2015)


Author information


Correspondence to Wilco Verhoef.


A Appendices

A.1 Manual of the Application

The application and its code are available online [15]. Using the application is straightforward and does not require any installation or configuration. It can be started in any modern browser. The application is shown in Fig. 6.

Fig. 6. A screenshot of the interactive multi-objective optimization application.

Fig. 7. A screenshot during convergence of the population with the MOGO algorithm on test problem 1.

  1. This is the sidebar with the interaction parameters.

     (a) Pressing the Printable button sets the background color to white. Pressing the Regular button restores the regular color scheme.

     (b) In the problem section you can choose a test problem.

     (c) In the initialization section you can select the population size. Pressing one of the buttons initializes the population with the selected size: the Initialize randomly button positions the particles at random in the decision space, and the Initialize uniformly button tries to position the particles uniformly in the decision space.

     (d) In the algorithm section you can choose the optimization algorithm by pressing Particle swarm optimization or Gradient based optimization. Deselecting Enable dominated set will enable the use of the whole population. Pressing Adaptive mutation makes the MOCOPS algorithm use the \(1/5^{th}\) success rule (a sketch of this rule is given after this list). The mutation rate of the MOCOPS algorithm can be adjusted with one slider; the step size of MOGO can be adjusted similarly with the other slider.

     (e) In the optimization section you can use the slider to select how many milliseconds of delay each iteration should have. A larger delay can make the dynamics of the algorithm clearer. The Start button starts the selected algorithm and the Stop button stops it again. The Benchmark button runs some benchmarks and outputs the statistics to the browser console; in most browsers the console is accessible by pressing F12.

     (f) The application automatically adapts to the window size. Full screen mode is available with F11.

  2. The objective functions of the chosen test problem are shown here.

  3. This part of the screen shows the decision space. Points of the efficient set of the population are displayed as red square markers. The particles which are dominated by the particles in the efficient set are displayed as green dots.

  4. This part of the screen shows the objective space. Points of the Pareto set of the population are displayed as red square markers. The particles which are dominated by the Pareto set are shown as green dots.
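Item (d) above refers to the \(1/5^{th}\) success rule for adaptive mutation. As a reminder of how this classical evolution-strategy heuristic works [1, 2], here is a minimal sketch in JavaScript, the language of the demo; the function name adaptStepSize, the update factor, and the batch-wise bookkeeping are illustrative assumptions and not taken from the application's source code.

```javascript
// Minimal sketch of the 1/5-th success rule for mutation step-size control.
// After a batch of mutations, the success rate (fraction of mutations that
// produced an improvement) is compared to 1/5: a higher rate suggests the
// step size is too small, a lower rate that it is too large.
function adaptStepSize(sigma, successes, trials, factor = 1.22) {
  const successRate = successes / trials;
  if (successRate > 1 / 5) {
    return sigma * factor;   // too many successes -> take larger steps
  } else if (successRate < 1 / 5) {
    return sigma / factor;   // too few successes -> take smaller steps
  }
  return sigma;              // exactly 1/5 -> keep the current step size
}

// Example: 3 successful mutations out of 20 trials -> decrease the step size.
let sigma = 0.1;
sigma = adaptStepSize(sigma, 3, 20); // approximately 0.082
```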

Fig. 8. A screenshot during convergence of the population with the MOGO algorithm on test problem 2.

Fig. 9. A screenshot with path tracing during the convergence of a population with the MOCOPS algorithm on test problem 1.

Fig. 10. A screenshot with path tracing during the convergence of a population with the MOCOPS algorithm on test problem 1 at a late stage.

Finally, we exhibit some example screenshots:

  • Figure 7 shows a large population that is in the process of converging towards the Pareto front, starting from a uniformly distributed sample. The MOGO algorithm is applied to Problem 1. Small green points are dominated and red square markers are non-dominated with respect to the other points in the population.

  • Figure 8 shows a large population that is in the process of converging towards the Pareto front, starting from a uniformly distributed sample. The MOGO algorithm is applied to Problem 2.

  • Figure 9 shows a small population of particles moved by the MOCOPS algorithm on Problem 1. The traces of the recent moves of the particles are also visualized. Note that points on the efficient set move sideways in order to find their optimal position with respect to diversity (hypervolume contribution; see the sketch after this list).

  • Figure 10 shows the same population as shown in Fig. 9 at a later stage of the convergence process.
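To complement the observation about sideways movement, the sketch below computes the exclusive hypervolume contribution of each point of a two-dimensional non-dominated front (again assuming minimization and a fixed reference point, and using the same sorting sweep as the sketch after the abstract). The name hypervolumeContributions2D is an illustrative choice and the code is not taken from the application.

```javascript
// Exclusive hypervolume contribution of every point of a non-dominated 2-D front.
// The contribution of point i is the rectangle bounded by its own objective values
// and the values of its left and right neighbours along the front (or the reference
// point at the ends).
function hypervolumeContributions2D(points, ref) {
  const order = points.map((_, i) => i).sort((a, b) => points[a][0] - points[b][0]);
  const contrib = new Array(points.length).fill(0);
  for (let k = 0; k < order.length; k++) {
    const i = order[k];
    const nextF1 = (k + 1 < order.length) ? points[order[k + 1]][0] : ref[0];
    const prevF2 = (k > 0) ? points[order[k - 1]][1] : ref[1];
    contrib[i] = (nextF1 - points[i][0]) * (prevF2 - points[i][1]);
  }
  return contrib;
}

// Example: the middle point of a crowded region has the smallest contribution.
console.log(hypervolumeContributions2D([[1, 5], [2, 4], [5, 1]], [10, 10])); // [5, 3, 15]
```

A point that sits close to its neighbours along the front has a small contribution; a sideways move into a gap increases it, which is the diversity-seeking behaviour visible in the traces of Figs. 9 and 10.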


Copyright information

© 2018 Springer International Publishing AG

About this paper


Cite this paper

Verhoef, W., Deutz, A.H., Emmerich, M.T.M. (2018). On Gradient-Based and Swarm-Based Algorithms for Set-Oriented Bicriteria Optimization. In: Tantar, AA., Tantar, E., Emmerich, M., Legrand, P., Alboaie, L., Luchian, H. (eds) EVOLVE - A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation VI. Advances in Intelligent Systems and Computing, vol 674. Springer, Cham. https://doi.org/10.1007/978-3-319-69710-9_2


  • DOI: https://doi.org/10.1007/978-3-319-69710-9_2


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-69708-6

  • Online ISBN: 978-3-319-69710-9

  • eBook Packages: Engineering (R0)
