A hybrid multi-objective evolutionary algorithm with feedback mechanism

Published in Applied Intelligence

Abstract

Exploration and exploitation are two cornerstones of multi-objective evolutionary algorithms (MOEAs). To balance exploration and exploitation, we propose an efficient hybrid MOEA (MOHGD) that integrates multiple search techniques with a feedback mechanism. The multiple techniques, namely harmony search, genetic operators, and differential evolution, improve search diversity, while a hybrid selection mechanism improves search efficiency by combining the advantages of static and adaptive selection schemes. Together, these techniques, driven by the hybrid selection strategy, effectively enhance the exploration ability of MOHGD. In addition, we propose a feedback strategy that transfers some non-dominated solutions from the external archive back to the parent population. This feedback strategy strengthens convergence toward Pareto-optimal solutions and improves the exploitation ability of MOHGD. MOHGD has been evaluated on benchmark problems against other state-of-the-art MOEAs in terms of convergence, spread, coverage, and convergence speed. Computational results show that MOHGD is competitive with or superior to the other MOEAs considered in this paper.


References

  1. Lu C, Gao L, Li X, Xiao S (2017) A hybrid multi-objective grey wolf optimizer for dynamic scheduling in a real-world welding industry. Eng Appl Artif Intell 57:61–79

  2. Lu C, Li XY, Gao L, Liao W, Yi J (2017) An effective multi-objective discrete virus optimization algorithm for flexible job-shop scheduling problem with controllable processing times. Comput Ind Eng 104:156–174. https://doi.org/10.1016/j.cie.2016.12.020

  3. Lu C, Gao L, Li XY, Wang Q, Liao W, Zhao QY (2017) An efficient multiobjective backtracking search algorithm for single machine scheduling with controllable processing times. Math Probl Eng. https://doi.org/10.1155/2017/8696985

  4. Li JQ, Wang JD, Pan QK, Duan PY, Sang HY, Gao KZ, Xue Y (2017) A hybrid artificial bee colony for optimizing a reverse logistics network system. Soft Comput 21(1):1–18

  5. Lu C, Gao L, Li X, Chen P (2016) Energy-efficient multi-pass turning operation using multi-objective backtracking search algorithm. J Clean Prod 137:1516–1531. https://doi.org/10.1016/j.jclepro.2016.07.029

  6. Li K, Kwong S, Zhang Q, Deb K (2015) Interrelationship-based selection for decomposition multiobjective optimization. IEEE Trans Cybern 45(10):2076–2088. https://doi.org/10.1109/TCYB.2014.2365354

  7. Li K, Kwong S, Wang R, Tang K-S, Man K-F (2013) Learning paradigm based on jumping genes: a general framework for enhancing exploration in evolutionary multiobjective optimization. Inf Sci 226:1–22. https://doi.org/10.1016/j.ins.2012.11.002

  8. Wang Y, Cai Z, Zhang Q (2012) Enhancing the search ability of differential evolution through orthogonal crossover. Inf Sci 185(1):153–177. https://doi.org/10.1016/j.ins.2011.09.001

  9. Chen B, Zeng W, Lin Y, Zhang D (2015) A new local search-based multiobjective optimization algorithm. IEEE Trans Evol Comput 19(1):50–73. https://doi.org/10.1109/TEVC.2014.2301794

  10. Nebro AJ, Durillo JJ, Luna F, Dorronsoro B, Alba E (2009) MOCell: a cellular genetic algorithm for multiobjective optimization. Int J Intell Syst 24(7):726–746. https://doi.org/10.1002/int.20358

  11. Dai XS, Yuan XF, Wu LH (2017) A novel harmony search algorithm with Gaussian mutation for multi-objective optimization. Soft Comput 21(6):1549–1567. https://doi.org/10.1007/s00500-015-1868-1

  12. Gao L, Li XY, Wen XY, Lu C, Wen F (2015) A hybrid algorithm based on a new neighborhood structure evaluation method for job shop scheduling problem. Comput Ind Eng 88:417–429. https://doi.org/10.1016/j.cie.2015.08.002

  13. Zhou A, Qu B-Y, Li H, Zhao S-Z, Suganthan PN, Zhang Q (2011) Multiobjective evolutionary algorithms: a survey of the state of the art. Swarm Evol Comput 1(1):32–49. https://doi.org/10.1016/j.swevo.2011.03.001

  14. Yuan Y, Xu H (2015) Multiobjective flexible job shop scheduling using memetic algorithms. IEEE Trans Autom Sci Eng 12(1):336–353. https://doi.org/10.1109/TASE.2013.2274517

  15. Ke L, Zhang Q, Battiti R (2014) Hybridization of decomposition and local search for multiobjective optimization. IEEE Trans Cybern 44(10):1808–1820. https://doi.org/10.1109/TCYB.2013.2295886

  16. Nebro AJ, Luna F, Alba E, Dorronsoro B, Durillo JJ, Beham A (2008) AbYSS: adapting scatter search to multiobjective optimization. IEEE Trans Evol Comput 12(4):439–457. https://doi.org/10.1109/TEVC.2007.913109

  17. Durillo JJ, Nebro AJ, Luna F, Alba E (2008) Solving three-objective optimization problems using a new hybrid cellular genetic algorithm. In: Parallel problem solving from nature – PPSN X. Springer, pp 661–670

  18. Wang Y, Li B (2010) Multi-strategy ensemble evolutionary algorithm for dynamic multi-objective optimization. Memetic Comput 2(1):3–24. https://doi.org/10.1007/s12293-009-0012-0

  19. Tran D-H, Cheng M-Y, Cao M-T (2015) Hybrid multiple objective artificial bee colony with differential evolution for the time–cost–quality tradeoff problem. Knowl-Based Syst 74:176–186. https://doi.org/10.1016/j.knosys.2014.11.018

  20. Sindhya K, Ruuska S, Haanpää T, Miettinen K (2011) A new hybrid mutation operator for multiobjective optimization with differential evolution. Soft Comput 15(10):2041–2055. https://doi.org/10.1007/s00500-011-0704-5

  21. Lu C, Xiao S, Li X, Gao L (2016) An effective multi-objective discrete grey wolf optimizer for a real-world scheduling problem in welding production. Adv Eng Softw 99:161–176

  22. Yang D, Jiao L, Gong M (2009) Adaptive multi-objective optimization based on nondominated solutions. Comput Intell 25(2):84–108. https://doi.org/10.1111/j.1467-8640.2009.00332.x

  23. Elhossini A, Areibi S, Dony R (2010) Strength Pareto particle swarm optimization and hybrid EA-PSO for multi-objective optimization. Evol Comput 18(1):127–156

  24. Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197. https://doi.org/10.1109/4235.996017

  25. Geem ZW, Kim JH, Loganathan GV (2001) A new heuristic optimization algorithm: harmony search. Simulation 76(2):60–68

  26. Holland JH (1992) Adaptation in natural and artificial systems. MIT Press, Cambridge

  27. Price K, Storn RM, Lampinen JA (2006) Differential evolution: a practical approach to global optimization. Springer Science & Business Media

  28. Tang L, Wang X (2013) A hybrid multiobjective evolutionary algorithm for multiobjective optimization problems. IEEE Trans Evol Comput 17(1):20–45. https://doi.org/10.1109/TEVC.2012.2185702

  29. Zitzler E, Deb K, Thiele L (2000) Comparison of multiobjective evolutionary algorithms: empirical results. Evol Comput 8(2):173–195

  30. Deb K, Thiele L, Laumanns M, Zitzler E (2005) Scalable test problems for evolutionary multiobjective optimization. Springer

  31. Huband S, Hingston P, Barone L, While L (2006) A review of multiobjective test problems and a scalable test problem toolkit. IEEE Trans Evol Comput 10(5):477–506. https://doi.org/10.1109/TEVC.2005.861417

  32. Srinivas N, Deb K (1994) Multi-objective function optimisation using non-dominated sorting genetic algorithm. Evol Comput 2(3):221–248

  33. Tanaka M, Watanabe H, Furukawa Y, Tanino T (1995) GA-based decision support system for multicriteria optimization. In: IEEE international conference on systems, man and cybernetics, 22–25 Oct 1995, pp 1556–1561. https://doi.org/10.1109/ICSMC.1995.537993

  34. Osyczka A, Kundu S (1995) A new method to solve generalized multicriteria optimization problems using the simple genetic algorithm. Struct Optim 10(2):94–99. https://doi.org/10.1007/bf01743536

  35. Nair AR, Lewis KE (2000) An efficient design strategy for solving MDO problems in non-cooperative environments. In: Proceedings of the 8th AIAA/USAF/NASA/ISSMO symposium on multidisciplinary analysis and optimization, pp 6–8

  36. Zhang Y, Zhang H, Lu C (2012) Study on parameter optimization design of drum brake based on hybrid cellular multiobjective genetic algorithm. Math Probl Eng 2012

  37. Zitzler E, Thiele L (1999) Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans Evol Comput 3(4):257–271. https://doi.org/10.1109/4235.797969

  38. Derrac J, García S, Molina D, Herrera F (2011) A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol Comput 1(1):3–18. https://doi.org/10.1016/j.swevo.2011.02.002

  39. Zitzler E, Laumanns M, Thiele L (2001) SPEA2: improving the strength Pareto evolutionary algorithm. Technical report, Institut für Technische Informatik und Kommunikationsnetze (TIK), ETH Zürich, Switzerland

  40. Zhang Q, Li H (2007) MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans Evol Comput 11(6):712–731. https://doi.org/10.1109/TEVC.2007.892759

  41. Li H, Zhang Q (2009) Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II. IEEE Trans Evol Comput 13(2):284–302. https://doi.org/10.1109/TEVC.2008.925798

  42. Zhao SZ, Suganthan PN, Zhang Q (2012) Decomposition-based multiobjective evolutionary algorithm with an ensemble of neighborhood sizes. IEEE Trans Evol Comput 16(3):442–446. https://doi.org/10.1109/TEVC.2011.2166159

  43. Li K, Fialho A, Kwong S, Zhang Q (2014) Adaptive operator selection with bandits for a multiobjective evolutionary algorithm based on decomposition. IEEE Trans Evol Comput 18(1):114–130. https://doi.org/10.1109/tevc.2013.2239648

  44. Chen B, Lin Y, Zeng W, Zhang D, Si Y-W (2015) Modified differential evolution algorithm using a new diversity maintenance strategy for multi-objective optimization problems. Appl Intell 43(1):49–73. https://doi.org/10.1007/s10489-014-0619-9

  45. Mirjalili S, Jangir P, Saremi S (2017) Multi-objective ant lion optimizer: a multi-objective optimization algorithm for solving engineering problems. Appl Intell 46(1):79–95. https://doi.org/10.1007/s10489-016-0825-8

  46. Zhang Q, Zhou A, Zhao S, Suganthan PN, Liu W, Tiwari S (2008) Multiobjective optimization test instances for the CEC 2009 special session and competition. Technical report 264, University of Essex, Colchester, UK and Nanyang Technological University, Singapore

  47. Yi J, Gao L, Li X, Gao J (2016) An efficient modified harmony search algorithm with intersect mutation operator and cellular local search for continuous function optimization problems. Appl Intell 44(3):725–753. https://doi.org/10.1007/s10489-015-0721-7

  48. Zeng B, Dong Y (2016) An improved harmony search based energy-efficient routing algorithm for wireless sensor networks. Appl Soft Comput 41:135–147

  49. Lu C, Gao L, Li XY, Pan QK, Wang Q (2017) Energy-efficient permutation flow shop scheduling problem using a hybrid multi-objective backtracking search algorithm. J Clean Prod 144:228–238. https://doi.org/10.1016/j.jclepro.2017.01.011

  50. Patrascu M, Stancu AF, Pop F (2014) HELGA: a heterogeneous encoding lifelike genetic algorithm for population evolution modeling and simulation. Soft Comput 18(12):2565–2576

  51. Zhou Y, Li X, Gao L (2013) A differential evolution algorithm with intersect mutation operator. Appl Soft Comput 13(1):390–401


Funding

This work was supported by the Fundamental Research Funds for the Central Universities, China University of Geosciences (Wuhan) (CUG170688), the National Natural Science Foundation of China (NSFC) under Grants 51775216 and 51435009, and the Program for the HUST Academic Frontier Youth Team.

Author information


Corresponding author

Correspondence to Chao Lu.

Ethics declarations

Conflict of interests

No conflict of interest exists in the submission of this manuscript, and the manuscript has been approved by all authors for publication. On behalf of my co-authors, I declare that the work described is original research that has not been published previously and is not under consideration for publication elsewhere, in whole or in part.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Informed Consent

Informed consent was obtained from all individual participants included in the study.

Appendix

The three operators used in this paper are described as follows.

1) HS technique

HS is inspired by the musical improvisation process, in which players adjust the pitches of their instruments in search of a pleasing harmony. The procedure of the HS algorithm is described as follows [25, 47, 48].

  • Step 1. Initialize the parameters. The parameters used in HS are: harmony memory size (HMS), harmony memory considering rate (HMCR), pitch adjusting rate (PAR), bandwidth (BW), and the number of improvisations (NI).

  • Step 2. Initialize the harmony memory. Generate HMS random solution vectors \(\left\{ \mathbf{x}^{1},\mathbf{x}^{2},\ldots ,\mathbf{x}^{HMS} \right\}\). The harmony memory (HM) is an archive that stores all current solutions. The HM matrix is defined as follows.

    $$ HM=\left[ {\begin{array}{ccc} x_{1}^{1} & \cdots & x_{n}^{1}\\ \vdots & \ddots & \vdots \\ x_{1}^{HMS} & \cdots & x_{n}^{HMS} \end{array}} \right] $$
    (6)
  • Step 3. Improvisation. A new harmony vector \(\mathbf{x}^{t}=\left( x_{1}^{t},x_{2}^{t},\ldots ,x_{n}^{t} \right)\) is generated based on three rules: (a) harmony memory consideration, (b) pitch adjustment, and (c) random selection. Creating a new harmony is called improvisation.

In memory consideration, the value of variable \(x_{i}^{t}\) is selected from \(\left\{ x_{i}^{1},x_{i}^{2},\ldots ,x_{i}^{HMS} \right\}\) with probability HMCR (HMCR ∈ [0, 1]). In random selection, \(x_{i}^{t}\) can take any feasible value, not limited to those stored in HM, with probability (1 − HMCR). The rule can be written as follows.

$$ x_{i}^{t}=\left\{ {\begin{array}{ll} x_{i}^{t}\in \left\{ x_{i}^{1},x_{i}^{2},\ldots ,x_{i}^{HMS} \right\} & \text{if } rand \le HMCR \\ x_{i}^{t}\in X_{i} & \text{otherwise} \end{array}} \right. $$
(7)

where rand is a uniform random number in the range between 0 and 1.

Thereafter, each pitch obtained by memory consideration is adjusted by the pitch adjustment rule with rate PAR, as follows.

$$ x_{i}^{t}=\left\{ {\begin{array}{ll} x_{i}^{t}\pm rand\times BW & \text{if } rand \le PAR \\ x_{i}^{t} & \text{otherwise} \end{array}} \right. $$
(8)

where BW is the arbitrary distance bandwidth.

HS has several advantages: (1) it can handle discrete as well as continuous problems; (2) it is easy to combine HS with other algorithms to construct new algorithms with better performance; (3) HS avoids the limitation of GA's building-block assumption. However, premature convergence is likely to occur in HS.
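The improvisation step (Eqs. 7 and 8) can be sketched in Python as follows. This is an illustrative sketch under our own naming and bound-handling assumptions, not the paper's implementation.

```python
import random

def improvise(hm, hmcr, par, bw, lower, upper):
    """One HS improvisation: build a new harmony x^t from harmony memory hm.

    hm: list of HMS solution vectors; lower/upper: per-variable bounds.
    Illustrative sketch; names and clamping are our assumptions.
    """
    n = len(hm[0])
    new = []
    for i in range(n):
        if random.random() <= hmcr:
            # memory consideration: pick x_i from {x_i^1, ..., x_i^HMS}  (Eq. 7)
            xi = random.choice(hm)[i]
            # pitch adjustment with probability PAR                      (Eq. 8)
            if random.random() <= par:
                xi += random.uniform(-1.0, 1.0) * bw
        else:
            # random selection from the feasible range X_i
            xi = random.uniform(lower[i], upper[i])
        # keep the variable inside its feasible range
        new.append(min(max(xi, lower[i]), upper[i]))
    return new
```

In a full HS run, this improvisation would be repeated NI times, replacing the worst harmony in HM whenever the new vector is better.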

2) GA technique

The GA is one of the most well-known and commonly used evolutionary algorithms (EAs). It was developed in the early 1970s by John Holland [26]. The basic GA is very generic and mainly includes three steps: selection, crossover, and mutation. In general, GA works as follows. First, GA creates a random initial population and evaluates each individual by calculating its fitness value. Then two parents are selected from the current population based on fitness; that is, individuals with higher fitness are more likely to be selected as parents. Offspring are produced from the parents by recombination and mutation. Finally, the current population is replaced by the offspring according to fitness in every iteration. GA runs until a stopping condition is met. The pseudocode of GA is given in Algorithm 1.

(Algorithm 1: pseudocode of GA; figure not reproduced)

The merits of GA are as follows: (1) GA has good global search ability; (2) it is easy to extend GA and merge it with other algorithms. However, it also has limitations; for example, it easily falls into local optima [49, 50].
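The GA workflow described above can be sketched as follows. The specific operator choices here (tournament selection, blend crossover, uniform mutation, elitist replacement) are illustrative assumptions, not necessarily those used in MOHGD.

```python
import random

def ga(fitness, bounds, pop_size=20, generations=50, pc=0.9, pm=0.1):
    """Minimal real-coded GA (maximization) following the steps in the text:
    init -> fitness-based selection -> crossover -> mutation -> replacement.
    Illustrative sketch; operator choices and defaults are our assumptions."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)

        def tournament():
            # binary tournament: fitter individuals are selected more often
            a, b = random.sample(scored, 2)
            return a if fitness(a) >= fitness(b) else b

        children = []
        while len(children) < pop_size:
            p1, p2 = tournament(), tournament()
            if random.random() < pc:
                # blend crossover: child is the midpoint of the parents
                child = [(a + b) / 2 for a, b in zip(p1, p2)]
            else:
                child = p1[:]
            for i, (lo, hi) in enumerate(bounds):
                # uniform mutation with probability pm per gene
                if random.random() < pm:
                    child[i] = random.uniform(lo, hi)
            children.append(child)
        # elitist replacement: carry the best individual into the next generation
        pop = [scored[0]] + children[:pop_size - 1]
    return max(pop, key=fitness)
```

For example, maximizing \(-(x-0.5)^2\) over [0, 1] should return a point near 0.5.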

3) DE technique

DE, proposed by Storn and Price, is one of the most popular EAs of recent years [27]. Its four main steps are initialization, mutation, recombination, and selection. In DE, a new solution is created by adding the weighted difference of two parent vectors to a third parent. In the current generation G, each candidate solution \(\mathbf{x}_{i,G}\), \(i = 1,\ldots,N\) (N is the population size), is used to generate a new trial solution \(\mathbf{u}_{i,G}\) by the update formula in Algorithm 2. Although many strategies have been proposed, this paper uses the DE/rand/1/bin strategy to update the population. In this pseudocode, CR controls the crossover operation and F is the scaling factor; both are constants. After these two operators are applied, the new solution \(\mathbf{u}_{i,G}\) is compared with the old vector \(\mathbf{x}_{i,G}\), and the latter is replaced by the former if the former has a better objective value.

(Algorithm 2: pseudocode of DE/rand/1/bin; figure not reproduced)

The main advantages of DE are as follows: (1) DE has few control parameters, namely N, F, and CR; (2) it is competitive with other EAs; (3) it can effectively solve high-dimensional complex optimization problems. However, DE has several drawbacks, including unstable convergence in the late stage of the search [51].
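The DE/rand/1/bin update described above can be sketched as follows. Parameter defaults and the bound-clamping are illustrative assumptions; the greedy one-to-one selection and the mutation formula \( \mathbf{v} = \mathbf{x}_{r1} + F(\mathbf{x}_{r2} - \mathbf{x}_{r3}) \) follow the text.

```python
import random

def de_rand_1_bin(obj, bounds, np_=20, f=0.5, cr=0.9, gens=100):
    """DE/rand/1/bin for minimization: mutation, binomial crossover,
    and greedy selection. Illustrative sketch with assumed defaults."""
    d = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    for _ in range(gens):
        for i in range(np_):
            # pick three distinct parents, all different from the target i
            r1, r2, r3 = random.sample([j for j in range(np_) if j != i], 3)
            jrand = random.randrange(d)  # guarantees at least one mutated gene
            u = []
            for j in range(d):
                if random.random() < cr or j == jrand:
                    # mutation: weighted difference added to a third parent
                    vj = pop[r1][j] + f * (pop[r2][j] - pop[r3][j])
                else:
                    vj = pop[i][j]
                lo, hi = bounds[j]
                u.append(min(max(vj, lo), hi))
            # greedy one-to-one selection: keep the better of u and x_i
            if obj(u) <= obj(pop[i]):
                pop[i] = u
    return min(pop, key=obj)
```

On the 2-D sphere function, this sketch converges close to the origin within a hundred generations.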


Cite this article

Lu, C., Gao, L., Li, X. et al. A hybrid multi-objective evolutionary algorithm with feedback mechanism. Appl Intell 48, 4149–4173 (2018). https://doi.org/10.1007/s10489-018-1211-5
