
Multi-granularity competition-cooperation optimization algorithm with adaptive parameter configuration

Published in: Applied Intelligence

Abstract

Intelligent optimization algorithms have the advantage of producing feasible solutions to complex real-world problems in polynomial time. Their performance depends on both strategy design and parameter configuration, and parameter configuration is the key to achieving computational intelligence and self-organization. In practice, however, parameter configuration relies on the designer’s experience, and the tuning process consumes substantial resources. To address this issue, this paper proposes a multi-granularity competition-cooperation optimization algorithm with adaptive parameter configuration (MGAP). First, the particles are partitioned into groups according to the fitness of their solutions. Second, an inter-group competition-cooperation relationship network is established; guided by this network, each particle performs adaptive inter-group and intra-group learning and updates its position. Reinforcement learning is then introduced to train the learning rules and learning parameters of this adaptive learning, promoting autonomous particle evolution. We conduct comparative experiments against six algorithms on test functions of 10-100 dimensions and then present application cases. The comparison results on the optimization functions show that the proposed algorithm is computationally effective and efficient on large-scale optimization problems. Further experiments on a complex real-world problem involving 270,000 campus consumption records verify the collaborative optimization and collaborative learning performance of our algorithm.
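The abstract only outlines MGAP at a high level, so the toy sketch below should be read as an illustration of that outline rather than the authors' implementation: particles are grouped by fitness, every group learns from a better neighbouring group (the competition-cooperation step), and a bandit-style Q-learning update adapts a learning-rate parameter from the observed improvement. The objective function, group topology, reward definition, and all constants here are assumptions.

```python
import random

def sphere(x):
    """Toy objective to minimize (stand-in for the paper's benchmark functions)."""
    return sum(v * v for v in x)

def group_by_fitness(swarm, n_groups):
    """Sort particles by fitness and split them into equal-size groups."""
    ranked = sorted(swarm, key=sphere)
    size = len(ranked) // n_groups
    return [ranked[i * size:(i + 1) * size] for i in range(n_groups)]

def competition_cooperation(groups, lr):
    """Each particle moves toward the best particle of a better
    neighbouring group (the best group learns from its own leader)."""
    for gi, group in enumerate(groups):
        exemplar = groups[max(gi - 1, 0)][0]   # leader of the better neighbour
        for particle in group:
            for d in range(len(particle)):
                particle[d] += lr * random.random() * (exemplar[d] - particle[d])

def run_mgap_sketch(dim=5, n_particles=20, n_groups=4, iters=50, seed=0):
    random.seed(seed)
    swarm = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    q = {0.2: 0.0, 0.5: 0.0, 0.9: 0.0}         # Q-values for candidate learning rates
    eps, alpha = 0.2, 0.3                       # epsilon-greedy rate, Q update step
    best = min(sphere(p) for p in swarm)
    for _ in range(iters):
        # epsilon-greedy choice of the learning-rate parameter
        lr = (random.choice(list(q)) if random.random() < eps
              else max(q, key=q.get))
        groups = group_by_fitness(swarm, n_groups)
        competition_cooperation(groups, lr)     # mutates the swarm in place
        new_best = min(sphere(p) for p in swarm)
        q[lr] += alpha * ((best - new_best) - q[lr])  # improvement as reward
        best = min(best, new_best)
    return best
```

The design choice being illustrated is the separation of concerns the abstract describes: the grouping and competition-cooperation steps drive the search, while the reinforcement-learning loop adjusts a configuration parameter from feedback instead of relying on hand tuning.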



Acknowledgements

This work was supported in part by the Key Program of the National Natural Science Foundation of China under Grant No. 62136003, the National Natural Science Foundation of China under Grant Nos. 61772200 and 61772201, Shanghai Pujiang Talent Program under Grant No. 17PJ1401900, Shanghai Economic and Information Commission “Special Fund for Information Development” under Grant No. XX-XXFZ-02-20-2463.

Author information

Corresponding author

Correspondence to Xiang Feng.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Gao, M., Feng, X., Yu, H. et al. Multi-granularity competition-cooperation optimization algorithm with adaptive parameter configuration. Appl Intell 52, 13132–13161 (2022). https://doi.org/10.1007/s10489-021-02952-9

