ABSTRACT
Although estimation of distribution algorithms (EDAs) have seen success in problem optimization, most of them suffer from a sharp shrinkage of the covariance, which may lead to premature convergence. To alleviate this issue, this paper proposes a layered learning estimation of distribution algorithm (LLEDA) that maintains multiple probability distribution models. Specifically, LLEDA first separates the population into several layers based on fitness. Then, the mean position of each layer is computed. Subsequently, the estimated mean position of each lower layer learns from that of a randomly selected higher layer, so that the mean positions of the lower layers are drawn toward promising areas found in the current population. Finally, the covariance of each layer is estimated from the newly generated mean position and the individuals in that layer. In this way, multiple high-quality probability models are maintained and then used to sample promising and diversified offspring separately. Comparative experiments conducted on a widely used benchmark problem set demonstrate that the proposed LLEDA achieves competitive, and often much better, performance than several state-of-the-art and representative EDAs.
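The layered learning procedure described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' implementation: the number of layers, the uniform learning coefficient `r`, the maximum-likelihood covariance estimate, and the small diagonal regularizer are all assumptions made for the sketch.

```python
import numpy as np

def layered_sample(pop, fitness, n_layers=4, rng=None):
    """One sampling step of a layered-learning EDA (illustrative sketch).

    pop:     (n, d) array of individuals
    fitness: (n,) array; lower is better (minimization assumed)
    Returns an (n, d) array of sampled offspring.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape

    # 1) Separate the population into layers by fitness (best first).
    order = np.argsort(fitness)
    layers = np.array_split(pop[order], n_layers)

    # 2) Compute the mean position of each layer.
    means = [layer.mean(axis=0) for layer in layers]

    offspring = []
    for i, layer in enumerate(layers):
        mean = means[i]
        if i > 0:
            # 3) A lower layer's mean learns from a randomly
            #    selected higher (better) layer's mean.
            j = rng.integers(0, i)
            r = rng.random()  # learning step, assumed uniform in [0, 1]
            mean = mean + r * (means[j] - mean)

        # 4) Estimate this layer's covariance around the shifted mean,
        #    with a tiny diagonal term to keep it positive definite.
        diff = layer - mean
        cov = diff.T @ diff / len(layer) + 1e-12 * np.eye(d)

        # 5) Sample this layer's share of the offspring from its own model.
        offspring.append(rng.multivariate_normal(mean, cov, size=len(layer)))
    return np.vstack(offspring)
```

Because each layer keeps its own Gaussian model, the sampler preserves several search regions at once instead of collapsing the whole population onto a single shrinking covariance.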