Elsevier

Information Sciences

Volume 547, 8 February 2021, Pages 136-162
GGA: A modified genetic algorithm with gradient-based local search for solving constrained optimization problems

https://doi.org/10.1016/j.ins.2020.08.040

Abstract

In the last few decades, genetic algorithms (GAs) have proven to be an effective approach for solving real-world optimization problems. However, it is known that, in the presence of a huge solution space with many local optima, GAs cannot guarantee global optimality. In this work, to make GAs more effective at finding the global optimal solution, we propose a hybrid GA that combines the classical genetic mechanisms with the gradient-descent (GD) technique for local search and constraint management. The basic idea is to exploit the GD capability of finding local optima to refine the exploration of the search space and to place individuals in areas more favorable to convergence. This gives GAs the capability of escaping from the discovered local optima and progressively moving towards the global solution. Experimental results on a set of test problems from well-known benchmarks show that our proposal is competitive with other, more complex and notable approaches in terms of solution precision as well as the reduced number of individuals and generations required.

Introduction

Optimization has received ever-growing interest in recent years, and many novel optimization approaches and techniques are continuously developed and used to solve real-world problems. Optimization essentially aims at finding the best solution within a huge constrained solution space (search space). More generally, it deals with the study of decision-making problems in which one or more objective functions need to be minimized or maximized simultaneously under given constraints. In real life we encounter many examples of optimization problems (OPs) across several fields of science, such as engineering, economics, manufacturing, marketing, finance, transportation, and communication networks. Problems in these fields are characterized by restrictions coming from technical limitations, business plans, human needs, laws, and so forth. Therefore, a solution of an OP should choose the decision alternative that satisfies all the constraints while optimizing the involved objectives. Two main optimization options exist, referred to as exact and heuristic methods. The former guarantees finding the best solution to the problem (absolute optimum), while the latter tries to achieve a satisfactory near-optimal result with a significantly reduced computational effort. Clearly, the exact method is desirable, but when the problem dimensions grow, the computational complexity of the adopted solution technique (usually measured in terms of execution time and memory space) becomes an essential concern. In these cases, heuristic approaches are used, and even though they cannot guarantee convergence to an optimal solution, they often perform well in practice. Moreover, heuristic approaches are also used when exact ones fail to find a solution. Evolutionary algorithms (EAs) are popular examples of heuristics. One of the key aspects of EAs is that they are multi-point search-based, and hence intrinsically parallel.
This allows them to explore the solution space in multiple directions at once, and hence to build a progressively more global view of the search space. Today, they are successfully used for problem-solving in many fields, and in the last few decades GAs, differential evolution (DE), and particle swarm optimization (PSO) have demonstrated their effectiveness in solving many complex real-world optimization problems. Most of these problems are characterized by a huge solution space with many local optima. In such a situation, EAs may become trapped in some local optimum and fail to reach global optimality (the premature convergence problem). Especially for GAs, this may occur if the search process requires many generations, which may lead to low diversity in the population if the exploration step is not properly designed. Indeed, as the number of iterations increases, the number of individuals approaching the optimum increases, and the crossover operator is thus forced to choose among similar individuals. Furthermore, since EAs are a non-deterministic class of algorithms, the solution may vary at each new run, depending also on the initial population. Possible approaches to address this issue include running the EA many times and/or increasing the number of individuals in the population at the expense of time and space. To increase the chances of finding an optimal solution, local search algorithms may be combined with different optimization approaches, taking advantage of each approach's strengths while minimizing its weaknesses. Such combinations are known as hybrid methods, and when they involve EAs they are also called memetic algorithms [31]. The large number of approaches available for solving OPs confronts researchers with the problem of choosing the best EA-based algorithm.
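To make the genetic mechanisms discussed above concrete, the following is a minimal, generic real-coded GA sketch for a box-bounded minimization problem. All operator choices and parameter values here (truncation selection, blend crossover, uniform mutation, population size, and so on) are illustrative textbook assumptions, not the implementation proposed in this paper:

```python
import random

def sphere(x):
    # Classic unconstrained test function: global minimum 0 at the origin.
    return sum(v * v for v in x)

def ga_minimize(f, dim=2, bounds=(-5.0, 5.0), pop_size=30,
                generations=200, mutation_rate=0.1, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    # Random initial population of real-valued individuals.
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)                   # rank by fitness (lower is better)
        elite = pop[: pop_size // 2]      # truncation selection with elitism
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            # Arithmetic (blend) crossover: child is a random mix of parents.
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            # Uniform mutation: occasionally reset a gene within the bounds.
            for i in range(dim):
                if rng.random() < mutation_rate:
                    child[i] = rng.uniform(lo, hi)
            children.append(child)
        pop = elite + children
    return min(pop, key=f)

best = ga_minimize(sphere)
print(best, sphere(best))
```

On a smooth unimodal function like the sphere this converges quickly; the premature-convergence issue described above appears when the same loop faces many local optima and the elite pool becomes nearly homogeneous.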
It should be noted that even though DE achieves good performance in searching for the global optimum, it is slow and its parameters are problem-dependent [33]. Besides, it is not completely free from the premature convergence and stagnation problems, and its performance decreases as the dimensionality of the problem increases. Many notable solutions have been proposed to improve the basic DE approach. Such solutions are often complex, which is not necessarily the best way to design a good algorithm. Rather, a simple local search algorithm, if properly designed, is often the best option for outperforming more complex approaches [12]. Another important way of approaching memetic design is fitness landscape analysis [18]. However, this approach relies on studying the nature of the problem before deciding on the right approach to use.

The main aim of our work is to provide a solution as independent as possible of the nature of the problem, while ensuring solution precision as well as a reduced number of individuals and generations. GA-based algorithms seem to be the best candidates for achieving these results. Indeed, their notable strength of not needing any prior knowledge about the problem to be solved makes them very attractive. Also, they implement all the basic genetic operators, such as selection, crossover, and mutation, ensuring complete and granular control over the effects of GA-based solutions in solving OPs. On the other hand, as is known, PSO differs significantly from GA in that it does not provide genetic operators such as crossover and mutation.

In order to make GAs more effective and efficient in finding solutions for constrained and unconstrained Single Objective Optimization Problems (SOOPs), we propose a hybrid approach that combines GAs with two gradient descent (GD)-based algorithms and uses variants of both the crossover and mutation GA operators. The resulting solution is an algorithm called GGA (Gradient-based Genetic Algorithm), which gives GAs the capability of finding the optimal solution with a reduced number of generations and fewer individuals. The basic idea is to exploit the capabilities of GD for refining local solutions and to use them as more favorable starting points in the GA. As a result, the GA gains the ability to escape from the discovered local optima and progressively moves towards the global solution. The GA is used to diversify the search in order to find promising new search areas (exploration), while a GD-based algorithm (named GDC, Gradient Descent with Constraints) is employed to examine a given area more deeply (exploitation). This guarantees a balance between exploitation and exploration of the given search space. To deal with constraints, an algorithm called RGDA (Resultant Gradient Descent Algorithm) is provided as an alternative to popular solutions such as the penalty-based approach [47] or the augmented Lagrangian method [8]. Also, the effects of these proposals are reinforced by two new crossover and mutation GA operators, named SPC (Sliding Point Crossover) and BGM (Bounded Genes Mutation), respectively. Fig. 1 depicts a block diagram of the proposed approach.
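The exploitation stage described above, using gradient descent to refine a candidate before handing it back to the GA, can be sketched generically as follows. The finite-difference gradient, step sizes, and the quadratic exterior penalty used here for the constraint are illustrative stand-ins for demonstration only; they correspond to the "popular" penalty-based approach [47] mentioned above, not to the paper's GDC and RGDA components:

```python
def numerical_grad(f, x, h=1e-6):
    # Central-difference gradient estimate (no analytic gradient needed).
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def gd_refine(f, x, steps=100, lr=0.01):
    # Plain gradient descent: the local-search (exploitation) stage that
    # would polish an individual produced by the GA's exploration stage.
    for _ in range(steps):
        g = numerical_grad(f, x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

def penalized(f, constraints, rho=100.0):
    # Quadratic exterior penalty for inequality constraints g_j(x) <= 0:
    # infeasible points pay a cost proportional to the squared violation.
    def pf(x):
        return f(x) + rho * sum(max(0.0, g(x)) ** 2 for g in constraints)
    return pf

# Toy example: minimize (x - 2)^2 subject to x >= 3, i.e. 3 - x <= 0.
f = lambda x: (x[0] - 2.0) ** 2
g = lambda x: 3.0 - x[0]
x_star = gd_refine(penalized(f, [g]), [0.0], steps=500, lr=0.005)
print(x_star)  # close to the constrained optimum x = 3
```

Note that the exterior penalty only approaches the constraint boundary as the penalty weight grows, one of the known drawbacks that motivates alternatives such as the RGDA proposed here.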

The remainder of the paper is organized as follows. Section 2 presents the related works. Preliminaries are given in Section 3, whereas the proposed algorithms are described in detail in Section 4. Experimental results and their comparison with other approaches available in the literature are the subject of Section 5. Finally, conclusions and future research directions are reported in Section 6.

Section snippets

Related works

Many evolutionary optimization solutions have been proposed in the literature, and the most interesting ones are based on combinations of local and global optimization approaches (memetic algorithms). Although these techniques have been shown to achieve excellent results when applied to small-sized problems, they still encounter serious challenges when applied to large-scale problems. The need for developing more effective and efficient search algorithms able to better explore this huge

Preliminary knowledge

In this section, we briefly review the SOOP basics, together with the heuristics used in the proposed algorithms.

Hybridization of GA with GDC and RGDA

The aim of the proposed approach is to provide an algorithm, named GGA, capable of increasing the chance of finding the global optimal solution for SOOPs using few individuals and a reduced number of generations. In the following, only the minimization of SOOPs is considered. For this purpose, a good balance between exploitation and exploration is required. This is guaranteed by the combination of the GA with two GD-based algorithms, and by variants of both the crossover and mutation GA operators.

Experiments and results

In order to demonstrate the validity of the proposed algorithm, several experiments were carried out. First, to show the capability of GGA to solve different OPs using a reduced number of generations and individuals, we compared GGA with two algorithms widely used in the MATLAB® environment. Next, we tested the effectiveness of GGA, and more precisely of RGDA, in solving constrained OPs using the test functions from the CEC 2006 competition [17]. Also, to analyze the behavior of GGA in

Conclusions and future works

An effective and efficient hybrid GA variant for solving both unconstrained and constrained SOOPs has been presented in this work. The proposed algorithm, named GGA, takes advantage of the GD ability to find local optima. These local solutions are effectively exploited by GGA to address the GA's key task of finding a global solution using a reduced number of generations with few individuals. This has been accomplished by adding to the classical GA mechanisms two novel GD-based

CRediT authorship contribution statement

Gianni D’Angelo: Conceptualization, Software, Data curation, Writing - original draft, Visualization, Investigation, Validation, Writing - review & editing. Francesco Palmieri: Conceptualization, Methodology, Supervision, Writing - review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References (50)

  • H.-F. Wang et al., Hybrid genetic algorithm for optimization problems with permutation property, Comput. Oper. Res. (2004)
  • Q. Yuan, A hybrid genetic algorithm with the Baldwin effect, Inf. Sci. (2010)
  • N. Awad, M. Ali, J. Liang, B. Qu, P. Suganthan, Problem definitions and evaluation criteria for the CEC 2017 special...
  • S. Blum et al., Adaptive mutation strategies for evolutionary algorithms
  • J. Brest et al., Self-adaptive differential evolution algorithm in constrained real-parameter optimization
  • N. Christu et al., Cost minimization of welded beam design problem using PSO, SA, PS, Godlike, Cuckoo, FF, FP, ALO, GSA and MVO, Int. J. Mech. Eng. Technol. (2018)
  • C.A.C. Coello et al., Constraint-handling in genetic algorithms through the use of dominance-based tournament selection, Adv. Eng. Inform. (2002)
  • K. Deb, Optimal design of a welded beam via genetic algorithms, AIAA J. (1991)
  • K. Deb et al., A genetic algorithm based augmented Lagrangian method for constrained optimization, Comput. Optim. Appl. (2012)
  • T. El-Mihoub et al., Hybrid genetic algorithms: a review, Eng. Lett. (2006)
  • P. Gancarski et al., Darwinian, Lamarckian, and Baldwinian (co)evolutionary approaches for feature weighting in k-means-based algorithms, IEEE Trans. Evol. Comput. (2008)
  • G.E. Hinton et al., How learning can guide evolution
  • M. Jamil et al., A literature survey of benchmark functions for global optimization problems, Int. J. Math. Model. Numer. Optim. (2013)
  • P. Kora et al., Crossover operators in genetic algorithms: a review, Int. J. Comput. Appl. (2017)
  • J. Liang, T. Runarsson, E. Mezura-Montes, M. Clerc, P. Suganthan, A.C. Coello, K. Deb, Problem definitions and...
Gianni D’Angelo is a research fellow and contract Professor at the University of Salerno, Italy. He received the Italian “Laurea” degree (cum laude) in computer engineering and a PhD in computer science from the University of Sannio and the University of Salerno, Italy, respectively. His research interests concern the development of soft computing algorithms for HPC and parallel computing for knowledge discovery in big data contexts. He has gained experience in pattern recognition, decision-making, deep learning, evolutionary algorithms, neural networks, fuzzy logic, ANFIS systems, genetic algorithms, and parallel programming applied in various scientific and industrial fields. He is also engaged in research and development of Web and mobile applications involving Internet technologies and embedded devices. He is the author of numerous articles published in international journals, books, and conferences, and currently serves as a reviewer, editorial board member, and guest editor for several international journals.

Francesco Palmieri is a full professor at the University of Salerno, Italy. He received an Italian M.S. “Laurea” degree and a PhD in computer science from the same university. His major research interests concern high performance networking protocols and architectures, routing algorithms, and network security. Previously, he was an assistant professor at the Second University of Naples and the Director of the telecommunication and networking division of the Federico II University in Naples, Italy. At the start of his career, he also worked for several international companies on networking-related projects. He has been closely involved with the development of the Internet in Italy as a senior member of the Technical-Scientific Advisory Committee and of the CSIRT of the Italian NREN GARR. He has published more than 200 papers in leading technical journals, books, and conferences, and currently serves as the editor-in-chief of an international journal and as an editorial board member or associate editor of many other well-reputed ones.
