A revised particle swarm optimization based discrete Lagrange multipliers method for nonlinear programming problems

https://doi.org/10.1016/j.cor.2010.11.007

Abstract

In this paper, a new algorithm for solving constrained nonlinear programming problems is presented. Our proposed algorithm builds on the necessary and sufficient conditions for a discrete constrained local optimum established in the discrete Lagrange multipliers theory. We adopt a revised particle swarm optimization algorithm and extend it toward solving nonlinear programming problems with continuous decision variables. To measure the merits of our algorithm, we conduct numerical experiments on several well-known benchmark problems and compare the outcomes against the best results reported in the literature. The empirical assessments demonstrate that our algorithm is efficient and robust.

Introduction

One of the branches of operations research that has attracted substantial attention over the years, owing to its wide applications in management science, engineering optimization, and economics, is nonlinear programming (NLP). In the realm of NLP, a general constrained nonlinear programming problem is formulated as follows:

$$
\begin{aligned}
\text{Minimize} \quad & f(x) \\
\text{s.t.} \quad & h_i(x) = 0, \quad i = 1, \ldots, m_1 \\
& g_j(x) \leq 0, \quad j = 1, \ldots, m_2 \\
& x \in X \subseteq \mathbb{R}^n
\end{aligned} \tag{1}
$$

where $x = [x_1, x_2, \ldots, x_n]^T$ is a vector of $n$ decision variables, $f(x)$ is the objective function, $h_i(x)$ for $i = 1, \ldots, m_1$ are the equality constraints, $g_j(x)$ for $j = 1, \ldots, m_2$ are the inequality constraints, and $X$ is a closed continuous set. Each of the functions $f(x)$, $h_i(x)$, and $g_j(x)$ may be linear or nonlinear. The rich literature in this area offers numerous methods for solving nonlinear programming problems, among them feasible direction methods [1], the gradient projection method [2], the reduced gradient method [3], and penalty function methods such as the augmented Lagrangian penalty function [4]. In addition, within the past decade, a number of novel methods have been proposed, such as the discrete Lagrange multipliers method for NLP [5], [6].

As a category of stochastic optimization methods, evolutionary computation techniques have attracted substantial interest from scholars and practitioners seeking more efficient and robust computational procedures for solving complex optimization problems. Genetic algorithms are among the most well-known evolutionary algorithms. As a probabilistic search method, a genetic algorithm is designed to behave in accordance with the mechanism of natural evolution and selection. A genetic algorithm can evaluate numerous points in the search space simultaneously, so it stands a better chance of obtaining the global solution for the type of problems discussed here. Genetic algorithms enjoy the added advantage of ease of implementation, since they use only a scalar fitness measure, and not derivative information, in the search process.

Another evolutionary algorithm, particle swarm optimization (PSO), was introduced by Eberhart and Kennedy in 1995 [7]; it is based on individual entities called particles. Imitating the social behavior of animals has been the key idea in developing the PSO algorithm. The PSO solution process requires particles to pass (fly) through the search space as their positions and velocities are constantly updated with regard to the best performance of all particles as well as the best performance of each particle and its neighbors. PSO has some interesting properties to offer. For one thing, PSO has memory in the sense that particles retain good solutions (a feature similar to elitism in genetic algorithms). In addition, some important topologies of PSO enjoy the property of sharing information between all the particles (for instance, the global star topology).
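To make this update mechanism concrete, the following is a minimal sketch of a global-best (star topology) PSO in Python. The function name, parameter values, and the sphere objective used in the usage line are illustrative assumptions of ours, not the configuration used in this paper.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal global-best PSO sketch (illustrative parameter values)."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))                   # velocities
    pbest = x.copy()                                   # each particle's best position (the "memory")
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()             # global best (star topology shares it with all)
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # velocity update: inertia + cognitive (own best) + social (swarm best) terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, f(g)

# usage: minimize the sphere function in 5 dimensions
best_x, best_val = pso_minimize(lambda z: float(np.sum(z**2)), dim=5)
```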

General quadratic penalty approaches are the constraint handling methods most commonly employed by evolutionary algorithms. Static penalty methods impose large fixed penalties and convert (1) into a single unconstrained optimization problem [4], while dynamic penalty methods increase the penalties gradually in order to guard against the problems associated with static penalties. Dynamic methods convert (1) into a sequence of unconstrained sub-problems, which converge only when all of these sub-problems are solved optimally [4]. This means that if even a single unconstrained sub-problem is not solved optimally, there is no assurance of reaching a constrained global minimum. Parsopoulos and Vrahatis [8] came up with a penalty framework for solving constrained optimization problems that employs a particle swarm paradigm (PSP) to solve the resulting unconstrained sub-problems. Having introduced the notion of co-evolution, Coello [9] developed a penalty based algorithm that uses a co-evolution model to adapt the penalty factors of the objective function within a genetic algorithm framework. Akin to the work of Coello [9], He and Wang [10] developed a co-evolutionary particle swarm optimization in which penalty factors and decision variables are optimized in a definite sequence by PSO.
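As a point of reference, a quadratic penalty reformulation of (1) typically takes the following form, where $\mu_k > 0$ is the penalty factor (held fixed in the static variant and increased across sub-problems in the dynamic variant); the exact penalty form used in the cited works may differ.

$$
P(x; \mu_k) = f(x) + \mu_k \left( \sum_{i=1}^{m_1} h_i(x)^2 + \sum_{j=1}^{m_2} \max\{0, g_j(x)\}^2 \right)
$$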

In spite of their popularity, quadratic penalty methods may encounter the difficulty of ill-conditioning: the convergence behavior is sensitive to the penalty factors, which can lead to premature termination or a slow rate of progress [4]. To remedy this drawback, Eberhard and Sedlaczek [11] developed a particle swarm algorithm based on the augmented Lagrange penalty function, which enjoys the benefit of smoothness. Their algorithm implements a dynamic penalty adaptation approach, leading to better quality solutions than the general quadratic penalty methods.
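For equality constraints, the augmented Lagrange penalty function referred to here is commonly written as below (inequality constraints admit analogous smooth treatments; the precise form in [11] may differ). Unlike the pure quadratic penalty, the multiplier term allows convergence without driving $\rho$ to infinity, which mitigates the ill-conditioning just mentioned.

$$
L_A(x, \lambda; \rho) = f(x) + \sum_{i=1}^{m_1} \lambda_i h_i(x) + \frac{\rho}{2} \sum_{i=1}^{m_1} h_i(x)^2
$$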

Recently, some interesting constraint handling methods have been proposed for particle swarm optimization and genetic algorithms. For example, He and Wang [12] presented a hybrid particle swarm optimization built on a feasibility based rule, which has the advantage of not requiring a penalty function to handle the constraints. To escape the pull of local optima, a simulated annealing algorithm is incorporated in this framework to mutate the best solution, with the aim of exploring the solution space more effectively. Dong et al. [13] proposed a particle swarm optimization for nonlinear programming problems in which the constraints are handled with a priority based ranking method. Defining the concept of constraint fitness, they devised a constraint handling method that quantifies the degree of violation for each particle. Chootinan and Chen [14] recommended using the gradient repair method to guide infeasible solutions toward the feasible region; they applied a simple GA configured with the gradient repair method to drive the constraints into satisfaction. Building on these ideas, Zahara and Kao [15] employed the Nelder–Mead simplex method along with PSO to solve constrained optimization problems, making use of the gradient repair method of Ref. [14] as well as the priority based ranking method of Ref. [13] to manage the constraints.

Aside from PSO and GA, one can find research efforts on simulated annealing in the context of stochastic optimization methods. Wah and Wang [5], [6] offered an improved simulated annealing that incorporates the discrete Lagrange multipliers method for discrete constrained optimization problems. Pedamallu and Ozdamar [16] proposed a simulated annealing method supported by a local search algorithm, feasible sequential quadratic programming, to solve general constrained optimization problems; the authors examined both penalty and non-penalty based approaches to handle the constraints.

The discrete Lagrange multipliers theory [17], [18] is a collection of interconnected mathematical facts for the discrete space, analogous to the continuous Lagrange multipliers theory for the continuous space. More importantly, the theorems and lemmas of this theory cover nonconvex and nondifferentiable optimization problems. In fact, a strong mathematical background supports the discrete Lagrange multipliers method; since it does not rely on any restrictive assumption, it is widely applicable to real life problems. This is rarely true for other constraint handling procedures such as the quadratic and augmented Lagrange penalty functions [4]. In this paper, a revised particle swarm optimization is proposed to solve constrained nonlinear optimization problems, which employs the discrete Lagrange multipliers method (RPSO-DLM) to satisfy the constraints. We utilize PSO in this algorithm because of the following facts:

  • (1) PSO is easy to implement in terms of both programming and parameter setting.

  • (2) PSO was originally developed to address nonlinear programming problems [7] and, more recently, some of its variants have been successfully applied to constrained optimization problems [12], [15], [27].

  • (3) Population based algorithms like PSO allow us to provide each entity (particle) with a separate vector of Lagrange multipliers, rather than using one vector of Lagrange multipliers for all entities (particles); a sketch of this idea follows the list.
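As an illustration of point (3), the following is a minimal sketch, under our own assumptions, of how each particle could carry and update its own multiplier vector via a DLM-style ascent rule of the form $\lambda \leftarrow \lambda + c\,|h(x)|$ [5], [6]. The class shape, names, and step size are hypothetical and are not the paper's implementation.

```python
import numpy as np

class Particle:
    """Hypothetical particle carrying its own Lagrange multiplier vector."""
    def __init__(self, dim, n_eq, rng):
        self.x = rng.uniform(-5.0, 5.0, size=dim)   # position
        self.v = np.zeros(dim)                      # velocity
        self.lam = np.zeros(n_eq)                   # per-particle multipliers

def discrete_lagrangian(f, h, p):
    # L_d(x, lambda) = f(x) + lambda^T |h(x)|, using the absolute-value
    # transform of the equality constraints common in DLM-style formulations
    return f(p.x) + p.lam @ np.abs(h(p.x))

def update_multipliers(h, p, c=0.5):
    # ascent step on the multipliers: lambda grows while constraints are violated,
    # penalizing this particle's infeasibility more heavily over time
    p.lam += c * np.abs(h(p.x))
```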

The rest of this paper is organized as follows. In Section 2, the continuous and discrete Lagrange multipliers theories are concisely reviewed. In Section 3, we describe the particle swarm optimization method and two of its approaches. In Section 4, we present our contribution for solving constrained nonlinear optimization problems, called PSO-DLM. In Section 5, we propose a priority based strategy to promote PSO, which helps eliminate some defects of our algorithm. Section 6 contains a discussion aimed at shedding light on how to fine tune the essential parameters, and finally, in Section 7, we provide several numerical examples to establish the merits of our proposed algorithm.

Section snippets

Lagrange multipliers method

In this section, we discuss the continuous [19] and discrete Lagrange multipliers theories [17], [18] to the extent needed for developing our proposed method. Thus, we concentrate mainly on definitions, lemmas, and theorems. As the framework of the discrete Lagrange multipliers theory is akin to its continuous counterpart, it is helpful to recall some important parts of the continuous Lagrange multipliers theory in preparation for a more detailed discussion of this subject.
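For reference, the continuous Lagrangian associated with the equality constrained version of (1) is the standard construction below; at a constrained local optimum $x^*$ satisfying suitable regularity conditions, there exist multipliers $\lambda^*$ such that $\nabla_x L(x^*, \lambda^*) = 0$ and $h_i(x^*) = 0$ [19].

$$
L(x, \lambda) = f(x) + \sum_{i=1}^{m_1} \lambda_i h_i(x)
$$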

Canonical approach

Particle swarm optimization (PSO), proposed by Eberhart and Kennedy [7], is one of the more recent meta-heuristic algorithms and was essentially developed for solving nonlinear programming problems. The main notion of PSO derives from the social behavior of animals in nature. PSO is similar to genetic algorithms in its population based approach and its imitation of natural evolutionary processes, but unlike genetic algorithms, no entity
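In a widely used canonical formulation, each particle $k$ updates its velocity and position as follows, where $w$ is the inertia weight, $c_1, c_2$ are acceleration coefficients, $r_1, r_2 \sim U(0,1)$, $p_k$ is particle $k$'s best visited position, and $p_g$ is the best position found by the swarm. This is the standard form from the PSO literature and may differ in detail from the variant adopted in this paper.

$$
\begin{aligned}
v_k^{t+1} &= w\, v_k^t + c_1 r_1 \left( p_k - x_k^t \right) + c_2 r_2 \left( p_g - x_k^t \right) \\
x_k^{t+1} &= x_k^t + v_k^{t+1}
\end{aligned}
$$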

PSO based discrete Lagrange multipliers

As stated in Section 2, in contrast to the discrete space, a solution that satisfies the necessary and sufficient conditions in the continuous space may not be a suitable constrained local optimum [18]. In other words, the discrete Lagrange multipliers theory is stronger than the continuous case in terms of the optimality conditions. Hence, if there exists a way to transform a continuous optimization problem into a discrete one and apply the discrete Lagrange multipliers method, it
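In the discrete Lagrange multipliers framework of [17], [18], the discrete Lagrangian is commonly defined with a nonnegative transform of the equality constraints, for example the absolute value, and a saddle point of $L_d$ then characterizes a discrete constrained local optimum. The form below is the standard one from that literature and is given here for orientation only:

$$
L_d(x, \lambda) = f(x) + \sum_{i=1}^{m_1} \lambda_i \, |h_i(x)|
$$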

Priority based feasibility strategy

A number of research efforts have been reported on the issue of swarm convergence to the global optimum solution [21], [23]. In this regard, Van den Bergh [23] noticed a critical problem in the original PSO caused by the particles' positions coinciding with the global best position. In this case, if the particles' velocities are close to zero, the PSO algorithm may come to a premature convergence, a condition which is even more critical in our algorithm. To be more specific, if the number of
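As one illustration of the stagnation condition described above, the sketch below, entirely our own assumption and not the paper's PBFS strategy, flags a swarm whose particles have collapsed onto the global best with near-zero velocities:

```python
import numpy as np

def swarm_stagnated(positions, velocities, gbest, pos_tol=1e-8, vel_tol=1e-8):
    """Detect premature convergence: all particles sit on the global best
    and have (near) zero velocity, so no further movement is possible."""
    collapsed = np.all(np.linalg.norm(positions - gbest, axis=1) < pos_tol)
    frozen = np.all(np.linalg.norm(velocities, axis=1) < vel_tol)
    return collapsed and frozen
```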

Sensitivity analysis and parameter tuning

In this section, we perform a series of sensitivity analysis experiments to observe which parameters essentially affect the overall efficiency of our proposed algorithm. By the end of this section, we aim to screen out the significant parameters that seriously influence the solution quality and the corresponding rate of convergence. The relevant outcomes direct us in fine tuning the critical parameters.

In the course of running the algorithm, the following

Numerical computations

In this section, we attempt to justify the constraint handling capability of the RPSO-DLM algorithm by posing this question: how promising is the proposed constraint handling method compared to the existing constraint handling methods in standard particle swarm optimization (SPSO 2007 [25]), and to what degree is PBFS effective? Aside from that, we intend to assess the merits of our algorithm against some of the existing efficient optimization algorithms in the literature. To

Conclusions

In this paper, a particle swarm approach was introduced to tackle constrained nonlinear programming problems in which the constraints are managed using the discrete Lagrange multipliers method. In contrast to the mathematical theory behind the popular exact methods, which essentially depends on some sensitive assumptions, the discrete Lagrange multipliers theory extends to nonconvex and nondifferentiable optimization problems in the realm of discrete space. Having been extended to the

References (28)

  • M.S. Bazaraa et al., Nonlinear Programming: Theory and Algorithms (2006).
  • B.W. Wah, T. Wang, Constrained simulated annealing with applications in nonlinear continuous constrained global...
  • B.W. Wah et al., Simulated annealing with asymptotic convergence for nonlinear constrained optimization, Journal of Global Optimization (2007).
  • R.C. Eberhart, J. Kennedy, A new optimizer using particle swarm theory. In: Proceedings of the sixth international...