Applied Soft Computing

Volume 13, Issue 4, April 2013, Pages 2144-2158

A novel multi-swarm algorithm for optimization in dynamic environments based on particle swarm optimization

https://doi.org/10.1016/j.asoc.2012.12.020

Abstract

Optimization in dynamic environments is considered among the prominent optimization problems. Such environments pose particular challenges, and the designed algorithms must overcome these challenges in order to perform an efficient optimization. In this paper, a novel optimization algorithm for dynamic environments was proposed based on the particle swarm optimization approach, in which several mechanisms were employed to face the challenges of this domain. In this algorithm, an improved multi-swarm approach has been used for finding peaks in the problem space and tracking them after an environment change within an appropriate time. Moreover, a novel method based on changing the velocity vectors and positions of particles was proposed to increase the diversity of the swarms. To improve the efficiency of the algorithm, a local search based on an adaptive exploiter particle around the best found position, as well as a novel awakening-sleeping mechanism, was utilized. The experiments were conducted on the Moving Peaks Benchmark, which is the most well-known benchmark in this domain, and the results were compared with those of state-of-the-art methods. The results show the superiority of the proposed method.

Highlights

► This paper proposed a novel approach called FTMPSO for dynamic optimization problems.
► The experiments have been conducted on the Moving Peaks Benchmark (MPB).
► The experimental results showed the superiority of the proposed method.

Introduction

Optimization is considered among the most important problems in mathematics and the sciences. The importance of optimization and its numerous applications have inspired scientists to investigate its different aspects. Optimization problems can be seen in real-world applications, e.g. itinerary selection. The goal in all optimization problems is to maximize or minimize one or more cost functions subject to the problem's constraints. When a problem has only a limited number of constraints, it can be solved easily. However, as the number of constraints increases, the problem may become NP-hard and require a high computational cost to solve. Therefore, researchers are continually seeking efficient ways of solving such NP-hard problems, and meta-heuristic methods are among these techniques.

Meta-heuristic methods provide a computational approach for solving optimization problems in which an iterative process is used to improve the obtained solution until a termination condition is reached. Until now, most existing meta-heuristic methods have focused on static problems, in which the problem space remains unchanged during the optimization process. However, most optimization problems in the real world are dynamic and non-deterministic, i.e. the problem search space changes during the optimization process. For example, task scheduling is usually treated as a static optimization problem. However, when a new task arrives during the scheduling procedure, or when other events such as resource failures occur, the search environment changes from a static problem into a dynamic one. As a result, the previous static solutions may no longer be applicable in the new environment. Such problems are called dynamic optimization problems.

In static optimization problems, finding the global optimum is considered the main goal. In dynamic environments, on the other hand, finding the global optimum is not the only goal; tracking the optimum as the problem space changes is extremely important in this domain. In fact, methods proposed for optimization in static environments fail to follow the optimum appropriately. Thus, such methods are not suitable for dynamic environments, and the need for different techniques, with different objective functions and different evaluation criteria, is obvious.

In this paper, a new optimization method based on PSO has been proposed, which introduces a set of techniques for keeping the search consistent with the changing problem space in dynamic environments. To evaluate the proposed method, the Moving Peaks Benchmark (MPB) has been used, which is the best-known benchmark for evaluating optimization methods in dynamic environments.

The rest of the paper is organized as follows. Section 2 reviews the previous literature on the subject. Section 3 explains the proposed method for solving optimization problems in dynamic environments. Section 4 is dedicated to the experiments and the obtained results. The last section concludes the paper and outlines the scope of future work.

Related work

Using meta-heuristic methods for optimization in dynamic environments poses challenges that do not exist in static environments. The most important challenges encountered by meta-heuristic methods in dynamic environments are outdated memory and diversity loss. The outdated memory challenge arises because, when the environment changes, the fitness values of the obtained solutions change and no longer correspond to the values stored in memory.
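As an illustration of how the outdated-memory challenge is typically handled in PSO-based approaches (a generic countermeasure, not necessarily the exact mechanism of the proposed algorithm), a change can be detected by re-evaluating a fixed sentinel point, after which the stored personal bests are re-evaluated so that the memory again matches the current landscape:

```python
def detect_change(fitness, sentinel, last_value, tol=1e-12):
    """Detect an environment change by re-evaluating a fixed sentinel point.

    A change is assumed whenever the sentinel's fitness differs from the value
    observed in the previous iteration (hypothetical detection scheme).
    """
    current = fitness(sentinel)
    return abs(current - last_value) > tol, current


def refresh_memory(fitness, positions, pbest_pos, pbest_val):
    """Counter outdated memory: recompute the stored personal bests after a change."""
    for i in range(len(positions)):
        pbest_val[i] = fitness(pbest_pos[i])      # stored fitness is stale, recompute it
        current_val = fitness(positions[i])
        if current_val > pbest_val[i]:            # maximization, as in MPB
            pbest_pos[i] = positions[i].copy()
            pbest_val[i] = current_val
    return pbest_pos, pbest_val
```

The diversity-loss challenge is usually addressed separately, e.g. by re-randomizing or perturbing part of the population after a change is detected.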

Finder–tracker multi-swarm PSO (FTMPSO)

In this section, a novel algorithm based on PSO is proposed for optimization in dynamic environments. PSO is one of the swarm intelligence methods, proposed by Kennedy and Eberhart in 1995 [42]. In the following, we briefly describe the PSO algorithm as the base approach of the proposed method.

Let N be the population size. For the ith particle (1 ≤ i ≤ N) in a D-dimensional space, the current position is xi = (xi1, xi2, …, xiD) and the velocity is vi = (vi1, vi2, …, viD). During the optimization process, the velocity and position of each particle are updated iteratively based on its own best previously visited position and the best position found by its neighborhood (or the whole swarm).
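As a reference for the excerpt above, the canonical PSO update rules, on which the proposed method is built, can be written as follows; here $p_i$ denotes the personal best position of particle $i$, $g$ the best position found by the swarm (or neighborhood), $w$ the inertia weight, $c_1$ and $c_2$ the acceleration coefficients, and $r_1$, $r_2$ uniform random numbers in $[0, 1]$. This is only the standard formulation; the exact variant and parameter settings used by the authors are given in the full paper.

$$v_{id}(t+1) = w\,v_{id}(t) + c_1 r_1 \bigl(p_{id} - x_{id}(t)\bigr) + c_2 r_2 \bigl(g_{d} - x_{id}(t)\bigr)$$
$$x_{id}(t+1) = x_{id}(t) + v_{id}(t+1), \qquad d = 1, \ldots, D$$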

Experiments

In this section, we have tested the proposed algorithm on the Moving Peaks Benchmark (MPB) [45], [53], which is the best-known benchmark for optimization in dynamic environments [54]. The reason for its popularity among researchers is that this benchmark can produce a wide variety of dynamic environment conditions and situations. By utilizing this benchmark, we can study the performance of algorithms from different aspects.
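To make the benchmark concrete, the following is a minimal sketch of the cone-shaped landscape that MPB generates, assuming Branke's standard formulation: the fitness of a point is the maximum, over all peaks, of the peak height minus the peak width times the distance to the peak center, and every environment change shifts the peaks and perturbs their heights and widths. The parameter defaults below are illustrative, not necessarily the scenario settings used in the experiments, and the correlated shift vector of the original benchmark is simplified here to a purely random one.

```python
import numpy as np

class MovingPeaks:
    """Minimal Moving Peaks Benchmark sketch: cone-shaped peaks whose height,
    width and position change at every environment change (illustrative only)."""

    def __init__(self, n_peaks=10, dim=5, lower=0.0, upper=100.0,
                 shift_length=1.0, height_severity=7.0, width_severity=1.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.dim, self.lower, self.upper = dim, lower, upper
        self.shift_length = shift_length
        self.height_severity, self.width_severity = height_severity, width_severity
        self.pos = self.rng.uniform(lower, upper, size=(n_peaks, dim))
        self.height = self.rng.uniform(30.0, 70.0, size=n_peaks)
        self.width = self.rng.uniform(1.0, 12.0, size=n_peaks)

    def __call__(self, x):
        # Cone peaks: fitness is the maximum of H_p - W_p * ||x - C_p|| over all peaks.
        dist = np.linalg.norm(self.pos - np.asarray(x, dtype=float), axis=1)
        return float(np.max(self.height - self.width * dist))

    def change(self):
        # Shift every peak center by a random vector of fixed length and
        # perturb its height and width (simplified, uncorrelated shift).
        v = self.rng.normal(size=self.pos.shape)
        v = self.shift_length * v / np.linalg.norm(v, axis=1, keepdims=True)
        self.pos = np.clip(self.pos + v, self.lower, self.upper)
        self.height += self.height_severity * self.rng.normal(size=self.height.shape)
        self.width = np.maximum(1.0, self.width + self.width_severity * self.rng.normal(size=self.width.shape))
```

For example, `mpb = MovingPeaks()` creates a landscape, `mpb([50.0] * 5)` evaluates a candidate solution, and calling `mpb.change()` after a fixed number of evaluations simulates an environment change.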

In order to measure the efficiency of the algorithms, the offline error has been used, that is, the average, taken over all fitness evaluations, of the difference between the value of the global optimum and the value of the best solution found since the last environment change.
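For reference, the offline error over a run is commonly computed as shown below, where $T$ is the total number of fitness evaluations performed so far, $f(o_t)$ is the fitness of the global optimum at evaluation $t$, and $f(b_t)$ is the fitness of the best solution found since the last environment change:

$$\bar{e}_{\mathrm{off}} = \frac{1}{T} \sum_{t=1}^{T} \bigl( f(o_t) - f(b_t) \bigr)$$

Lower values indicate that the algorithm both finds the peaks quickly and keeps tracking them between changes.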

Conclusion

In this paper, a novel optimization algorithm based on the particle swarm optimization approach was proposed for dynamic environments, in which several mechanisms were employed to overcome the challenges and requirements of dynamic environments. In the proposed algorithm, the swarms in the problem space were divided into two categories: finder and tracker. The finder category was configured to find the peaks within an appropriate time, while the tracker category was configured to appropriately track the found peaks after environment changes.

References (60)

  • X. Hu et al.

    Adaptive particle swarm optimization: detection and response to dynamic systems

  • S. Yang et al.

    Experimental study on population-based incremental learning algorithms for dynamic optimization problems

    Soft Computing: A Fusion of Foundations, Methodologies and Applications

    (2002)
  • J. Kennedy et al.

    Population structure and particle swarm performance

  • S. Janson et al.

    A hierarchical particle swarm optimizer for dynamic optimization problems

  • A.B. Hashemi et al.

    Cellular PSO: a PSO for dynamic environments

  • A.B. Hashemi et al.

    A multi-role cellular PSO for dynamic environments

  • S. Yang

    On the design of diploid genetic algorithms for problem optimization in dynamic environments

  • A.S. Uyar et al.

    A new population based adaptive domination change mechanism for diploid genetic algorithms in dynamic environments

    Soft Computing: A Fusion of Foundations, Methodologies and Applications

    (2005)
  • S. Yang

    Memory-based immigrants for genetic algorithms in dynamic environments

  • A. Simões et al.

    Evolutionary algorithms for dynamic environments: prediction using linear regression and Markov chains

  • H. Richter et al.

    Memory based on abstraction for dynamic fitness functions

  • S. Yang et al.

    Population-based incremental learning with associative memory for dynamic environments

    IEEE Transactions on Evolutionary Computation

    (2008)
  • H. Richter

    Memory design for constrained dynamic optimization problems

  • R.W. Morrison

    Designing Evolutionary Algorithms for Dynamic Environments

    (2004)
  • J. Grefenstette

    Genetic algorithms for changing environments

    Parallel Problem Solving from Nature

    (1992)
  • L.T. Bui et al.

    Multiobjective optimization for dynamic environments

  • H. Andersen, An investigation into genetic algorithms, and the relationship between speciation and the tracking of...
  • F. Oppacher et al.

    The shifting balance genetic algorithm: improving the GA in a dynamic environment

  • J. Branke et al.

    A multi-population approach to dynamic optimization problems

    Adaptive Computing in Design and Manufacturing

    (2000)
  • H. Cheng et al.

    Multi-population genetic algorithms with immigrants scheme for dynamic shortest path routing problems in mobile ad hoc networks
