
Information Sciences

Volume 250, 20 November 2013, Pages 82-112

Particle swarm optimization based on intermediate disturbance strategy algorithm and its application in multi-threshold image segmentation

https://doi.org/10.1016/j.ins.2013.07.005

Abstract

The particle swarm optimization (PSO) algorithm simulates social behavior among individuals (or particles) “flying” through a multidimensional search space. To enhance the local search ability of PSO and guide the search, a region containing the largest number of particles was defined and analyzed in detail. Inspired by ecological behavior, we present a PSO algorithm with an intermediate disturbance search strategy (IDPSO), which enhances the global search ability of particles and increases their convergence rate. Experimental results comparing IDPSO with ten known PSO variants on 16 benchmark problems demonstrate the effectiveness of the proposed algorithm. Furthermore, we applied IDPSO to the multilevel image segmentation problem to shorten the computation time. Experimental results on a variety of images show that the new algorithm can segment images effectively in a shorter time.

Introduction

Optimization forms an important part of our daily life. Many parameters of scientific, social, economic, and engineering problems can be adjusted to produce a more desirable outcome. Optimization is a process of finding the best possible solution to a problem within a reasonable time limit [47], [44]. As an important branch of optimization methodology, nature-inspired computation has attracted much attention in the past decades. Nature serves as a fertile source of concepts, principles, and mechanisms for designing optimization systems to deal with complex computational problems. Among them, the most successful algorithm is the evolutionary algorithm (EA) that simulates the evolution of individuals via the processes of selection, mutation, and reproduction [59], [18], [23], [40], [24], [43].

Recently, a new variant of EA, known as particle swarm optimization (PSO) [29], which is inspired by the emergent motion of a flock of birds searching for food, has been developed. As a recursive algorithm, PSO simulates social behavior among individuals (or particles) “flying” through a multidimensional search space, where each particle represents a point at the intersection of all search dimensions. PSO has become one of the most popular optimization techniques for solving optimization problems in areas such as power systems [32], [54], artificial neural network training [21], [57], fuzzy system control [8], [26], and others [34], [51], [61].

Accelerating the convergence speed and avoiding local optima are the two most important and appealing goals in PSO research. As mentioned in previous studies [14], [62], with the components named pbest and gbest, which are contributed respectively by the best solution achieved by an individual particle and by all particles, the PSO algorithm has a faster and more stable convergence rate than other EAs. Thus, identifying a competitive region that collects the experience of an individual particle and of all particles can enhance both the local search ability and the convergence rate. Another problem in PSO is premature convergence, which means that the global optimum in the search space may not be found [63], [36]. This weakness has restricted PSO from wider applications. A number of PSO variants have been proposed to help particles escape from local optima [5], [45], [55], [30] (e.g., by proposing new update rules and mutation operators). Kennedy and Mendes [30] investigated the effects of various population topologies on the particle swarm algorithm and presented a von Neumann topology, which performs more consistently. Liang et al. [36] designed a novel learning strategy in which the historical best information of all particles is used to update a particle’s velocity, ensuring that the diversity of the swarm is preserved to discourage premature convergence.

More recently, approaches that balance global exploration (global search) and local exploitation (local search) have attracted many PSO researchers. One significant improvement is the introduction of the inertia weight [49], which allows a powerful global search in the early stage and a more precise search in the later stage of the iteration process (a minimal sketch of this schedule appears after this paragraph). However, similar to the original PSO, the particles in [49] could not jump out of local optima, especially in the last stage of the iteration, because the inertia weight of the improved algorithm becomes very small. As Gaussian mutation in EAs is promising for fine-grained search and chaotic mutation may give particles more opportunities to jump out of local optima, Leandro [33] presented a novel combined optimization method that incorporated chaotic mapping and a Gaussian distribution into PSO to improve global and local convergence, respectively. Because the search space of a mutated particle in [33] is restricted to the local area of pbest and gbest, the effect of the mutation operator is limited. Sun et al. [50], [52] proposed a quantum-behaved PSO (QPSO). The Delta potential well model makes it easier for particles to escape from local optima [52]. They also used a contraction–expansion coefficient to balance the local and global searches during the optimization process. The exploration ability of QPSO depends mainly on the Delta potential well model, but the range of values generated by this model is limited; therefore, the model is not very powerful, especially in the last stage of the iteration. By selecting an appropriate dilation parameter and mutation probability, Ling et al. [37] presented an improved PSO (HWPSO) with a mutation strategy whose mutating space changes dynamically by incorporating a wavelet function. In the early stage of the search, the solution space is explored by setting a larger mutating space. In the later stage, the properties of the wavelet lead to a smaller mutating space, so a precise solution is more likely to be fine-tuned. Because the wavelet mutation generates only small values in the later stage of the iteration, HWPSO is unlikely to jump out of a local optimum there. Park et al. [42] proposed an improved PSO framework (CCPSO) that employs chaotic sequences combined with the traditional linearly decreasing inertia weights. As the chaotic inertia weight approach can cover the whole weight domain under the decreasing line in a chaotic manner, the exploration and exploitation capabilities of particles are balanced [42]. Furthermore, by using a crossover operator that gives particles more opportunity to learn from pbest, CCPSO achieves more favorable results on unimodal functions than the PSO algorithm. Gao et al. [16] proposed a novel PSO algorithm with a moderate-random-search strategy (MRPSO). MRPSO provides a greater chance of searching in the local area and gives the particles a moderate probability of generating long jumps; this algorithm therefore provides a better balance between global and local searches than the traditional PSO algorithm. Based on the search behaviors and the population distribution characteristics of PSO, a real-time evolutionary state estimation procedure [60] identifies one of four defined evolutionary states: exploration, exploitation, convergence, and jumping out. It enables automatic control of algorithmic parameters (e.g., inertia weight and acceleration coefficients) to accelerate the convergence speed and enhance the global search ability simultaneously. Using a slightly different social metaphor from that of the original PSO, a naive PSO (NPSO) was proposed in a previous study [25]. Each particle learns from a better one and takes warning from a worse one in the swarm [25]. As a result, this metaphor helps explore more regions of the search space in the early stage and exploit more precise regions in the later stages.
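As an illustration of the linearly decreasing inertia weight of [49], the following minimal Python sketch computes the weight at iteration t; the start and end values of 0.9 and 0.4 are common defaults in the literature, not parameters taken from this paper.

```python
def inertia_weight(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight: large early for a powerful global
    search, small late for a more precise local search. The 0.9/0.4 defaults
    are common literature values, not this paper's settings."""
    return w_start - (w_start - w_end) * t / t_max
```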

Furthermore, approaches that strike a better balance between global exploration and local exploitation have also attracted EA researchers [22], [24], [27], [28], [35], [43]. In the artificial bee colony (ABC) algorithm [28], a greedy selection scheme is applied between the new solution and the old one, and the better one is preferred for inclusion in the population. In this way, the information of a good member of the population is distributed among the other members. The ABC algorithm also has a scout phase, which provides diversity in the population by allowing new, random solutions to be inserted into the population. ABC shows a more powerful global search ability than the traditional PSO and has been successfully applied in real-world applications [2]. Self-adaptive control methods have been a popular strategy for improving the differential evolution (DE) algorithm. Adaptive rules incorporate some form of feedback from the search procedure to guide the parameter adaptation. Qin et al. [43] proposed a self-adaptive DE (SaDE) algorithm, in which both the trial vector generation strategies and their associated parameters are gradually self-adapted by learning. Consequently, compared with the conventional DE and several state-of-the-art parameter-adaptive DE variants, the solutions obtained by SaDE are more stable, with a smaller standard deviation, and have a higher success rate. Unlike PSO algorithms, which use both pbest and gbest to guide the evolutionary search, the SaDE algorithm uses only gbest to guide the evolutionary direction of individuals; thus, its exploitation ability is limited. The group search optimization (GSO) algorithm [22] employs a producer–scrounger model as its framework. A number of group members are selected as scroungers and a producer, who have good opportunities to search in the local area; the rest of the group members perform random walks to maintain the diversity of the GSO. Compared with the GA and PSO algorithms, GSO shows a better search ability on multimodal functions. However, because GSO does not spread the experience of the swarm, it shows a slower convergence rate than PSO. Based on a theoretical model that depicts the collaboration between global and local searches in memetic computation, the quasi-basin class (QBC), which categorizes problems according to the distribution of their local search zones, was adopted [35]. A sub-threshold seeker, taken as a representative archetype of memetic algorithms, was then analyzed on various QBCs to develop a general model for memetic algorithms. The Rosenbrock ABC (RABC) algorithm, which combines Rosenbrock’s rotational direction method with the ABC algorithm, was proposed for accurate numerical optimization [27]. RABC has two alternating phases: the exploration phase, realized by ABC, and the exploitation phase, completed by the rotational direction method. The results show that the algorithm is promising in terms of convergence speed, success rate, and accuracy.
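As a small illustration of the greedy selection idea used in ABC [28], the sketch below keeps whichever of the old and new candidate solutions has the better (here, lower) objective value; the function and parameter names are illustrative, not taken from the ABC paper.

```python
def greedy_select(old_solution, new_solution, objective):
    """Greedy selection between an existing solution and its newly generated
    candidate: the one with the better (lower, assuming minimization)
    objective value is kept in the population."""
    return new_solution if objective(new_solution) <= objective(old_solution) else old_solution
```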

The intermediate disturbance hypothesis (IDH), first proposed by Grime [19], predicts that diversity is maintained because both competitive and opportunistic species coexist at intermediate levels of disturbance. In an ecosystem, disturbances act to disrupt stable ecosystems and clear species’ habitats; as a result, disturbances lead species to newly cleared areas. Once an area is cleared, there is a progressive increase in species richness, and competition starts again. Once the disturbance is removed, species richness decreases and competitive exclusion increases. The IDH states that species diversity is maximized when ecological disturbance is neither too rare nor too frequent. In this paper, inspired by the competition and diversity in the IDH, we propose a novel particle swarm algorithm called intermediate disturbance PSO (IDPSO), primarily for accelerating convergence and avoiding local optima. Under this framework, concepts and strategies of resource searching from the IDH are adopted metaphorically to design optimum-searching strategies. First, by analyzing the limitations of previous work, we identify a new and promising search region in PSO that preserves the experience of the particles in the swarm and keeps the swarm competitive. Second, to overcome premature convergence, we introduce a new operator, called the intermediate disturbance operator, into PSO. Every particle in the improved PSO not only focuses on searching the promising region but also has more opportunities to jump out of the local search area; diversity and competition of particles are thus balanced in IDPSO. Third, since IDPSO is an improved version of PSO, other PSO strategies can also be added to it (e.g., adaptive strategies, topology strategies, and mutation strategies). Finally, we apply IDPSO to image segmentation for solving multilevel thresholding selection problems. The results show that the IDPSO-based method is more effective than other EA-based methods and shortens the computation time of traditional threshold-based segmentation methods.
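The operator's precise definition is given in Section 3. Purely as a hypothetical illustration of the intermediate disturbance idea, the sketch below perturbs a moderate fraction of particles with a moderate-strength random jump inside the search bounds, so that disturbance is neither too rare nor too frequent; the rate and strength parameters are invented for illustration and are not the paper's settings.

```python
import numpy as np

def intermediate_disturbance(positions, lower, upper, rate=0.1, strength=0.5, rng=None):
    """Hypothetical illustration of an 'intermediate' disturbance: with moderate
    probability (rate), a particle receives a moderate-strength random jump
    inside the search bounds. The rate/strength values are illustrative only,
    not taken from the paper."""
    rng = np.random.default_rng() if rng is None else rng
    disturbed = positions.copy()
    n_particles, dim = positions.shape
    for i in range(n_particles):
        if rng.random() < rate:  # disturbance neither too rare nor too frequent
            jump = strength * (upper - lower) * (rng.random(dim) - 0.5)
            disturbed[i] = np.clip(positions[i] + jump, lower, upper)
    return disturbed
```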

The rest of this paper is organized as follows: Section 2 introduces PSO; Section 3 presents the IDPSO and the details of its implementation; Section 4 reports the experimental studies of the proposed IDPSO; Section 5 presents its application to image segmentation; and Section 6 concludes the paper.


Particle swarm optimization

PSO works by “flying” a population of cooperating potential solutions called particles through a problem’s solution space. Each particle in PSO has a position and a velocity, and its evaluation is achieved by using the objective function of the optimization problem, whose variables are the particle position dimensions. The particle updating method aims to move particles to better positions by accelerating them toward pbest and gbest. The basic PSO algorithm can be described as follows:

$v_{id}(t+1) = w\,v_{id}(t) + c_1 r_1 \left(p_{id}(t) - x_{id}(t)\right) + c_2 r_2 \left(p_{gd}(t) - x_{id}(t)\right)$

$x_{id}(t+1) = x_{id}(t) + v_{id}(t+1)$

where $x_{id}$ and $v_{id}$ denote the position and velocity of particle $i$ in dimension $d$, $w$ is the inertia weight, $c_1$ and $c_2$ are acceleration coefficients, $r_1$ and $r_2$ are random numbers uniformly distributed in $[0,1]$, $p_{id}$ is the particle's personal best position (pbest), and $p_{gd}$ is the swarm's global best position (gbest).
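A minimal Python sketch of these update rules follows; the inertia weight, acceleration coefficients, swarm size, and sphere objective are illustrative defaults, not the settings used in this paper's experiments.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO following the velocity/position update rules above.
    All parameter values are illustrative defaults, not this paper's settings."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))           # particle positions
    v = np.zeros((n_particles, dim))                      # particle velocities
    pbest = x.copy()                                      # personal best positions
    pbest_val = np.array([objective(p) for p in pbest])   # personal best values
    gbest = pbest[np.argmin(pbest_val)].copy()            # global best position

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val                       # update personal bests
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()        # update global best
    return gbest, float(pbest_val.min())

# Example: minimize the 10-dimensional sphere function.
best_x, best_f = pso(lambda z: float(np.sum(z ** 2)), dim=10)
```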

Intermediate disturbance particle swarm optimization

To enhance the global search ability and improve the convergence rate of PSO, we present a new PSO algorithm that incorporates an intermediate disturbance strategy (IDS).

Optimization, a process of seeking the global optimum in a search space, is analogous to the resource-searching process of species in nature. The mechanisms of PSO can also be considered a process of competition among species (e.g., birds and fish). Two or more species (called particles in PSO) live in the same

Experimental setup

To test the performance of IDPSO, the 16 benchmark functions [3], [56] listed in Table 1 were used to compare the standard PSO [49], QPSO [52], MRPSO [16], HWPSO [37], CCPSO [42], NPSO [25], GCPSO [33], CLPSO [36], ISPSO [39], and IDPSO algorithms under the same maximum number of function evaluations (FEs). The parameters of the compared algorithms were those that achieved the best results, as described in their respective papers. The range of population initialization, the global solution X, the

Multilevel thresholding for image segmentation through IDPSO

The goal of image segmentation is to extract meaningful objects from an input image. It is useful in discriminating an object from other objects having distinct gray levels or different colors. Among all of the existing segmentation techniques, thresholding is one of the most popular because of its simplicity, robustness, and accuracy. Over the years, many thresholding techniques have been proposed. These techniques can be roughly categorized into two categories. The first category contains

Conclusion

Inspired by the balance of competition and diversity in ecology, we have presented a novel PSO algorithm incorporating the IDS, which is based on identifying a special search region that contains more particles. The objective of the proposed strategy is not only to accelerate the convergence speed but also to avoid the local optima of PSO. As the intermediate disturbance operator enables particles in IDPSO to appear anywhere during the iterations, it enhances the global search

Acknowledgement

The authors acknowledge support from the City University of Hong Kong Strategic Research Grant (No. 7002826), the Introduction Foundation for the Talent of Nanjing University of Tele. and Com. (No. NY212025), and the National Natural Science Foundation of China (No. 61203270).

References (64)

  • S.-Z. Zhao et al.

    Multi-objective robust PID controller tuning using two lbests multi-objective particle swarm optimization

    Information Sciences

    (2011)
  • F. van den Bergh et al.

    A study of particle swarm optimization particle trajectories

    Information Sciences

    (2006)
  • B. Akay

    A study on particle swarm optimization and artificial bee colony algorithms for multilevel thresholding

    Applied Soft Computing

    (2012)
  • M.M. Ali et al.

    A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems

    Journal of Global Optimization

    (2005)
  • F. Altermatt et al.

    Interactive effects of disturbance and dispersal directionality on species richness and composition in metacommunities

    Ecology

    (2011)
  • P.S. Andrews

    An investigation into mutation operators for particle swarm optimization

  • K.Y. Chan et al.

    Modeling of a liquid epoxy molding process using a particle swarm optimization-based fuzzy regression approach

    IEEE Transactions on Industrial Informatics

    (2011)
  • J.S. Clark

    Individuals and the variation needed for high species diversity in forest trees

    Science

    (2010)
  • M. Clerc et al.

    The particle swarm-explosion, stability, and convergence in a multi-dimensional complex space

    IEEE Transactions on Evolutionary Computation

    (2002)
  • D.C. Culver

    Competition and community

    Nature

    (1976)
  • J.H. Connell

    Diversity in tropical rain forests and coral reefs

    Science

    (1978)
  • M.W. Denny et al.

    Marine ecomechanics

    Annual Review of Marine Science

    (2010)
  • R.C. Eberhart et al.

    Comparison between genetic algorithms and particle swarm optimization

    Evolutionary Programming VII

    (1998)
  • B.C. Emerson et al.

    Species diversity can drive speciation

    Nature

    (2005)
  • H. Gao et al.

    A new particle swarm optimization and its globally convergent modifications

    IEEE Transactions on Systems, Man, and Cybernetics, Part B

    (2011)
  • H. Gao et al.

    Multilevel thresholding for image segmentation through an improved quantum-behaved particle swarm algorithm

    IEEE Transactions on Instrumentation and Measurement

    (2010)
  • D.E. Goldberg

    Genetic Algorithms in Search, Optimization & Machine Learning

    (1989)
  • J.P. Grime

    Competitive exclusion in herbaceous vegetation

    Nature

    (1973)
  • M.G. Hall et al.

    Convergence and parameter choice for Monte-Carlo simulations of diffusion MRI

    IEEE Transactions on Medical Imaging

    (2009)
  • M. Han et al.

    A dynamic feedforward neural network based on Gaussian particle swarm optimization and its application for predictive control

    IEEE Transactions on Neural Networks

    (2011)
  • S. He et al.

    Group search optimizer: an optimization algorithm inspired by animal searching behavior

    IEEE Transactions on Evolutionary Computation

    (2009)
  • Q. Jin, Z.J. Liang, A naïve particle swarm optimization, in: IEEE Congress on Evolutionary Computation, Brisbane,...