Training neural networks using Central Force Optimization and Particle Swarm Optimization: Insights and comparisons

https://doi.org/10.1016/j.eswa.2011.07.046

Abstract

Central Force Optimization (CFO) is a novel and emerging metaheuristic technique based upon physical kinematics. It has previously been demonstrated that CFO is effective compared with other metaheuristic techniques across multiple benchmark problems and some real-world applications. This work applies the CFO algorithm to training neural networks for data classification. As a proof of concept, the CFO algorithm is first applied to train a basic neural network that represents the logical XOR function. This work is then extended to train two different neural networks to properly classify members of the Iris data set. These results are compared and contrasted with results gathered using Particle Swarm Optimization (PSO) in the same applications. Similarities and differences between CFO and PSO are also explored in the areas of algorithm design, computational complexity, and natural basis. The paper concludes that CFO is a novel and promising metaheuristic that is competitive with, if not superior to, the PSO algorithm, and that there is much room to improve it further.

Highlights

► Central Force Optimization (CFO) is a new and deterministic metaheuristic algorithm.
► Neural networks are trained using CFO and Particle Swarm Optimization (PSO).
► Data sets used for classification include the XOR and Iris data sets.
► CFO is shown to be as good as or better than PSO in terms of performance.
► CFO and PSO are compared in terms of structure, complexity, and natural basis.

Introduction

Multiple population-based, intelligent search (PIS) techniques have been developed over the past decades. These algorithms include Genetic Algorithms (GA), Particle Swarm Optimization (PSO), Artificial Immune Systems (AIS), and many others that leverage similar techniques and methodologies. All of these algorithms share two main features: they mimic a concept found in the natural world, and they leverage the inherent parallel, stochastic, and intelligent search features of population-based techniques. One of the newest techniques in this genre of optimization algorithms is Central Force Optimization (CFO). This algorithm demonstrates and leverages two characteristics that make it unique compared to other methodologies: a basis in gravitational kinematics and a deterministic nature.

As CFO is a very young algorithm, it has yet to be compared and contrasted against other algorithms across many different applications. One area of great research interest where multiple other algorithms, specifically evolutionary and PIS algorithms, have been applied is the training of neural networks. For this reason, this study focuses on applying CFO to the training of neural networks.

The remainder of this paper will discuss the details of the CFO algorithm and previous areas to which it has been applied, briefly review the PSO algorithm and its applications to training neural networks, discuss the test data and problem formulation used in this paper, and then present results and discussion based on the work completed in this study.

Section snippets

Particle Swarm Optimization

PSO is a population-based, intelligent, guided, and stochastic search technique developed by Eberhart and Kennedy (1995a, 1995b) based upon the concept of swarming. Swarming attempts to mimic the movement of flocks of birds and schools of fish in order to leverage the group intelligence they demonstrate. The algorithm itself uses a group of particles that move through a search space with a given velocity. At every iteration the velocity and position of
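The per-iteration update the snippet begins to describe can be sketched in canonical PSO form. The inertia, cognitive, and social coefficients (`w`, `c1`, `c2`) below are illustrative defaults, not the values used in this study:

```python
import random

def pso_step(positions, velocities, pbest, gbest, fitness,
             w=0.7, c1=1.5, c2=1.5):
    """One iteration of canonical PSO (minimization).

    positions, velocities, pbest: per-particle coordinate lists;
    gbest: best position found so far; fitness: objective to minimize.
    Parameter values are illustrative assumptions.
    """
    for i, (x, v) in enumerate(zip(positions, velocities)):
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            # velocity update: inertia + cognitive pull + social pull
            v[d] = (w * v[d]
                    + c1 * r1 * (pbest[i][d] - x[d])
                    + c2 * r2 * (gbest[d] - x[d]))
            x[d] += v[d]
        # update personal and global bests
        if fitness(x) < fitness(pbest[i]):
            pbest[i] = list(x)
            if fitness(x) < fitness(gbest):
                gbest = list(x)
    return gbest
```

The stochastic factors `r1` and `r2` are the key contrast with CFO: removing them would make each particle's trajectory fully determined by its starting state.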

CFO and PSO: A comparison

As this study compares the CFO and PSO algorithms, it is proper to examine some differences and similarities between the two algorithms. Three areas that form a basis for comparison between these two algorithms are the areas of algorithmic design, computational complexity, and natural basis.
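To make the design contrast concrete, a deterministic CFO-style update can be sketched as follows. This follows the general shape of Formato's formulation (probes accelerate toward probes with better fitness, weighted by a gravity-like law); the constants `G`, `alpha`, `beta`, and `dt` and the simplified position update are illustrative assumptions, not the exact published algorithm:

```python
def cfo_step(probes, fitnesses, G=2.0, alpha=2.0, beta=2.0, dt=1.0):
    """One deterministic CFO-style 'gravitational' update (sketch only).

    probes: list of position vectors; fitnesses: their objective values
    (higher = better). Each probe accelerates only toward probes with
    better fitness, mimicking attraction toward larger masses.
    """
    new_probes = []
    for j, Rj in enumerate(probes):
        acc = [0.0] * len(Rj)
        for k, Rk in enumerate(probes):
            if k == j:
                continue
            diff = fitnesses[k] - fitnesses[j]
            if diff <= 0:          # unit step: only better probes attract
                continue
            dist = sum((a - b) ** 2 for a, b in zip(Rk, Rj)) ** 0.5
            if dist == 0:
                continue
            for d in range(len(Rj)):
                acc[d] += G * (diff ** alpha) * (Rk[d] - Rj[d]) / (dist ** beta)
        # deterministic kinematic update: R' = R + (1/2) * a * dt^2
        new_probes.append([Rj[d] + 0.5 * acc[d] * dt * dt
                           for d in range(len(Rj))])
    return new_probes
```

Note that, unlike the PSO update, no random numbers appear anywhere: rerunning CFO from the same initial probe distribution reproduces the same trajectory exactly.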

Test neural networks

For this work three different neural networks will be trained using the CFO algorithm. The first neural network trained will be a NN designed to model a XOR (exclusive-or) operation. With regards to the XOR operation the expected inputs and outputs are described in Table 7.

This information will be fed into a basic neural network that has been designed as described in Fig. 1. This figure shows that the network will consist of two input neurons, a single hidden layer that contains three neurons,

Representation

In all cases demonstrated in this study, the representation depicts the weights between neurons in the given NN, and all biases are assumed to be zero. The NN representing the XOR gate has nine dimensions, the first NN for the Iris data set has 21 dimensions, and the final NN has 35 dimensions. An example of these representations is given in Table 10, where the representation of the XOR NN is shown.
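Such a nine-dimensional representation can be decoded into a forward pass of the 2-3-1 XOR network as sketched below. The weight ordering (six input-to-hidden weights followed by three hidden-to-output weights) and the sigmoid activation are assumptions for illustration; the paper's snippet does not specify them:

```python
import math

def xor_net(weights, x1, x2):
    """Forward pass of a 2-3-1 network with all biases fixed at zero.

    `weights` is the 9-dimensional representation: entries 0-5 are
    input->hidden weights (two per hidden neuron), entries 6-8 are
    hidden->output weights. Sigmoid activation is assumed.
    """
    assert len(weights) == 9
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    hidden = [sig(weights[2 * h] * x1 + weights[2 * h + 1] * x2)
              for h in range(3)]
    return sig(sum(weights[6 + h] * hidden[h] for h in range(3)))
```

Under this encoding, each candidate solution found by CFO or PSO is simply a point in a nine-dimensional real space.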

Objective function

The objective or fitness function for the CFO is the
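The snippet above is truncated before the objective is named. Purely as an illustrative assumption, a common fitness for this kind of NN training is the sum of squared errors over the training set, which for the XOR task could look like:

```python
def sse_fitness(predict, dataset):
    """Sum of squared errors over (inputs, target) pairs.

    `predict` maps an input tuple to the network's output. Using SSE
    here is an assumption made for illustration; the paper's snippet
    truncates before defining its objective function.
    """
    return sum((predict(x) - t) ** 2 for x, t in dataset)

# Standard XOR truth table
XOR_DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
```

A perfect classifier drives this quantity to zero, so CFO (a maximizer in its usual formulation) would optimize its negation or reciprocal.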

Some discussion

CFO, while a young algorithm, is a unique contribution to the area of metaheuristic optimization. Its main contribution and novelty lie in its deterministic qualities and its basis in the gravitational interaction of masses. While other papers have already demonstrated that CFO works well on test problems and some real-world problems, this study demonstrates that CFO is well suited to the training of neural networks. The results produced are, at

References (34)

  • C.-J. Lin et al.

Classification of mental task from EEG data using neural networks based on particle swarm optimization

    Neurocomputing

    (2009)
  • M. Riedmiller

    Advanced supervised learning in multi-layer perceptrons – from backpropagation to adaptive learning algorithms

    Computer Standards and Interfaces

    (1994)
  • Amani, M., & Sadeghian, A. (2010). A variation of particle swarm optimization for training of artificial neural...
  • Asuncion, A., & Newman, D., 2007. UCI machine learning repository. URL:...
  • Carvalho, M., & Ludermir, T. (2006). Particle swarm optimization of feed-forward neural networks with weight decay. In...
  • Z. Chunkai et al.

    A new evolved artificial neural network and its application

  • Eberhart, R., & Kennedy, J. (1995a). A new optimizer using particle swarm theory. In Proceedings of the sixth...
  • Eberhart, R., & Kennedy, J. (1995b). Particle swarm optimization. In Proceedings of the IEEE international conference...
  • R.A. Fisher

    The use of multiple measurements in taxonomic problems

    Annual Eugenics

    (1936)
  • R.A. Formato

    Central force optimization: A new metaheuristic with applications in applied electromagnetics

    Progress in Electromagnetics Research, PIER

    (2007)
  • R.A. Formato

    Central force optimization: A new nature inspired computational framework for multidimensional search and optimization

  • R.A. Formato

    Central force optimisation: A new gradient-like metaheuristic for multidimensional search and optimisation

    IJBIC

    (2009)
  • R.A. Formato

    Central force optimization: A new deterministic gradient-like optimization metaheuristic

    OPSEARCH, Journal of the Operations Research Society of India

    (2009)
  • Formato, R. A. (2010a). Central force optimization applied to the PBM suite of antenna benchmarks. CoRR...
  • Formato, R. A. (2010b). Comparative results: Group search optimizer and central force optimization. CoRR...
  • R.A. Formato

Improved CFO algorithm for antenna optimization

    Progress in Electromagnetics Research, PIER B

    (2010)
  • Formato, R. A. (2010d). Parameter-free deterministic global search with central force optimization. CoRR...