A performance-driven multi-algorithm selection strategy for energy consumption optimization of sea-rail intermodal transportation

https://doi.org/10.1016/j.swevo.2018.11.007

Highlights

  • The proposed strategy can automatically select a suitable algorithm to solve a particular problem.

  • The performance of PMSS is demonstrated on two suites of benchmark functions.

  • PMSS is utilized to optimize the energy consumption of sea-rail intermodal transportation.

Abstract

Various powerful differential evolution (DE) algorithms have been developed over the past years, yet none of them can consistently perform well on all types of problems. Moreover, it is not straightforward to choose an appropriate algorithm for a real-world problem, as the properties of the problem are usually not well understood beforehand. How to automatically select an appropriate DE variant for a particular problem at hand is therefore an important and challenging task. In the present work, a performance-driven multi-algorithm selection strategy (PMSS) is proposed to alleviate the above-mentioned problems for single-objective optimization. In PMSS, a learning-forgetting mechanism is introduced to update the selection probability of each algorithm in a pool of DE variants, so that the best-performing one is chosen during the search process. The effectiveness of PMSS is carefully examined on two suites of widely used test problems, and the results indicate that PMSS is highly effective and computationally efficient. Finally, the proposed algorithm is employed to optimize the energy consumption of sea-rail intermodal transportation. Our simulation results demonstrate that the proposed algorithm achieves satisfactory solutions that provide insights into the problem, and that it is a promising approach for solving real sea-rail intermodal and other multimodal transportation planning problems.

Introduction

Over the past decades, a large number of meta-heuristic optimization algorithms, inspired by evolutionary processes and swarm behaviors found in nature [1], have been introduced and successfully applied to a wide range of industrial optimization problems. Among the family of meta-heuristic algorithms, differential evolution (DE) [2], particle swarm optimization (PSO) [3], genetic algorithms (GA) [4], estimation of distribution algorithms (EDA) [5], ant colony optimization (ACO) [6], and the covariance matrix adaptation evolution strategy (CMA-ES) [7] are among the most popular. According to [8], meta-heuristic optimization algorithms can generally be categorized into distributed and centralized models.

DE is one of the most popular paradigms of meta-heuristic algorithms since it is a simple yet efficient search technique. However, the performance of DE heavily depends on its control parameters (i.e., mutation control parameter F, crossover control parameter CR, and population size NP) and strategies (i.e., mutation and crossover), especially when the complexity of the optimization problem is high [9]. To enhance the search performance of DE, researchers have proposed numerous DE variants. However, most existing studies focus on ensembles of production operators and/or tuning of control parameters. Although a number of advanced DE variants have been developed to solve various practical applications and benchmark problems, no single DE variant has been shown to consistently perform well on all types of optimization problems, even when multi-operator search strategies or combinations of multiple parameter settings are used. Generally speaking, this is in accordance with the no free lunch (NFL) theorem [10]. To alleviate the NFL problem, several self-adaptive multi-algorithm selection mechanisms have been developed. For example, Vrugt et al. [11] proposed a multi-algorithm, genetically adaptive method for single-objective optimization (AMALGAM-SO), which automatically adjusts the number of offspring of each individual algorithm based on its current performance. Peng et al. [12] introduced a population-based algorithm portfolio (PAP), wherein a part of the given computational resources is used to evaluate the performance of each constituent algorithm and a migration scheme is employed to encourage interaction among the individual algorithms. Yuen et al. [13] proposed a multiple evolutionary algorithm in which each individual algorithm runs independently with no information exchange, and the best-performing algorithm, as recommended by a novel online performance-prediction metric, is used to generate new individuals. Recently, Fan et al. [14] introduced an auto-selection mechanism (ASM), in which a learning strategy and an additional selection probability are used to update the selection probability of each individual algorithm and to alleviate the greedy selection issue. The main goal of these methods is to automatically choose an appropriate algorithm for a given problem. Indeed, self-adaptive multi-algorithm selection is an effective way to address the shortcomings of any single algorithm. However, the above approaches have the following limitations when implemented in real applications. Firstly, information exchange is usually needed among the algorithms to be selected [11,12]. Unfortunately, the results reported in Ref. [13] and our own experiments imply that such information exchange may mislead the selection, since each algorithm may have its own particular search behavior. Secondly, a greedy selection strategy [11,12,13], which increases the risk of wrong selections, is used to choose an algorithm. Lastly, the selection in Ref. [13] is based on predicted performance, making it less reliable in practical applications, and ASM [14] is an "either/or" strategy.

To deal with the above-mentioned limitations, a performance-driven multi-algorithm selection strategy (PMSS) is proposed in this paper. In PMSS, a learning-forgetting mechanism is developed to implement self-adaptive selection of DE variants. The learning operator updates the selection probability of each algorithm in each generation, while the forgetting operator aims to reduce the risk of incorrect selections. Moreover, no information exchange is needed among the algorithms in the pool. It should be pointed out that this work does not aim to improve the performance of any single existing DE variant; rather, it introduces a novel strategy (i.e., PMSS) to select a suitable DE variant for solving different types of optimization problems. In this study, adaptive DE with optional external archive (JADE) [15], DE with an ensemble of mutation strategies and control parameters (EPSDE) [16], and DE with self-adaptive strategy and control parameters (SSCPDE) [17] are chosen as the candidate algorithms in the pool. In JADE, the mutation control parameter F is produced by a Cauchy distribution C(μF, 0.1) and the crossover control parameter CR is generated by a normal distribution N(μCR, 0.1); moreover, an improved current-to-best/1 mutation strategy (called current-to-pbest/1) is used. In EPSDE, three mutation strategies with distinct performance characteristics are collected in a pool; CR varies from 0.1 to 0.9 in steps of 0.1, and F is taken in the range 0.4–0.9 in steps of 0.1. The selection of mutation strategy and control parameters in EPSDE is based on their previous success. In SSCPDE, each individual has its own control parameters (i.e., F and CR) and mutation strategy; F and CR are produced by a normal distribution whose location parameter is a weighted average value, and appropriate mutation strategies are self-adaptively selected from a strategy pool.
Additionally, the performance of all algorithms is evaluated in terms of the average ranking according to Friedman's test [18]. The performance of the proposed algorithm is compared with that of JADE, EPSDE, SSCPDE, and several other state-of-the-art DE variants on two sets of 30- and 50-dimensional test functions introduced in IEEE CEC2005 [19] and BBOB2012 [20]. PMSS is also compared with two multi-algorithm selection strategies, i.e., a random selection strategy (RSS, which randomly chooses one of the DE variants in each generation) and the population-based algorithm portfolio (PAP) [12]. Experimental results show that the proposed PMSS can self-adaptively select the best-performing DE variant and can take advantage of the strengths of the different DE variants.
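As described above, JADE draws F from a Cauchy distribution C(μF, 0.1) and CR from a normal distribution N(μCR, 0.1). A minimal sketch of that sampling step is given below; the truncation rules (regenerating non-positive F, clipping both to 1, and clipping CR at 0) follow the JADE paper [15], while the function name and the example values of μF and μCR are illustrative.

```python
import numpy as np

def sample_jade_parameters(mu_f, mu_cr, rng):
    """Sample one individual's F and CR in the JADE style.

    F ~ Cauchy(mu_f, 0.1): regenerated while non-positive, clipped to 1.
    CR ~ Normal(mu_cr, 0.1): clipped to [0, 1].
    """
    f = mu_f + 0.1 * rng.standard_cauchy()
    while f <= 0.0:                      # regenerate non-positive draws
        f = mu_f + 0.1 * rng.standard_cauchy()
    f = min(f, 1.0)
    cr = float(np.clip(rng.normal(mu_cr, 0.1), 0.0, 1.0))
    return f, cr

rng = np.random.default_rng(0)
f, cr = sample_jade_parameters(0.5, 0.5, rng)
```

In JADE itself, μF and μCR are in turn adapted from the successful F and CR values of previous generations; the sketch covers only the per-individual sampling.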

The remainder of this paper is organized as follows. Sections 2 and 3 briefly introduce the original DE and review previous studies on DE, respectively. In Section 4, the proposed PMSS is presented in detail. The results and parameter analyses are reported in Section 5. In Section 6, our proposed strategy is employed to solve the energy consumption problem of sea-rail intermodal transportation planning. Finally, conclusions are drawn in Section 7.

Section snippets

Differential evolution

Without loss of generality, we consider a single-objective minimization problem formulated as follows:

f(x*) = min_{x_i ∈ Ω} f(x_i),  x_i ∈ S = ∏_{j=1}^{D} [L_j, U_j]

where f denotes the objective function, x_i = (x_{i,1}, …, x_{i,D}) is a D-dimensional decision vector, x* is the global optimum solution of the optimization problem, Ω ⊆ R^D, L_j and U_j (j = 1, 2, …, D) are the lower and upper bounds of the jth decision variable of x_i, respectively, and S is the search space.

Mutation, crossover, and selection are the three main operators in DE.
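The three operators combine in the classic DE/rand/1/bin scheme as follows. This is a minimal sketch on a sphere function, not the paper's code; F = 0.5 and CR = 0.9 are common defaults, and all names are illustrative.

```python
import numpy as np

def de_rand_1_bin(f, bounds, np_size=20, F=0.5, CR=0.9, gens=200, seed=0):
    """Minimal DE/rand/1/bin: mutation, binomial crossover, greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    D = lo.size
    pop = rng.uniform(lo, hi, size=(np_size, D))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(np_size):
            # mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3, i distinct
            r1, r2, r3 = rng.choice([j for j in range(np_size) if j != i],
                                    size=3, replace=False)
            v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            # binomial crossover, with one dimension guaranteed from the mutant
            mask = rng.random(D) < CR
            mask[rng.integers(D)] = True
            u = np.where(mask, v, pop[i])
            # greedy selection: the trial replaces the target if no worse
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    return pop[fit.argmin()], fit.min()

sphere = lambda x: float(np.sum(x ** 2))
best_x, best_f = de_rand_1_bin(sphere, (np.full(5, -5.0), np.full(5, 5.0)))
```

The DE variants in the pool (JADE, EPSDE, SSCPDE) all build on this skeleton, differing in how F, CR, and the mutation strategy are chosen.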

Related work

Although DE is one of the most competitive meta-heuristic algorithms and has been applied to a variety of optimization problems in practice, its performance is heavily dependent on parameter settings and the selected strategies. To alleviate this problem, DE researchers have proposed various techniques [22,23] to enhance the performance of DE. In the following, a number of popular DE variants are reviewed according to the parameter control method, strategy improvement, and use of other methods.

Performance-driven multi-algorithm selection strategy

Generally, a DE variant may be effective at exploring the fitness landscape and finding a promising region in the early stages of evolution while performing poorly in the later exploitation phase, or vice versa. Therefore, if algorithm selection depends entirely on previous successful-search experience, it may increase the risk of incorrect selection and fail to make the best use of multiple algorithms in some cases. Fortunately, there are many approaches that can
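The exact update rule of PMSS is not shown in this snippet, but the idea of a learning-forgetting probability update can be illustrated as follows: a success-driven reward increases an algorithm's selection probability (learning), while a decay toward the uniform distribution (forgetting) keeps past success from dominating forever, so no variant is permanently locked out. The rate constants below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def update_selection_probs(probs, winner, learn_rate=0.2, forget_rate=0.05):
    """Illustrative learning-forgetting update (NOT the paper's exact rule).

    probs       : current selection probabilities over the algorithm pool
    winner      : index of the best-performing algorithm this generation
    learn_rate  : fraction of probability mass shifted toward the winner
    forget_rate : strength of the pull back toward the uniform distribution
    """
    probs = np.asarray(probs, dtype=float)
    k = probs.size
    # learning: shift mass toward the winning algorithm
    probs *= (1.0 - learn_rate)
    probs[winner] += learn_rate
    # forgetting: blend with the uniform distribution 1/k
    probs = (1.0 - forget_rate) * probs + forget_rate / k
    return probs / probs.sum()

p = np.array([1 / 3, 1 / 3, 1 / 3])
for _ in range(10):          # suppose algorithm 0 keeps winning
    p = update_selection_probs(p, winner=0)
```

Because of the forgetting term, every probability stays bounded away from zero, so a temporarily weak variant can still be re-selected if the search phase changes.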

Experimental results and discussions

In all experimental studies of this work, the algorithm pool contains three DE variants, i.e., JADE, EPSDE, and SSCPDE. The proposed algorithm, named PMSS, is compared with six single DE variants, including jDE [30], SaDE [28], JADE [15], CoDE [37], SSCPDE [17], and EPSDE [16], on two suites of test functions, i.e., CEC2005 [19] and BBOB2012 [20]. Two multi-algorithm selection strategies (i.e., RSS and PAP) are also adopted in all experiments. The CEC2005 benchmark suite contains five unimodal
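The Friedman average-ranking comparison mentioned earlier can be computed by ranking the algorithms on each problem and averaging over problems (lower is better). A sketch with numpy is shown below; the error values are made up for illustration, and the double-argsort ranking assumes no ties.

```python
import numpy as np

# rows = problems, columns = algorithms; entries are final errors (made up)
errors = np.array([
    [1e-8, 3e-5, 2e-6],
    [4e-2, 1e-3, 9e-3],
    [2e-7, 5e-7, 1e-6],
    [6e-4, 2e-4, 8e-4],
])

# rank within each problem (1 = best); double argsort assumes no ties
ranks = errors.argsort(axis=1).argsort(axis=1) + 1

# average rank of each algorithm across all problems
avg_rank = ranks.mean(axis=0)
```

On each problem the ranks sum to 1 + 2 + 3 = 6, so the average ranks always sum to 6 here; the algorithm with the smallest average rank is reported as the overall best.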

Energy consumption optimization of sea-rail intermodal transportation

In recent years, cost and environmental issues in maritime transportation have attracted increasing attention due to soaring fuel prices, depressed market conditions, and serious exhaust emissions [70]. The speed of ships is a crucial variable for both energy consumption (i.e., cost saving) and emissions (i.e., environmental protection) [71,72,73], as the emissions from maritime transportation are strongly correlated with fuel consumption. These emitted gases not only damage
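The sensitivity of fuel use to speed can be illustrated with a commonly used approximation in the maritime literature, in which fuel burn per unit time grows roughly with the cube of speed; this is not necessarily the model used in this paper, and the coefficient below is made up.

```python
def fuel_per_voyage(distance_nm, speed_knots, k=0.002):
    """Illustrative cubic fuel model (NOT this paper's model).

    Daily fuel burn is approximated as k * v^3 (tons/day), so voyage
    fuel = daily burn * sailing days. k is a made-up coefficient.
    """
    days = distance_nm / (speed_knots * 24.0)
    return k * speed_knots ** 3 * days

slow = fuel_per_voyage(5000, 15)   # slow steaming
fast = fuel_per_voyage(5000, 25)   # full speed
```

Because voyage time scales as 1/v, voyage fuel under this model grows as v², which is why slow steaming saves fuel and emissions at the cost of longer transit times, the core trade-off in the speed-optimization references [71,72,73].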

Conclusions and future work

In this paper, a performance-driven multi-algorithm selection strategy (PMSS) is introduced to automatically select a suitable DE variant from a pool of DE algorithms when dealing with a specific optimization problem. The main idea is that a well-performing DE variant acquires more computational resources through an automated approach during the entire evolutionary process. PMSS is easy to implement and can be embedded in most existing meta-heuristic algorithms. The simulation results based

Acknowledgement

This work was partially supported by the National Key Research and Development Program of China (No. 2016YFC0800200), the National Natural Science Foundation of China (No. 61603244), and the Shanghai Pujiang Program (No. 16PJ1403800).

References (76)

  • A. Piotrowski, Adaptive memetic differential evolution with global and local neighborhood-based mutation operators, Inf. Sci. (2013)
  • Y. Wang et al., Utilizing cumulative population distribution information in differential evolution, Appl. Soft Comput. (2016)
  • Y. Zhou et al., Differential evolution with guiding archive for global numerical optimization, Appl. Soft Comput. (2016)
  • G. Li et al., A novel hybrid differential evolution algorithm with modified CoDE and JADE, Appl. Soft Comput. (2016)
  • H. Psaraftis et al., Speed models for energy-efficient maritime transportation: a taxonomy and survey, Transport. Res. C Emerg. Technol. (2013)
  • M. Doudnikoff et al., Effect of a speed reduction of containerships in response to higher energy costs in sulphur emission control areas, Transport. Res. Transport Environ. (2014)
  • J. Corbett et al., The effectiveness and costs of speed reductions on emissions from international shipping, Transport. Res. Transport Environ. (2009)
  • K. Cullinane et al., Emission control areas and their impact on maritime transport, Transport. Res. Transport Environ. (2014)
  • K. Fagerholt et al., Maritime routing and speed optimization with emission control areas, Transport. Res. C Emerg. Technol. (2015)
  • S. Wang et al., Sailing speed optimization for container ships in a liner shipping network, Transport. Res. E Logist. Transport. Rev. (2012)
  • T. Bäck, Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms (1996)
  • R. Storn et al., Differential Evolution - a Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces (1995)
  • J. Kennedy et al., Particle swarm optimization
  • J. Holland, Adaptation in Natural and Artificial Systems: an Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence (1975)
  • P. Larrañaga et al., Estimation of Distribution Algorithms: a New Tool for Evolutionary Computation (2002)
  • M. Dorigo, Optimization, Learning and Natural Algorithms (1992)
  • N. Hansen et al., Completely derandomized self-adaptation in evolution strategies, Evol. Comput. (2001)
  • Y. Li et al., Differential evolution with an evolution path: a DEEP evolutionary algorithm, IEEE Trans. Cybern. (2015)
  • R. Gämperle et al., A parameter study for differential evolution, Adv. Intel. Syst., Fuzzy Syst., Evol. Comput. (2002)
  • D. Wolpert et al., No free lunch theorems for optimization, IEEE Trans. Evol. Comput. (1997)
  • J. Vrugt et al., Self-adaptive multimethod search for global optimization in real-parameter spaces, IEEE Trans. Evol. Comput. (2009)
  • F. Peng et al., Population-based algorithm portfolios for numerical optimization, IEEE Trans. Evol. Comput. (2010)
  • J. Zhang et al., JADE: adaptive differential evolution with optional external archive, IEEE Trans. Evol. Comput. (2009)
  • Q. Fan et al., Differential evolution algorithm with self-adaptive strategy and control parameters for P-xylene oxidation process optimization, Soft Comput. (2015)
  • M. Friedman, The use of ranks to avoid the assumption of normality implicit in the analysis of variance, J. Am. Stat. Assoc. (1937)
  • P. Suganthan et al., Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-parameter Optimization (2005)
  • N. Hansen et al., Real-parameter Black-box Optimization Benchmarking 2012: Experimental Setup (2012)
  • R. Storn et al., Differential Evolution - a Practical Approach to Global Optimization (2005)