A performance-driven multi-algorithm selection strategy for energy consumption optimization of sea-rail intermodal transportation
Introduction
Over the past decades, a large number of meta-heuristic optimization algorithms, inspired by evolutionary processes and swarm behaviors in nature [1], have been introduced and successfully applied to a wide range of industrial optimization problems. Among the family of meta-heuristic algorithms, differential evolution (DE) [2], particle swarm optimization (PSO) [3], the genetic algorithm (GA) [4], the estimation of distribution algorithm (EDA) [5], ant colony optimization (ACO) [6], and the covariance matrix adaptation evolution strategy (CMA-ES) [7] are among the most popular. According to [8], meta-heuristic optimization algorithms can be broadly categorized into distributed and centralized models.
DE is one of the most popular paradigms of meta-heuristic algorithms because it is a simple yet efficient search technique. However, the performance of DE depends heavily on its control parameters (i.e., the mutation control parameter F, the crossover control parameter CR, and the population size NP) and strategies (i.e., mutation and crossover), especially when the optimization problem is highly complex [9]. To enhance the search performance of DE, researchers have proposed numerous DE variants. However, most existing studies have focused on ensembles of production operators and/or the tuning of control parameters. Although a number of advanced DE variants have been developed to solve various practical applications and benchmark problems, no single DE variant has been shown to perform consistently well across different types of optimization problems, even when multi-operator search strategies or combinations of multiple parameters are used. Generally speaking, this is in accordance with the no free lunch (NFL) theorem [10]. To alleviate the NFL problem, several self-adaptive multi-algorithm selection mechanisms have been developed. For example, Vrugt et al. [11] proposed a genetically adaptive multi-algorithm method for single-objective optimization (AMALGAM-SO), which automatically adjusts the number of offspring of each individual algorithm based on its current performance. Peng et al. [12] introduced a population-based algorithm portfolio (PAP), wherein part of the given computational budget is used to evaluate the performance of each constituent algorithm and a migration scheme is employed to encourage interaction among the individual algorithms. Yuen et al. [13] proposed a multiple-evolutionary-algorithm approach, in which each individual algorithm runs independently with no information exchange, and the best-performing algorithm, recommended by a novel online performance prediction metric, is used to generate new individuals. Recently, Fan et al. [14] introduced an auto-selection mechanism (ASM), in which a learning strategy and an additional selection probability are used to update the selection probability of each individual algorithm and to alleviate the greedy selection issue. The main goal of these methods is to automatically choose an appropriate algorithm for a given problem. Indeed, self-adaptive multi-algorithm selection is an effective way to address the shortcomings of any single algorithm. However, the above approaches have the following limitations when implemented in real applications. Firstly, information exchange is usually needed among the candidate algorithms [11,12]. Unfortunately, the results reported in Ref. [13] and our experimental results imply that such information exchange may mislead the selection, since each algorithm may have its own particular search behavior. Secondly, a greedy selection strategy [11-13], which increases the risk of wrong selections, is used to choose an algorithm. Lastly, the selection is based on predicted performance [13], making it less reliable in practical applications, and ASM is an "either/or" strategy [14].
To address the above limitations, a performance-driven multi-algorithm selection strategy (PMSS) is proposed in this paper. In PMSS, a learning-forgetting mechanism is developed to implement self-adaptive selection of DE variants. The learning operator updates the selection probability of each algorithm in every generation, while the forgetting operator reduces the risk of incorrect selections. Moreover, no information exchange is needed among the algorithms in the pool. It should be pointed out that this work does not aim to improve the performance of any single existing DE variant; rather, it introduces a novel strategy (i.e., PMSS) for selecting a suitable DE variant when solving different types of optimization problems. In this study, adaptive DE with optional external archive (JADE) [15], DE with an ensemble of mutation strategies and control parameters (EPSDE) [16], and DE with self-adaptive strategy and control parameters (SSCPDE) [17] are chosen as the candidate algorithms in the pool. In JADE, the mutation control parameter F is produced by a Cauchy distribution C(μF, 0.1), while the crossover control parameter CR is generated by a normal distribution N(μCR, 0.1). Moreover, an improved current-to-best/1 mutation strategy (called current-to-pbest/1) is used in JADE. In EPSDE, three mutation strategies with distinct performance characteristics are placed in a pool; the CR pool ranges from 0.1 to 0.9 in steps of 0.1, and the F pool ranges from 0.4 to 0.9 in steps of 0.1. The selection of mutation strategy and control parameters in EPSDE is based on their previous success. In SSCPDE, each individual has its own control parameters (i.e., F and CR) and mutation strategy. The control parameters F and CR are produced by a normal distribution in which a weighted average value is used as the location parameter, and appropriate mutation strategies are self-adaptively selected from a strategy pool.
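The JADE parameter rules described above (F drawn from a Cauchy distribution C(μF, 0.1), CR drawn from a normal distribution N(μCR, 0.1)) can be sketched as follows. The truncation and re-sampling rules follow the usual JADE convention; the function name is illustrative, not from the paper:

```python
import math
import random

def sample_jade_parameters(mu_f, mu_cr):
    """Sample per-individual control parameters in the JADE style.

    F ~ Cauchy(mu_f, 0.1): non-positive samples are re-drawn and values
    above 1 are truncated to 1.  CR ~ N(mu_cr, 0.1), clipped to [0, 1].
    """
    while True:
        # Cauchy sample via the inverse-CDF transform of a uniform draw
        f = mu_f + 0.1 * math.tan(math.pi * (random.random() - 0.5))
        if f > 0:
            break
    f = min(f, 1.0)
    cr = min(max(random.gauss(mu_cr, 0.1), 0.0), 1.0)
    return f, cr
```

The heavy-tailed Cauchy distribution occasionally produces large F values, which helps the population escape premature convergence, while the normal distribution keeps CR concentrated around its learned mean.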
Additionally, the performance of all algorithms is evaluated in terms of average ranking according to the Friedman test [18]. The performance of the proposed algorithm is compared with that of JADE, EPSDE, SSCPDE, and several other state-of-the-art DE variants on two sets of 30- and 50-dimensional test functions introduced in IEEE CEC2005 [19] and BBOB2012 [20]. PMSS is also compared with two multi-algorithm selection strategies, i.e., a random selection strategy (which randomly chooses among the DE variants in each generation) and the population-based algorithm portfolio (PAP) [12]. Experimental results show that the proposed PMSS can self-adaptively select the best-performing DE variant among all candidates and can exploit the strengths of different DE variants.
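The average ranking underlying the Friedman test can be computed as below. This is a generic sketch (the function and variable names are ours), assuming lower error values are better and that ties receive the mean of the tied rank positions:

```python
def average_ranks(results):
    """Average rankings of k algorithms over a set of problems.

    results[p][a] is the error of algorithm a on problem p (lower is
    better).  On each problem the algorithms are ranked 1..k, with tied
    values receiving the mean of the tied positions; the per-problem
    ranks are then averaged over all problems.
    """
    n_alg = len(results[0])
    totals = [0.0] * n_alg
    for row in results:
        order = sorted(range(n_alg), key=lambda a: row[a])
        ranks = [0.0] * n_alg
        i = 0
        while i < n_alg:
            j = i
            # extend j over a run of tied values
            while j + 1 < n_alg and row[order[j + 1]] == row[order[i]]:
                j += 1
            mean_rank = (i + j) / 2 + 1  # mean of tied 1-based positions
            for t in range(i, j + 1):
                ranks[order[t]] = mean_rank
            i = j + 1
        for a in range(n_alg):
            totals[a] += ranks[a]
    return [t / len(results) for t in totals]
```

For example, with two problems and three algorithms, `average_ranks([[0.1, 0.3, 0.2], [0.5, 0.4, 0.4]])` ranks the algorithms 1/3/2 on the first problem and 3/1.5/1.5 on the second, giving average ranks [2.0, 2.25, 1.75].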
The remainder of this paper is organized as follows. Sections 2 and 3 briefly introduce the original DE and review previous studies on DE, respectively. In Section 4, the proposed PMSS is presented in detail. The results and parameter analyses are reported in Section 5. In Section 6, the proposed strategy is employed to solve the energy consumption problem in sea-rail intermodal transportation planning. Finally, conclusions are drawn in Section 7.
Section snippets
Differential evolution
Without loss of generality, we consider a single-objective minimization problem formulated as follows:

minimize f(x), x = (x1, x2, ..., xD) ∈ S

where f denotes the objective function, x is a D-dimensional decision vector, and x∗ is the global optimum solution of the optimization problem. Lj and Uj (j = 1, 2, ..., D) are the lower and upper bounds of the jth decision variable, respectively, and S = [L1, U1] × ... × [LD, UD] is the search space.
Mutation, crossover, and selection are the three main operators in DE.
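As a minimal illustration, one generation of the classic DE/rand/1/bin scheme can be sketched as follows. This is a plain-Python sketch with illustrative names, not the implementation used in this paper:

```python
import random

def de_step(pop, fitness, f_obj, F=0.5, CR=0.9, lower=-5.0, upper=5.0):
    """One generation of classic DE (DE/rand/1/bin).

    For each target vector: mutation builds a donor from three distinct
    random individuals, binomial crossover mixes donor and target, and
    greedy selection keeps the better of trial and target.
    """
    NP, D = len(pop), len(pop[0])
    new_pop, new_fit = [], []
    for i in range(NP):
        r1, r2, r3 = random.sample([j for j in range(NP) if j != i], 3)
        # mutation: v = x_r1 + F * (x_r2 - x_r3)
        donor = [pop[r1][k] + F * (pop[r2][k] - pop[r3][k]) for k in range(D)]
        donor = [min(max(v, lower), upper) for v in donor]  # bound repair
        j_rand = random.randrange(D)  # guarantees one donor component
        # binomial crossover
        trial = [donor[k] if (random.random() < CR or k == j_rand) else pop[i][k]
                 for k in range(D)]
        ft = f_obj(trial)
        # greedy one-to-one selection
        if ft <= fitness[i]:
            new_pop.append(trial); new_fit.append(ft)
        else:
            new_pop.append(pop[i]); new_fit.append(fitness[i])
    return new_pop, new_fit
```

Because selection is greedy and one-to-one, the best objective value in the population can never get worse from one generation to the next.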
Related work
Although DE is one of the most competitive meta-heuristic algorithms and has been applied to a variety of practical optimization problems, its performance depends heavily on parameter settings and the selected strategies. To alleviate this problem, researchers have proposed various techniques [22,23] to enhance the performance of DE. In the following, a number of popular DE variants are reviewed according to parameter control method, strategy improvement, and the use of other methods.
Performance-driven multi-algorithm selection strategy
Generally, a DE variant may be effective at exploring the fitness landscape and finding promising regions in the early stages of evolution while performing poorly in the later exploitation phase, or vice versa. Therefore, if algorithm selection depends entirely on previous successful search experience, it may increase the risk of incorrect selection and fail to make the best use of multiple algorithms in some cases. Fortunately, there are many approaches that can
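As a purely hypothetical illustration of a learning-forgetting update (the actual PMSS rules are given in Section 4; the update form and the values of α and β below are our own illustrative choices, not the paper's):

```python
def update_selection_probs(probs, rewards, alpha=0.3, beta=0.1):
    """Hypothetical learning-forgetting update for algorithm selection.

    probs: current selection probability of each algorithm in the pool.
    rewards: non-negative performance scores for the current generation
    (higher is better).  Learning shifts probability toward the current
    winners; forgetting decays accumulated evidence toward the uniform
    distribution so an early wrong choice is not locked in forever.
    """
    k = len(probs)
    total = sum(rewards) or 1.0
    # learning: blend old probability with the normalized reward share
    learned = [(1 - alpha) * p + alpha * r / total
               for p, r in zip(probs, rewards)]
    # forgetting: pull every probability slightly back toward 1/k
    forgotten = [(1 - beta) * p + beta / k for p in learned]
    s = sum(forgotten)
    return [p / s for p in forgotten]
```

The forgetting term plays the role described in the text: it limits how confident the selector can become, so an algorithm that dominated early exploration can still be displaced if another variant performs better during exploitation.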
Experimental results and discussions
In all experimental studies in this work, the algorithm pool contains three DE variants, i.e., JADE, EPSDE, and SSCPDE. The proposed algorithm, named PMSS, is compared with six single DE variants, including jDE [30], SaDE [28], JADE [15], CoDE [37], SSCPDE [17], and EPSDE [16], on two suites of test functions, i.e., CEC2005 [19] and BBOB2012 [20]. Two selection strategies (i.e., RSS and PAP) are adopted in all experiments. The CEC2005 benchmark suite contains five unimodal
Energy consumption optimization of sea-rail intermodal transportation
In recent years, cost and environmental issues in maritime transportation have attracted increasing attention due to soaring fuel prices, depressed market conditions, and serious exhaust emissions [70]. Ship speed is a crucial variable for both energy consumption (i.e., cost saving) and emissions (i.e., environmental protection) [71-73], as the emissions from maritime transportation are strongly correlated with fuel consumption. These emitted gases not only damage
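A common approximation in the maritime literature models daily fuel consumption as roughly proportional to the cube of sailing speed. A sketch of the resulting per-leg fuel cost follows; all names and values are illustrative and not taken from this paper:

```python
def leg_fuel_consumption(distance_nm, speed_kn, design_speed_kn, design_fuel_tpd):
    """Fuel burned on one leg under the common cubic speed-fuel law.

    F(v) = F_design * (v / v_design)**3 tonnes per day: slowing down
    cuts fuel (and hence emissions) per mile, at the cost of a longer
    voyage time.  Parameter names are illustrative.
    """
    days = distance_nm / (speed_kn * 24.0)            # sailing time in days
    fuel_per_day = design_fuel_tpd * (speed_kn / design_speed_kn) ** 3
    return days * fuel_per_day                        # tonnes for the leg
```

Under this law, total fuel for a fixed-length leg scales with the square of speed, which is why speed reduction ("slow steaming") is such an effective lever and why speed appears as a key decision variable in the optimization model of this section.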
Conclusions and future work
In this paper, a performance-driven multi-algorithm selection strategy (PMSS) is introduced to automatically select a suitable DE variant from a pool of DE algorithms when dealing with a specific optimization problem. The main idea is that a well-performing DE variant acquires more computational resources through an automated approach during the entire evolutionary process. PMSS is easy to implement and can be embedded in most existing meta-heuristic algorithms. The simulation results based
Acknowledgement
This work was partially supported by the National Key Research and Development Program of China (No. 2016YFC0800200), the National Natural Science Foundation of China (No. 61603244), and the Shanghai Pujiang Program (No. 16PJ1403800).
References (76)
- Which algorithm should I choose: an evolutionary algorithm portfolio approach, Appl. Soft Comput. (2016)
- Auto-selection mechanism of differential evolution algorithm variants and its application, Eur. J. Oper. Res. (2018)
- Differential evolution algorithm with ensemble of parameters and mutation strategies, Appl. Soft Comput. (2011)
- An improved differential evolution algorithm with fitness-based adaptation of the control parameters, Inf. Sci. (2011)
- Differential evolution based on covariance matrix learning and bimodal distribution parameter setting, Appl. Soft Comput. (2014)
- Self-adaptive differential evolution algorithm with discrete mutation control parameters, Expert Syst. Appl. (2015)
- Enhancing the search ability of differential evolution through orthogonal crossover, Inf. Sci. (2012)
- Repairing the crossover rate in adaptive differential evolution, Appl. Soft Comput. (2014)
- Differential evolution with hybrid linkage crossover, Inf. Sci. (2015)
- Differential evolution with multi-population based ensemble of mutation strategies, Inf. Sci. (2016)
- Adaptive memetic differential evolution with global and local neighborhood-based mutation operators, Inf. Sci.
- Utilizing cumulative population distribution information in differential evolution, Appl. Soft Comput.
- Differential evolution with guiding archive for global numerical optimization, Appl. Soft Comput.
- A novel hybrid differential evolution algorithm with modified CoDE and JADE, Appl. Soft Comput.
- Speed models for energy-efficient maritime transportation: a taxonomy and survey, Transport. Res. C Emerg. Technol.
- Effect of a speed reduction of containerships in response to higher energy costs in sulphur emission control areas, Transport. Res. Transport Environ.
- The effectiveness and costs of speed reductions on emissions from international shipping, Transport. Res. Transport Environ.
- Emission control areas and their impact on maritime transport, Transport. Res. Transport Environ.
- Maritime routing and speed optimization with emission control areas, Transport. Res. C Emerg. Technol.
- Sailing speed optimization for container ships in a liner shipping network, Transport. Res. E Logist. Transport. Rev.
- Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms
- Differential Evolution: a Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces
- Particle swarm optimization
- Adaptation in Natural and Artificial Systems: an Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence
- Estimation of Distribution Algorithms: a New Tool for Evolutionary Computation
- Optimization, Learning and Natural Algorithms
- Completely derandomized self-adaptation in evolution strategies, Evol. Comput.
- Differential evolution with an evolution path: a DEEP evolutionary algorithm, IEEE Trans. Cybern.
- A parameter study for differential evolution, Adv. Intel. Syst., Fuzzy Syst., Evol. Comput.
- No free lunch theorems for optimization, IEEE Trans. Evol. Comput.
- Self-adaptive multimethod search for global optimization in real-parameter spaces, IEEE Trans. Evol. Comput.
- Population-based algorithm portfolios for numerical optimization, IEEE Trans. Evol. Comput.
- JADE: adaptive differential evolution with optional external archive, IEEE Trans. Evol. Comput.
- Differential evolution algorithm with self-adaptive strategy and control parameters for P-xylene oxidation process optimization, Soft Comput.
- The use of ranks to avoid the assumption of normality implicit in the analysis of variance, J. Am. Stat. Assoc.
- Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-parameter Optimization
- Real-parameter Black-box Optimization Benchmarking 2012: Experimental Setup
- Differential Evolution: a Practical Approach to Global Optimization