Fibonacci indicator algorithm: A novel tool for complex optimization problems

https://doi.org/10.1016/j.engappai.2018.04.012

Abstract

In this paper, a new meta-heuristic algorithm is introduced. The optimization algorithm is inspired by a tool that is very popular among technical traders in the stock market, the Fibonacci Indicator. The Fibonacci Indicator is used to predict possible local maximum and minimum prices, as well as periods in which the price of a stock will experience a significant amount of movement. The proposed Fibonacci Indicator Algorithm is validated on several benchmark functions of up to 100 dimensions and compared with algorithms such as DE extensions, PSO extensions, ABC, ABC-PS, CS, MCS and GSA in terms of convergence and the ability to find the global optimum. Finally, two engineering design problems are used to demonstrate the performance of the algorithm. Application of the proposed Fibonacci Indicator Algorithm to a wide set of benchmark functions confirms its capability to deal with difficult optimization problems.

Introduction

In the last few decades, meta-heuristic algorithms have been used increasingly to approximate the optimal solutions of nonlinear functions. A heuristic algorithm is a method for finding the solution to an optimization problem by trial and error. However, such algorithms may not find the global best solution and can become trapped in local optima. Meta-heuristic algorithms, on the other hand, seek the optimal solution through higher-level strategies that combine trial and error, exploration, and exploitation. Particle Swarm Optimization (PSO) (Eberhart and Kennedy, 1995), Evolutionary Algorithms (EA) including the Genetic Algorithm (GA) (Holland, 1975), Ant Colony Optimization (ACO) (Bilchev and Parmee, 1995; Dorigo and Blum, 2005), and the Bees Algorithm (BA) (Pham et al., 2006) are among the most popular meta-heuristic algorithms. Evolutionary algorithms and swarm intelligence-based algorithms are the two main categories of population-based optimization (Karaboga and Akay, 2009). Genetic algorithms, Differential Evolution (DE) (Storn and Price, 1995; Meng and Pan, 2016; Meng et al., 2018) and Evolution Strategies (ES) (Rechenberg, 1965; Schwefel, 1965) have been the most popular techniques in evolutionary computation. Particle swarm optimization and the Bees Algorithm are the most popular examples of swarm intelligence optimization. Two advantages shared by these categories, i.e. evolutionary algorithms and swarm intelligence-based algorithms, are listed below:

  • 1. They are particularly useful in multi-modal and multi-objective optimization problems;

  • 2. They can be hybridized with one another.

A number of optimization algorithms can be combined with one another to produce hybrid algorithms that benefit from the synergy of the constituent algorithms' advantages while eliminating their disadvantages.

Global optimization can be applied to various branches of science, economics and engineering (Bomze et al., 1997; Gergel, 1997; Horst and Tuy, 1996; Li et al., 2015; Rizk-Allah et al., 2016; Rizk-Allah et al., 2018). Generally, methods for solving nonlinear optimization problems can be classified into deterministic and stochastic methods (Li et al., 2015; Arora et al., 1995; Pardalos et al., 2000; Younis and Dong, 2010). Deterministic methods solve optimization problems by constructing a deterministic sequence that converges to the global optimal solution; such methods require a rigorous mathematical specification of the problem and are sensitive to the initial conditions. In contrast, stochastic methods, including heuristic and meta-heuristic methods, generate new points randomly (Younis and Dong, 2010). The efficiency of an optimization algorithm is usually determined by its ability to find the global best solution at minimum cost, where the cost usually corresponds to the number of function evaluations. Exploration and exploitation are the two main strategies for finding the global best solution. Poor exploration and very fast convergence increase the chance of becoming trapped in local minima, while very slow convergence with a growing number of function evaluations is not economical. A balance between exploration and exploitation is therefore crucial for improving the efficiency of optimization algorithms (Li et al., 2015). Over the past decade, meta-heuristic algorithms such as GA (Price et al., 2005; Yang et al., 2007; Chelouah and Siarry, 2000), ACO, PSO (Jiang et al., 2007) and the artificial bee colony (ABC) have achieved considerable success in optimization (Ghanbari and Rhati, 2017). Previous research shows that ABC and GA offer better exploration but slower convergence, whereas ACO and PSO converge faster with a higher risk of becoming trapped in local optima (Alshamlan et al., 2015; Premalatha and Natarajan, 2009; Fidanova et al., 2014; Meng and Pan, 2016). A nonlinear optimization problem can be formulated as a D-dimensional problem of the following type (Gergel, 1997; Nguyen et al., 2014): $$\min_{x}\; f(x) \quad \text{s.t.} \quad l \le x \le u \tag{1}$$

The objective function is denoted by $f(x)$ and the D-dimensional vector of variables is $x=(x_1,x_2,\ldots,x_D)$; the lower and upper limits of the variables are given by $l=(l_1,l_2,\ldots,l_D)$ and $u=(u_1,u_2,\ldots,u_D)$.
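To make this formulation concrete, the minimal sketch below sets up such a bound-constrained D-dimensional problem in Python; the sphere objective and the numeric bounds are placeholders chosen for illustration, not values taken from the paper.

```python
import numpy as np

# Placeholder objective; the paper's benchmark set contains functions of this kind.
def sphere(x):
    return np.sum(x ** 2)

D = 30                              # problem dimension
l = np.full(D, -100.0)              # lower bounds l = (l_1, ..., l_D)
u = np.full(D, 100.0)               # upper bounds u = (u_1, ..., u_D)

# A candidate solution drawn uniformly inside the box constraints l <= x <= u.
x = l + np.random.rand(D) * (u - l)
print(sphere(x))
```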

Section snippets

Fibonacci ratios

Leonardo Fibonacci was an Italian mathematician who introduced the following sequence in the 12th–13th century. In this sequence, each number is generated by summing the two previous numbers: {0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ...}. As the sequence progresses, each number is approximately 61.8% greater than the preceding number. Dividing a number in the sequence by the number two or three places to its right yields approximately 38.2% and 23.6%, respectively. Eqs. (2), (3), (4), (5), (6), (7) are used to generate the Fibonacci ratios.
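These limiting ratios can be verified numerically; the short sketch below is only an illustrative check of the stated percentages, not an implementation of the paper's Eqs. (2)–(7).

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers: 0, 1, 1, 2, 3, 5, 8, and so on."""
    seq = [0, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

seq = fibonacci(20)
k = 15  # any sufficiently large index gives ratios close to the limits
print(seq[k + 1] / seq[k])  # ~1.618: each term is ~61.8% greater than the previous one
print(seq[k] / seq[k + 2])  # ~0.382 (38.2%)
print(seq[k] / seq[k + 3])  # ~0.236 (23.6%)
```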

Fibonacci indicator algorithm

Here, we present a novel evolutionary optimization algorithm based on a combination of two popular tools used by technical traders in the stock market, the Fibonacci retracement and the Fibonacci time zone. Assume the Fibonacci ratios lie on an arbitrary axis as shown in Fig. 1; adding 50% to the Fibonacci ratios arranges the points more symmetrically around 100%. In this article, the act of generating new points proportional to the Fibonacci series + 50% between the arbitrary point (x) and the independent variable of the
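The idea of placing candidates at "Fibonacci ratios + 50%" along the segment between a current point and a reference point can be sketched as follows; the level set, the function name and the choice of the best-so-far point as the reference are assumptions made for illustration only, not the exact FIA update rule.

```python
import numpy as np

# Fibonacci retracement ratios shifted by 50%, per the description above
# (the exact set of levels used by FIA is an assumption here).
LEVELS = np.array([0.236, 0.382, 0.618]) + 0.5   # -> 0.736, 0.882, 1.118

def fibonacci_candidates(x, ref):
    """Place candidate points at the shifted Fibonacci levels along the
    direction from x towards ref (a sketch of the idea, not the exact FIA rule)."""
    x, ref = np.asarray(x, dtype=float), np.asarray(ref, dtype=float)
    return [x + level * (ref - x) for level in LEVELS]

# Example: candidates between a current point and a hypothetical best-so-far point.
for cand in fibonacci_candidates([2.0, -1.0], [0.0, 0.0]):
    print(cand)
```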

FIA performance analysis

In this section, the performance of FIA is investigated using the Rastrigin function on the interval [−100, 100], which is a challenging multimodal function. The experiments were carried out to analyze the influence of the population size and of the parameters P and C for the 2- and 30-dimensional cases.
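For reference, the sketch below uses the standard form of the Rastrigin function, f(x) = 10D + sum_i (x_i^2 - 10 cos(2*pi*x_i)), whose global minimum is 0 at the origin; the interval [−100, 100] is taken from the text above, while the standard definition of the function itself is assumed.

```python
import numpy as np

def rastrigin(x):
    """Standard Rastrigin function: 10*D + sum(x_i^2 - 10*cos(2*pi*x_i)).
    Highly multimodal, with global minimum f(0) = 0."""
    x = np.asarray(x, dtype=float)
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

# 30-dimensional case on the interval [-100, 100] used in this analysis.
x = np.random.uniform(-100.0, 100.0, size=30)
print(rastrigin(x))
print(rastrigin(np.zeros(30)))  # 0.0 at the global optimum
```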

Experimental settings

We compare the efficiency of FIA with several optimization algorithms, including PSO variants, DE variants, PSO, ABC, ABC-PS, TLBO, HPA, HS, the firefly algorithm (FA) (Yang, 2010), GSA and the grey wolf optimizer (GWO) (Mirjalili et al., 2014), by carrying out experiments on well-known benchmark functions. All results for the above algorithms were taken directly from Meng et al. (2016), Gao et al. (2015), Li et al. (2015) and Rakhshani and Rahati (2017). However, two engineering applications of FIA are

Conclusions

In this paper, the Fibonacci Indicator Algorithm (FIA) is proposed. Predicting the times of maximum and minimum stock prices was the basic motivation for developing this new optimization algorithm. The introduced algorithm was tested on two engineering design problems as well as several benchmark cost functions with up to 100 dimensions. Comparison with algorithms such as DE extensions, PSO extensions, ABC, ABC-PS, CS, MCS, GSA and FA illustrated its efficiency and its ability to reach the global optimum.

Acknowledgments

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

References (36)

  • Pardalos, P.M., et al. Recent developments and trends in global optimization. J. Comput. Appl. Math. (2000)

  • Pham, D.T., et al. The bees algorithm - A novel tool for complex optimisation problems.

  • Rakhshani, H., et al. Snap-drift cuckoo search: A novel cuckoo search optimization algorithm. Appl. Soft Comput. (2017)

  • Rizk-Allah, R.M., et al. A novel parallel hurricane optimization algorithm for secure emission/economic load dispatch solution. Appl. Soft Comput. (2018)

  • Arora, J.S., et al. Global optimization methods for engineering applications: A review. Struct. Optim. (1995)

  • Bilchev, G., et al. The ant colony metaphor for searching continuous design spaces.

  • Bomze, I., et al. Developments in Global Optimization, Vol. 18. (1997)

  • Chelouah, R., et al. A continuous genetic algorithm designed for the global optimization of multimodal functions. J. Heuristics (2000)