Abstract
This paper proposes a new meta-heuristic algorithm named tornado optimizer with Coriolis force (TOC), which is applied to solve global optimization and constrained engineering problems in continuous search spaces. The fundamental concepts and ideas behind the proposed TOC optimizer are drawn from nature, based on observation of the life cycle of tornadoes and of how thunderstorms and windstorms evolve into tornadoes under the Coriolis force. In the developed optimization method, the Coriolis force is applied to the windstorms that evolve directly to form tornadoes. The proposed TOC algorithm mathematically models and implements the behavioral steps of tornado formation from windstorms and thunderstorms, followed by the dissipation of tornadoes on the ground; these steps ultimately lead to feasible solutions when applied to optimization problems. The behavioral steps are represented mathematically, together with the Coriolis force, to strike a proper balance between exploration and exploitation during the optimization process and to allow search agents to explore and exploit every promising area of the search space. The performance of the proposed TOC optimizer was thoroughly examined on a basic benchmark set of 23 test functions and on a set of 29 well-known benchmark functions from the CEC-2017 test suite for a variety of dimensions. A comparative study of the computational and convergence results was carried out to clarify the efficacy and stability of the proposed TOC optimizer relative to other well-known optimizers. The TOC optimizer outperformed the comparative algorithms, by the mean ranks of Friedman’s test, by 20.75%, 27.248%, and 25.85% on the 10-, 30-, and 50-dimensional CEC-2017 test sets, respectively. The reliability and applicability of the TOC optimizer were further examined by solving real-world problems, including eight engineering design problems and one industrial process.
The proposed optimizer delivered satisfactory performance compared with other competing optimizers in terms of solution quality and global optimality, as confirmed by statistical test methods.
1 Introduction
Optimization techniques have been thoroughly investigated in recent years across a variety of real-world problems (Panagant et al. 2023; Kumar et al. 2024). Optimization is the process of determining the optimal combination of decision variables when solving optimization problems, which are common in everyday life and work (Rezk et al. 2024). The search for effective and efficient ways to address optimization problems is becoming more and more important (Tejani et al. 2016). Many optimization problems are nonlinear and non-convex in nature, with many decision variables and, in some cases, intricate objective functions subject to a variety of constraints. Furthermore, such problems may have several local optima and variable or abrupt peaks (Sowmya et al. 2024). Finding solutions to optimization problems is important in all areas of science and engineering (Nonut et al. 2022), with a constant desire for ever-more robust solutions. This means there is a need for capable algorithms that can cope with the complexity of contemporary engineering and scientific problems (Gundogdu et al. 2024).
As optimization problems become increasingly complicated and multifaceted, the demand for effective and precise optimization techniques is rising (Cao et al. 2020; Aye et al. 2023). Consequently, researchers have examined optimization techniques such as machine learning, dynamic programming, and linear programming over the past decade. Accordingly, there is a need for optimization algorithms that can greatly improve problem-solving efficacy, reduce the computational load, and preserve computational and financial resources (Tejani et al. 2017). A thorough review of existing optimization algorithms in the literature reveals a diverse range of techniques (Braik 2021; Zhu et al. 2024). These range from classical linear or non-linear mathematical methods (Vagaská and Miroslav 2021) to nature-inspired methods (Braik et al. 2021; Zhu et al. 2024), each with advantages and disadvantages. Mathematical methods refer to a large family of optimization techniques that use a well-defined mathematical model, together with a starting condition, to iteratively locate the optimal solution to an optimization problem. Newton’s method (Bertsekas 2022) and the Nelder–Mead algorithm (Shirgir et al. 2024) are two examples of such techniques.
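To make the contrast with the meta-heuristics discussed later concrete, a minimal sketch of one such mathematical method, the one-dimensional Newton update \(x_{k+1} = x_k - f'(x_k)/f''(x_k)\), is shown below. The function names and the example objective are illustrative assumptions, not taken from the cited works:

```python
def newton_minimize(grad, hess, x0, tol=1e-8, max_iter=50):
    """One-dimensional Newton's method for minimization:
    x_{k+1} = x_k - f'(x_k) / f''(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = grad(x) / hess(x)
        x -= step
        if abs(step) < tol:  # stop once the update is negligible
            break
    return x

# Minimize f(x) = (x - 3)^2 + 1: gradient is 2(x - 3), Hessian is 2.
x_star = newton_minimize(lambda x: 2 * (x - 3), lambda x: 2.0, x0=10.0)
```

Note how the method needs explicit gradient and Hessian information and a starting point, exactly the dependencies that limit such techniques on non-smooth or multimodal problems.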
Traditional methods have proven reasonably successful in solving large-scale optimization problems (Alavi et al. 2021). However, they depend inherently on gradient information and require a promising initial starting vector within the search space. They often demand an in-depth understanding of the problem and may not be the most effective choice for contemporary large-scale and multimodal optimization problems (Rezk et al. 2024). When applied to complex optimization problems with nonlinear search spaces or a large number of constraints or decision variables, traditional methods may find only locally optimal solutions; there is no guarantee that they will reach the global optimum, and they easily become trapped in local optima.
Nature-inspired methods can be defined as algorithmic frameworks that use heuristics and stochastic operators borrowed from nature, commonly referred to as meta-heuristic algorithms (Comert and Harun 2023). Meta-heuristic techniques offset the limitations of mathematical methods with the advantages of stochasticity and ease of implementation. Meta-heuristics have gained popularity in academia and are commonly used to solve complex engineering and scientific problems (Rezk et al. 2024). Nature-inspired meta-heuristics can be highly effective in solving many real-world optimization problems, yet they may fail to provide adequate solutions to others. This is partly due to a tendency, common to these approaches, to become trapped in local or sub-optimal solutions (Kumar et al. 2023; Ghasemi et al. 2024).
Meta-heuristic algorithms that draw inspiration from artificial, natural, and occasionally supernatural phenomena have become ubiquitous in the literature (Sharma and Raju 2024). Everything from simulated annealing to swarm intelligence, evolutionary theory, human behavior, musicians, and even the COVID-19 epidemic appears to be a potential source of inspiration for creating “novel” optimization methods. The history of meta-heuristics has been heavily influenced by natural processes (Gendreau et al. 2010), yet in the past two decades, too many self-described “novel” metaphor-based algorithms have been presented in the literature. In the vast majority of instances, it is regrettably unclear why the presented metaphors are employed and what new insights they offer to the meta-heuristics community.
In studies that propose so-called “novel” metaphor-based techniques, some of the most problematic elements are the following. First, they redefine previously recognized concepts in the optimization field by introducing new terminology through a metaphor. Second, they use the proposed metaphor to construct mathematical models that are trivial and only superficially based on the metaphor itself, which means that the models do not accurately reflect the metaphors; in addition, the proposed algorithms frequently do not correspond with the mathematical models generated by the metaphor. Third, they justify the use of a new metaphor with reasons like “it has never been used before” or “the mathematical models are different from those used in the past”, instead of defending it with a solid scientific foundation that describes the optimization process the metaphor represents and how it informed efficient design decisions in the proposed algorithm. Lastly, they offer skewed assessments and comparisons with alternative approaches, such as an experimental assessment based on a limited number of low-complexity problems and/or a comparison of the proposed algorithm with outdated methods whose performance is far from state-of-the-art (Camacho-Villalón et al. 2023).
According to the above discussion, any new optimization technique should enhance existing algorithms and provide unique advantages not found in algorithms already reported in the literature. In this way, a unique optimization technique provides an opportunity to share knowledge for addressing challenging real-world problems. New optimization algorithms frequently contribute fast-converging methods or engines that enhance the effectiveness of existing optimization methods. Thus, the optimization community gains from new methods, which remain suitable for experimentation with various search strategies on certain real-world problems (Abdollahzadeh et al. 2024). The current work is primarily motivated by these fundamental insights. The three basic approaches suggested in the literature for creating meta-heuristic algorithms involve proposing new optimization algorithms, merging preexisting algorithms, and creating hyper-heuristics. The development of novel optimization algorithms and their integration with existing ones are complementary rather than antagonistic. On the one hand, new optimization approaches can offer better solutions for difficult real-world problems and compensate for shortcomings of existing algorithms in certain scenarios or with specific difficulties. Many modern optimization algorithms utilize operators or strategies with novel search features, and these components demonstrate a variety of ways to improve the performance of existing optimization algorithms.
A hyper-heuristic is a technique for selecting or generating heuristics within a solid optimization paradigm (Burke et al. 2010). Hyper-heuristics provide a high-level strategy (HLS) by manipulating or controlling a set of low-level heuristics (LLHs). Since meta-heuristics can be used in a wide range of real-world optimization problems, they are considered universal techniques (Blocho 2020). Hyper-heuristics can boost a solution’s agility and performance by combining and altering several meta-heuristics. In this sense, meta-heuristics are an essential component of hyper-heuristics, and the development of new meta-heuristics increases the pool of optimization techniques from which hyper-heuristics might select. This can increase the effectiveness of hyper-heuristics by adding better and more efficient meta-heuristics. Cutting-edge ideas and concepts are also often introduced by new meta-heuristics; hyper-heuristics can benefit from these advancements and improve their flexibility in different problem domains by incorporating them into their decision-making process. The development of new hyper-heuristics can also be sparked by novel meta-heuristics, as researchers adapt concepts from novel techniques to develop more advanced hyper-heuristics.
For example, the reasonably promising cuckoo search (CS) algorithm (Yang and Deb 2014) presents a Lévy flight method with substantial exploration traits. This method has been widely adopted by various contemporary algorithms to boost their potential for escaping local optima. Hybrid algorithms are also produced by combining new algorithms with pre-existing ones; consequently, they can take advantage of each constituent algorithm to increase optimization efficiency. The ant colony optimization (ACO) algorithm mimics the behavior of foraging ants, making decisions about the path-planning problem based on how ants forage for food (Dorigo et al. 1996). While foraging, ants deposit pheromones on the ground; the amount of pheromone determines where the ants will eventually travel. As more ants follow and repeat this process, the pheromone accumulation deepens and attracts still more ants, until the ants effectively decide this is the best route to the food source. The ACO model is based on real-world foraging strategies used by ants. Thus, path-planning-related problems such as vehicle routing, shop scheduling, and combinatorial optimization may be effectively handled by ACO (Comert and Harun 2023). On the other hand, ACO struggles with some other problems, such as continuous optimization and high-dimensional optimization problems. Different meta-heuristics have different origins and exhibit different search behaviors; as a result, each approach is suitable for a limited set of problems, which may leave current optimization algorithms unable to tackle some newly emerging or extremely intricate real-world problems (Zhong et al. 2022).
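The Lévy flight idea mentioned above is commonly simulated with Mantegna's algorithm, which generates step lengths whose heavy-tailed distribution occasionally produces very long jumps. The sketch below is illustrative (the function name and the default stability index beta = 1.5 are assumptions, not taken from the CS paper itself):

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """Draw one Lévy-distributed step length via Mantegna's algorithm.
    Most steps are small, but occasional very long jumps help a search
    escape local optima."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma_u)  # numerator: Gaussian with tuned spread
    v = rng.gauss(0, 1)        # denominator: standard Gaussian
    return u / abs(v) ** (1 / beta)
```

A new candidate position is then typically formed as `x_new = x + step_scale * levy_step() * (x - best)`, mixing the heavy-tailed jump with the direction toward the best solution.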
The two most important aspects of meta-heuristic algorithms for solving optimization problems are exploration and exploitation (Daliri et al. 2024).
1.1 Exploration and exploitation
To estimate or find globally optimal solutions, meta-heuristic algorithms follow the same general path, regardless of their differences. An initial collection of random solutions is used, and these solutions must move and change quickly, easily, and somewhat arbitrarily, so that they spread globally within the search space. This phase is referred to as “exploration” of the search space, during which different parts of the search space are visited by solutions owing to sudden changes (Askarzadeh 2016). This stage’s primary goals are to identify the most promising regions of the search space and to escape local optima. Once the search space has been sufficiently explored, the solutions begin to change more gradually and move locally toward the most promising solutions, which may improve solution quality. Enhancing the best solutions obtained during exploration is the main goal of this step, called “exploitation” (Xue and Shen 2020). Although local optima may still be avoided during the exploitation phase, the coverage of the search region is smaller than during exploration; in this situation, solutions must avoid local optima lying near the global optimum. One can thus deduce that the exploration and exploitation phases pursue opposing goals.
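The shift from exploration to exploitation is often implemented by shrinking a step size or control parameter over the iterations. The following toy search is a sketch of our own construction, not any published algorithm; the linear decay schedule, bounds, and parameter values are illustrative assumptions:

```python
import random

def simple_search(f, lb, ub, n_agents=20, iters=200, seed=0):
    """Toy population search on a 1-D objective f over [lb, ub].
    The sampling radius shrinks linearly with the iteration count, so early
    iterations explore globally and later ones exploit around the best-so-far."""
    rng = random.Random(seed)
    best_x = rng.uniform(lb, ub)
    best_f = f(best_x)
    for t in range(iters):
        # large early (exploration), small late (exploitation)
        scale = (ub - lb) * (1 - t / iters)
        for _ in range(n_agents):
            x = min(ub, max(lb, best_x + rng.uniform(-scale, scale)))
            fx = f(x)
            if fx < best_f:  # greedy acceptance of improvements
                best_x, best_f = x, fx
    return best_x, best_f

x, fx = simple_search(lambda x: (x - 1.7) ** 2, lb=-10, ub=10)
```

The single decaying `scale` parameter is the whole exploration–exploitation mechanism here; real meta-heuristics use richer operators, but the underlying trade-off is the same.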
The most common way to evaluate the suitability of a new meta-heuristic, in terms of its exploration and exploitation actions and their balance, is to show how competitive it is when solving optimization problems compared with existing meta-heuristics and mathematical programming techniques. Although current meta-heuristics have proven their value in consistently identifying globally optimal solutions for numerous real-world problems, no existing meta-heuristic can successfully identify the global optimum for all types of problems (Youfa et al. 2024). According to the ‘no-free-lunch’ (NFL) theorem (Wolpert and Macready 1997), there is no general optimization algorithm that can find the best solutions for all kinds of problems. Put otherwise, if a meta-heuristic algorithm is fine-tuned to achieve high performance on a particular class of problems, its performance on other classes of problems will be correspondingly weaker. This is why the NFL theorem keeps this area of study open and encourages researchers to come up with new ways to improve accuracy and strengthen optimization, with the aim of tackling the complicated real-world problems that continually arise from high-tech advancements (Zhao et al. 2024). Accordingly, it is important for researchers to search for, devise, or propose new meta-heuristics that can deliver significant improvement over current optimization techniques in solving optimization problems.
1.2 Outlines and motives of the proposed study
Constructing new efficient meta-heuristic algorithms is essential, but doing so is far from easy. In essence, no existing meta-heuristic specifically models and mathematically implements the life cycle of tornadoes and the way thunderstorms and windstorms evolve to form tornadoes in nature. These observations are the main motivations behind this work. This paper therefore proposes and develops a new meta-heuristic named tornado optimizer with Coriolis force (TOC), inspired by the simulation of the life cycle of tornado formation and dissipation as it occurs in nature. Another motivation behind the development of this algorithm is the prospect of solving both unconstrained and constrained optimization problems that are not easy to solve using current optimization algorithms. A further expectation of the proposed optimizer is to find feasible optimal solutions, superior to those found by existing meta-heuristics, for widely known unconstrained optimization functions as well as nonlinear constrained engineering design and industrial problems. Other aims include finding the global minimum among several local minima, as in multimodal functions, and finding all the global minima of test functions that have several. Finally, exploration and exploitation are two essential aspects of the success of any meta-heuristic, and TOC aims to orchestrate them successfully to achieve a suitable balance between the two.
1.3 Contributions of the work
The main novelties and contributions of this work can be succinctly summarized by the following points:
1. A new optimizer referred to as TOC is presented for the first time to simulate the life cycle of formation and dissipation of tornadoes, and it is completely analyzed and mathematically expressed in detail.
2. The Coriolis force and cyclostrophic wind speed are two vital novel concepts proposed in TOC to improve its competitiveness. These concepts were mathematically modeled for better exploration as well as for greater exploitation.
3. The performance of the proposed optimizer was verified on 23 baseline benchmark functions and 29 well-known benchmark problems taken from the CEC-2017 test group with varied dimensions, and a comprehensive comparison was conducted with several excellent meta-heuristics to fully demonstrate the advantages of the proposed optimizer.
4. The relevance and reliability of the proposed optimizer were further investigated by applying it to 8 classical engineering problems and one complex industrial problem, and its outcomes were contrasted with those of several meta-heuristics.
The remaining sections of this paper are structured as follows: Sect. 2 reviews many meta-heuristics and their classes in the literature. Section 3 describes the inspiration concepts of the optimizer developed in this work. The mathematical formulations of the proposed optimizer are presented in full in Sect. 4. Section 6 presents the computational evaluation results, convergence behavior, and statistical results of the competing algorithms. Section 7 presents the efficiency and practicality of TOC in tackling 8 engineering test cases. The experimental results of applying TOC to an industrial problem are examined and displayed in Sect. 8. In Sect. 10, the main findings and future directions of this work are drawn up.
2 Literature review
Meta-heuristic algorithms, which incorporate heuristics derived from natural phenomena, biological processes, the natural life of creatures, human behavior, and even mathematics, are regarded as problem-independent algorithmic frameworks. Owing to their merits over mathematical approaches, such as stochasticity, simplicity of understanding, and black-box operation, meta-heuristics are a potent substitute. Meta-heuristics are widely used to tackle a broad range of challenging optimization problems and have garnered considerable interest in the literature recently. Meta-heuristics can broadly fall into one of the following classes: evolutionary-based algorithms (EAs), swarm-based algorithms (SAs), human-based algorithms (HAs), physics-based algorithms (PAs), mathematics-based algorithms (MAs), sport-based algorithms (SBAs), music-based algorithms (MBAs), and chemistry-based algorithms (CBAs). These classes are based on the processes that inspired the algorithms belonging to each of them (Zhao et al. 2024). These groups, together with certain well-known algorithms that belong to them, can be outlined as follows:
2.1 Evolutionary-based algorithms (EAs)
EAs are the earliest-developed meta-heuristic techniques, inspired by biological evolutionary phenomena such as natural selection, inheritance, and other processes arising from biological evolution (Back 1996). Built on Darwin’s theory of evolution in biology, genetic algorithms (GAs) are among the most popular EAs; indeed, the GA is regarded as one of the most popular and long-standing meta-heuristics available today. During the iterative phase, a set of individuals is chosen at random as the starting point in the search space and evolves through a variety of evolutionary operators, such as selection, mutation, and reproduction procedures. At the end of the iteration loops, the GA’s best performer up to that point is taken as the optimal solution. Differential evolution (DE) (Price 1996) is a classical EA that employs some of the same evolutionary operators as GAs. The main difference between DE and GA is that DE places more emphasis on the mutation operator, whereas GA places more emphasis on the crossover operator (Holland et al. 1992). A few additional well-known EAs are listed in Table 1.
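The mutation, crossover, and selection operators mentioned above can be sketched compactly for DE. The following is an illustrative, simplified rendering of one generation of the common DE/rand/1/bin scheme; the function name and default parameter values F = 0.5 and CR = 0.9 are assumptions:

```python
import random

def de_step(pop, f, F=0.5, CR=0.9, rng=None):
    """One generation of DE/rand/1/bin: differential mutation,
    binomial crossover, then greedy one-to-one selection."""
    rng = rng or random.Random()
    dim = len(pop[0])
    new_pop = []
    for i, target in enumerate(pop):
        # pick three distinct vectors other than the target
        a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
        # differential mutation: perturb a with the scaled difference of b and c
        mutant = [a[d] + F * (b[d] - c[d]) for d in range(dim)]
        # binomial crossover: mix mutant and target, forcing one mutant gene
        j_rand = rng.randrange(dim)
        trial = [mutant[d] if (rng.random() < CR or d == j_rand) else target[d]
                 for d in range(dim)]
        # greedy selection: keep the better of trial and target
        new_pop.append(trial if f(trial) <= f(target) else target)
    return new_pop
```

Because selection keeps the better of target and trial, the best fitness in the population never worsens from one generation to the next.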
2.2 Swarm-based algorithms
SAs are among the fastest-growing meta-heuristic algorithmic techniques, drawing inspiration from the collective behaviors of biological populations, such as microorganisms and animals, found in nature. The optimization domain is heavily influenced by certain classical SAs. Ant colony optimization (ACO) models the foraging strategy of an ant colony (Dorigo et al. 1996): ants leave behind substances called pheromones along their path while foraging, and other ants follow the pheromone trail to the food, sensing the strength of the deposited compounds. The particle swarm optimization (PSO) algorithm (Kennedy and Eberhart 1995) mimics the social interactions of fish schools or bird flocks. The artificial bee colony (ABC) algorithm mimics the cooperation and division of labor that individual bees use to find nectar in their surroundings (Karaboga and Basturk 2007). The whale optimization algorithm (WOA) (Mirjalili and Lewis 2016) mimics the behaviors of whales seeking prey, encircling prey, and bubble-net attacking. The capuchin search algorithm (CapSA) (Braik et al. 2021) models the collective hunting behaviors of capuchin monkeys in nature, whereas the chameleon swarm algorithm (CSA) (Braik 2021) resembles the hunting and foraging behaviors of chameleons in the wild. Another recent well-known SA is the white shark optimizer (WSO) (Braik et al. 2022), which mimics the foraging behavior of white sharks in the ocean. Table 2 provides a variety of widely used meta-heuristics in this domain.
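The social interaction modeled by PSO reduces to a short velocity-and-position update per particle. The sketch below uses the standard inertia-weight form; the function name and the parameter values (w = 0.7, c1 = c2 = 1.5) are illustrative assumptions:

```python
import random

def pso_update(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random):
    """Classic PSO update for one particle: inertia term, cognitive pull
    toward the particle's own best (pbest), and social pull toward the
    swarm's best (gbest)."""
    new_v = [w * v[d]
             + c1 * rng.random() * (pbest[d] - x[d])
             + c2 * rng.random() * (gbest[d] - x[d])
             for d in range(len(x))]
    new_x = [x[d] + new_v[d] for d in range(len(x))]
    return new_x, new_v
```

The cognitive term embodies each agent's memory, while the social term shares information across the swarm; their stochastic weights are what keep the search from collapsing prematurely onto one point.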
2.3 Human-based algorithms
HAs are newly developed meta-heuristics that have received growing attention in the literature. The two main sources of inspiration for HAs are social relationships between humans and non-physical human activities. The imperialist competitive algorithm (ICA) was derived from interpersonal human behaviors that mimic the processes of imperial competition and colonial assimilation (Esmaeil and Caro 2007). Teaching-learning-based optimization (TLBO) (Venkata Rao et al. 2011) is attributed to instructor guidance and student cooperation. This optimization method consists of two distinct phases: in the teaching phase, students learn from the teacher; in the learning phase, students learn by interacting with one another. Table 3 provides some notable examples of HAs.
2.4 Physics-based algorithms
PAs are a fundamental subset of meta-heuristic algorithms. Their physical models cover aspects of atomic physics, heat, electricity, and mechanics, along with a wide range of physical laws, processes, events, concepts, and motions. The law of gravitation serves as the motivation for a well-known PA called the gravitational search algorithm (GSA) (Rashedi et al. 2009). Within GSA, a group of search agents gravitationally draw near one another; a heavier search agent attracts other search agents more strongly. Another popular PA is atom search optimization (ASO) (Zhao et al. 2019), which simulates atomic motion using the forces between atoms. In this case, the Lennard–Jones potential and the bond-length potential provide the constraining forces that drive the interaction between the atoms. Additional typical PAs are included in Table 4.
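In GSA, the "heavier attracts more strongly" rule rests on mapping fitness values to masses so that better agents are heavier. A minimal sketch of that normalization for a minimization problem is shown below; the function name is an assumption, though the normalization mirrors the standard GSA formulation:

```python
def gsa_masses(fitness):
    """Map fitness values (minimization) to normalized masses:
    the best agent gets the largest mass, the worst gets zero,
    and all masses sum to one."""
    best, worst = min(fitness), max(fitness)
    if best == worst:  # degenerate population: split mass evenly
        return [1.0 / len(fitness)] * len(fitness)
    raw = [(worst - f) / (worst - best) for f in fitness]
    total = sum(raw)
    return [m / total for m in raw]
```

Each agent's acceleration is then proportional to the gravitational pull of the other agents' masses, so the population drifts toward its fittest members.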
2.5 Mathematics-based algorithms
MAs are a novel and essential branch of meta-heuristics that has achieved considerable advancement in the discipline of optimization. The basis of MAs consists of certain mathematical operations, rules, formulas, and theories. Two well-known instances of this category are the arithmetic optimization algorithm (AOA) (Abualigah et al. 2021) and the sine-cosine algorithm (SCA) (Mirjalili 2016). AOA builds on the distribution behavior of the four basic arithmetic operators: addition, subtraction, division, and multiplication. The SCA algorithm exploits the periodicity and fluctuation of the sine and cosine functions from mathematical notions. Although MAs are not yet as competitive as other meta-heuristic categories, this category appears to have potential. Some other algorithms in this category include the Lévy flight distribution (LFD) method (Houssein et al. 2020), which models a Lévy-flight random walk; the circle search algorithm (CSA) (Qais et al. 2022), which models the geometric properties of circles; and the golden sine algorithm (GSA) (Tanyildizi and Demir 2017), which models multiple forms of the sine function.
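The sine/cosine oscillation exploited by SCA appears directly in its position update, which moves a solution toward a destination point (typically the best solution found so far). The sketch below follows the published update form; the function name and the choice to pass `r1` explicitly are illustrative assumptions:

```python
import math
import random

def sca_update(x, dest, r1, rng=random):
    """SCA position update: each dimension oscillates toward (or past)
    the destination point.  r1 decays over the run to shift from
    exploration to exploitation; r2, r3, r4 are drawn fresh each call."""
    r2 = 2 * math.pi * rng.random()   # oscillation phase
    r3 = 2 * rng.random()             # random weight on the destination
    r4 = rng.random()                 # chooses sine vs. cosine branch
    if r4 < 0.5:
        return [xi + r1 * math.sin(r2) * abs(r3 * di - xi)
                for xi, di in zip(x, dest)]
    return [xi + r1 * math.cos(r2) * abs(r3 * di - xi)
            for xi, di in zip(x, dest)]
```

In the full algorithm `r1` is usually decreased linearly, e.g. `r1 = a - t * a / T`, so early iterations overshoot the destination (exploration) and later ones settle near it (exploitation).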
2.6 Sport-based algorithms
SBAs are inspired by physical activities involving humans and fitness programs. The classic SBA known as the league championship algorithm (LCA) (Kashan 2009) is driven by the rivalry among sports teams in a league. A synthetic league in LCA comprises many weeks of matches between various sports teams. Teams play weekly contests, with the outcomes determined by each team’s fitness rating and win/loss record. Between matches, teams adjust their lineup and style of play in anticipation of the next week’s contest. The championship follows the league schedule for a number of seasons or until a termination condition is met. Three other SBAs frequently used to handle complex optimization problems are world cup optimization (WCO), a recreation of FIFA’s world championships; tug of war optimization (TWO) (Kaveh and Zolghadr 2016), which simulates a tug of war; and soccer league competition (SLC) (Moosavian and Babak 2014), which models soccer-league competition.
2.7 Music-based algorithms
MBAs are an inventive category of meta-heuristic algorithms in which melody and music serve as the inspiration. Harmony search (HS) (Geem et al. 2001) is a well-known example of an MBA: it is comparable to musical improvisation, in which musicians modify the pitch of their instruments to reach the best harmony. Another well-known example of an MBA is the method of musical composition (MMC) (Mora-Gutiérrez et al. 2014), which simulates a dynamic music-composition system.
2.8 Chemistry-based algorithms
Chemical reaction principles, chiefly chemical reactions and thermodynamics, serve as the foundation for many CBAs. One illustration of CBAs is chemical reaction optimization (CRO) (Lam and Li 2012), which is predicated on the idea that molecular collisions direct the chain reaction toward the stable, low-energy trajectory of the potential energy surface during the chemical reaction; its four basic collision reactions are constructed following the conservation-of-energy principle. The artificial chemical reaction optimization algorithm (ACROA) is another well-known CBA, motivated by the many types of chemical reactions and their frequency (Alatas 2011).
It is noteworthy that most of the algorithms in the above lists share the same or comparable traits: exploration and exploitation (Braik et al. 2021). Exploration means that an algorithm surveys the whole decision space, looking at it globally and in depth. Exploitation occurs when a method intensively investigates a certain part of the decision space, usually in the neighborhood of solutions that already exist. Exploration increases the diversity of the set of possible solutions by making it easy to search across the variable space and generate solutions that differ from those currently in place. Exploitation pushes algorithms to investigate the local neighborhood of existing solutions in order to find better ones; in view of this, convergence is accelerated and solution accuracy is much enhanced, but such a search cannot by itself break free from the trap of local optima. Both over- and under-exploration can slow down a method’s convergence and degrade solution accuracy, while over-exploitation may speed up convergence at the cost of an increased risk of becoming trapped in a local optimum. Therefore, to avoid premature convergence and stagnation in local optima, an efficient optimization algorithm must achieve a suitable balance between exploration and exploitation mechanisms (Braik et al. 2023).
Although the aforementioned algorithms play an important role in optimization, they suffer from drawbacks in some optimization cases. Some algorithms have shortcomings in terms of computational burden, complexity, and parameter design. The balance between local exploitation and global exploration is greatly affected by the fact that many algorithms use the same individual search approach as the population search strategy; this also makes such algorithms inefficient at tackling continuous optimization problems, especially very complex ones. The main question that arises, in light of the number of optimization algorithms developed so far, is whether other optimization algorithms are needed. As mentioned before, the NFL theorem (Wolpert and Macready 1997) addresses this important question: it demonstrates that an optimization algorithm may perform very well on some optimization problems but poorly on a different class of problems, because real-world problems vary in both their nature and their mathematical representation. The development in this study of a novel optimization algorithm that can produce qualified quasi-optimal or optimal solutions for optimization problems is likewise motivated by the NFL theorem. Based on the above issues, a physics-based algorithm called tornado optimizer with Coriolis force (TOC), which draws inspiration from the formation process of tornadoes, is proposed to increase optimization efficiency. By comparison against other optimizers that have shown promising performance in the literature, this optimizer aims to address well-known constrained and unconstrained optimization problems.
3 Inspiration
The idea of the tornado optimizer with Coriolis force (TOC) presented in this paper is inspired by observation of tornado formation and dissipation and of how windstorms and thunderstorms evolve to form tornadoes in nature (Cao and Liu 2023). To clarify, here are some basics of how tornadoes are created and move over land, where they follow a recognizable life cycle, described as follows. The cycle begins when the wind speed and direction change within a storm system. This creates a spinning effect, which is tipped vertically by an updraft through the thunderclouds. A storm normally occurs at that point in this scenario. When a storm intensifies, it often turns into a supercell thunderstorm. Stated differently, a powerful thunderstorm develops a rotating system a few miles up in the atmosphere that becomes a supercell, or thundercloud cell. These supercell thunderstorms are distinct, isolated cells that are not part of a storm line. Supercell storms are storms that rotate. A storm cloud may produce a tornado when a rotating vertical column of air and a supercell thunderstorm come together (Zou and He 2023). The stages of the tornado formation process can be observed in Fig. 1.
Phases of tornado formation (SciJinks 2024a)
As observed in Fig. 1, tornadoes usually begin with a thunderstorm. But not just any thunderstorm: a specific kind of rotating thunderstorm called a supercell, as shown in Fig. 1k. As shown in Fig. 1a and b, supercells can bring damaging hail, strong winds, lightning, and flash floods. Supercells form when the air becomes very unstable and the wind speed and direction differ at different altitudes. This condition is called wind shear, as shown in Fig. 1c to e. When winds at ground level blow in one direction and winds higher up in the atmosphere blow in a different direction, a horizontal tube of air can form, as shown in Fig. 1f to h. In a thunderstorm, warm air rises within the storm. This is called an updraft, which can turn a horizontal rotating tube of air into a vertical one, as shown in Fig. 1i. When this happens, the whole storm begins to spin, creating a supercell. Some supercells form a funnel cloud, as shown in Fig. 1j and k. If this funnel cloud extends to the ground, it is called a tornado, as shown in Fig. 1l.
A schematic diagram of a tornado (SciJinks 2024b)
In Fig. 2, the initial funnel, which hovers over the surface, grows from a thundercloud. Then, if conditions are favorable (temperature swings, winds, etc.), the tornado takes shape and reaches the ground. Finally, when the conditions start to change, the funnel narrows and gradually rises back toward the cloud. Occasionally, two or more tornadoes may form from a single storm at the same time. Although tornadoes can vary in size, strength, and location, they all share certain traits (Hamideh and Sen 2022), which can be observed in Fig. 2 and described as follows:
In a nutshell, the Earth’s rotation around its axis causes winds in the northern hemisphere to deviate to the right, while winds in the southern hemisphere deviate to the left. This is known as the Coriolis force or Coriolis effect, but it does not directly affect all air movement regardless of scale. In general, the Coriolis effect only directly governs the direction of rotation of the largest atmospheric and oceanographic circulation systems on Earth.
3.1 Tornadoes and the Coriolis force (CF)
As windstorms move relative to the Earth, they experience a compound centrifugal force based on the combined tangential velocities of the Earth’s surface and the windstorms. When combined with the non-perpendicular gravitational component, the result is called the Coriolis force. This force acts 90\(^\circ \) to the right of the downwind direction in the Northern Hemisphere, and 90\(^\circ \) to the left in the Southern Hemisphere. The magnitude of the Coriolis acceleration is linear in speed and can be given as follows:
where f is the Coriolis parameter defined as presented in Eq. 2.
where \(\varOmega \) is the angular speed of the Earth, and \(\phi \) is the latitude.
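Using the symbols above, the Coriolis parameter takes the standard form \(f = 2\varOmega \sin \phi \), consistent with Eq. 2. A minimal Python sketch (the function name and the test latitudes are illustrative, not from the paper):

```python
import math

# Angular speed of the Earth (rad/s), matching the value used later for Omega.
OMEGA = 7.2921e-5

def coriolis_parameter(latitude_deg):
    """Coriolis parameter f = 2 * Omega * sin(phi), the standard reading of Eq. 2."""
    phi = math.radians(latitude_deg)
    return 2.0 * OMEGA * math.sin(phi)

# f vanishes at the equator and is largest in magnitude at the poles.
print(coriolis_parameter(0.0))    # 0 at the equator
print(coriolis_parameter(45.0))   # ~1.03e-4 s^-1
print(coriolis_parameter(90.0))   # ~1.46e-4 s^-1
```

This latitude dependence is what later motivates treating the latitude as a random value when the force is embedded in the optimizer.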
3.1.1 Geostrophic wind and gradient wind
A gradient wind is defined as the wind that exists if a particle’s path is circular and there is a balance between the pressure gradient force, the Coriolis force, and the centrifugal force. If the flow is curved to the left (cyclonic flow) then the pressure gradient force must be stronger than the Coriolis force. Contrarily, if the flow is curved to the right (anticyclonic flow), the pressure gradient force must be weaker than the Coriolis force. Centripetal acceleration occurs when there is an imbalance between the pressure gradient force (PGF) and the Coriolis force (CF). The tangential wind speed can be defined as follows:
where V is the tangential wind speed, R is the radius of the curvature of a trajectory, f is the Coriolis parameter, and \(-\frac{\partial \phi }{\partial n}\) is the component of the pressure gradient force normal to the direction of the wind.
The gradient wind is typically a closer approximation to the real wind than the geostrophic wind, for which the PGF and CF are in exact balance (Brill 2014). Because the gradient wind equation is quadratic, there are two possible solutions for the wind speed: cyclonic flow and anticyclonic flow.
3.1.2 Cyclonic flow (low pressure)
In this case, a Coriolis force and the centrifugal force (CeF) act in the same direction. To have a balance, the pressure gradient force must act in the opposite direction, and we have a lower pressure in the center. If we take the effect of curvature into account, we must expand the horizontal momentum formula to include the centrifugal term:
Equation 4 can be presented as:
Using the geostrophic balance \(fV_g =-\frac{1}{\rho } \frac{\partial p}{\partial n}\), we substitute the left side in Eq. 5 by \(fV_g\):
where \(V_g\) is the geostrophic wind, \(V_G\) is the gradient wind, and R is the radius of curvature.
The gradient wind speed is obtained by solving Eq. 6 for \(V_G\) to get:
Equation 7 tells us that \(V_G < V_g\) in all cases because the denominator is larger than one. The difference between \(V_G\) and \(V_g\) becomes larger at smaller R, and at smaller latitude angles.
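Equation 7 is not reproduced above, but one reading consistent with the surrounding text (a denominator larger than one, so \(V_G < V_g\)) follows from the balance \(fV_g = fV_G + V_G^2/R\) of Eq. 6, giving the implicit relation \(V_G = V_g / (1 + V_G/(fR))\). A small Python sketch (function name illustrative) solves it by fixed-point iteration:

```python
def gradient_wind_cyclonic(v_g, f, radius, tol=1e-10, max_iter=200):
    """Solve V_g = V_G * (1 + V_G / (f * R)) for the cyclonic gradient
    wind by fixed-point iteration (one consistent reading of Eq. 7)."""
    v = v_g  # start from the geostrophic value
    for _ in range(max_iter):
        v_new = v_g / (1.0 + v / (f * radius))
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    return v

# Example: geostrophic wind 20 m/s, f = 1e-4 s^-1, R = 500 km.
v_G = gradient_wind_cyclonic(20.0, 1.0e-4, 5.0e5)
print(v_G)  # subgeostrophic: V_G < V_g, as the text states
```

Shrinking R or f (smaller latitude) makes the denominator larger and widens the gap between \(V_G\) and \(V_g\), matching the observation above.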
3.1.3 Anticyclonic flow (high pressure)
In this case, the pressure gradient force and the centrifugal force are in the same direction. For there to be equilibrium, the Coriolis force must act in the opposite direction, resulting in a higher pressure at the center.
Equation 8 can be presented as:
In the same previous manner,
It is shown that \(V_G > V_g\) in all cases.
3.1.4 Cyclostrophic flow
If the horizontal scale of an atmospheric disturbance is small enough, the Coriolis force may be neglected when compared to the centrifugal force and the pressure gradient force. Cyclostrophic balance occurs when the pressure gradient force and the centrifugal force are equal and in opposite directions, as presented in Fig. 3. This is the situation near the equator, which can be formulated mathematically as shown in Eq. 11:
In Eq. 11, the centrifugal force: \(\frac{V^2}{R} \gg fV\), and the pressure gradient force \(\frac{\partial \phi }{\partial n} \gg fV\).
Solving Eq. 11, gives the cyclostrophic wind speed as follows:
There are four possible cases, as depicted in Table 5.
The mathematically positive roots of the speed of the cyclostrophic wind correspond to only two physically possible solutions described as shown in Eq. 13.
Since the Coriolis force is not a factor, the cyclostrophic winds can rotate either clockwise or counterclockwise.
3.1.5 The gradient wind approximation
A gradient wind is just the wind component parallel to the height contour that satisfies:
Solving Eq. 14 gives:
The geostrophic flow can be defined as:
Finally, Eq. 15 can be reformulated as shown below:
In Eq. 17, Coriolis force is not neglected when compared to the centrifugal force and the pressure gradient force.
In this work, the overall picture of tornadoes in nature has guided the mathematical models devised for a new optimization algorithm that simulates tornado formation and carries out optimization. A thorough characterization of these models and the proposed algorithm is given below.
4 Tornado optimizer-based Coriolis force
In the tornado optimizer with Coriolis force (TOC), it is assumed that there are many windstorms, some thunderstorms, and precipitation phenomena, where tornadoes are generated by windstorms and thunderstorms, and thunderstorms are generated by windstorms. The following are the detailed mathematical models of the proposed TOC optimizer.
4.1 Initialization of population
The proposed TOC optimizer is a population-based algorithm; accordingly, the first step of the optimization process is to randomly create an initial population of design variables (i.e., windstorms and thunderstorms) between upper bounds (u) and lower bounds (l). The best individuals, ranked in terms of minimum cost function (or, in other cases, maximum fitness), are selected to form tornadoes, or a single tornado if only one is used. A number of good individuals (i.e., those with cost values close to the current best solution) are chosen as thunderstorms, while all other individuals are called windstorms, which eventually evolve into thunderstorms and tornadoes.
To commence TOC as an optimization algorithm, an initial population matrix of n individuals (i.e., population size) in a d-dimensional search space (i.e., dimension of the problem) is created as a first step. The position of every windstorm, thunderstorm, and tornado indicates a candidate solution to the optimization problem. Equation 18 states how to produce the initial population of windstorms, thunderstorms, and tornadoes in the search domain using a uniform random initialization process.
where rand is an arbitrary number generated in the range [0, 1], \(y_{i, j}\) is the starting value of the ith individual in the jth dimension, and \(u_{j}\) and \(l_{j}\) reflect the upper and lower limits of the search space, respectively.
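The uniform initialization just described, \(y_{i,j} = l_j + rand \cdot (u_j - l_j)\), can be sketched in plain Python (the function name and the sizes are illustrative; the paper's own implementation is not shown):

```python
import random

def initialize_population(n, d, lower, upper, seed=None):
    """Create n individuals in a d-dimensional box:
    y[i][j] = l_j + rand * (u_j - l_j), with rand ~ U[0, 1]."""
    rng = random.Random(seed)
    return [[lower[j] + rng.random() * (upper[j] - lower[j])
             for j in range(d)]
            for _ in range(n)]

pop = initialize_population(n=30, d=5,
                            lower=[-100.0] * 5, upper=[100.0] * 5,
                            seed=42)
# Every coordinate lies within its bounds.
assert all(-100.0 <= y <= 100.0 for row in pop for y in row)
```

The resulting list of lists corresponds to the \(n \times d\) population matrix y described next.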
After the creation of n individuals, \(n_{to}\) individuals are selected from the population that are considered the best candidates to be thunderstorms and tornadoes. Consequently, the individuals with the best values among them are considered tornadoes or are referred to as \(n_o\). Simply put, \(n_{to}\) is the summation of the number of thunderstorms and tornadoes, which can be described as exhibited in Eq. 19.
where \(n_t\) refers to the number of thunderstorms, while \(n_o\) refers to the number of tornadoes, which is equal to one in this work.
The rest of the population forms windstorms. These windstorms may evolve into thunderstorms or they may evolve directly into tornadoes, which can be calculated using Eq. 20.
where \(n_w\) stands for the number of windstorms and n denotes the total population size (i.e., \(n = n_w + n_t + n_o\)).
The initial population of windstorms, thunderstorms, and tornadoes can be described by a matrix of individuals of size \(n \times d\). Therefore, the randomly generated matrix y (that is, the total population) can be shown as follows:
where \(y_{i, j}\) denotes the ith candidate individual at dimension j, which could be a windstorm, a thunderstorm, or a tornado, d denotes the number of design variables (i.e., problem dimension), and the components \(y_w\), \(y_t\), and \(y_o\) stand for the populations of windstorms, thunderstorms, and tornadoes, which can be defined as shown in Eqs. 22, 23, and 24, respectively.
where \(y_{{w}_{i}}\) identifies the ith windstorm, \(y_{{t}_{i}}\) identifies the ith thunderstorm, and \(y_{{o}_i}\) represents the ith tornado.
As presented in Eq. 21, in a d-dimensional optimization problem, windstorms, thunderstorms, and tornadoes can be combined and described by a matrix of appropriate size.
4.2 Fitness evaluation
The fitness value (i.e., cost function) is computed for each windstorm and thunderstorm as shown below:
where \(fit_i\) denotes the cost value of the ith individual.
Each potential solution for a new windstorm, thunderstorm, or tornado is evaluated based on a fitness criterion created specifically for this purpose. If the newly established position is superior to the present one, the position is updated accordingly. As per this, several values of the objective function of the optimization problem of interest are evaluated by substituting potential solutions into the decision variables, which can be represented as given in Eq. 26.
where \(\vec {fit}\) is the vector of the acquired fitness function, and \(fit_i\) denotes the value of the acquired fitness function on the basis of the ith individual.
The value of the fitness function serves as a gauge of the candidate solution’s quality in meta-heuristic algorithms like TOC. The population’s member that results in the evaluation of the best value for the fitness function is referred to as the best population’s member. This member is updated in each iteration loop of the proposed optimizer because the candidate solutions are updated throughout. In the simulation of the proposed optimizer, individuals stay in their locations if they are better than the new locations.
4.3 Evolution of windstorms
Windstorms tend to move toward tornadoes and thunderstorms based on the volume and intensity of their evolution. This means that windstorms evolve into tornadoes more often than into thunderstorms.
4.3.1 Initialization of windstorms’ population
As described above, \(n_w\) windstorms are generated, such that this number of candidate individuals are selected from the entire population. Equation 27 shows \(y_{w}\) (i.e., population of windstorms) that evolve into tornadoes or thunderstorms. Indeed, Eq. 27 is part of Eq. 21 (i.e., all individuals in the population):
4.3.2 Formation of windstorms
Depending on the size and power of the windstorms’ evolution, the tornado and each thunderstorm ingest windstorms. One of the best ways to distribute windstorms between tornadoes and thunderstorms proportionally is to use the cost (fitness) values of tornadoes and thunderstorms. Hence, the number of windstorms assigned to thunderstorms and/or tornadoes varies. The windstorms designated for the tornado and each thunderstorm are evaluated using the following mathematical formulas:
where \(k = 1, 2, 3, \ldots, n_{to}\), and \(f_k\) specifies the cost value of the kth thunderstorm associated with a tornado.
where \(\left\lfloor \cdot \right\rceil \) stands for the round operator, \(k = 1, 2, \ldots, n_{to}\), and \(n_{{\dot{w}}_k}\) is the number of windstorms that evolve into or are assigned to the specified thunderstorms or tornadoes.
In fact, in the implementation of the proposed optimizer, the costs of the tornado and each thunderstorm are reduced by the cost of the \((n_{to} + 1)\)th individual in the population of windstorms (see Eq. 27), as can be seen in Eq. 28. Based on their strength and rate of growth, windstorms frequently develop into thunderstorms and tornadoes, and more windstorms evolve into tornadoes than into thunderstorms. Hence, one of the finest techniques to distribute windstorms among tornadoes and thunderstorms in a proportionate manner is to employ objective criteria (fitness functions) for tornadoes and thunderstorms.
With the use of Eqs. 28 and 29, the best solution (i.e., the tornado) is able to control and retain more windstorms. It is worth noting that windstorms are randomly selected from the population of windstorms. Each windstorm is controlled by one of the best individuals (i.e., tornadoes or thunderstorms); thus, windstorms cannot be assigned to more than one best individual. However, in rare situations, the sum of \(n_{{\dot{w}}_k}\) in Eq. 29 may not equal \(n_{w}\). This issue has been sorted out in the implementation code of TOC: the number of windstorms deemed for thunderstorms and tornadoes is randomly decreased or increased by a single value (i.e., \(\pm 1\)) until the total number of windstorms assigned to thunderstorms and tornadoes is exactly equal to \(n_{w}\).
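The allocation described in the text for Eqs. 28 and 29 (cost deduction by the \((n_{to}+1)\)th individual's cost, proportional rounding, and a \(\pm 1\) correction so the counts sum to \(n_w\)) can be sketched as follows. Since the equations themselves are not reproduced here, this is one consistent reading, not the paper's code:

```python
import random

def assign_windstorms(costs_sorted, n_to, n_w, seed=None):
    """Distribute n_w windstorms among the n_to best individuals
    (tornado + thunderstorms) in proportion to cost (minimization).
    costs_sorted must be in ascending order."""
    rng = random.Random(seed)
    # Deduct the cost of the (n_to + 1)th individual (reading of Eq. 28).
    c = [costs_sorted[k] - costs_sorted[n_to] for k in range(n_to)]
    total = sum(c)
    counts = [round(abs(ck / total) * n_w) for ck in c]
    # +/- 1 correction so the counts sum exactly to n_w (reading of Eq. 29).
    while sum(counts) != n_w:
        k = rng.randrange(n_to)
        if sum(counts) > n_w and counts[k] > 0:
            counts[k] -= 1
        elif sum(counts) < n_w:
            counts[k] += 1
    return counts

# Tornado (cost 1.0) plus two thunderstorms share 20 windstorms.
counts = assign_windstorms([1.0, 2.0, 4.0, 8.0], n_to=3, n_w=20, seed=1)
print(counts)  # the best (lowest-cost) individual retains the most windstorms
```

Note how the deduction makes the best (lowest-cost) individual's weight the largest, which is exactly the property the text attributes to Eqs. 28 and 29.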
The speed and direction of movement of windstorms may be affected by the Coriolis force, as shown below:
4.4 Windstorm velocity with the Coriolis force
For large-scale atmospheric turbulences that do not require the windstorm to move in a straight line, there is a three-way balance among the Coriolis force, centrifugal force, and the pressure gradient forces. As per this, the gradient windstorm speed that forms thunderstorms and tornadoes can be identified as shown in Eq. 30.
where \(i=1, 2, \ldots, n_w\), is the windstorm’s index for a population of size \(n_w\), \(\vec {v}_{i}^{t+1}\) denotes the new velocity vector of the ith windstorm, \(\vec {v}_{i}^{t}\) defines the current speed vector of the ith windstorm, rand refers to a generated random number with uniform distribution in the scope [0, 1], \(\eta \) identifies a shrinkage factor presented to simulate the convergence conduct of windstorms as defined in Eq. 31, \(\mu \) implements the fuzzy adaptive kinetic energy of windstorms defined as exposed in Eq. 32, \(R_l\) is the radius of curvature of the trajectory of windstorms in the Northern Hemisphere defined as given by Eq. 33, \(R_r\) is the radius of curvature of the path of windstorms in the Southern Hemisphere given by Eq. 34, c stands for a created random number in different ranges defined as shown in Eq. 35, and f, \(CF_l\), and \(CF_r\) can be defined as presented in Eqs. 38, 39, and 40, respectively.
In Eq. 30, \(rand \ge 0.5\) demonstrates that the motion of the windstorms is in the Northern Hemisphere, and \(rand < 0.5\) illustrates that the motion of the windstorms is in the Southern Hemisphere. Thus, rand was used to simulate the motion of windstorms between the Northern and Southern Hemispheres.
where \(\chi \) identifies the rate of acceleration of windstorms, which is equal to 4.10; this value was obtained after careful investigation.
Equation 31 was introduced in the proposed optimizer as a constriction factor for ameliorating the convergence behavior. The constriction factor \(\eta \) in this equation has a value of 0.7298. Mathematically, the constriction factor is analogous to momentum energy, which can be important to provide windstorms with the necessary power to reach the target to form thunderstorms and tornadoes. This factor can be essential for the success of the proposed optimizer and achieving promising performance levels.
Beyond the constriction factor \(\eta \) in Eq. 31, a fuzzy adaptive \(\mu \) was applied in the proposed optimizer with a random version setting of what is defined in Eq. 32.
where rand denotes a generated random number with uniform distribution in the scope [0, 1].
Equation 32 was used to give a fuzzy adaptive random number for dynamic system optimization, where this random \(\mu \) has an expectation of 0.75.
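One reading consistent with "a fuzzy adaptive random number with an expectation of 0.75" is \(\mu = 0.5 + rand/2\), i.e., uniform on [0.5, 1.0]. Equation 32 itself is not reproduced above, so the exact form below is an assumption:

```python
import random

def fuzzy_adaptive_mu(rng):
    # Assumed form: uniform on [0.5, 1.0], so E[mu] = 0.75.
    return 0.5 + 0.5 * rng.random()

rng = random.Random(0)
samples = [fuzzy_adaptive_mu(rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(mean)  # close to the stated expectation of 0.75
```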
where t and T stand for the current and maximum number of iteration indices, respectively.
In fact, windstorm speeds can exhibit both clockwise and counterclockwise rotation. Most (but not all) tornadoes in the northern hemisphere rotate counterclockwise, because they develop from large, rotating supercell thunderstorms.
where \(b_r\) is a constant equal to 100000, \(\delta _1\) stands for a change in the sign presented as shown in Eq. 36, and \(w_r\) identifies a random value generated with different ranges defined as given by Eq. 37.
where \(f_d\) represents a function of values 1 and -1 to represent the change in sign.
where rand stands for random values generated in the half-open range (0, 1], and \(w_{min}\) and \(w_{max}\) are fixed values equal to 1.0 and 4.0, respectively.
where \(\varOmega \) stands for the angular rotation rate of the Earth, equal to \(0.7292115 \times 10^{-4}\) radians \(\hbox {s}^{-1}\) (Vallis 2017), rand stands for a random number generated with uniform distribution in the range [0, 1], and \(-1 + 2\cdot rand\) specifies a random value for the latitude.
where \(\phi _{i}^{t}\) is the component of the pressure gradient force (PGF) normal to the direction of the current ith windstorm at the specified t iteration as exposed in Eq. 41.
where \(y_{{o}_\zeta }^t\) is the current position vector of the tornado at a random index \(\zeta \) at the tth iteration, \(y_{{w}_{i}}^{t}\) identifies a position vector of a windstorm at the tth iteration, and \(\zeta \) is a random index of a tornado defined as shown in Eq. 42.
where \(rand(1, n_o)\) implements a uniformly generated vector of random values with a uniform distribution in the interval \(\left[ 0, 1\right] \).
Equation 30 is subject to the constraints given in Eqs. 43, 44, and 45.
where rand stands for a random number generated with uniform distribution in the range [0, 1].
4.4.1 Evolution of windstorms to tornadoes
The process of evolution of windstorms into tornadoes is performed in the TOC optimizer: a tornado is formed when windstorms evolve into tornadoes either directly or via thunderstorms. This evolution process can be simulated mathematically as shown in Eq. 46.
where \(\vec {y}_{{w}_{i}}^{t+1}\) and \(\vec {y}_{{w}_{i}}^{t}\) define the next and current position vectors of the ith windstorm at iterations \((t+1)\) and t, respectively, \(\vec {y}_{{o}_i}^t\) defines the current position vector of the ith tornado at iteration t, \((\vec {y}_{{o}_i}^t - rand_w)\) denotes the difference between the evolution of windstorms into tornadoes and the random formation of windstorms, the components \(rand_w\) and \(\alpha \) stand for random values that can be defined as presented in Eqs. 47 and 48, respectively.
where \(rand_w\) is an index vector for randomly selected windstorms.
where rand denotes a random value created with a uniform distribution in the range [0, 1], and \(a_y\) represents an exponential parameter defined as shown in Eq. 49.
where \(a_0\) denotes a constant value of 2.0 and was found after extensive analysis.
Equation 46 can essentially be thought of as an update formula for new positions of windstorms that evolve into tornadoes.
4.4.2 Evolution of windstorms to thunderstorms
As noted above, there are n individuals of which \(n_{t}\) is selected as thunderstorms and \(n_{o}\) is selected as tornadoes. In this work, we assume that there is only one tornado. A schematic view of a windstorm evolving into a particular thunderstorm along its contact line is seen in Fig. 4.
The distance \(\gamma \) between windstorms and thunderstorm may be amended randomly as given in Eq. 50.
where x is the present separation between a windstorm and a thunderstorm, \(0.5< \rho < 2\) (where 2 may be the optimal value of \(\rho \)), and \(\gamma \) is a random number between 0 and \(\rho \times x\) that is uniformly distributed or chosen from another plausible distribution.
Windstorms can evolve in several directions approaching thunderstorms when \(\rho > 0.5\) is set. This idea may also be utilized to explain how thunderstorms may evolve into tornadoes. In essence, to accomplish the exploration and exploitation phases in TOC, the evolution process of windstorms into thunderstorms may be simulated as follows:
where \(\vec {y}_{{w}_{j+\sum _{1}^{n_{{\dot{w}}_k}}}}^{t+1}\) and \(\vec {y}_{{w}_{j+\sum _{1}^{n_{{\dot{w}}_k}}}}^{t}\) represent the next and current position vectors of windstorms developing into thunderstorms at iterations \((t+1)\) and t, respectively, \(\vec {y}_{{t}_i}^t\) represents the current position vector of the ith thunderstorm at iteration t, and rand stands for a random number produced between 0 and 1 with uniform distribution.
Equation 51 is regarded as a mathematical formula for new positions of windstorms that evolve into thunderstorms.
4.5 Evolution of thunderstorms to tornadoes
In the exploration and exploitation phases of the proposed optimizer, the new position of thunderstorms during evolving into tornadoes can be simulated in the manner defined below:
where \(\vec {y}_{{t}_{i}}^{t+1}\) and \(\vec {y}_{{t}_{i}}^{t}\) represent the next and current position vectors of thunderstorms developing into tornadoes at iterations \((t+1)\) and t, respectively, \(y_{{o}_\zeta }^t\) identifies a position vector for a tornado at a random index \(\zeta \), and \(\vec {y}_{{t}_{\vec {p}}}^{t}\) identifies a position vector for a thunderstorm at a random index vector \(\vec {p}\) which is the index vector for randomly selected thunderstorms identified as shown in Eq. 53.
where rand is a uniformly distributed random number in the range of [0, 1].
Equation 52 is the updated mathematical model for thunderstorms that evolve into tornadoes. Notations marked with a vector sign correspond to vector values, otherwise the rest of the notations and parameters are scalar values.
The positions of a thunderstorm and a windstorm are exchanged if the windstorm’s solution is better than that of its connected thunderstorm (i.e., the windstorm becomes a thunderstorm, and the thunderstorm becomes a windstorm). As a result, the best windstorms (in terms of the cost function value) of preceding thunderstorms serve as the new thunderstorms, and the current thunderstorm takes charge of all the earlier windstorms. The transition between a thunderstorm and a tornado, as well as between windstorms and tornadoes, may be handled similarly. In this scenario, the evolving thunderstorm behaves like a new tornado, and the outgoing tornado becomes a new thunderstorm with its own windstorms, which can be pushed exactly in its direction. Therefore, the windstorms connected to the prior thunderstorm, which is now a new tornado, behave as windstorms that are directly developing into the new tornado. The interchange of windstorms and thunderstorms in the population of the proposed optimizer can be observed in Fig. 5.
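The role exchange described above, where a windstorm that outperforms its controlling thunderstorm swaps roles with it, can be sketched generically; the function name and the minimization convention are illustrative:

```python
def maybe_promote(thunderstorm, windstorm, cost):
    """Swap roles when the windstorm's solution is better
    (lower cost, for minimization) than its thunderstorm's."""
    if cost(windstorm) < cost(thunderstorm):
        return windstorm, thunderstorm  # windstorm promoted
    return thunderstorm, windstorm

sphere = lambda y: sum(v * v for v in y)
t, w = maybe_promote([3.0, 4.0], [1.0, 1.0], sphere)
print(t)  # the former windstorm is now the thunderstorm
```

The same comparison applies one level up, between the best thunderstorm and the tornado.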
Figure 6 (which also incorporates the concept from Fig. 4) shows how the optimization process of the proposed optimizer evolves, with balls, asterisks, and the bullet representing windstorms, thunderstorms, and tornadoes, respectively. The white (empty) shapes show where the windstorms and thunderstorms have moved to.
4.6 Random formation of windstorms
A stochastic windstorm-formation process is defined in the proposed TOC-based optimization to enhance its exploration capability. To be more precise, the random formation of windstorms enables TOC to avoid falling into local solutions and premature convergence. Basically, windstorms evolve in random locations when they evolve into thunderstorms or tornadoes, resulting in mature tornadoes in different positions. This process is applied to both windstorms and thunderstorms, which must be checked to see whether they are close enough to a tornado for this process to occur. For this purpose, the following mathematical formula can be utilized to accomplish the random process of forming windstorms into tornadoes:
where l and u refer to the lower and upper limits of the search area, respectively, rand is a random number uniformly generated in the range [0, 1], \(\delta _2\) stands for a change in the sign specified as given in Eq. 55, \(\left\| \cdot \right\| \) refers to a norm operator, and \(\nu \) is an exponential function defined as shown in Eq. 56, which is capable of generating small numbers.
where \(f_d\) represents a function of the values 1 and -1 to represent the change in sign.
where t represents the current iteration index and T represents the maximum iteration index.
Continuing the random formation of windstorm, the following mathematical formula can be utilized to accomplish the random process of windstorms formation into thunderstorms:
Accordingly, Eqs. 54 and 57 were introduced in this work to specify the new locations of newly formed windstorms. As presented in Eq. 56, \(\nu \) regulates the search intensity close to the tornado. In view of this, big values of \(\nu \) may discourage further searches, but small values may promote search activity in the immediate vicinity of the tornado.
From a mathematical perspective, the parameter \(a_y\) creates an adaptive function over the iteration loops of TOC. Using this adaptive function, windstorms with \(a_y\) following the condition utilized in Eqs. 54 and 57 are scattered about it. In fact, with the use of this method, TOC may execute a better search surrounding the tornado during its exploitation phase.
Additionally, as observed in nature, certain thunderstorms form slowly since just a few windstorms give rise to them. As a result, they will not be able to get closer to a tornado and may eventually shrink after making certain motions. The TOC optimizer adopts the parameter \(a_y\) to boost the random evolution process of windstorms to reinforce this idea. Then, using Eqs. 54 and 57, new windstorms will be produced at new locations, equal in number to the prior windstorms and thunderstorms. As can be seen from Eq. 52, thunderstorms are not regarded as fixed points in the proposed TOC and must evolve towards tornadoes (i.e., the optimal solution). This process (developing windstorms into thunderstorms and then thunderstorms into tornadoes) promotes indirect development in the direction of the best solution. In short, as the iterations of the TOC optimizer continue, the likelihood of the random generation process of windstorms decreases.
4.7 Complexity analysis
A function that relates the dimension and input size of a given input problem to the time of execution of the optimization algorithm under investigation may be employed to quantify the computational complexity of the algorithm. This basically singles out how the complexity issue of the proposed optimizer can be studied. The time and space computational complexities of the developed optimizer are described below in terms of Big-O notation as a standard expression.
4.7.1 Time complexity
Time complexity of the proposed optimizer can be generally represented using Big-O notation as follows.
The computational complexity of the proposed TOC optimizer depends on the time complexities of several components connected to the relevant problem and the proposed method, as shown in Eq. 58. Each of these components has its own time complexity, and they can be described as follows:
1. Defining the problem takes \({\mathcal {O}} (1)\) time.
2. Population initialization takes \({\mathcal {O}} (v \times n \times d)\) time.
3. Cost function evaluation takes \({\mathcal {O}}(v \times K \times c \times n)\) time, where c represents the cost of the criterion assessment.
4. Updates to the solutions and their evaluation take \({\mathcal {O}}(v \times K \times n \times d)\) time.
The time complexity of the optimization process depends on a number of variables, including v, n, d, T, and c, where these parameters stand for the total number of evaluations, the number of individuals, the dimension of the optimization problem, the number of iteration steps, and the cost of the fitness criterion, respectively. This is explained above and in connection with Eq. 58. Considering the points above and Eq. 58, the total time complexity of TOC can be described in distinct components as shown in Eq. 59.
As \(1 \ll Kcn\), \(1 \ll 5Knd\), \(nd \ll Kcn\), \(nd \ll 5Knd\), and \(Knd < 5Knd\), Eq. 59 may be streamlined to what is presented in Eq. 60:
As it turns out, the time complexity of TOC in terms of Big-O notation is polynomial. In this sense, the proposed TOC optimizer may be seen as a computationally efficient optimization technique. The number of decision variables in the problem (d), the cost of the problem’s objective criterion (c), the number of individuals (n), and the total number of iterations (T) are the main factors determining the computational complexity of TOC when addressing an optimization problem.
4.7.2 Space complexity
The parameters of the number of windstorms, thunderstorms, and tornadoes, and the size of the problem of interest influence the space complexity of TOC in terms of the available memory space. This reveals how much room TOC would take up while starting the optimization process. Accordingly, the spatial complexity of TOC may be well represented as exhibited in Eq. 61.
4.8 Implementation steps of the proposed optimizer
The general picture of the evolution process of tornadoes and the stochastic formation of windstorms and thunderstorms has led to the mathematical models of the proposed TOC optimizer and its implementation. While solving optimization problems, the optimization process of TOC endeavors to advance in the direction of the global optimal solution. This is because windstorms, thunderstorms, and tornadoes are very likely to spiral and rotate in the surrounding area of the search space to locate a better solution. The realization of this capacity relies on where the best windstorms, thunderstorms, and tornadoes are found. Consequently, windstorms and thunderstorms are consistently capable of moving around all the potential areas in the search space to evolve into tornadoes. The key procedural steps listed in Algorithm 1 summarize the pseudo code of the proposed optimizer.
In accordance with the pseudo code presented in Algorithm 1, the proposed TOC optimizer starts the optimization process by randomly generating the positions of the population of windstorms, thunderstorms, and tornadoes in the search space. To update these individuals’ positions during each function evaluation, TOC uses Eqs. 30, 46, 51, 52, 54, and 57. According to the simulated stages of the proposed optimizer, any entities (i.e., windstorms, thunderstorms, and tornadoes) that depart the search space are brought back inside it. In each function evaluation, the solutions are assessed using a predetermined fitness criterion, and the individuals with the best fitness values are identified by updating the fitness function. The best position achieved by the individuals indicates the most appropriate solution. All algorithmic steps, aside from initialization, are repeated until the predetermined total number of function evaluations has been reached. In line with the theoretical claims described above, the proposed models of this optimizer remain capable of spiraling and spinning throughout the search space.
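The loop just described can be sketched as follows. This is a minimal illustration, not the authors' implementation: the position-update models of Eqs. 30, 46, 51, 52, 54, and 57 are paper-specific and are replaced here by a hypothetical placeholder step; the parameter values are assumptions.

```python
import numpy as np

def toc_skeleton(f_obj, lower, upper, n_agents=30, max_evals=30000, rng=None):
    """Skeleton of the optimization loop described for Algorithm 1.

    The population of windstorms, thunderstorms, and tornadoes is treated
    here as one array; the placeholder update below stands in for the
    paper's evolution models (Eqs. 30, 46, 51, 52, 54, and 57).
    """
    rng = rng or np.random.default_rng()
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    pop = rng.uniform(lower, upper, (n_agents, dim))     # random initial positions
    fit = np.array([f_obj(p) for p in pop])
    best = int(fit.argmin())
    best_x, best_f = pop[best].copy(), float(fit[best])
    evals = n_agents
    while evals < max_evals:
        for i in range(n_agents):
            # hypothetical placeholder for the TOC position-update models
            step = rng.normal(0.0, 0.3, dim) * (best_x - pop[i]) + rng.normal(0.0, 0.01, dim)
            cand = np.clip(pop[i] + step, lower, upper)  # return escapees to the search space
            f_cand = f_obj(cand)
            evals += 1
            if f_cand < fit[i]:                          # keep the better position
                pop[i], fit[i] = cand, f_cand
                if f_cand < best_f:
                    best_x, best_f = cand.copy(), f_cand
            if evals >= max_evals:
                break
    return best_x, best_f
```

The loop terminates on the function-evaluation budget, matching the stopping criterion stated above.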
4.9 Characteristics of TOC
The proposed TOC optimizer, as a nature-inspired meta-heuristic, has two capabilities during the optimization of a particular problem inside the search space: exploration and exploitation. The convergence of the proposed TOC optimizer towards the global optimal solution makes these capabilities possible. Specifically, convergence occurs when the majority of thunderstorms and windstorms congregate in the same area of the search space. TOC uses a number of crucial parameters that promote the exploration and exploitation aspects, including \(\mu \), \(\alpha \), \(a_y\), and \(\nu \), which are defined in Eqs. 32, 48, 49, and 56, respectively. By adjusting these parameters, the proposed TOC optimizer may more effectively explore the search space for every potential solution to find sub-optimal or optimal solutions. Therefore, these control parameters help TOC achieve a promising convergence property. As windstorms and thunderstorms develop into tornadoes in the TOC optimizer, these search agents (windstorms, thunderstorms, and tornadoes) can update their positions in accordance with the mathematical models and tuning criteria of TOC implemented by the evolution models of the search agents. The models of TOC are presented in Eqs. 30, 46, 51, 52, 54, and 57. In each of the presented models, windstorms and thunderstorms are assumed to efficiently evolve into tornadoes within the search space. Additionally, the random motions of windstorms and thunderstorms occur as a result of their random evolution, which forces them to move to random locations. Thus, tornadoes, thunderstorms, and windstorms all explore the search space in various directions and places, implying that additional attractive areas could hold better solutions.
In summary, TOC offers several characteristics based on its fundamental idea, which may be summed up as follows: (1) the position update models of the proposed TOC optimizer efficiently help the population of search agents explore and exploit each region of the search space; (2) the random motions of windstorms and their random formation into tornadoes in the search space using Eqs. 46, 54, and 57 boost the population diversity of TOC and guarantee a reasonable convergence rate, demonstrating an effective trade-off between exploration and exploitation; (3) the random movements of windstorms and their evolution into thunderstorms using Eq. 51, together with the random motions of thunderstorms and their evolution into tornadoes using Eq. 52, increase the diversity of the population, ensure a sensible convergence property, and provide an efficient equilibrium between the exploration and exploitation aspects; (4) the number of parameters in TOC is reasonably acceptable, and its operators are promising in providing a high level of performance.
5 Comparative analysis of TOC with other optimizers
This section compares the TOC method with various well-established meta-heuristic algorithms, namely particle swarm optimization (PSO), genetic algorithms (GAs), differential evolution (DE), and ant colony optimization (ACO).
5.1 Particle swarm optimization
PSO imitates the collective cooperative social behavior of living organisms, such as flocks of birds, fish, and many other species of creatures (Kennedy and Eberhart 1995). Artificial particles, or randomly produced solutions, are used as the starting point for optimization in PSO. The velocity of every particle in the swarm is initially produced at random. As introduced in Yan et al. (2017), the position updating technique can be expressed, assuming that \(x_i\) is the initial location of particle i with velocity \(v_i\), as shown in Eq. 62.
where w represents the inertial weight, \(c_1\) and \(c_2\) represent the cognitive and social parameters, respectively, \(r_{1}\) and \(r_{2}\) are randomly distributed values produced in the interval [0, 1], \(Pbest_i\) represents the local best solution for particle i, and Gbest represents the global best solution for all particles in the swarm.
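The update of Eq. 62 can be sketched as follows. This is a minimal illustration: the parameter values \(w=0.7\) and \(c_1=c_2=1.5\) and the array shapes are assumptions for the example, not values taken from the paper.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One PSO velocity/position update for a whole swarm (Eq. 62).

    x, v   : (n_particles, dim) current positions and velocities
    pbest  : (n_particles, dim) personal best positions
    gbest  : (dim,)             global best position
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)                 # r1, r2 ~ U[0, 1]
    r2 = rng.random(x.shape)
    # inertia + cognitive pull toward pbest + social pull toward gbest
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```

Note that when a particle already sits at both its personal and the global best with zero velocity, the update leaves it unchanged.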
5.1.1 TOC versus PSO
Like the PSO algorithm, the proposed TOC optimizer motivates search agents to move around in the search space in pursuit of their objective once the optimization process starts. However, the mathematical models of TOC and PSO have completely distinct updating mechanisms. Some of the primary differences between these two optimizers are described as follows:
1. In PSO, the parameters \(Pbest_i\) and Gbest drive the position update of the ith particle, as shown in Eq. 62. The influence of these two parameters is taken into account to determine the particles’ new position within the search space.
2. In the case of the proposed TOC optimizer, four different design models are used to determine the new positions of the search agents (windstorms, thunderstorms, and tornadoes). These models are provided in Eqs. 30, 46, 51, 52, 54, and 57. They feature windstorms turning into tornadoes and thunderstorms, thunderstorms turning into tornadoes, and windstorms forming at random. In contrast, PSO uses a single strategy, as shown in Eq. 63, to update the positions of all search agents within the search space.
3. The starting values of the cognitive and social parameters, as well as the weighting strategy of the velocity vector employed when a particle of the swarm establishes a new position, have a significant impact on the PSO algorithm. On the other hand, during the iterative loops of TOC, tornadoes, windstorms, and thunderstorms use different parameters and search strategies to determine their new positions.
4. The behavior of tornadoes is influenced by the formation of thunderstorms and windstorms, which is designed with a variety of distinct strategies, and the speed of this formation is affected by the Coriolis force. Accordingly, Eq. 30 is used to determine the speed of thunderstorms and tornadoes, which rotate clockwise and counterclockwise. By incorporating this velocity into TOC, thunderstorms move and evolve abruptly and are redirected to create tornadoes, thereby reinforcing the exploration and exploitation features of TOC. PSO does not employ this kind of behavior.
5. The simulation of tornado behavior in Eqs. 54 and 57 produces random tornado movement, which enables the TOC optimizer to reduce the chance of stagnating in local optimal solutions. The PSO algorithm does not benefit from this feature because of the inherent nature of the swarms simulated in this algorithm.
6. Eq. 51, which formulates the evolution of windstorms into thunderstorms, enhances the exploration and exploitation features of TOC, whereas PSO has no such feature.
7. The parameters \(\mu \), f, \(CF_l\), and \(CF_r\), defined in Eqs. 32, 38, 39, and 40, respectively, allow the TOC optimizer to explore the search space globally at times, conduct local searches in local areas at other times, and strike a suitable balance between exploration and exploitation. These parameters are absent in the PSO algorithm.
5.2 Conventional differential evolution algorithm
The differential evolution (DE) method is a well-established population-based evolutionary technique developed to address real-valued optimization problems (Storn and Price 1997). Similar to GAs, DE employs evolutionary operators including crossover, mutation, and selection mechanisms. The initialization of each individual in DE can be defined as shown in Eq. 64.
where \(X_i\) stands for the ith individual in which \(i \in \left\{ {1, 2, \ldots, NP}\right\} \), NP stands for the population size, \(d \in \left\{ {1, 2, \ldots, D}\right\} \) indicates the problem’s dimension, u and l stand for the upper and lower limits of \(X_i\) in the dth dimension, respectively.
According to Eq. 65, the mutation mechanism of the DE algorithm produces a mutant vector that serves as an intermediate variable \(V_i\) for the evolution.
where \(r_1, r_2\), and \(r_3 \in \left\{ {1, 2, \dots, NP}\right\} \) are random indices with \(i \ne r_1 \ne r_2 \ne r_3\), and F is a constant operator that denotes the degree of amplification.
The crossover technique of the DE algorithm, which combines the parent solution \(X_i\) with the intermediate variable \(V_i\) to increase the diversity of the new solution \(U_i\), can be described as presented in Eq. 66.
where \(d_{rand} \in \left\{ {1, 2, \ldots, D}\right\} \) indicates a random index, and CR stands for a crossover control parameter.
The selection mechanism of the DE algorithm is carried out at each iteration by comparing \(U_i\) with \(X_i\) using a greedy criterion so that the better solution is retained in the population for the next iteration. Through these evolutionary processes, DE can converge quickly and ultimately reach the global optimum solution.
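The mutation, crossover, and selection steps of Eqs. 64-66 can be sketched as a single DE/rand/1/bin generation. The values of F and CR below are common defaults chosen for illustration, not values taken from the paper.

```python
import numpy as np

def de_generation(pop, fitness, f_obj, F=0.5, CR=0.9, rng=None):
    """One DE/rand/1/bin generation: mutation (Eq. 65), binomial
    crossover (Eq. 66), and greedy selection (minimization)."""
    rng = rng or np.random.default_rng()
    NP, D = pop.shape
    new_pop = pop.copy()
    new_fit = fitness.copy()
    for i in range(NP):
        # pick three distinct indices, all different from i
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])          # mutant vector V_i
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True                  # d_rand guarantees one gene from V_i
        u = np.where(cross, v, pop[i])                 # trial vector U_i
        f_u = f_obj(u)
        if f_u <= fitness[i]:                          # greedy selection
            new_pop[i], new_fit[i] = u, f_u
    return new_pop, new_fit
```

Because selection is greedy, the best fitness in the population never worsens from one generation to the next.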
5.2.1 TOC versus DE
Since TOC is a physics-based algorithm, it does not require evolutionary processes such as crossover, mutation, and selection operations. The main differences between DE and TOC can be summarized in the following points:
1. The DE algorithm discards the knowledge of earlier generations when a new population is created, whereas the TOC optimizer keeps search-space information over the course of successive iterations.
2. Comparatively speaking, the TOC algorithm requires fewer operators than the DE algorithm, which employs several procedures including crossover and selection. Additionally, TOC uses a fuzzy adaptive parameter \(\mu \), whereas DE does not use such a parameter to help locate the optimal solutions.
3. In TOC, exploration is improved by permitting windstorms and thunderstorms to evolve into tornadoes that randomly explore the search space, whereas in DE, exploration is improved by employing crossover and selection procedures.
4. In the DE algorithm, mutation is often carried out with the intention of improving exploitation. In contrast, better exploitation in the TOC algorithm is achieved by the evolution of windstorms into thunderstorms with the use of random values.
5.3 Genetic algorithm
Holland was the first to put forward the GA (Holland et al. 1992). It is regarded as a global optimization technique that draws inspiration from biological processes such as genetics and evolution. When employing GAs, every potential solution is coded as a chromosome (i.e., an individual), and the chromosomes are created within the search space. During the optimization process of a GA, evolution starts with a collection of randomly generated individuals forming a population. In every generation during optimization, each individual’s fitness score is re-calculated, and the solutions’ variables are modified in accordance with their fitness scores. The random starting solutions are likely to improve because the best individuals are given a greater probability of contributing to other solutions. A fitness function is used to select chromosomes during optimization for the following generations, and then certain genetic operators, including crossover and mutation, are applied to the chosen chromosomes to create new ones. The theory is that these chromosomes evolve and continually produce better individuals until globally optimal solutions are reached (Song et al. 2019).
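The generational cycle just described can be sketched as a simple real-coded GA. The specific operators below (tournament selection, single-point crossover, Gaussian mutation) and all parameter values are illustrative choices; Holland's original formulation is binary-coded.

```python
import numpy as np

def ga_generation(pop, f_obj, pc=0.9, pm=0.05, rng=None):
    """One generation of a simple real-coded GA: tournament selection,
    single-point crossover, and Gaussian mutation (minimization)."""
    rng = rng or np.random.default_rng()
    NP, D = pop.shape
    fit = np.array([f_obj(ind) for ind in pop])    # re-calculate fitness scores

    def tournament():
        a, b = rng.integers(NP, size=2)            # fitter of two random picks
        return pop[a] if fit[a] <= fit[b] else pop[b]

    children = []
    while len(children) < NP:
        p1, p2 = tournament().copy(), tournament().copy()
        if rng.random() < pc:                      # single-point crossover
            cut = rng.integers(1, D)
            p1[cut:], p2[cut:] = p2[cut:].copy(), p1[cut:].copy()
        for child in (p1, p2):
            mask = rng.random(D) < pm              # Gaussian mutation of a few genes
            child[mask] += rng.normal(0.0, 0.1, mask.sum())
            children.append(child)
    return np.array(children[:NP])
```

Repeated application of this generation step drives the average fitness of the population down on a minimization problem, since selection favors fitter parents.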
5.3.1 TOC versus GAs
Although both GAs and TOC are population-based algorithms, GA is an evolutionary-based algorithm while TOC is a physics-based algorithm. The main distinctions between them are briefly summarized as follows:
1. The TOC optimizer employs several distinct strategies during optimization to update the search process of the search agents within the search space, whereas GA uses crossover and mutation operators, comparable to those used by DE, to update solutions during optimization.
2. The TOC optimizer does not employ a selection operation during optimization, whereas GA does, in order to choose the best chromosomes (i.e., search agents) during the generation phase of the optimization process.
3. As seen in Eq. 30, the TOC optimizer estimates the formation speed of windstorms, thunderstorms, and tornadoes using the gradient speed and the Coriolis force, whereas GA does not employ such a velocity term.
4. TOC enhances its exploration and exploitation capabilities with the concepts of random relocation and the formation of windstorms, thunderstorms, and tornadoes managed during optimization, whereas GA uses crossover, mutation, and selection operations to evolve and update its population.
5.4 Ant colony optimization
Ant colony optimization (ACO) is a meta-heuristic method that assigns search tasks to so-called “ants” (Dorigo and Blum 2005). The activities in ACO are divided among search agents with basic skills that mimic, to a certain degree, the behavior of real ants during foraging. It is important to emphasize that the ACO algorithm was not created as an ant colony simulation; rather, it uses the metaphor of artificial ant colonies as an optimization technique. Ants choose a travel route at the beginning of the ACO process, when there is no knowledge of the path to take from one location to another. The underlying idea is that if an ant is faced with a choice between several pathways at a certain moment, it is more likely to choose those that have been heavily selected by the ants that came before it (i.e., those with a high trail level). To solve an optimization problem, the ACO algorithm typically repeats the following two steps:
- The candidate solutions are evolved over the solution space using a pheromone model, which is a particular probability distribution.
- The candidate solutions are then used to modify the pheromone values in a way that is expected to bias subsequent sampling toward better solutions.
When building the solution components with a pheromone model, the choices of the ant agents are specified probabilistically at every stage of construction. An ant uses the following rule to migrate from node i to node j:
where \(\tau _{(i, j)}^{\alpha }\) is the pheromone value associated with edge (i, j), \(\eta _{(i, j)}^{\beta }\) is the heuristic value associated with edge (i, j), \(\alpha \) is a positive real parameter whose value determines the relative significance of the pheromone and manages the impact of \(\tau _{(i, j)}^{\alpha }\), and \(\beta \) is a positive parameter whose value determines the relative significance of the heuristic information and regulates the influence of \(\eta _{(i, j)}^{\beta }\).
Equation 68, which indicates how much pheromone is to be deposited, is used by the ant to evaluate the partial solution once it is built.
where \(\tau _{(i, j)}\) is the edge-correlated pheromone value with edge (i, j), \(\rho \in (0, 1]\) represents the pheromone evaporation rate, and \(\delta \tau _{(i, j)}\) represents the pheromone deposit quantity, which is usually determined as shown in Eq. 69.
where \(L_K\) stands for the cost of the kth ant’s tour.
5.4.1 TOC versus ACO
Despite having a similar appearance, ACO and TOC are very distinct. Their formulations and methods for updating positions differ in several ways, described as follows:
1. When looking for the best solution, both the ACO and TOC algorithms rely on an effective division of the search process. The idea behind ACO is to produce a pool of synthetic ants that wander at random across a search space, while the concept of TOC is to separate the population into tornadoes, thunderstorms, and windstorms that move randomly around an area and evolve throughout optimization. Using a variety of reasonable strategies in the position updating process, the population in TOC carries out the optimization process and performs exploration and exploitation over every potential area of the search space.
2. A pheromone model is used in ACO to build candidate solutions. In contrast, TOC finds the best solutions by leveraging the local and global best solutions of tornadoes, thunderstorms, and windstorms.
3. Eq. 69 in ACO generates a new solution in a way that is conceptually distinct from the position updating mechanisms of TOC given in Eqs. 46, 51, and 52. The updating strategy of the solutions in TOC is a form of directed and undirected search that forces new solutions to advance in the direction of a superior solution.
4. Two crucial parameters, referred to as \(\alpha \) and \(a_y\), are used in the updating search process of the TOC algorithm during optimization, whereas ACO does not update its new solutions using such parameters. In addition, TOC enhances its exploration aspect by employing a random formation of windstorms, as demonstrated by Eqs. 54 and 57, whereas ACO does not employ such a strategy.
5. The parameters of TOC, namely \(\mu \), \(\alpha \), \(a_y\), and \(\nu \), may be adjusted to balance and improve the exploration and exploitation features throughout its iterative process. ACO, however, does not employ such adjustable parameters throughout the iteration loops.
As mentioned earlier, a successful optimizer has to strike an efficient balance between exploration and exploitation to achieve promising performance during optimization, and there is no general rule of thumb for making this happen (Sayed et al. 2019). The performance of meta-heuristics may be significantly affected by slight differences in random distributions and solution updates (Civicioglu and Besdok 2013). Thus, the proposed TOC optimizer may emerge as a strong competitor to existing meta-heuristic algorithms.
6 Experimental results and comparisons
In this section, the performance of the proposed TOC optimizer is illustrated. We have performed extensive experiments on a diversified set of openly accessible optimization problems that are well known in this research area. The obtained results are reported and compared to state-of-the-art optimization algorithms. A detailed discussion of our findings is presented.
6.1 Description and purpose of the functions used
We now give an overview of the test functions that we have set as benchmarks to evaluate the performance of TOC. A total of 52 optimization functions are the basis of our experiments. These functions fall into two categories of benchmarks that differ in test environment, dimensionality, search-space boundaries, and optimum values. The first category can further be divided into three subcategories: the first comprises 7 test functions of unimodal nature (Digalakis and Margaritis 2001), the second has 6 multimodal test functions (Yang 2010), while the third subcategory consists of 10 fixed-dimension multimodal benchmarks (Digalakis and Margaritis 2001; Yang 2010). The second category is the well-known CEC-2017 benchmark suite consisting of 29 test functions. Each subgroup of these functions has a distinct purpose in exploring the strengths of the proposed optimizer.
The two main characteristics that any optimization algorithm should be tested against are exploitation and exploration, where the first influences the second. On that account, a good optimization algorithm should keep a balance between exploitation and exploration to be able to reach the optimum solution. To test for exploitation, functions \(\hbox {F}_1\)-\(\hbox {F}_7\) are unimodal in that each of them holds one optimum solution. Functions \(\hbox {F}_1\)-\(\hbox {F}_7\) are therefore the right choice for testing how good the optimization algorithm is at exploitation and convergence to the desired solution. The ability of the optimization algorithm to reach the optimum value in a unimodal search space is not enough, however; it should also be able to find the global optima in a search space having many local optima. Put another way, exploration is as important as exploitation. Additionally, convergence should be obtained with the minimum number of iterations. For this job, \(\hbox {F}_8\)-\(\hbox {F}_{13}\) are multimodal, each of which has numerous local solutions besides the global one. To further uncover the exploration strengths of the proposed algorithm, functions \(\hbox {F}_{14}\)-\(\hbox {F}_{23}\) have been experimented with as well. These are multimodal functions with fixed, low dimensions. Suffice it to say that the functions \(\hbox {F}_{1}\)-\(\hbox {F}_{23}\) are proper for checking the convergence rate, reaching global solution(s) while escaping from local ones, as well as balancing intensification and diversification.
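As an illustration, and assuming the classical composition of this suite (in which \(\hbox {F}_1\) is the sphere function and \(\hbox {F}_9\) is the Rastrigin function, as in the widely used 23-function benchmark set), one unimodal and one multimodal member look as follows:

```python
import numpy as np

def f1_sphere(x):
    """Unimodal: a single optimum at the origin with f(0) = 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def f9_rastrigin(x):
    """Multimodal: a regular grid of local minima; global optimum f(0) = 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))
```

Both share the global minimum value of zero, but the cosine term of the Rastrigin function surrounds it with many local minima, which is precisely what makes it an exploration test.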
The purpose of the CEC-2017 competition was to test competing optimization algorithms on a rich assortment of challenging real problems. CEC-2017 is the successor of the CEC-2005, CEC-2013, and CEC-2014 versions. The CEC-2017 test functions are composite and hybrid, representing real search domains with great complexity. Each member of this group of functions has a unique function shape with many local solutions in varied search spaces. The suite was created by performing shifts, rotations, and extensions, in addition to hybridization of both unimodal and multimodal functions. The challenging nature of these functions allows us to test the proposed optimizer for accuracy and speed along with balancing exploitation and exploration.
6.2 Experimental setup
Seeking a fair comparison with state-of-the-art optimization algorithms, we ran the proposed TOC algorithm on the unimodal, multimodal, fixed-dimension, and CEC-2017 test functions mentioned in Subsection 6.1, along with a set of carefully chosen algorithms from previous works. Specifically, the following state-of-the-art algorithms were evaluated: elk herd optimizer (EHO) Al-Betar et al. (2024), DE algorithm Storn and Price (1997), CapSA Braik et al. (2021), CSA Braik (2021), CS algorithm Askarzadeh (2016), ABC algorithm Karaboga and Basturk (2007), MFO Mirjalili (2015), WSO Braik et al. (2022), POA Trojovskỳ and Dehghani (2022), SCA Mirjalili (2016), and SO Hashim and Hussien (2022). The rationale behind choosing these algorithms is that they are well endorsed in the literature with eminent performance. Moreover, other attributes of TOC such as flexibility, generality, and simplicity also exist in these algorithms. None of these methods is tailored to the particular characteristics of the just-mentioned functions. It is useful to first look at the parameter settings of these algorithms besides those of our proposed TOC algorithm. These are shown in Table 6, where the various competing algorithms are tabulated in the same order just given.
The parameter values listed in Table 6 for the various algorithms are consistent with the settings reported in the literature. For fairness, all competing algorithms were subjected to the same initialization process. As outlined in Table 6, each algorithm was run 30 separate times on each test function. The number of search agents was set to 30 with 1000 iterations, leading to a maximum of 30,000 function evaluations (FEs) for each algorithm. The same floating-point precision is adopted for comparisons.
6.3 Performance of TOC on basic benchmark functions
The experimental performance of the proposed TOC algorithm is provided in this subsection, accompanied by the performance of the state-of-the-art algorithms described in the previous subsection. Here, the accuracy of each of the participating algorithms is reported on the classical unimodal (\(\hbox {F}_1\)-\(\hbox {F}_7\)) test functions, in addition to the multimodal (\(\hbox {F}_8\)-\(\hbox {F}_{13}\)) and fixed-dimension multimodal (\(\hbox {F}_{14}\)-\(\hbox {F}_{23}\)) subsets. Since each experiment is repeated 30 times, as stated in the previous subsection, we report the achieved accuracy using the average (Ave) and standard deviation (Std) of the best obtained solutions. Std is reported alongside Ave because it reflects the stability of the algorithm under test across the multiple independent runs. As each algorithm has been run for 1000 iterations, the reported Ave and Std figures are those registered at the final iteration. The adopted stopping criterion is reaching the maximum number of iterations. It should also be pointed out that, for every test function, the best findings are emboldened in all the upcoming tables and the second-best outcomes are underlined. Note that in the results tables (e.g., Tables 7-12), the bold text refers to the best solutions obtained (lowest is best).
6.3.1 Evaluation of TOC on functions \(\hbox {F}_1\)-\(\hbox {F}_7\)
We begin with the first function subset comprising unimodal test functions \(\hbox {F}_1\)-\(\hbox {F}_7\), each having one global optimal solution and no local solutions. The performance in terms of the Ave accuracy and Std of the proposed TOC algorithm and the remaining algorithms is tabulated in Table 7. The objective here is to test the capability of the proposed algorithm of excelling in the mission of exploitation.
It is worth mentioning that \(F_{min}\) for each of functions \(\hbox {F}_1\)-\(\hbox {F}_7\) is zero. Knowing this, the winning optimization algorithm is the one with \(F_{min}\) closest to zero. Investigating Table 7 reveals that the proposed TOC algorithm outperforms the other algorithms. Specifically, we can see that TOC was able to achieve the smallest objective value for the first four functions \(\hbox {F}_1\)-\(\hbox {F}_4\) in both Ave and Std, while EHO excelled in function \(\hbox {F}_7\). For \(\hbox {F}_5\) and \(\hbox {F}_6\), CapSA and ABC were the best, respectively. Even for these two functions, TOC performed better than some of the other algorithms. In general, both TOC and EHO reached either zero or extremely small figures in both Ave and Std. The implication of Ave having infinitesimally small values is that TOC is highly reliable in getting to the optimum solution in a unimodal space, and consequently accomplishes the mission of being excellent in exploitation (or intensification). On the other hand, tiny values of Std indicate that TOC is indeed a stable algorithm. Contrasting TOC with EHO, one can clearly see that there is not much difference between the two, with TOC being slightly better. Together, TOC and EHO are the top algorithms in five out of seven functions and have relatively good results in the remaining \(\hbox {F}_5\) and \(\hbox {F}_6\) functions compared to the rest of the algorithms.
6.3.2 Evaluation of TOC on functions \(\hbox {F}_8\)-\(\hbox {F}_{23}\)
Next, we move on to examine the performance of the proposed algorithm along with other algorithms on multimodal test functions. Functions \(\hbox {F}_8\)-\(\hbox {F}_{23}\) are used for testing the exploration capability of various algorithms. The results on the high-dimensional multimodal functions, \(\hbox {F}_8\)-\(\hbox {F}_{13}\), are reported in Table 8, while Table 9 shows how each algorithm performs on the fixed-dimensional multimodal functions (\(\hbox {F}_{14}\)-\(\hbox {F}_{23}\)).
Examining the results reported in Table 8 clearly reveals the strengths of TOC in exploration compared to the competing algorithms. Knowing that function \(\hbox {F}_{8}\) has \(F_{min}=-12,569\), whereas \(F_{min}=0\) for functions \(\hbox {F}_{9}\)-\(\hbox {F}_{13}\), we can see that TOC achieved the global minima in half of the six functions, and EHO was the winner in two of them. Specifically, TOC reached the smallest \(F_{min}\) in functions \(\hbox {F}_{9}\), \(\hbox {F}_{10}\), and \(\hbox {F}_{11}\), while EHO is also the top algorithm in functions \(\hbox {F}_{9}\) and \(\hbox {F}_{11}\). The TOC algorithm achieved results in functions \(\hbox {F}_{8}\), \(\hbox {F}_{12}\), and \(\hbox {F}_{13}\) comparable to those of the remaining algorithms. It is interesting to observe that all algorithms are quite far from the global minimum of \(\hbox {F}_{8}\), with TOC being the closest with \(F_{min}=-2485.2810\). For \(\hbox {F}_{12}\), DE is the best with Ave \(=9.32E-25\) and Std \(=4.93E-24\), while ABC is the winner in function \(\hbox {F}_{13}\) with Ave \(=1.23E-21\) and Std \(=3.38E-21\). We can also see that POA accompanied TOC and EHO in getting to the global minima of functions \(\hbox {F}_{9}\) and \(\hbox {F}_{11}\).
Regarding functions \(\hbox {F}_{14}\)-\(\hbox {F}_{23}\), the results are presented in Table 9. The nature of the functions \(\hbox {F}_{14}\)-\(\hbox {F}_{23}\) is multimodal but fixed-dimensional; they are truly tough and challenging problems. No wonder that the wins on these functions were distributed among the various optimization algorithms, with POA being the best in 3 out of the ten functions. Going through the functions in ascending order and seeking the closest Ave to \(f_{min}\) and the smallest Std, we arrive at the following findings. The \(f_{min}\) value of \(\hbox {F}_{14}\) is 0.998, and therefore the proposed TOC is the winner with Ave \(=9.9800E-01\) and Std \(=8.12E-16\). With \(f_{min}=0.00030\) for \(\hbox {F}_{15}\), WSO is the closest with Ave \(=3.0748E-04\) and Std \(=2.28E-19\). For \(\hbox {F}_{16}\), \(f_{min}=-1.0316\), and hence EHO is the best, having Ave \(=-9.9567E-01\) and Std \(=1.2162E-01\). \(\hbox {F}_{17}\) has an \(f_{min}=0.39788\), with WSO being the second-best optimizer, getting Ave \(=3.9789E-01\) and Std \(=3.17E-05\). With \(f_{min}\) of \(\hbox {F}_{18}\) being 3, POA is the closest algorithm with Ave \(=3.0000\) and Std \(=5.15E-16\). Regarding \(\hbox {F}_{19}\), \(f_{min}=-3.86\), with EHO being the best, where Ave \(=-3.3698\) and Std \(=3.4925E-01\). \(\hbox {F}_{20}\) has an \(f_{min}\) equal to \(-3.22\), and we can see that TOC is the winner with Ave \(=-1.8752\) and Std \(=5.6920E-01\). For \(\hbox {F}_{21}\), \(f_{min}=-10.1532\) and TOC is the closest with Ave \(=-7.3284E-01\) and Std \(=3.8188E-01\). As for \(\hbox {F}_{22}\), \(f_{min}=-10.4029\), with TOC being the winner, having Ave \(=-9.4329E-01\) and Std \(=3.9915E-01\). Finally, \(f_{min}=-10.5364\) for \(\hbox {F}_{23}\), and therefore TOC is the closest with Ave \(=-1.2436\) and Std \(=8.7761E-01\). To conclude, the proposed TOC algorithm performed well in both the exploitation of unimodal functions and the exploration of multimodal functions.
Collectively, TOC showed superior performance in comparison with the previous algorithms on many of the test functions. Even when the proposed algorithm is not the winner, it shows comparable performance, with \(f_{min}\) in the vicinity of the global minimum. Moreover, the significantly small values of Std offered by TOC illustrate its intrinsically stable behavior.
6.3.3 Convergence curves of TOC on functions \(\hbox {F}_1\)-\(\hbox {F}_{23}\)
One of the most popular qualitative performance tools in the literature is the convergence curve. Since optimization algorithms work iteratively, it is informative to monitor how an algorithm behaves over consecutive iterations. Convergence curves are an invaluable means of exhibiting this behavior by plotting the result each optimization algorithm achieves at the end of every iteration. In other words, convergence curves show how an algorithm approaches the global optimum over the course of its iterations. Plots of the convergence curves of the contrasted optimization algorithms, including those of the proposed TOC algorithm, are given in Fig. 7 for all basic test functions \(\hbox {F}_{1}\) - \(\hbox {F}_{23}\).
The x-axis in Fig. 7 has a linear scale showing the iteration number in steps of 100 iterations up to a maximum of 1000 iterations, while the y-axis has a logarithmic scale showing the \(f_{min}\) value obtained by each algorithm at the end of every iteration. To judge the competing algorithms, we look for quick convergence to the global minimum. This is a manifestation of the exploration capability of the optimization algorithm; a good optimizer can quickly find its way despite the presence of many local minima. Additionally, convergence curves demonstrate the stability of the optimization algorithm; a good optimizer settles on the global minimum and stays there. In other words, it is good at exploitation as well.
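The quantity such curves display at each iteration is the best-so-far fitness. As a minimal sketch (with hypothetical per-iteration values standing in for a real optimizer's output), the running minimum that a convergence curve plots can be accumulated as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical best fitness value reported at the end of each of 1000 iterations
per_iter = rng.uniform(0.0, 10.0, size=1000)

# a convergence curve plots the running minimum (best-so-far) value,
# typically against the iteration number with a log-scaled y-axis
best_so_far = np.minimum.accumulate(per_iter)

assert np.all(np.diff(best_so_far) <= 0)  # the curve never increases
```

Plotting `best_so_far` on a logarithmic y-axis, as in Fig. 7, makes small late-stage improvements visible alongside the large early-stage drops.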
Looking at Fig. 7, one can observe disparities in the convergence behaviors of the various algorithms from one function to another. The variations of behavior between functions are due to the nature of each function; as mentioned earlier, each subset of these basic functions has a distinct objective. In general, one can clearly see that EHO showed superior stability, while TOC continued to reach lower values. In conclusion, TOC succeeds in keeping a good balance between exploration and exploitation and proved to have improved performance over the rival algorithms on many of the test functions.
6.3.4 Qualitative analysis of TOC
As a further qualitative appraisal of the proposed TOC algorithm in comparison with previous algorithms, Fig. 8 gives us additional insights.
Five useful characteristics, shown in Fig. 8, are plotted for an assortment of test functions consisting of \(\hbox {F}_1\), \(\hbox {F}_3\), \(\hbox {F}_5\), \(\hbox {F}_{11}\), \(\hbox {F}_{13}\), and \(\hbox {F}_{16}\):
-
The left-most column of plots in Fig. 8 shows the function space, plotted in 3-D, of each of the six chosen test functions. One can observe big variations among the spaces of the various functions and appreciate the challenging mission of looking for the global minima in tough search spaces. The complexity of the function search spaces tells us a lot about the need for an optimization algorithm that balances exploitation and exploration and that can reach the global optimum despite the presence of several local optima.
-
We have already pointed out the importance of convergence curves in studying and understanding optimization algorithms and how they navigate the search space seeking the global optimum. The second-to-the-left column of plots in Fig. 8 shows the convergence curves of the proposed algorithm plotted in 2-D over a path of 1000 iterations. The elegance of the curves illustrates the smooth convergence toward the global minimum for each of the selected test functions, and the plots show differences in convergence behavior among the various functions. For functions \(\hbox {F}_{1}\), \(\hbox {F}_{3}\), and \(\hbox {F}_{5}\), we clearly see a quick convergence toward the global minimum and the stability of the proposed algorithm. Indeed, we see convergence in the very early iterations and settlement on the global minimum thereafter with extremely low error values. For functions \(\hbox {F}_{11}\), \(\hbox {F}_{13}\), and \(\hbox {F}_{16}\), we observe more varied convergence behaviors. For \(\hbox {F}_{11}\), we see a late convergence that only takes off after about 700 iterations. For \(\hbox {F}_{13}\), we see an approach in the direction of the global minimum and search in promising areas; the final convergence on the global minimum does not happen before about 350 iterations. For \(\hbox {F}_{16}\), we see an inability to reach the global minimum.
-
The middle column of plots in Fig. 8 shows the average fitness value against the iteration number. For the various functions, one can see how the average fitness gets smaller and smaller as the algorithm progresses through iterations. One can see small ups and downs for functions \(\hbox {F}_{1}\) and \(\hbox {F}_{3}\), for example, and the steady fading away of the average fitness value for functions \(\hbox {F}_{11}\) and \(\hbox {F}_{13}\), but the overall behavior is satisfactory.
-
The trajectory plots in the second-to-the-right column of Fig. 8 illustrate the strength of the proposed algorithm in exploration specifically and in revolving around the optimum value. We can see the variations among the six selected functions.
-
It is interesting to look at the search history of the algorithm, that is, the points at which the algorithm stopped in its attempts to reach the global minima. The plots illustrate the complex search spaces of the various test functions, with favorable and unfavorable regions, and the success of the proposed algorithm in escaping local minima.
The characteristics studied in Fig. 8 demonstrate the effectiveness of TOC in balancing exploration and exploitation, avoiding local minima and getting to the global minima, and remaining robust in complex search spaces, in addition to its stability when applied to sophisticated test functions.
6.4 Performance of TOC on CEC-2017 benchmark
The performance and reliability levels of the proposed TOC algorithm were tested using the publicly available CEC-2017 benchmark test suite. This test set consists of thirty test functions produced by shifting and rotation that are unimodal, multimodal, hybrid, and composite (Awad et al. 2017). These test functions are divided into four classes: 1) unimodal test functions (f1 and f3); 2) basic multimodal test functions (f4 to f10); 3) hybrid test functions (f11 to f20); and 4) composition test functions (f21 to f30). In this context, most of these evaluation functions rank among the hardest composite and hybrid functions. Note that this collection does not include the f2 function due to its unstable behavior, especially in high dimensions. The search range for all of these functions is \([-100, 100]\) in every dimension, for each of the test-function dimensions considered below.
The test functions in the CEC-2017 benchmark suite were designed to assess both the exploration capacity of optimization methods in the search space of concern and their reliability in escaping local optima. It is well known that a promising optimization method should avoid local optimal solutions and arrive at the global optimum quickly and effectively. This test collection was used to investigate the strength of the proposed TOC algorithm, as it offers difficult test functions and thus presents great difficulty when evaluating the algorithm's efficiency. Further details about the CEC-2017 standard test functions are given in Awad et al. (2017).
The solution error measure \(f(x) - f(x^*)\) was employed, where x represents the best solution the algorithm achieved in a single run and \(x^*\) represents the known global optimum of each test function. For every test function in the CEC-2017 benchmark, the TOC algorithm used 100 search agents and a maximum of \(10000 \times d\) FEs, where d represents the function's dimension. For every test function in every experiment, TOC was run 51 times independently, with the maximum number of FEs as the stopping condition. The parameter settings of the proposed TOC algorithm are given in Table 6. This study examines test functions with dimensions of 10, 30, and 50.
To obtain a more precise evaluation of the performance of TOC on this evaluation suite, the collected results of TOC, in terms of mean errors and standard deviations, are contrasted with those of the previously mentioned meta-heuristic algorithms. To ensure a fair comparison, the proposed TOC algorithm, along with the other contenders, used an upper limit of \(10000 \times d\) FEs for each problem in the CEC-2017 test suite. The stopping criterion for the competing algorithms was the maximum number of FEs, and each algorithm was run 51 times independently for each test problem in each experiment. The parameter settings for the competing meta-heuristics are shown in Table 6. This study examines test functions with dimensions of 10, 30, and 50. The mean error and standard deviation findings for the proposed TOC optimizer and the other contending optimizers on the CEC-2017 problems with 10, 30, and 50 dimensions are displayed in Tables 10, 11, and 12, respectively. These tables show the solutions of the proposed TOC algorithm and of many other state-of-the-art optimization algorithms, together with the mean errors and standard deviations for each problem in the CEC-2017 benchmark over 51 independent runs. The best result for every test problem in these tables is shown in boldface and the second-best result is underlined. It is noteworthy that standard deviation and mean error values smaller than \(1E-08\) are considered zero.
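The reporting convention just described can be sketched in a few lines. The helper name `summarize_errors` and the run data below are hypothetical; the sketch only illustrates computing the mean solution error and standard deviation over 51 independent runs, zeroing values below \(1E-08\):

```python
import numpy as np

def summarize_errors(best_per_run, f_star, tol=1e-8):
    """Mean solution error and standard deviation over independent runs.

    best_per_run: best objective value f(x) from each independent run
    f_star: known global optimum f(x*) of the test function
    Values below `tol` are reported as zero, as done in the result tables.
    """
    errors = np.asarray(best_per_run, dtype=float) - f_star
    mean_err, std_err = errors.mean(), errors.std()
    if mean_err < tol:
        mean_err = 0.0
    if std_err < tol:
        std_err = 0.0
    return mean_err, std_err

# hypothetical results of 51 runs on a function whose optimum is f(x*) = 100
runs = 100.0 + np.full(51, 5e-9)
print(summarize_errors(runs, 100.0))  # -> (0.0, 0.0)
```

Errors below the \(1E-08\) threshold collapse to zero, which is why several table entries read exactly 0.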
Once the mean error values of Tables 10, 11, and 12 are examined, it becomes evident that TOC successfully solved the functions of the CEC-2017 test suite. Out of a total of 29 test functions for each dimension, TOC reported the optimum outcomes for 19 problems in 10 dimensions, 8 problems in 30 dimensions, and 10 problems in 50 dimensions. According to the results shown in Tables 10, 11, and 12, the proposed TOC is the best optimizer among all competitors in 10d and 50d, while EHO is the most effective optimizer in 30d. Outstandingly, CSA also has a promising rank in 30d, and TOC ranked second in 30d when it comes to optimizing test problems of that dimension. In several test functions, TOC obtained optimum results on par with those provided by EHO and CSA.
These findings show how well the proposed TOC algorithm performs when optimizing challenging test functions, including high-dimensional ones such as those in the CEC-2017 benchmark. From a different perspective, TOC outperformed several other well-known algorithms, such as CS, SCA, SO, and MFO, by obtaining distinctive standard deviation scores in a variety of functions; when perusing the standard deviation values of Table 10, it scored optimal values akin to those attained by the EHO and CSA algorithms in the first three test functions.
In terms of standard deviation values, Tables 10, 11, and 12 show that TOC performed better than the other algorithms in 16, 6, and 8 functions in 10d, 30d, and 50d, respectively. This confirms the observation that TOC shows significant degrees of stability when the problems under examination are evaluated in search regions of different dimensions. Specifically, in the unimodal test problems (f1 and f3), TOC showed remarkable performance across the 51 independent runs in 10, 30, and 50 dimensions, where it consistently obtained the global best solutions. Additionally, in two of the test cases for the hybrid functions, namely f15 and f18 in 10d, TOC was able to determine the optimal mean error solutions. For every other problem, aside from f19, where TOC struggled, it earned a reasonable mean error.
The proposed TOC method identified the best solutions for the composition functions, which are the hardest to solve in the CEC-2017 benchmark, in test cases f23, f25, and f26 in 10d. It could not, however, produce the optimal results in a number of test functions, including f20 in dimensions 30 and 50, and f30 in dimension 50. Thus, the proposed TOC algorithm is not far from the optimal solutions in any of the CEC-2017 test functions, but it periodically gets stuck in local optima. Finally, it is clear that in many CEC-2017 test functions, the CSA and CS algorithms generated acceptable solutions in a few of the considered dimensions, while SO did poorly in many test functions across all dimensions. In most test functions, TOC, CS, EHO, and CSA fared better than the other rivals in all dimensions considered. Furthermore, TOC competes strongly with CSA and EHO on several test functions in 10, 30, and 50 dimensions, and exceeds the other rivals on a broad range of test functions over all dimensions. This accomplishment offers additional proof of the superiority of TOC over recently created and well-researched meta-heuristics, as well as its ability to outperform high-performance optimizers such as CSA and EHO on commonly used benchmark test functions.
Overall, the efficacy of competing algorithms such as SO and SCA degrades significantly as the search-space dimension rises from 10 to 30 to 50, while the overall efficiency of TOC remains almost constant and only slightly declines.
6.4.1 Convergence analysis of TOC on CEC-2017
Convergence analysis is an essential part of understanding the exploration and exploitation features of meta-heuristic algorithms in the search context. Following suit, Figs. 9 and 10 present the convergence curves of the proposed TOC algorithm and the other contending algorithms in terms of the fitness scores, on a Log\(_{10}\) scale, of the median run over 51 runs for each function of the CEC-2017 benchmark with dimensions 10 and 30, respectively. The purpose of these convergence curves is to examine the convergence behavior of TOC in relation to its competitors.
The convergence curves in Figs. 9 and 10 show the fitness values of the best solutions found so far as a function of the number of function evaluations in the optimization process of the proposed TOC optimizer. In these curves, the fitness value is expressed on the y-axis on a Log\(_{10}\) scale, while the number of function evaluations is shown on the x-axis. Additionally, for all of the convergence plots in these figures, TOC used a population size of 100, paired with an upper limit of \(10000 \times d\) FEs, where d represents the dimension of the functions considered. More specifically, the overall numbers of function evaluations employed by the proposed TOC and the other competing methods are 100,000 and 300,000 for functions with dimensions of 10 and 30, respectively. The convergence graphs presented in Figs. 9 and 10 make it evident that TOC outperformed the competing algorithms and converges quickly in the early evaluations of the optimization process for the CEC-2017 test problems. As can be observed graphically in Figs. 9 and 10, the convergence curves of TOC display different behaviors during its iterative process for different test functions.
Judged at the halfway point and at the final evaluations, the optimization process has improved significantly, although the proposed TOC algorithm exhibits a discernible drop in its rate of convergence. This is due to the original and very effective global search mechanism of the proposed TOC algorithm, as well as its local search capabilities around the best-preserved locations in the search space. During the preliminary stages of the optimization procedure, however, these convergence curves experience transient, unanticipated dips. To reach the optimum solutions in the final evaluations, the algorithm progressively converges after the first few evaluations to leverage the global or near-global optimal solutions. This demonstrates how, at the beginning of the proposed TOC algorithm, the search agents switch abruptly and thereafter fluctuate in proportion to the number of FEs. From this perspective, the search agents need to circle the search area and conduct a complete examination of it before the proposed TOC algorithm focuses its optimization search. Then, by encouraging the search agents to look for solutions in promising local areas of the search space and pushing them to undertake a global search for the global optimum, TOC makes use of every possible area inside the search space.
In sum, as demonstrated by the convergence curves in Figs. 9 and 10, most of the CEC-2017 test functions can be solved in a practical manner by TOC in fewer iterations than the maximum predefined number of FEs. The proposed TOC algorithm is sufficiently scalable to exploit the full budget of FEs while successfully balancing exploration and exploitation capabilities. As demonstrated in Figs. 9 and 10, TOC has proven to be a powerful and effective method for optimizing unconstrained functions within a limited number of FEs, a capability that matters greatly when TOC tackles difficult real-world optimization problems.
6.4.2 Sensitivity analysis of TOC on CEC-2017
For a meta-heuristic algorithm to be effective and to balance exploration and intensification well, its parameter values are crucial. Therefore, choosing the best parameter values is one of the biggest obstacles when applying a meta-heuristic to a specific optimization problem. As extensively reported in the literature, alternative values of the control parameters of meta-heuristics may produce varied outcomes (Braik et al. 2021). This section examines the sensitivity of the control parameters \(a_0\) and \(b_r\) of TOC, which are defined in Eqs. 49 and 35, respectively. All other control parameters of TOC are adapted through the optimization process and depend on either \(a_0\) or \(b_r\). A Design of Experiments (DoE) approach was used to empirically evaluate TOC on the CEC-2017 benchmark to identify the best parameter set (Braik et al. 2021). The study used a full-factorial design of these parameters on the 10-dimensional CEC-2017 functions. Each parameter was given the following values: \(a_0 \in \{0.5, 1.0, 1.5, 2.0\}\) and \(b_r \in \{100, 100000, 0.1, 0.001\}\). Using 10,000 FEs and 100 search agents, the proposed TOC technique was evaluated for each combination of \(a_0\) and \(b_r\) to determine how effectively these parameters balance exploration and exploitation. Table 13 displays the mean error values that demonstrate how sensitive TOC is to these values on the ten-dimensional CEC-2017 functions.
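The full-factorial design described above can be sketched as follows. Note that `toc_mean_error` is a hypothetical placeholder surrogate, not the actual TOC algorithm; in the paper's study each grid point would be an actual TOC evaluation on the 10-dimensional CEC-2017 functions.

```python
import itertools

def toc_mean_error(a0, br):
    # placeholder surrogate so the sketch is runnable; in reality this would
    # run TOC with 10,000 FEs and 100 search agents and return the mean error
    return abs(a0 - 2.0) + (0.0 if br == 100000 else 0.5)

a0_values = [0.5, 1.0, 1.5, 2.0]
br_values = [100, 100000, 0.1, 0.001]

# full-factorial DoE: every (a0, br) combination is evaluated once
results = {(a0, br): toc_mean_error(a0, br)
           for a0, br in itertools.product(a0_values, br_values)}
best = min(results, key=results.get)
print(best)  # -> (2.0, 100000) under this surrogate
```

The grid has \(4 \times 4 = 16\) cells, matching the layout of the mean errors reported in Table 13.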
It is clear from Table 13, which shows the margins of error between the results, that TOC is rather sensitive to these parameters. The table shows that the TOC algorithm is quite stable when \(a_0\) is between 1.5 and 2.0, and that TOC produced the best results for \(a_0 = 2\). Furthermore, TOC was clearly at its peak at \(b_r = 100000\). This analysis therefore indicates that the best values of \(a_0\) and \(b_r\) are 2 and 100000, respectively, and these settings were applied to every problem addressed in this work. As often happens, however, only suitable settings, perhaps not the truly best settings, are obtained; these values can be adjusted for other problems as well.
6.5 Statistical test analysis
This section applies two successive statistical tests, Friedman's test followed by Holm's test, to ascertain whether the accuracy difference margins of all optimization methods on the benchmark test functions under study are statistically significant (Pereira et al. 2015).
As elucidated in the earlier subsections, a thorough statistical analysis utilizing mean and standard deviation measures is essential to verify and confirm the consistency of the proposed optimizer. The use of average results obtained from several independent runs facilitates a meticulous analysis of the optimizer's effectiveness and consistency. The findings validate the optimizer's ability to explore the search space and identify regions of potential interest. These results, however, are insufficient to prove the proposed optimizer's clear-cut advantage. To ensure that the findings obtained by the proposed TOC algorithm are not the result of random chance, further thorough statistical tests are conducted.
To obtain a trustworthy comparison when employing Friedman's and Holm's statistical tests, one should compare more than ten benchmark functions and more than five different optimization techniques (Demšar 2006). The performance levels of twelve meta-heuristics were investigated in this work. With respect to the test functions, two test suites were comprehensively examined. The first test suite contains 23 test functions, including unimodal, multimodal, and fixed-dimension multimodal functions. The second test group consists of the CEC-2017 test functions, which include 29 functions with various levels of complexity and dimensionality.
The competing algorithms on the two benchmark sets under consideration are graded according to Friedman's test based on their overall performance on the evaluated test functions. The null hypothesis that all algorithms perform equally can be rejected if the p-value divulged by Friedman's test is less than 0.05, as this indicates statistically significant disparities between the competing algorithms' performances. Under Friedman's test, the algorithm with the lowest mean rank performs best, while the algorithm with the highest mean rank performs worst. The method with the lowest mean rank serves as the control approach in the post-hoc analysis. The ranking results for the algorithms under Friedman's test on the basic benchmark problems \(\hbox {F}_1\) - \(\hbox {F}_7\), \(\hbox {F}_8\) - \(\hbox {F}_{13}\), and \(\hbox {F}_{14}\) - \(\hbox {F}_{23}\) with \(\alpha = 0.05\) are presented in Table 14, where the results in Tables 7, 8, and 9 served as the foundation for the findings in Table 14.
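The mean ranks underlying Friedman's test can be sketched as follows. The score matrix is hypothetical (rows are test functions, columns are algorithms); the double-argsort ranking assumes no ties, whereas a full Friedman implementation (e.g. `scipy.stats.friedmanchisquare`) averages tied ranks.

```python
import numpy as np

# hypothetical mean errors: rows = test functions, columns = algorithms
scores = np.array([
    [0.1, 0.3, 0.2],
    [1.0, 2.0, 3.0],
    [0.5, 0.4, 0.6],
    [0.2, 0.1, 0.3],
])

# rank the algorithms within each function (1 = best, i.e. smallest error);
# double argsort yields ordinal ranks, valid here because there are no ties
ranks = scores.argsort(axis=1).argsort(axis=1) + 1
mean_ranks = ranks.mean(axis=0)

# Friedman chi-square statistic computed from the mean ranks
n, k = scores.shape
chi2 = 12 * n / (k * (k + 1)) * np.sum((mean_ranks - (k + 1) / 2) ** 2)
print(mean_ranks, chi2)  # mean ranks [1.5, 1.75, 2.75], chi2 = 3.5
```

The algorithm with the lowest mean rank (the first column here) would serve as the control method in the subsequent post-hoc analysis.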
Based on the evaluation results of the methods on the test functions \(\hbox {F}_1\) - \(\hbox {F}_7\), \(\hbox {F}_8\) - \(\hbox {F}_{13}\), and \(\hbox {F}_{14}\) - \(\hbox {F}_{23}\), Table 14 displays the p-values computed by Friedman's test, which are 0.00259329, 0.06391161, and 2.664854E−05, respectively. The p-values for the unimodal (\(\hbox {F}_1\) - \(\hbox {F}_7\)) and fixed-dimension multimodal (\(\hbox {F}_{14}\) - \(\hbox {F}_{23}\)) sets are smaller than \(\alpha = 0.05\); for these sets, one must therefore reject the null hypothesis and embrace the alternative, which verifies that the efficacy of the methods differs in a statistically significant way. For the multimodal set \(\hbox {F}_8\) - \(\hbox {F}_{13}\), the p-value slightly exceeds the significance level, so the differences on that set are not significant at the 5% level. The SO method is the control algorithm that outperformed all other competitors in the optimization of the unimodal test functions (\(\hbox {F}_1\) - \(\hbox {F}_7\)), according to the ranking findings in Table 14. Notably, at the considered significance level of 5%, the proposed TOC algorithm outscored many of the competing algorithms on these test functions, with a respectable mean rank of 4.000000. The competing meta-heuristics can be ranked for the unimodal test functions in this order: SO, POA, TOC, EHO, CapSA, MFO, DE, ABC, SCA, CSA, CS, and lastly WSO.
Studying the mean rankings for the multimodal test functions \(\hbox {F}_8\) - \(\hbox {F}_{13}\) in Table 14, it can be seen that the CapSA algorithm ranked first by receiving the lowest average ranking, while the proposed TOC algorithm ranked third. The competing algorithms' average ranking can be stated as follows: CapSA, SO, TOC, ABC, POA, DE, EHO, WSO, SCA, MFO, CSA, and finally CS.
For the fixed-dimension multimodal functions, it is evident that the EHO algorithm, which had the lowest average ranking, is ranked first, while the proposed TOC method is ranked second. Examining the average ranking results obtained by the competing algorithms while optimizing the fixed-dimension multimodal test functions, EHO comes first, followed by TOC, SCA, MFO, CSA, WSO, DE, POA, ABC, SO, CS, and finally CapSA, which has the weakest performance on this benchmark test set.
To determine which algorithms perform considerably differently from the control algorithm and which operate similarly to it, further steps must be taken. These steps are crucial to judge whether the performance of the control algorithms is statistically different from that of the other competing algorithms. To decide whether there is an important distinction between the efficiency of the control algorithm and the other rivals on each benchmark group in Table 14, a post-hoc statistical method known as Holm's test (Holm 1979) was used in this work. This technique is critical to settle whether algorithms outperform or under-perform the control algorithms revealed by Friedman's test.
Interestingly, in every benchmark test group, the control method is the one with the lowest mean rank. The SO algorithm is the control method for test functions \(\hbox {F}_1\) - \(\hbox {F}_7\); for the multimodal test functions, the control method is the CapSA algorithm; and for the fixed-dimension multimodal test functions, the control algorithm is EHO. Holm's test compares all meta-heuristics, sorted by their p-values, against the threshold \(\alpha /(k - i)\), where k is the number of hypotheses (degrees of freedom) and i is the position of the meta-heuristic in that ordering. The approach rejects the null hypotheses sequentially, beginning with the smallest (most significant) p-value, as long as \(p_i < \alpha /(k - i)\). The process stops at the first hypothesis that cannot be rejected, and that hypothesis and all remaining ones are considered plausible. Table 15 presents the statistical outcomes of Holm's test for all functions in the fundamental benchmark set (\(\hbox {F}_1\) - \(\hbox {F}_{23}\)).
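The step-down procedure with the \(\alpha /(k - i)\) thresholds described here can be sketched in a few lines; the p-values below are hypothetical and stand for comparisons of each algorithm against the control method.

```python
def holm_reject(p_values, alpha=0.05):
    """Holm's step-down procedure: sort hypotheses by p-value (smallest
    first) and reject while p_i < alpha / (k - i), where i is the 0-based
    position in the sorted order; stop at the first non-rejection."""
    k = len(p_values)
    order = sorted(range(k), key=lambda i: p_values[i])
    rejected = [False] * k
    for step, idx in enumerate(order):
        if p_values[idx] < alpha / (k - step):
            rejected[idx] = True
        else:
            break  # retain this hypothesis and all remaining ones
    return rejected

# hypothetical p-values for four comparisons against the control algorithm
print(holm_reject([0.001, 0.2, 0.01, 0.04]))  # -> [True, False, True, False]
```

Note how the third hypothesis (p = 0.04) survives: by the time it is reached, the threshold has tightened to \(0.05/2 = 0.025\), so it and every later hypothesis are retained.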
The hypotheses with p values of \(\le 0.006250\), \(\le 0.005000\), and \(\le 0.012500\) in the optimization of unimodal, multimodal, and fixed-dimension multimodal test functions, respectively, were rejected using Holm’s technique in Table 15. The proposed TOC algorithm is statistically important and efficient in providing results that are substantially competitive to those gathered by other meta-heuristics reported in the literature, as demonstrated by the findings of Holm’s method for \(\hbox {F}_1\) - \(\hbox {F}_{23}\). It is clear from reading the results shown in Table 15 that when it comes to optimizing \(\hbox {F}_1\) - \(\hbox {F}_{23}\), the proposed TOC algorithm performs much better than many other competing methods.
A summary of the average ranking results produced for all algorithms on the CEC-2017 benchmark set for different degrees of dimensionality (i.e., \(d = 10, d = 30\), and \(d = 50\)) for all test functions is shown in Table 16.
In Table 16, the lowest ranking value indicates the best performance. As indicated in Table 16, the p-values were determined using Friedman's test. All of the p-values for the CEC-2017 test functions in the considered dimensions, which were 1.326555E−10, 1.385879E−10, and 1.406160E−10, respectively, were beneath the significance level \(\alpha = 0.05\). As a result, the null hypothesis was rejected and the alternative hypothesis was accepted. The alternative hypothesis suggests that there are differences in the algorithms' performance when applied to the optimization problems, contrary to the null hypothesis, which claims that all the compared algorithms exhibit the same performance behavior.
Based on the statistical findings displayed in Table 16, it can be settled that the proposed TOC algorithm exhibits statistical significance and surpasses all other rival methods across all dimensions considered. It is apparent from this that the proposed TOC algorithm placed top, in terms of optimizing CEC-2017 test functions for 10, 30, and 50 dimensions. These results show how robust this proposed algorithm is in comparison to all other competitors. For dimensions 10, 30, and 50, the proposed TOC algorithm received rankings of 1.551724, 2.068965, and 1.931034, respectively, while EHO algorithm received rankings of 2.827586, 1.896551, and 2.172413, for the same dimensions of CEC-2017 benchmark functions.
The results of Friedman’s test are then verified using Holm’s statistical test as a post-test process on the CEC-2017 suite using benchmark functions with dimensions of 10, 30, and 50. For the proposed TOC algorithm as well as other competitors’ algorithms in optimizing the CEC-2017 benchmark functions for dimensions 10, 30, and 50, Table 17 presents the statistical findings obtained using Holm’s method.
Applying Holm's test, as shown in Table 17, which discards hypotheses with p-values \(\le 0.025000\), \(\le 0.016666\), and \(\le 0.050000\) in optimizing CEC-2017 with dimensions of 10, 30, and 50, respectively, allowed for a comparison of the proposed TOC algorithm with the other competing methods. As per the results in Table 17, it can be concluded that, when it comes to optimizing CEC-2017 in 10 dimensions, there is an important distinction between TOC and the WSO, CSA, CapSA, CS, MFO, ABC, SCA, SO, POA, and DE algorithms, but not between TOC and EHO.
According to the statistical findings computed on the basis of Friedman's and Holm's tests on CEC-2017 with dimension 30, there is no substantial difference between EHO, TOC, and one other competing algorithm (i.e., WSO); the EHO method, however, differs significantly from the remaining algorithms (i.e., CapSA, ABC, CSA, MFO, CS, DE, POA, SO, and SCA). According to the findings obtained on CEC-2017 with dimension 50, there is a significant difference between TOC and the comparative algorithms WSO, CapSA, DE, SO, CS, ABC, MFO, SCA, CSA, and POA, but no substantial difference between TOC and EHO. This outcome, in fact, aligns with the earlier ones.
Tables 14, 15, 16, and 17 reveal that the proposed TOC algorithm is an efficient meta-heuristic that produced encouraging outcomes on the benchmark test functions studied. Its performance is statistically significant: it outperforms many of the rival techniques mentioned above and is not significantly different from the remaining superior algorithms.
One important conclusion drawn from the statistical evaluation results discussed above is that, on average, the proposed TOC algorithm outperformed several reliable state-of-the-art meta-heuristics reported in the literature, including CSA, CapSA, MFO, and ABC. This highlights the good performance of TOC and corroborates that this proposed algorithm can effectively explore the search space, regardless of the number of optimal locations or the small, medium, or large dimensions of the optimization problems. Moreover, the average ranking shows that, in the optimization of the fundamental benchmark functions \(\hbox {F}_1\) - \(\hbox {F}_{23}\), the performance scores of TOC lag slightly behind SO, but TOC exceeded many other competitive contenders such as the CS and WSO algorithms. Through the meaningful mathematical models of the proposed TOC algorithm, one can conclude the exceptional superiority of this algorithm on CEC-2017. Overall, the findings of the statistical analysis indicate that the TOC algorithm is a dependable and efficient optimizer with well-calibrated exploration and exploitation characteristics that preserves a balance between local and global search. These findings serve as motivating evidence to use the proposed method to tackle more challenging real-world optimization problems.
7 Applications of TOC on engineering problems
The applicability of the proposed TOC algorithm to traditional engineering design problems reveals its dependability in solving real-world problems, particularly constrained optimization problems. The welded beam, pressure vessel, tension/compression spring, speed reducer, three-bar truss, I-beam, cantilever beam, and step-cone pulley design problems are the eight well-studied engineering design problems solved using the proposed TOC algorithm in this section. The numerous constraints in these design problems necessitate a constraint management technique to optimize them effectively. A penalty coefficient is applied to the objective function when the decision variables of the problems under study fall outside their acceptable ranges.
7.1 Constraint handling techniques
Constraint handling describes the practice of accounting for equality and inequality constraints both at the design stage and during optimization. Constraint handling techniques (CHTs) are crucial when employing meta-heuristics, which were originally created to tackle unconstrained optimization problems. CHTs have a significant impact on how well meta-heuristic algorithms perform in constrained optimization cases, and they manage both feasible and infeasible candidate solutions. This implies that to optimize constrained optimization problems, an optimization method must include a constraint handling method. The literature provides a range of methods for handling constraints, such as the separation of objectives and constraints, hybrid approaches, special operators, repair algorithms, and penalty functions (Coello Coello 2002). The most fundamental techniques, known as penalty functions, are employed in this work, as devising a dedicated, efficient mechanism for tackling the constraints of the optimization problems solved by TOC is outside its scope. These methods convert constrained optimization into unconstrained optimization by penalizing infeasible candidate solutions. Penalty functions are further classified as dynamic, static, adaptive, annealing, death penalty, and co-evolutionary (Coello Coello 2002; Mirjalili and Lewis 2016). When the decision variables of the examined optimization problems deviate from their acceptable boundaries, a penalty factor is added.
In the death penalty approach, the search agents are treated equally and are penalized by assigning a large fitness score (or a small objective value in the context of maximization) if they violate any constraint. The rejection of infeasible solutions by the meta-heuristic is a logical consequence of this process during optimization. The advantages of this method for the TOC optimizer include low computational burden, simplicity, and ease of application without the need for major modifications. This method, however, does not utilize the knowledge of infeasible solutions, which might be helpful when dealing with optimization problems with dominant infeasible zones. To address constraints in a straightforward manner, TOC used a death penalty function, which is explained below.
The generic form of penalty functions, adopted by meta-heuristic algorithms from mathematical programming approaches, is given in Eq. 70. It transforms the constrained numerical optimization problem into an unconstrained one (Bäck 1997).
where \(\phi (\vec {x})\) is the extended target function to be optimized, and \(p(\vec {x})\) is the penalty value that may be calculated as shown in Eq. 71.
where \(r_i\) and \(c_j\) are positive constants known as “penalty factors”.
It is clear that the objective is to reduce the fitness of infeasible solutions in order to increase the likelihood that feasible solutions will be chosen. Equation 70 applies a penalty value to the fitness of the solution since low values are inherently selected in a minimization problem. The severity of the imposed penalties must be decided carefully by fine-tuning the penalty factors. These values are very problem-dependent, even though they are quite simple to apply (Runarsson and Yao 2000). The TOC optimizer was modified to incorporate a death penalty handling mechanism to address the constraints of the engineering design problems mentioned above. This was done to fairly compare the TOC algorithm with rival methods. Under this constraint-handling strategy, the penalty function assigns the worst fitness value to infeasible solutions or removes them completely from the optimization process (Back 1991). The death penalty approach initializes all solutions in the feasible region of the search space and assumes an infinite penalty factor. This implies that feasible solutions consistently perform better than infeasible solutions. Therefore, the death penalty approach avoids the necessity of adjusting the penalty function’s parameters. Another advantage of this approach is that it eliminates the need to add a penalized constraint violation to the objective function. Thus, throughout the search process, the objective function value and feasibility of each candidate design may be evaluated separately.
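For illustration, the generic penalty transformation of Eqs. 70 and 71 can be sketched as follows; the quadratic violation terms and the default factor values are illustrative assumptions, not the paper’s exact settings:

```python
def penalized_fitness(objective, ineq_constraints, eq_constraints, x,
                      r=1e6, c=1e6):
    """Static penalty in the spirit of Eqs. 70-71: phi(x) = f(x) + p(x),
    with p(x) accumulating weighted violations of g_i(x) <= 0 and
    h_j(x) = 0. The factors r and c play the role of the penalty
    factors r_i and c_j (values here are illustrative)."""
    p = sum(r * max(0.0, g(x)) ** 2 for g in ineq_constraints)
    p += sum(c * abs(h(x)) ** 2 for h in eq_constraints)
    return objective(x) + p
```

A feasible point is left unpenalized, while any violation inflates the fitness in proportion to its magnitude.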
It is important to keep in mind that TOC used the same parameter settings as indicated in Table 6 to solve the eight constrained engineering design problems explained in the next subsections.
7.1.1 Death penalty method
To handle the constraints of the above-mentioned design problems, the death penalty handling technique (Yang 2010) was used with the proposed TOC algorithm. This was done to compare the TOC algorithm fairly with the competing algorithms used in this study. The penalty function of this constraint handling method is defined as follows:
where \(\zeta (z)\) is the objective function, \(U_j(z)\) and \(t_i(z)\) stand for the two sets of constraint functions, \(o_j\) and \(l_i\) stand for two positive penalty constants, and the parameters \(\psi \) and \(\gamma \) were set to 2 and 1, respectively.
The low computational burden and simplicity of this constraint technique are its notable features. However, it is less suitable for problems with dominant infeasible areas, as it does not utilize knowledge about infeasible solutions. To keep the search agents of the optimization technique within the problem’s feasible search space, this method assigns the penalty amount to each infeasible solution through the penalty function.
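A minimal sketch of the death-penalty rule described above; the rejection constant and tolerance are assumptions for illustration:

```python
DEATH_PENALTY = 1e30  # effectively infinite fitness for a minimization problem

def death_penalty_fitness(objective, ineq_constraints, x, tol=1e-9):
    """Assign a huge fitness to any candidate that violates a constraint,
    so infeasible solutions are never preferred over feasible ones."""
    if any(g(x) > tol for g in ineq_constraints):
        return DEATH_PENALTY
    return objective(x)
```

Because the rejection value dominates every feasible objective value, no penalty-factor tuning is required, matching the property noted above.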
It is important to note that the proposed TOC optimizer and all other contending algorithms used the same initialization procedure described above to solve every engineering design problem optimized below. This implies that the competitors employed the same number of search agents and iterations as those employed in the previous optimization of the benchmark problems. Table 6 lists the parameter configurations utilized by TOC and the other opposing algorithms to solve each of the engineering applications listed below.
7.2 Welded beam design problem
This problem concerns a cantilever beam with a welded joint fabricated at one end. This design case aims to produce a welded beam for the structure identified in Fig. 11 (Wang and Guo 2014) at the lowest feasible manufacturing cost.
A schematic diagram of a welded beam structure (Wang and Guo 2014)
The beam (A) and the welding (B) required to connect it to the beam structural member are the constituent pieces of the welded beam construction. This problem is subject to the following constraints: the beam’s end deflection (\(\delta \)), buckling load (\(P_c\)), shear stress (\(\tau \)), and bending stress (\(\theta \)). To solve this design problem, the optimal possible configuration of structural qualities of the welded beam design must be determined. The height of the bar (t), length of the clamping bar (l), weld thickness (h), and bar thickness (b) are the structural characteristics of this design.
These parameters were represented by the following vector: \(\vec {x} = [x_1, x_2, x_3, x_4]\), where the elements of this vector, namely \(x_1, x_2, x_3\) and \(x_4\) stand for the parameters h, l, t, and b, respectively. The following mathematical formula represents the cost value of the function for this design problem:
\(f(\vec {x}) = 1.10471x^2_1x_2 +0.04811x_3x_4(14.0+x_2)\)
This design is subject to the following constraints,
the following definition stands for other parameters of this design problem:
where \(L =14\)in, \(P =6000lb\), \(E =30*10^6\) psi, \(\delta _{max} = 0.25\)inch, \(\sigma _{max} = 30000\) psi, and \(G = 12*10^6\) psi.
The ranges of \(x_1\), \(x_2\), \(x_3\) and \(x_4\) were selected to be, in the following order, \(0.1\le x_1\le 2\), \(0.1\le x_2\le 10\), \(0.1\le x_3\le 10\), and \(0.1\le x_4\le 2\), respectively.
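The cost function and variable bounds above translate directly into code. Only the objective and bounds are shown here; the shear, bending, buckling, and deflection constraint functions are omitted for brevity:

```python
def welded_beam_cost(x):
    """f(x) = 1.10471*x1^2*x2 + 0.04811*x3*x4*(14.0 + x2),
    with x = [h, l, t, b] as defined in the text."""
    x1, x2, x3, x4 = x
    return 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

# Bounds from the text: 0.1<=h<=2, 0.1<=l<=10, 0.1<=t<=10, 0.1<=b<=2.
BOUNDS = [(0.1, 2.0), (0.1, 10.0), (0.1, 10.0), (0.1, 2.0)]

def in_bounds(x):
    return all(lo <= xi <= hi for (lo, hi), xi in zip(BOUNDS, x))
```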
The optimal solutions as well as the values of the decision optimization parameters obtained by the proposed TOC algorithm for addressing this optimization problem are contrasted with those of other competing algorithms in Table 18.
The proposed TOC algorithm found the optimal design for the welded beam structure with an optimal cost of 1.72485230, which is the lowest cost among all the competing algorithms, as per the results in Table 18. In solving this design problem, the proposed TOC algorithm indeed performed better than many other competing algorithms.
Table 19, which presents a comparison of the best, worst, average, and standard deviation outcomes, shows the statistical effectiveness of TOC algorithm against other rivals over thirty separate runs.
According to the findings of Table 19, the TOC method beat the other meta-heuristic techniques with the lowest mean and standard deviation scores. This table’s findings also show that, in comparison to the other algorithms, TOC exhibits significantly better average and standard deviation behavior as well as the ability to identify low values of the best and worst solutions. This further demonstrates the excellence and dependability of the proposed TOC algorithm in handling this design problem.
7.3 Pressure vessel design problem
This standard engineering design problem with mixed type variables (continuous/discrete) is one of the frequently used benchmark design problems (Kannan and Kramer 1994). The goal of this problem is to reduce the total cost of the materials used in the welding and construction of the cylindrical pressure vessel, as is shown in Fig. 12, and is strengthened at both ends by hemispherical heads.
A schematic structure of the cross-section of pressure vessel design problem (Kannan and Kramer 1994)
The design problem’s four optimization variables are the thickness of the shell (\(\hbox {T}_s\)), the thickness of the head (\(\hbox {T}_h\)), the length of the cylindrical part of the vessel without considering the head (L), and the inner radius (R). The parameters L and R are continuous, whereas \(\hbox {T}_s\) and \(\hbox {T}_h\) are discrete values restricted to integer multiples of 0.0625 inch. The four optimization variables, \(\hbox {T}_s\), \(\hbox {T}_h\), R, and L, are represented by the vector \(\vec {x} = [x_1, x_2, x_3, x_4]\), where \(x_1\), \(x_2\), \(x_3\), and \(x_4\) represent them, respectively. The mathematical formulation of this design problem is as follows:
The objective function to be minimized is given below:
This problem is subject to four constraints as shown below:
where \(0 \le x_1 \le 99\), \(0 \le x_2 \le 99\), \(10 \le x_3 \le 200\) and \(10 \le x_4 \le 200\).
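Since the problem’s equations are referenced but not reproduced in full above, the following sketch assumes the widely used formulation of Kannan and Kramer (1994) for the objective and the four constraints; it should be checked against the paper’s exact equation set:

```python
import math

def pressure_vessel_cost(x):
    """Total material/welding cost, x = [Ts, Th, R, L] (standard form
    from the literature, assumed to match the paper's equations)."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def pressure_vessel_constraints(x):
    """Inequality constraints g_i(x) <= 0 in the standard formulation."""
    x1, x2, x3, x4 = x
    return [
        -x1 + 0.0193 * x3,                                               # shell thickness
        -x2 + 0.00954 * x3,                                              # head thickness
        -math.pi * x3**2 * x4 - (4.0/3.0) * math.pi * x3**3 + 1296000.0, # volume
        x4 - 240.0,                                                      # length limit
    ]
```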
One of the most often utilized standard engineering optimization problems is the pressure vessel design problem, which has been employed by researchers in a number of studies to confirm the efficacy of their new optimization methods.
Table 20 presents a comparison of the optimal solutions for the pressure vessel engineering design problem, as derived by the proposed TOC algorithm and other competing techniques mentioned above. Comparisons are made only with continuous variable solutions because some researchers consider this case to be a continuous design problem.
According to the cost results of the pressure vessel design problem in Table 20, the optimal design with an optimal cost of \(5.88533222E+03\) was reported by the proposed TOC algorithm. This demonstrates the ability of this algorithm to discover the optimum design at a reasonable cost.
Table 21 presents a statistical comparison of the presented TOC algorithm with competing methods for the pressure vessel design problem over 30 separate runs.
Table 21 shows that, compared to the other algorithms, TOC performed better and offered highly optimal solutions in terms of the Ave and Std values achieved thus far. After 30 separate runs, the standard deviation result of the proposed TOC optimizer is 0.00017845, which is substantially lower than those of the rival methods. This demonstrates how effective and dependable the proposed optimization technique is in addressing this complicated design problem.
7.4 Tension/compression spring design problem
As another recognized standard engineering design problem, the structure of a tension/compression spring with the design shown in Fig. 13 is utilized to evaluate the feasibility of the proposed TOC algorithm in traditional engineering applications.
A schematic structural diagram of a tension/compression spring (Mirjalili et al. 2017)
The goal of this optimization task is to minimize the weight of a tension/compression spring design. This engineering design problem is subject to the following constraints: minimum deflection, minimum surge frequency, and shear stress. The design case’s optimization decision variables are the wire diameter (d), the mean coil diameter (D), and the number of active coils (N). The optimization parameters for this design case were represented by the vector \(\vec {x} = [x_1, x_2, x_3]\), in which the parameters d, D, and N are represented by the variables \(x_1\), \(x_2\), and \(x_3\), respectively. The mathematical formulation of this optimization design problem can be characterized as follows:
The following cost function has to be optimized: \(f(\vec {x}) = (x_3 +2)x_2x^2_1\)
This engineering design is subject to the following restrictions:
where \(0.05 \le x_1 \le 2.0\), \(0.25 \le x_2 \le 1.3\) and \(2 \le x_3 \le 15.0\).
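The objective above, together with a commonly cited constraint set from the literature (an assumption, since the constraint equations are not reproduced here), can be sketched as:

```python
def spring_weight(x):
    """f(x) = (x3 + 2) * x2 * x1^2, with x = [d, D, N] as in the text."""
    x1, x2, x3 = x
    return (x3 + 2.0) * x2 * x1**2

def spring_constraints(x):
    """Commonly cited inequality constraints g_i(x) <= 0 (deflection,
    shear stress, surge frequency, outer diameter); assumed forms,
    not the paper's verbatim equations."""
    x1, x2, x3 = x
    return [
        1.0 - (x2**3 * x3) / (71785.0 * x1**4),
        (4.0 * x2**2 - x1 * x2) / (12566.0 * (x2 * x1**3 - x1**4))
            + 1.0 / (5108.0 * x1**2) - 1.0,
        1.0 - (140.45 * x1) / (x2**2 * x3),
        (x1 + x2) / 1.5 - 1.0,
    ]
```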
The proposed TOC algorithm is compared with other promising competing algorithms for the tension/compression spring design problem in Table 22 with respect to the values of design variables and objective cost values.
The findings shown in Table 22 demonstrate that the tension/compression spring problem can be optimally designed with an optimal cost of 0.01266523 using the proposed TOC algorithm. This cost is lower than that obtained by most of the rival optimization methods.
Table 23 presents an overview of the statistical outcomes of this design problem, as determined by the TOC algorithm together with other competing meta-heuristics.
As can be seen from Table 23, the TOC algorithm outperformed the other optimization techniques once more in the context of the best, average, worst, and standard deviation statistical outcomes. This indicates that, for the same number of iterations and search agents, TOC is more dependable and efficient in addressing this design problem than many other rivals.
7.5 Speed reducer design problem
A different real-world example that is often utilized as a reference benchmark for assessing optimization approaches is the design of a speed reducer (Gandomi and Yang 2011), with Fig. 14 providing a structural schematic for this design. This design problem is challenging because it involves seven decision parameters.
A schematic structural diagram of a speed reducer design (Gandomi and Yang 2011)
The four constraints that affect the weight to be minimized in this design problem are as follows (Mezura-Montes and Coello 2005): transverse shaft deflections, stresses in the shafts, surface stress, and bending stress of the gear teeth. Seven decision design parameters were employed to solve this optimization problem: b, m, z, \(l_1\), \(l_2\), \(d_1\), and \(d_2\). These parameters are, in sequence, the face width, the tooth module, the number of teeth in the pinion, the lengths of the first and second shafts between bearings, and the diameters of the first and second shafts. These parameters were represented by the following vector in the process of solving this optimization problem: \(\vec {x}= [x_1\; x_2\; x_3\; x_4\; x_5\; x_6\; x_7] = [b\; m\; z\; l_1\; l_2\; d_1\; d_2]\). The mathematical formulation of this problem may be described as follows:
One way to characterize the cost function to be optimized is:
This engineering design is subject to the following restrictions:
where the ranges of the design variables are \(2.6\le x_1\le 3.6\), \(0.7\le x_2\le 0.8\), \(17\le x_3\le 28\), \(7.3\le x_4\le 8.3\), \(7.3\le x_5\le 8.3\), \(2.9\le x_6\le 3.9\), and \(5.0\le x_7\le 5.5\), for the variables \(b, m, z, l_1, l_2, d_1\), and \(d_2\), respectively.
A comparison of the designs and cost solutions for the speed reducer design problem reached by TOC with the other optimization techniques listed above can be seen in Table 24.
The proposed TOC algorithm outperforms other competing optimization techniques, as seen in Table 24, by having the best design cost for this problem, which is around 2994.47106874. This indicates that the optimal design for this problem can be found using TOC.
Table 25 tabulates the statistical outcomes of the TOC algorithm as well as additional optimization techniques for the speed reducer design problem.
The statistical data presented in Table 25 demonstrate the superiority of the TOC algorithm over the other meta-heuristic methods, showing that it revealed the best optimal solutions among all the competing algorithms.
7.6 Three-bar truss design problem
The goal of optimizing this classic engineering problem is to reduce the weight of the truss by designing it with three bars. With two design variables, this problem has a severely limited search space (Sadollah et al. 2013). Figure 15 displays the structural parameters and diagram of this design.
A schematic diagram of a three-bar truss design problem (Sadollah et al. 2013)
The goal of this engineering design problem is to minimize the weight of a three-bar truss structure, with a target function defined as follows:
The following stress limits apply to this design problem:
where \(0\le x_1\le 1.0\) and \(0\le x_2\le 1.0\). The other constants are \(P = 2\,\hbox {kN/cm}^2\) and \(L = 100\,\hbox {cm}\).
Table 26 compares the designs and cost outcomes produced by the proposed TOC algorithm for the three-bar truss design problem with those of the previously mentioned meta-heuristics.
The findings in Table 26 demonstrate that, when compared with existing meta-heuristic methods documented in the literature, the proposed TOC algorithm is competitive in optimizing the three-bar truss design problem. According to these design and cost findings, TOC is able to determine an optimal cost for the three-bar truss problem that is competitive with the cost solutions found by other algorithms reported in the literature.
Table 27 presents the statistical findings acquired for this design problem with respect to the best, worst, mean, and Std scores of the proposed TOC and the other comparative algorithms.
Table 27 shows that the proposed TOC algorithm performed competitively against the other algorithms, yielding optimal cost results in terms of the best, average, worst, and standard deviation values.
7.7 I-beam design problem
The real I-beam engineering design problem, which involves four structural design parameters (Mirjalili et al. 2017), was solved using the proposed TOC algorithm to show its applicability in solving real-world problems compared to other respected meta-heuristics. Reducing the vertical deflection of the I-beam design depicted in Fig. 16 is the primary goal of this problem.
A schematic diagram of an I-beam design problem (Mirjalili et al. 2017)
Under specified loads, this design problem concurrently satisfies the cross-sectional area and stress limits. The four structural variables relevant to this problem are b, h, \(t_w\), and \(t_f\). The modulus of elasticity (E) and the length of the beam (L) are 523.104 kN/\(\hbox {cm}^2\) and 520 cm, respectively. The following formula was used to determine the objective function specified for minimizing the vertical deflection:
The design case is subject to a cross-sectional area of less than 300 \(\hbox {cm}^2\), defined as follows:
where \(10\le x_1\le 50, 10\le x_2\le 80, 0.9\le x_3\le 5.0\), and \(0.9\le x_4\le 5.0\) are the definitions of the design spaces for these decision variables.
The following stress restriction applies if the beam’s permissible bending stress is 56 kN/\(\hbox {cm}^2\):
where the components \(\psi \) and \(\kappa \) can be defined as shown in Eqs. 74 and 75, respectively.
The decision design parameters and cost solutions produced by the proposed TOC algorithm, as well as the other meta-heuristic algorithms, are compared in Table 28 with regard to the nonlinear constrained design shown in Fig. 16. These methods employed the same set of design variables while adhering to the previously indicated constraints.
Table 28 clearly shows that the proposed TOC algorithm performed much better than many comparable meta-heuristics in optimizing this design problem. It arrived at exceptional solutions, possibly the global optimum, and outperformed many other competing algorithms with respect to the value of the minimal objective function.
The average statistical findings for the best, worst, mean, and Std scores for the I-beam design problem, acquired over 30 separate runs utilizing the TOC algorithm along with other meta-heuristics, are displayed in Table 29.
Table 29 shows that the proposed TOC algorithm outperformed many rival algorithms in addressing the I-beam design problem, with the best, average, worst, and standard deviation outcomes showing a high success rate.
7.8 Cantilever beam design problem
The goal of this problem, despite its resemblance to the former one, is to reduce the weight of a cantilever beam made up of five parts, each of which has a hollow cross section that thickens steadily (Mirjalili et al. 2017). There is an outer vertical force acting on the free end of the cantilever, and the beam is firmly supported as shown in Fig. 17.
A schematic diagram of a cantilever beam design problem (Mirjalili et al. 2017)
This design problem aims to decrease a cantilever beam’s weight while imposing a maximum restriction on the free end’s vertical displacement. The cross-sectional heights and widths of each part make up the design variables. The upper and lower bounds on these variables are set wide enough that they are not active at the optimum. The cantilever beam design problem is solved by determining a feasible combination of the five structural design parameters. These design parameters are represented by the vector \(\vec {x} = [x_1, x_2, x_3, x_4, x_5]\). This design problem’s objective cost function may be expressed as \(f(\vec {x}) = 0.0624(x_1+x_2+x_3+x_4+x_5)\), and the following optimization constraint applies to this design problem.
For this design problem, the variables were taken to be in the range \(1\le x_i\le 10\), where \(i\in \{1, 2, 3, 4, 5\}\).
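A sketch of this problem’s objective, plus the vertical-displacement constraint in its commonly cited form (the constraint expression is an assumption, as the paper does not reproduce it here):

```python
def cantilever_weight(x):
    """f(x) = 0.0624 * (x1 + x2 + x3 + x4 + x5), as given above."""
    return 0.0624 * sum(x)

def cantilever_constraint(x):
    """g(x) = 61/x1^3 + 37/x2^3 + 19/x3^3 + 7/x4^3 + 1/x5^3 - 1 <= 0
    (commonly cited displacement limit; assumed form)."""
    x1, x2, x3, x4, x5 = x
    return 61.0/x1**3 + 37.0/x2**3 + 19.0/x3**3 + 7.0/x4**3 + 1.0/x5**3 - 1.0
```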
Table 30 presents the optimization results of the proposed TOC algorithm as well as other comparable competing meta-heuristics utilized to handle this problem.
The proposed TOC algorithm produced the best solution for the cantilever design problem, with an optimal cost of around 263.89584337, according to the cost weight values listed in Table 30. This result is highly competitive for TOC when contrasted with other rival algorithms and exceeded most of them.
Table 31 summarizes the statistical optimization findings of the TOC method as well as the other optimization techniques discussed above with respect to the best, worst, mean, and Std scores, across 30 separate runs, for the cantilever design problem.
The proposed TOC method outperformed many other competitor algorithms in terms of statistical solutions for this design problem, as evidenced by the solutions shown in Table 31. This suggests that TOC outperforms the majority of the competing meta-heuristics while remaining highly competitive with the rest.
7.9 Step-cone pulley design
This problem involves minimizing the step-cone pulley’s weight by optimizing a set of five design variables (Kumar et al. 2020; Zhao et al. 2024). There is a set of eleven constraints in this design problem, three of which are equality constraints and eight of which are inequality constraints. Figure 18 shows the framework of the step-cone pulley. The mathematical structure of this design can be given as shown below:
Consider: \(\vec {x} = [x_1, x_2, x_3, x_4, x_5] = [d_1, d_2, d_3, d_4, \omega ]\),
A schematic design showing the layout of a step-cone pulley (Zhao et al. 2024)
Minimize:
This design is subject to: \(h_1(\vec {x}) = C_1 - C_2 = 0\),
where the ranges of decision variables for this design are as follows: \(50 \le x_i \le 400, \; i = 1, 2, 3, 4, \;\; 50 \le x_5 \le 400\).
where \(C_i\) represents the belt’s length to achieve the speed \(N_i\) which is defined as follows:
where \(R_i\) denotes the tension ratio which is defined as follows:
where \(P_i\) denotes the power transmitted at each stage, which is defined as follows:
where \(\rho = 7200\,\hbox {kg/m}^3\), \(a = 3\,\hbox {m}\), \(\mu = 0.35\), \(s=1.75\,\hbox {MPa}\), and \(t = 8\,\hbox {mm}\).
The performance of the proposed TOC optimizer is contrasted with other competing algorithms in solving the step-cone pulley design problem over the course of 30 separate runs. Table 32 displays the best solutions for each competing optimizer along with the optimal values of the decision variables and optimal weights that correspond to each algorithm.
The cost weight values shown in Table 32 indicate that the proposed TOC optimizer has yielded the optimal solution for the step-cone pulley problem. These results show that TOC has produced very competitive results when compared to other competing algorithms.
The statistical results obtained from using different optimizers in solving the step-cone pulley problem are displayed in Table 33.
Table 33 shows that the proposed TOC optimizer outperformed other competing algorithms in solving this complex problem, as evidenced by its promising level of performance in terms of ‘Best’, ‘Worst’, ‘Ave’, and ‘Std’ metrics.
To put it succinctly, the reliability and efficiency of the proposed TOC algorithm have been validated by its overall performance in solving various classical engineering design problems, confirming it as a capable novel meta-heuristic-based optimization algorithm. This algorithm has several benefits, including superior performance in terms of both standard deviation values and optimal cost outcomes compared to many other popular algorithms such as MFO and SCA. This leads us to the view that TOC is unquestionably an acceptable optimization algorithm with strong potential to succeed in solving a variety of contemporary real-world problems.
8 Modeling of an industrial winding process
This section demonstrates the reliability of the proposed TOC optimizer for modeling a real industrial winding machine system.
The industrial winding machine under investigation is typically encountered in actual web conveyance systems (Bastogne et al. 1998; Rodan et al. 2017). Figure 19 displays this process’s structural diagram; it is a highly nonlinear process that poses a challenge to both the control and modeling research communities (Nozari et al. 2012).
A graphical representation of the winding process (Nozari et al. 2012)
As seen in Fig. 19, three reels, known as reel 1, reel 2, and reel 3, make up the winding process that is the focus of this investigation. Three DC motors, designated \(\hbox {M}_1\), \(\hbox {M}_2\), and \(\hbox {M}_3\), respectively, drive these reels. Set-point currents \(\hbox {I}_{1}\) at motor 1 and \(\hbox {I}_{3}\) at motor 3 operate the DC motors connected to reels 1 and 3. Additionally, tension meters are positioned to measure the strip tensions in the web between reels 1 and 2, denoted \(\hbox {T}_{1}\), and between reels 2 and 3, denoted \(\hbox {T}_{3}\). In this industrial process, the angular speeds of reels 1, 2, and 3 (designated \(\hbox {S}_{1}\), \(\hbox {S}_{2}\), and \(\hbox {S}_{3}\), respectively) are measured using a dynamo tachometer, which is also used to determine the angular velocity of motor \(\hbox {M}_2\), denoted \(\varOmega _2\). This process’s primary input variables are \(\hbox {S}_{1}\), \(\hbox {S}_{2}\), \(\hbox {S}_{3}\), \(\hbox {I}_1\), and \(\hbox {I}_3\), and its outputs are \(\hbox {T}_{1}\) and \(\hbox {T}_{3}\). For the linear estimates of \(\hbox {T}_{1}\) and \(\hbox {T}_3\), the dimension of the modeling problem for this process is 5.
Each model created for \(\hbox {T}_{1}\) and \(\hbox {T}_3\) included a training dataset containing 1250 data samples for each input variable. There are likewise 1250 samples for every input variable in the test dataset, as described in detail in Nozari et al. (2012). Given the input and output parameters of this process, the objective of this modeling problem is to capture the key features of the appropriate output responses. In order to tackle this problem using linear estimates, linear models were constructed for \(\hbox {T}_{1}\) and \(\hbox {T}_{3}\) as presented in the two equations below.
where \(T_{1}(t-1)\) and \(T_{3}(t-1)\) implement the previous values of \(\hbox {T}_{1}\) and \(\hbox {T}_3\) at \(t-1\), respectively, \(\alpha _1\), \(\alpha _2\), \(\alpha _3\), \(\alpha _4\), \(\alpha _5\), \(\beta _1\), \(\beta _2\), \(\beta _3\), \(\beta _4\), and \(\beta _5\) stand for the weights of the linear models generated for models \(T_{1}(t)\) and \(T_{3}(t)\), respectively.
The fitness function for this modeling problem is the mean absolute percentage error (MAPE) criterion, which is described in Eq. 80.
where n is the number of experimental data values, y is the actual values of an actual experiment, and \({\hat{y}}\) is the estimated values produced by the generated models.
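The MAPE fitness just described can be sketched as follows; the percentage scaling by 100 is a conventional assumption:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error between the measured responses
    y and the model estimates y-hat; lower is better."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * float(np.mean(np.abs((y_true - y_pred) / y_true)))
```

In the modeling loop, each candidate weight vector produces estimated tensions whose MAPE against the recorded data serves as the fitness to be minimized.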
The maximum number of iterations and search agents of the proposed TOC optimizer were set to 100 and 30, respectively, during the creation of the models \(\hbox {T}_{1}\) and \(\hbox {T}_3\). Figure 20 displays the convergence curves produced by TOC in the modeling of \(\hbox {T}_{1}\) and \(T_3\).
The data of the plots in Fig. 20 represents the MAPE between the estimated responses generated by the models TOC-\(T_1\) and TOC-\(T_3\) and the corresponding true responses for \(T_1\) and \(T_3\), respectively.
The convergence curves in Fig. 20 underline the efficiency of TOC in achieving the optimal fitness with minimal errors. Figures 21 and 22 show the actual responses of \(\hbox {T}_{1}\) and \(\hbox {T}_3\) for both training and test instances and demonstrate how well TOC tracks the output data of \(\hbox {T}_{1}\) and \(\hbox {T}_3\), respectively.
The simulation findings in Figs. 21 and 22 show that the proposed TOC optimizer is an efficient and acceptable method for accurately simulating the web tensions, \(\hbox {T}_{1}\) and \(\hbox {T}_3\), of the winding process. Indeed, there is a high degree of similarity between the actual data and the predicted data obtained by TOC in Figs. 21 and 22. The performance levels of the \(T_1\) and \(T_3\) models developed using the proposed TOC optimizer, compared to other competing meta-heuristics employing the same modeling schemes, are presented in Tables 34 and 35, respectively. The optimal parameter settings were selected for each meta-heuristic, and the outcomes were averaged over a series of thirty evaluation experiments.
The evaluation outcomes presented in Tables 34 and 35 demonstrate the suitability and effectiveness of TOC in simulating the web tensions, \(\hbox {T}_{1}\) and \(\hbox {T}_{3}\), of the winding process. One may conclude from these findings that TOC is very convincing in estimating the parameters of the nonlinear industrial process under study. To be more precise, TOC performed well, nearly meeting the VAF criterion’s unit performance threshold. This reveals that even when the performance goal is unity, TOC has a good chance of reaching the required levels of sufficient performance. These results confirm the superior performance rates of the models developed based on TOC compared to models developed based on other meta-heuristics. Thus, TOC is a promising candidate for the design of complex manufacturing systems. In sum, the results in Tables 34 and 35 show that TOC considerably outperforms other comparative algorithms such as POA, EHO, and SO in achieving optimal VAF rates for the \(\hbox {T}_{1}\) and \(\hbox {T}_3\) models.
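The VAF criterion mentioned above can be computed as follows. This sketch uses the standard variance-accounted-for definition, which is assumed (not confirmed by this excerpt) to match the criterion reported in Tables 34 and 35.

```python
import numpy as np

def vaf(y_true, y_pred):
    """Variance accounted for (standard definition, assumed here):
    VAF = (1 - var(y - y_hat) / var(y)) * 100.
    A value of 100% (unity when normalized) means the model explains
    all of the variance in the measured output."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * (1.0 - np.var(y_true - y_pred) / np.var(y_true))
```

A perfect model gives a VAF of exactly 100%, and any estimation error lowers the score, which is why "nearly meeting the unit performance threshold" corresponds to a VAF approaching 100%.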
9 Analysis of the results
The performance of the proposed TOC optimizer was rigorously assessed and validated in the preceding sections using a series of challenging benchmark functions, namely CEC 2017, and a set of various engineering design problems subject to a variety of constraints. Its reliability was also examined in solving a set of basic benchmark test functions involving 23 unimodal, multimodal, and fixed-dimensional multimodal functions.
These benchmark suites contain many local optima and, in some functions, multiple global optima, as described above. As a result, their test functions are very effective at determining an algorithm’s ability to avoid local optima as well as its capacity for exploration and exploitation. By looking at the results on the CEC 2017 functions with dimensions of 10, 30, and 50 in Tables 10, 11, and 12, respectively, one may conclude that TOC performs better than many competing algorithms on the majority of test functions. The better (lower) mean values in these tables indicate that TOC outperforms its competitors on average, while the small standard deviation values confirm that this advantage is consistent. These results demonstrate TOC’s ability to handle difficult test functions with accuracy and consistency. Because the test functions of the suites under study have several optimal states, they are able to assess the algorithms’ performance levels. Accordingly, the findings in the above tables show that TOC achieves high levels of exploration, exploitation, convergence, and avoidance of local optima on these functions. The outstanding performance of TOC in these test cases demonstrates its ability to effectively explore and exploit the search space. This is because the search agents’ tendency to communicate with one another makes them unlikely to be drawn into a local solution. This interconnectedness also allows the search agents in TOC to traverse the search space and gradually approach the global optimum. In conclusion, TOC’s sound mathematical model is responsible for its reasonable level of convergence.
The global optimal solution is approached by all search agents (windstorms, thunderstorms, and tornadoes) in proportion to the number of iterations in the proposed mathematical model of TOC. Additionally, Friedman’s and Holm’s statistical tests show that the benefit of TOC is statistically significant, as shown in Tables 14, 15, 16, and 17. Thus, TOC stands at the top of Friedman’s rankings based on a comprehensive statistical analysis of the 12 algorithms tested. This achievement shows its superiority over well-known algorithms like MFO and SCA, as well as other highly regarded algorithms such as EHO and SO. These results demonstrate the effectiveness and reliability of TOC in a competitive algorithmic setting.
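The Friedman mean-rank comparison behind those tables can be sketched as follows. This is a minimal illustration on a toy score matrix, not the paper's actual evaluation pipeline; for simplicity it also ignores tied scores, which the standard test resolves with average ranks.

```python
import numpy as np

def friedman_statistic(scores):
    """Friedman chi-square statistic for an (n functions x k algorithms)
    score matrix where lower scores are better. Returns the statistic
    and the mean rank of each algorithm (rank 1 = best). Ties are not
    averaged here, unlike the full test."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    # Rank the k algorithms within each of the n benchmark functions
    ranks = np.argsort(np.argsort(scores, axis=1), axis=1) + 1.0
    mean_ranks = ranks.mean(axis=0)
    # Chi-square statistic based on deviations from the average rank (k+1)/2
    chi2 = 12.0 * n / (k * (k + 1)) * np.sum((mean_ranks - (k + 1) / 2.0) ** 2)
    return chi2, mean_ranks
```

An algorithm that consistently achieves the lowest error obtains a mean rank near 1, which is the sense in which TOC "stands at the top of Friedman's rankings"; the statistic is then compared against a chi-square distribution, with Holm's procedure applied for the post-hoc pairwise comparisons.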
Even though TOC was able to handle challenging benchmark test functions of varying degrees of complexity, we conducted an extended test to evaluate whether it can also address engineering design and industrial problems. This was intended to investigate its robustness, resilience, and applicability in tackling real-world problems. As the findings in Tables 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, and 35 reveal, TOC produced consistent performance and outperformed other algorithms on these engineering design and industrial tasks.
These engineering design problems present very challenging test tasks for any meta-heuristic. Because of this, they are somewhat comparable to the real search spaces that TOC might encounter when dealing with real-world practical problems. This is why academics have long considered meta-heuristics to be generally useful methods for solving progressively complex real-world problems. We recognize that every algorithm has advantages, such as flexibility and simplicity, as well as disadvantages, such as complexity and parameter setup.
One may naturally question the benefits and drawbacks of the proposed TOC optimizer, given the several significant qualities of it examined and studied above. On the basis of its mathematical framework, the obtained results, and its convergence behavior, we summarize the advantages and disadvantages of TOC in the following two subsections.
9.1 Advantages of TOC for optimization problems
The proposed TOC optimizer has several advantages for solving global optimization problems, and its versatility in managing different kinds of optimization problems shows that it is a viable approach. As this study shows, TOC can be applied to many sorts of problems while requiring only a small number of parameters to be tuned, which adds to its flexibility. The promising mathematical model of TOC enables it to tackle a wide range of engineering and industrial optimization problems, especially those with relatively high dimensions. A third advantage is the simplicity of TOC and its power to find global solutions rapidly and accurately. Furthermore, the adaptive models of TOC enable it to find an accurate approximation of the global best solution during optimization while avoiding local solutions. Thus, the proposed TOC optimizer can be efficiently used to solve benchmark problems with different dimensions and degrees of complexity. These outcomes suggest that TOC can outperform several cutting-edge algorithms such as EHO, CSA, and SO.
9.2 Limitations of the proposed TOC optimizer
Although the proposed TOC optimizer has shown promising outcomes and outstanding performance in solving the benchmark problems being studied, it requires a certain amount of computational effort. For example, the time complexity discussed in Sect. 4.7.1 is its greatest weakness and may matter when dealing with benchmark functions of very extreme dimensions or very high-dimensional feature selection problems. Therefore, further work must be done to reduce the running time without sacrificing the optimizer’s capacity to address real-world problems. Furthermore, TOC may find solutions only close to the global optimum or become trapped in local optima, as happened in some cases on CEC 2017 with 50 dimensions. A remedy such as hybridization with other meta-heuristics may be applied in the early phases. TOC also faces difficulties in binary search spaces, such as feature selection, even though it performs well on test problems limited to continuous search spaces. Therefore, further efforts to enhance its effectiveness may be considered to overcome these limitations. As mentioned in Subsect. 9.1, the proposed TOC optimizer offers several benefits and outperforms other well-known and renowned algorithms like MFO and SCA. The NFL theorem does not, however, guarantee this for all optimization problems. Thus, it would be helpful to examine the strengths and weaknesses of TOC by applying it to additional problem classes and applications, such as pattern recognition and image processing, among others.
Beyond the problems described above, we might wish to conduct further tests in the future to evaluate the performance of TOC in comparison to the performance of other evolutionary and swarm intelligence methods reported in the literature.
10 Conclusion and future work
This work introduced a new meta-heuristic algorithm called tornado optimizer with Coriolis force (TOC) to solve broadly well-known global optimization problems. The fundamental concepts behind this optimizer are driven by nature and based on the life cycle of tornadoes. These ideas and inspirations can be summarized as follows: (a) tornadoes with a Coriolis force acting on windstorms and thunderstorms were proposed in this optimizer; (b) a set of adaptive parameters within the iterative loops of this optimizer was utilized to adaptively adjust the evolution rates of tornadoes; (c) the windstorms and thunderstorms generated to evolve into tornadoes can search near tornadoes for global solutions; (d) the adaptive parameters behave in two phases during the evolution of tornadoes, increasing with iterations during the exploration phase and decreasing with iterations during the exploitation phase. A group of 29 benchmark test problems with various dimensionality levels from the CEC-2017 benchmark test functions was used to evaluate and test the proposed optimizer. TOC dependably reached the global optimal solutions, with generally superior performance on these examined problems and a minimal likelihood of becoming trapped in local minima for most of them compared to other algorithms. The proposed optimizer was also tested on a collection of constrained engineering design problems and an industrial problem. The optimization results obtained from the comparisons show that in most test cases, TOC converges to the global minima faster and more accurately than other reported optimizers. As TOC is efficient in traveling towards the optimal point, hybridization of this optimizer with other methods may be considered as further research.
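The two-phase adaptive-parameter behavior summarized in (d) can be illustrated with a simple schedule. The triangular shape below is a hypothetical sketch for intuition only; the actual TOC update rules are defined by the equations in the paper.

```python
def adaptive_parameter(t, t_max, peak=1.0):
    """Hypothetical two-phase schedule: the parameter increases with
    iterations during the first (exploration) half of the run and
    decreases with iterations during the second (exploitation) half.
    The triangular profile is illustrative, not TOC's actual rule."""
    half = t_max / 2.0
    if t <= half:
        return peak * t / half            # exploration: increasing
    return peak * (t_max - t) / half      # exploitation: decreasing
```

A schedule of this kind widens the search early on, when diversity is needed, and then contracts it so the agents converge on the best-found region.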
Future efforts will modify, implement, and test binary and multi-objective versions of this optimizer to address extremely high-dimensional problems or large-scale real-life problems in binary and continuous search spaces.
Data availability
Data available on request from the author.
References
Abbasi Nozari H, Dehghan Banadaki H, Mokhtare M, Hekmati Vahed S (2012) Intelligent non-linear modelling of an industrial winding process using recurrent local linear neuro-fuzzy networks. J Zhejiang Univ Sci C 13(6):403–412
Abdollahzadeh B, Khodadadi N, Barshandeh S, Trojovský P, Gharehchopogh FS, El-kenawy ES, Abualigah L, Mirjalili S (2024) Puma optimizer (po): a novel metaheuristic optimization algorithm and its application in machine learning. Clust Comput. https://doi.org/10.1007/s10586-023-04221-5
Abualigah L, Diabat A, Mirjalili S, Abd Elaziz M, Gandomi AH (2021) The arithmetic optimization algorithm. Comput Methods Appl Mech Eng 376:113609
Agushaka JO, Ezugwu AE, Abualigah L (2022) Dwarf mongoose optimization algorithm. Comput Methods Appl Mech Eng 391:114570
Alatas B (2011) Acroa: artificial chemical reaction optimization algorithm for global optimization. Expert Syst Appl 38(10):13170–13180
Alavi A, Dolatabadi M, Mashhadi J, Noroozinejad Farsangi E (2021) Simultaneous optimization approach for combined control-structural design versus the conventional sequential optimization method. Struct Multidiscip Optim 63(3):1367–1383
Al-Betar MA, Awadallah MA, Braik MS, Makhadmeh S, Doush IA (2024) Elk herd optimizer: a novel nature-inspired metaheuristic algorithm. Artif Intell Rev 57(3):48
Askarzadeh A (2016) A novel metaheuristic method for solving constrained engineering optimization problems: crow search algorithm. Comput Struct 169:1–12
Atashpaz-Gargari E, Lucas C (2007) Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition. In: 2007 IEEE congress on evolutionary computation, IEEE, pp 4661–4667
Awad NH, Ali MZ, Suganthan PN (2017) Ensemble sinusoidal differential covariance matrix adaptation with Euclidean neighborhood for solving cec2017 benchmark problems. In: 2017 IEEE congress on evolutionary computation (CEC), IEEE, pp 372–379
Aye CM, Wansaseub K, Kumar S, Tejani GG, Bureerat S, Yildiz AR, Pholdee N (2023) Airfoil shape optimisation using a multi-fidelity surrogate-assisted metaheuristic with a new multi-objective infill sampling technique. CMES-Comput Model Eng Sci 137(3):2111
Back T (1991) A survey of evolution strategies. In: Proc of Fourth Internal Conf on Genetic Algorithms
Back T (1996) Evolutionary algorithms in theory and practice: evolution strategies, evolutionary programming, genetic algorithms. Oxford University Press
Bäck T, Fogel DB, Michalewicz Z (1997) Handbook of evolutionary computation. Release 97(1):B1
Bastogne T, Noura H, Sibille P, Richard A (1998) Multivariable identification of a winding process by subspace methods for tension control. Control Eng Pract 6(9):1077–1088
Bertsekas D (2022) Newton’s method for reinforcement learning and model predictive control. Res Control Optim 7:100121
Braik MS (2021) Chameleon swarm algorithm: a bio-inspired optimizer for solving engineering design problems. Expert Syst Appl 174:114685
Braik M, Sheta A, Turabieh H, Alhiary H (2021) A novel lifetime scheme for enhancing the convergence performance of Salp swarm algorithm. Soft Comput 25:181–206
Braik M, Sheta A, Al-Hiary H (2021) A novel meta-heuristic search algorithm for solving optimization problems: capuchin search algorithm. Neural Comput Appl 33(7):2515–2547
Braik M, Hammouri A, Atwan J, Al-Betar MA, Awadallah MA (2022) White shark optimizer: a novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl-Based Syst 243:108457
Braik M, Al-Zoubi H, Ryalat M, Sheta A, Alzubi O (2023) Memory based hybrid crow search algorithm for solving numerical and constrained global optimization problems. Artif Intell Rev 56(1):27–99
Brill KF (2014) Revisiting an old concept: the gradient wind. Mon Weather Rev 142(4):1460–1471
Burke EK, Hyde M, Kendall G, Ochoa G, Özcan E, Woodward JR (2010) A classification of hyper-heuristic approaches. Handbook of metaheuristics. Springer, Cham, pp 449–468
Camacho-Villalón CL, Dorigo M, Stützle T (2023) Exposing the grey wolf, moth-flame, whale, firefly, bat, and antlion algorithms: six misleading optimization techniques inspired by bestial metaphors. Int Trans Oper Res 30(6):2945–2971
Cao Y, Liu Z (2023) Study of wandering motion effects on the tornado-borne debris using proposed simplified numerical models. J Wind Eng Ind Aerodyn 233:105318
Cao B, Zhao J, Lv Z, Yang P (2020) Diversified personalized recommendation optimization based on mobile data. IEEE Trans Intell Transp Syst 22(4):2133–2139
Castro LN De, Timmis JI (2003) Artificial immune systems as a novel soft computing paradigm. Soft Comput 7:526–544
Chen D, Ge Y, Yujie Wan Y, Deng YC, Zou F (2022) Poplar optimization algorithm: a new meta-heuristic optimization technique for numerical optimization and image segmentation. Expert Syst Appl 200:117118
Civicioglu P, Besdok E (2013) A conceptual comparison of the cuckoo-search, particle swarm optimization, differential evolution and artificial bee colony algorithms. Artif Intell Rev 39(4):315–346
Coello Coello CA (2002) Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art. Comput Methods Appl Mech Eng 191(11–12):1245–1287
Comert SE, Yazgan HR (2023) A new approach based on hybrid ant colony optimization-artificial bee colony algorithm for multi-objective electric vehicle routing problems. Eng Appl Artif Intell 123:106375
Daliri A, Alimoradi M, Zabihimayvan M, Sadeghi R (2024) World hyper-heuristic: a novel reinforcement learning approach for dynamic exploration and exploitation. Expert Syst Appl 244:122931
Demšar J (2006) Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res 7:1–30
Digalakis JG, Margaritis KG (2001) On benchmarking functions for genetic algorithms. Int J Comput Math 77(4):481–506
Dorigo M, Blum C (2005) Ant colony optimization theory: a survey. Theoret Comput Sci 344(2–3):243–278
Dorigo M, Maniezzo V, Colorni A (1996) Ant system: optimization by a colony of cooperating agents. IEEE Trans Syst, Man, Cybern, Part B (Cybern) 26(1):29–41
Faramarzi A, Heidarinejad M, Stephens B, Mirjalili S (2020) Equilibrium optimizer: a novel optimization algorithm. Knowl-Based Syst 191:105190
Fu Y, Liu D, Chen J, He L (2024) Secretary bird optimization algorithm: a new metaheuristic for solving global optimization problems. Artif Intell Rev 57(5):1–102
Gandomi AH, Yang XS (2011) Benchmark problems in structural optimization. Computational optimization, methods and algorithms. Springer, New York, pp 259–281
Geem ZW, Kim JH, Loganathan GV (2001) A new heuristic optimization algorithm: harmony search. Simulation 76(2):60–68
Gendreau M, Potvin JY (2010) Handbook of metaheuristics. Springer, New York
Ghasemi M, Zare M, Zahedi A, Akbari M-A, Mirjalili S, Abualigah L (2024) Geyser inspired algorithm: a new geological-inspired meta-heuristic for real-parameter and constrained engineering optimization. J Bionic Eng 21(1):374–408
Ghasemian H, Ghasemian F, Vahdat-Nejad H (2020) Human urbanization algorithm: a novel metaheuristic approach. Math Comput Simul 178:1–15
Givi H, Hubalovska M (2023) Skill optimization algorithm: a new human-based metaheuristic technique. Comput, Mater Continua. https://doi.org/10.32604/cmc.2023.030379
Gundogdu H, Demirci A, Tercan SM, Cali U (2024) A novel improved grey wolf algorithm based global maximum power point tracker method considering partial shading. IEEE Access
Hamideh S, Sen P (2022) Experiences of vulnerable households in low-attention disasters: Marshalltown, Iowa (United States) after the ef3 Tornado. Glob Environ Change 77:102595
Hashim FA, Hussien AG (2022) Snake optimizer: a novel meta-heuristic optimization algorithm. Knowl-Based Syst 242:108320
Hashim FA, Houssein EH, Mabrouk MS, Al-Atabany W, Mirjalili S (2019) Henry gas solubility optimization: a novel physics-based algorithm. Future Gener Comput Syst 101:646–667
Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H (2019) Harris hawks optimization: algorithm and applications. Future Gener Comput Syst 97:849–872
Holland JH et al (1992) Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence. MIT press
Holm S (1979) A simple sequentially rejective multiple test procedure. Scand J Stat, pp 65–70
Houssein EH, Saad MR, Hashim FA, Shaban H, Hassaballah M (2020) Lévy flight distribution: a new metaheuristic algorithm for solving engineering optimization problems. Eng Appl Artif Intell 94:103731
Houssein EH, Oliva D, Samee NA, Mahmoud NF, Emam MM (2023) Liver cancer algorithm: a novel bio-inspired optimizer. Comput Biol Med 165:107389
Kannan BK, Kramer SN (1994) An augmented lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. J Mech Des 116(2):405–411
Karaboga D, Basturk B (2007) A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J Global Optim 39(3):459–471
Kashan AH (2009) League championship algorithm: a new algorithm for numerical function optimization. In: 2009 international conference of soft computing and pattern recognition, IEEE, pp 43–48
Kaveh A, Dadras A (2017) A novel meta-heuristic optimization algorithm: thermal exchange optimization a novel meta-heuristic optimization algorithm. Adv Eng Softw 110:69–84
Kaveh A, Zolghadr A (2016) A novel meta-heuristic algorithm: tug of war optimization
Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of ICNN’95-international conference on neural networks. IEEE, pp 1942–1948
Kirkpatrick S, Gelatt CD, Vecchi MP (1983) Optimization by simulated annealing. Science 220(4598):671–680
Koza JR (1992) Genetic programming: on the programming of computers by means of natural selection. MIT press
Kumar A, Wu G, Ali MZ, Luo Q, Mallipeddi R, Suganthan PN, Das S (2020) A test-suite of non-convex constrained optimization problems from the real-world and some baseline results. Swarm Evol Comput 56:100693
Kumar S, Panagant N, Tejani GG, Pholdee N, Bureerat S, Mashru N, Patel P (2023) A two-archive multi-objective multi-verse optimizer for truss design. Knowl-Based Syst 270:110529
Kumar S, Tejani GG, Mehta P, Sait SM, Yildiz AR, Mirjalili S (2024) Optimization of truss structures using multi-objective cheetah optimizer. In: Mechanics based design of structures and machines. pp 1–22
Lam AYS, Li VOK (2012) Chemical reaction optimization: a tutorial. Memetic Comput 4:3–17
Li S, Chen H, Wang M, Heidari AA, Mirjalili S (2020) Slime Mould algorithm: a new method for stochastic optimization. Future Gener Comput Syst 111:300–323
Lian J, Hui G, Ma L, Zhu T, Wu X, Heidari AA, Chen Y, Chen H (2024) Parrot optimizer: algorithm and applications to medical problems. Comput Biol Med 172:108064
Mezura-Montes E, Coello CA (2005) Useful infeasible solutions in engineering optimization with evolutionary algorithms. In: Mexican international conference on artificial intelligence. Springer, pp 652–662
Mirjalili S (2015) Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowl-Based Syst 89:228–249
Mirjalili S (2016) Sca: a sine cosine algorithm for solving optimization problems. Knowl-Based Syst 96:120–133
Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67
Mirjalili S, Mirjalili SM, Hatamlou A (2016) Multi-verse optimizer: a nature-inspired algorithm for global optimization. Neural Comput Appl 27(2):495–513
Mirjalili S, Gandomi AH, Mirjalili SZ, Saremi S, Faris H, Mirjalili SM (2017) Salp swarm algorithm: a bio-inspired optimizer for engineering design problems. Adv Eng Softw 114:163–191
Miroslaw B (2020) Heuristics, metaheuristics, and hyperheuristics for rich vehicle routing problems. Smart delivery systems. Elsevier, Amsterdam, pp 101–156
Mohamed AW, Hadi AA, Mohamed AK (2020) Gaining-sharing knowledge based algorithm for solving optimization problems: a novel nature-inspired algorithm. Int J Mach Learn Cybern 11(7):1501–1529
Moosavian N, Roodsari BK (2014) Soccer league competition algorithm: a novel meta-heuristic algorithm for optimal design of water distribution networks. Swarm Evol Comput 17:14–24
Mora-Gutiérrez RA, Ramírez-Rodríguez J, Rincón-García EA (2014) An optimization algorithm inspired by musical composition. Artif Intell Rev 41:301–315
Motevali MM, Shanghooshabad AM, Aram RZ, Keshavarz H (2019) Who: a new evolutionary algorithm bio-inspired by wildebeests with a case study on bank customer segmentation. Int J Pattern Recognit Artif Intell 33(05):1959017
Nonut A, Kanokmedhakul Y, Bureerat S, Kumar S, Tejani GG, Artrit P, Yıldız AR, Pholdee N (2022) A small fixed-wing uav system identification using metaheuristics. Cogent Eng 9(1):2114196
Pan Wen-Tsao (2012) A new fruit fly optimization algorithm: taking the financial distress model as an example. Knowl-Based Syst 26:69–74
Panagant N, Kumar S, Tejani GG, Pholdee N, Bureerat S (2023) Many-objective meta-heuristic methods for solving constrained truss optimisation problems: a comparative analysis. MethodsX 10:102181
Pereira DG, Afonso A, Medeiros FM (2015) Overview of Friedman’s test and post-hoc analysis. Commun Stat-Simul Comput 44(10):2636–2653
Price KV (1996) Differential evolution: a fast and simple numerical optimizer. In: Proceedings of North American fuzzy information processing. IEEE, pp 524–527
Qais MH, Hasanien HM, Turky RA, Alghuwainem S, Tostado-Véliz M, Jurado F (2022) Circle search algorithm: a geometry-based metaheuristic optimization algorithm. Mathematics 10(10):1626
Qi A, Zhao D, Heidari AA, Liu L, Chen Y, Chen H (2024) Fata: an efficient optimization method based on geophysics. Neurocomputing 607:128289
Rashedi E, Nezamabadi-Pour H, Saryazdi S (2009) Gsa: a gravitational search algorithm. Inf Sci 179(13):2232–2248
Ray T, Liew K-M (2003) Society and civilization: an optimization algorithm based on the simulation of social behavior. IEEE Trans Evol Comput 7(4):386–396
Rezk H, Olabi AG, Wilberforce T, Sayed ET (2024) Metaheuristic optimization algorithms for real-world electrical and civil engineering application: a review. Res Eng 23:102437
Rodan A, Sheta AF, Faris H (2017) Bidirectional reservoir networks trained using SVM+ privileged information for manufacturing process modeling. Soft Comput 21(22):6811–6824
Runarsson TP, Yao X (2000) Stochastic ranking for constrained evolutionary optimization. IEEE Trans Evol Comput 4(3):284–294
Sadollah A, Bahreininejad A, Eskandar H, Hamdi M (2013) Mine blast algorithm: a new population based algorithm for solving constrained engineering optimization problems. Appl Soft Comput 13(5):2592–2612
Sayed GI, Hassanien AE, Azar AT (2019) Feature selection via a novel chaotic crow search algorithm. Neural Comput Appl 31(1):171–188
SciJinks (2024) How are Tornadoes formed? https://www.pmfias.com/tornado/. [Online; accessed 1-August-2024]
SciJinks (2024) What Causes a Tornado? https://scijinks.gov/what-causes-a-tornado-video/. [Online; accessed 1-September-2024]
Sharma P, Raju S (2024) Metaheuristic optimization algorithms: a comprehensive overview and classification of benchmark test functions. Soft Comput 28(4):3123–3186
Shirgir S, Farahmand-Tabar S, Aghabeigi P (2024) Optimum design of real-size reinforced concrete bridge via charged system search algorithm trained by Nelder-Mead simplex. Expert Syst Appl 238:121815
Simon Dan (2008) Biogeography-based optimization. IEEE Trans Evol Comput 12(6):702–713
Song Y, Wang F, Chen X (2019) An improved genetic algorithm for numerical function optimization. Appl Intell 49(5):1880–1902
Sowmya R, Premkumar M, Jangir P (2024) Newton-Raphson-based optimizer: a new population-based metaheuristic algorithm for continuous optimization problems. Eng Appl Artif Intell 128:107532
Storn R, Price K (1997) Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 11(4):341–359
Su H, Zhao D, Heidari AA, Liu L, Zhang X, Mafarja M, Chen H (2023) A physics-based optimization. Neurocomputing 532:183–214
Tanyildizi E, Demir G (2017) Golden sine algorithm: a novel math-inspired algorithm. Adv Electr Comput Eng 17(2):71–78
Tejani GG, Bhensdadia VH, Bureerat S (2016) Examination of three meta-heuristic algorithms for optimal design of planar steel frames. Adv Comput Design 1(1):79–86
Tejani GG, Savsani VJ, Patel VK, Bureerat S (2017) Topology, shape, and size optimization of truss structures using modified teaching-learning based optimization. Adv Comput Design 2(4):313–331
Trojovskỳ P, Dehghani M (2022) Pelican optimization algorithm: a novel nature-inspired algorithm for engineering applications. Sensors 22(3):855
Tu J, Chen H, Wang M, Gandomi AH (2021) The colony predation algorithm. J Bionic Eng 18:674–710
Vagaská A, Gombár M (2021) Mathematical optimization and application of nonlinear programming. In: Algorithms as a basis of modern applied mathematics. pp 461–486
Vallis GK (2017) Atmospheric and oceanic fluid dynamics. Cambridge University Press
Venkata Rao R, Savsani VJ, Vakharia DP (2011) Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput Aided Des 43(3):303–315
Wang GG, Guo L, Gandomi AH, Hao GS, Wang H (2014) Chaotic krill herd algorithm. Inform Sci 274:17–34
Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1(1):67–82
Xu Y, Cui Z, Zeng J (2010) Social emotional optimization algorithm for nonlinear constrained optimization problems. In: International conference on swarm, evolutionary, and memetic computing. Springer, pp 583–590
Xue J, Shen Bo (2020) A novel swarm intelligence optimization approach: sparrow search algorithm. Syst Sci Control Eng 8(1):22–34
Yan J, He W, Jiang X, Zhang Z (2017) A novel phase performance evaluation method for particle swarm optimization algorithms using velocity-based state estimation. Appl Soft Comput 57:517–525
Yang X-S, Deb S (2009) Cuckoo search via lévy flights. In: Nature & biologically inspired computing. NaBIC 2009. World Congress on. IEEE, pp 210–214
Yang XS (2009) Firefly algorithms for multimodal optimization. In: International symposium on stochastic algorithms. Springer, pp 169–178
Yang X-S (2010) A new metaheuristic bat-inspired algorithm. In: Nature inspired cooperative strategies for optimization (NICSO 2010). Springer, pp 65–74
Yang X-S (2010) Firefly algorithm, levy flights and global optimization. In: Research and development in intelligent systems XXVI: incorporating applications and innovations in intelligent systems XVII. Springer, pp 209–218
Yang X-S (2010) Nature-inspired metaheuristic algorithms. Luniver Press
Yang X-S, Deb S (2014) Cuckoo search: recent advances and applications. Neural Comput Appl 24:169–174
Yang Y, Chen H, Heidari AA, Gandomi AH (2021) Hunger games search: visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst Appl 177:114864
Yuan C, Zhao D, Heidari AA, Liu L, Chen Y, Chen H (2024) Polar lights optimizer: algorithm and applications in image segmentation and feature selection. Neurocomputing 607:128427
Zhang LM, Dahlmann C, Zhang Y (2009) Human-inspired algorithms for continuous function optimization. In: 2009 IEEE international conference on intelligent computing and intelligent systems. IEEE, volume 1, pp 318–321
Zhao W, Wang L, Zhang Z (2019) Atom search optimization and its application to solve a hydrogeologic parameter estimation problem. Knowl-Based Syst 163:283–304
Zhao W, Wang L, Zhang Z, Fan H, Zhang J, Mirjalili S, Khodadadi N, Cao Q (2024) Electric eel foraging optimization: a new bio-inspired optimizer for engineering applications. Expert Syst Appl 238:122200
Zhong C, Li G, Meng Z (2022) Beluga whale optimization: a novel nature-inspired metaheuristic algorithm. Knowl-Based Syst 251:109215
Zhu D, Wang S, Zhou C, Yan S, Xue J (2024) Human memory optimization algorithm: a memory-inspired optimizer for global optimization problems. Expert Syst Appl 237:121597
Zou S, He X (2023) Effect of tornado near-ground winds on aerodynamic characteristics of the high-speed railway viaduct. Eng Struct 275:115189
Author information
Contributions
Malik Braik contributed to conceptualization, methodology, investigation, writing original draft, formal analysis, editing, and software provision. Heba Al-Hiary participated in the formal analysis, investigation, and data curation. Hussein Alzoubi contributed to writing the original draft. Abdelaziz contributed to conceptualization and validation. Mohammed Azmi Al-Betar was involved in visualization and validation. Mohammed A. Awadallah was involved in investigation and editing.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest regarding the publication of this paper.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Informed consent
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Braik, M., Al-Hiary, H., Alzoubi, H. et al. Tornado optimizer with Coriolis force: a novel bio-inspired meta-heuristic algorithm for solving engineering problems. Artif Intell Rev 58, 123 (2025). https://doi.org/10.1007/s10462-025-11118-9