Probability-directed random search algorithm for unconstrained optimization problem
Introduction
The performance of many systems can be captured by mathematical functions that model the systems' important attributes. For example, the revenue of a company, which is a function of variables such as sales volume, indicates the performance of that company. Finding the optima of such functions is extremely important because it enables better design decisions for the systems they capture. Unfortunately, a large number of the mathematical functions that model real-world systems are difficult to optimize. They are often discontinuous over their domains, non-differentiable, and have multiple local optima. This makes optimizing these functions with analytical techniques very difficult, if not impossible.
Researchers have devised a large number of methods to optimize functions [[1], [2], [3]]. Some of these methods exploit the functions' properties [4]. They use a deterministic model, typically expressed as an iterative formula. While these methods are effective, they depend heavily on the properties of the function to be optimized, usually requiring it to be continuous, differentiable, or to satisfy other mathematical conditions. The mainstream of optimization techniques, however, consists of those that use random search as their main mechanism [[5], [6], [7], [8]]. Examples include Tabu Search [9], Simulated Annealing [10], Genetic Algorithms [11,12], Scatter Search [13], particle swarm optimization [[14], [15], [16]], the estimation of distribution algorithm (EDA) [17], and the imperialist competitive algorithm [18,19]. These methods use random-based techniques and heuristics to guide the search within the domain toward the best values of the functions. Because of their search technique, random-based methods are less sensitive to the functions' properties; they assume little about them. They therefore offer better ways of optimizing functions that lack the properties required by deterministic methods. Although the proposed random-based methods can be effective in finding the optimum, they are difficult to use and computationally expensive [3].
This paper proposes an innovative technique for finding the global optimal values of hard functions [20], which typically lack continuity and differentiability and have many local optima. Our algorithm, which we call the Moving Directives Algorithm (MDA-3), uses an effective random search technique that quickly drives the search to the global best value of the function. To accomplish this, MDA-3 builds triangular directives on the unit interval (i.e., [0,1]) and employs effective mapping techniques that map random numbers generated from the unit interval to the corresponding points in the triangular directives. This mapping quickly leads the search to the global optima. Furthermore, MDA-3 can dynamically adjust the focus of the search by adjusting the directives using knowledge gained during the search, thereby continually moving the directives, and consequently the mapping, to the promising regions of the search domains.
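To make the mapping idea concrete, the sketch below shows one plausible reading of it: a uniform random number drawn from the unit interval is passed through the inverse cumulative distribution of a triangular density whose apex marks the currently favored region, and the result is then scaled into the variable's domain. This is a minimal illustration under our own assumptions; the paper's actual directive construction and mapping are defined in Section 3, and the names `triangular_map` and `to_domain` are ours.

```python
import random

def triangular_map(u, peak):
    # Hypothetical inverse-CDF mapping: a uniform draw u from [0, 1] is
    # warped so that points near `peak` (the directive's apex) come up
    # more often than points near the edges of the unit interval.
    if u < peak:
        return (u * peak) ** 0.5
    return 1.0 - ((1.0 - u) * (1.0 - peak)) ** 0.5

def to_domain(t, a, b):
    # Scale a unit-interval point t into the variable's domain [a, b].
    return a + t * (b - a)

u = random.random()                        # random number from [0, 1]
x = to_domain(triangular_map(u, peak=0.7), a=-10.0, b=10.0)
```

Python's standard library offers the same triangular draw directly via random.triangular(0.0, 1.0, peak).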
This work is a detailed extension of our previously published work [20]. The technique proposed in this paper adds several fundamental contributions to the previous work. First, we relaxed the termination conditions based on exhaustive simulations and found that a single condition is sufficient to guarantee convergence and, therefore, the termination of the algorithm. Second, we determined the best choice of the reduction factor d based on our simulations. Third, we estimated the best value for the number of experiments m that should be conducted. Fourth, we conducted exhaustive simulations on the effective selection of the parameters of the triangular directives. Fifth, this paper includes additional comparative performance studies against a large set of effective algorithms; comparisons with ED, EDA, EDA-1, EDA-2, SaJADE, PSO-3P, PSO-EO, SaDE, and many more algorithms, based on benchmark problems, were added. Sixth, we enriched the paper with comparisons between our directives and fuzzy logic functions, as there is a superficial similarity between the two concepts.
In summary, this paper makes three contributions. First, it presents a probability-guided approach that identifies and moves the search to the regions of the domain in which the optima most likely reside, without ignoring the regions in which they are less likely to reside. Second, the algorithm can dynamically adjust its search directives, thereby finding the optima quickly. Third, the algorithm outperforms other approaches in terms of both approximating the global optima and the execution time needed to find them.
The rest of the paper is organized as follows. Section 2 formalizes the problem for which we define the solution. Section 3 describes our algorithm. Section 4 evaluates the performance of the algorithm. We conclude and give directions for future work in Section 5.
Section snippets
Problem formalization
Suppose f(x1, x2, …, xn) is a function whose variables xi ∈ Di (i = 1, 2, …, n), where each domain Di is a bounded interval [ai, bi]. The goal is to find a solution for the following problem:

optimize f(x1, x2, …, xn) subject to xi ∈ [ai, bi], i = 1, 2, …, n.

More specifically, we should find the values, belonging to the domains Di, for which the function f attains its global optimal value. The optimal value of a function f can be either its minimum or its maximum.
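For concreteness, a standard instance of this formulation, chosen here purely for illustration rather than drawn from the paper's benchmark set, is minimizing the highly multimodal Rastrigin function over the box [-5.12, 5.12] for each variable:

```python
import math

# Illustrative problem instance: minimize the Rastrigin function, a
# standard multimodal benchmark, over the box [-5.12, 5.12]^n.
def rastrigin(x):
    # Global minimum f(0, ..., 0) = 0; many local minima elsewhere.
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

domain = [(-5.12, 5.12)] * 2   # the bounded intervals [a_i, b_i]
```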
We impose no restriction on the function
Moving directives algorithm
This section introduces the fundamental components of the moving directives algorithm, which we call MDA-3. We discuss the triangular directives in Section 3.1 and the mapping between random values generated from the unit interval and the triangular directives in Section 3.2. We discuss the termination conditions in Section 3.3. Finally, we present the algorithmic steps of MDA-3 in Section 3.4.
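Since the detailed steps appear in the subsections, the following is only a rough, assumption-heavy sketch of how the pieces could fit together: directives peaked on the unit interval, sampling and mapping into the domains, movement of the peaks toward the best point found, shrinking by the reduction factor d, m experiments per setting, and a single width-based termination test. None of the update rules below are the authors' exact formulas; the loop structure, the width parameter, and the stopping test are our stand-ins.

```python
import random

def mda3_sketch(f, domain, d=0.9, m=50, tol=1e-6):
    # Rough sketch only: minimize f over domain = [(a_1, b_1), ...].
    peaks = [0.5] * len(domain)   # directive apexes on the unit interval
    width = 1.0                   # current spread of each directive
    best_x, best_val = None, float("inf")
    while width > tol:            # single termination condition (assumed form)
        for _ in range(m):        # m experiments per directive setting
            # Draw each coordinate from a triangular directive on [0, 1],
            # then map the unit-interval point into the variable's domain.
            t = [random.triangular(max(0.0, p - width), min(1.0, p + width), p)
                 for p in peaks]
            x = [a + ti * (b - a) for ti, (a, b) in zip(t, domain)]
            val = f(x)
            if val < best_val:
                best_x, best_val = x, val
                peaks = t         # move directives to the promising region
        width *= d                # focus the search via the reduction factor d
    return best_x, best_val
```

For example, mda3_sketch(rastrigin, domain) runs the sketch on the illustrative instance defined in the problem formalization above.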
Performance analysis
In this section, we analyze the performance of the proposed algorithm (MDA-3). We first determine its upper-bound time complexity. We then present and discuss the results of our experiments, conducted on a large number of challenging benchmark functions.
Conclusions and future work
This paper proposed a random search algorithm for finding the global optimal value of a function f. Our algorithm is effective in finding global optima, time-efficient, and easy to implement. It uses probability-guided directives to create dynamic coverage of the search space, together with effective mapping techniques to guide the search process. Therefore, the algorithm quickly directs the search to the parts of the domain that most likely contain the global optima.
The algorithm can
References (32)
Genetic algorithms in constrained optimization, Math. Comput. Modell. (1996)
A genetic algorithm for unconstrained multi-objective optimization, Swarm Evol. Comput. (2015)
A survey on the imperialist competitive algorithm metaheuristic, Appl. Soft Comput. (2014)
Enhanced leader PSO (ELPSO): a new PSO variant for solving global optimisation problems, Appl. Soft Comput. J. (2015)
An improved shuffled frog-leaping algorithm with extremal optimisation for continuous optimisation, Inf. Sci. (2012)
A novel particle swarm optimizer hybridized with extremal optimization, Appl. Soft Comput. J. (2010)
Adaptive directed mutation for real-coded genetic algorithms, Appl. Soft Comput. J. (2013)
A new mutation operator for real coded genetic algorithms, Appl. Math. Comput. (2007)
An overview of evolutionary algorithm for parameter optimization, Evol. Comput. (1993)
A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems, J. Global Optim. (2005)
On the randomized firefly algorithm, in: Cuckoo Search and Firefly Algorithm
Numerical Methods for Engineers
A novel hybrid firefly algorithm for global optimization, PLoS One
A new variant of the memory gradient method for unconstrained optimization, Optim. Lett.
Firefly algorithm for unconstrained optimization, J. Comput. Eng.
A simple and new optimization algorithm for solving constrained and unconstrained optimization problems, Int. J. Ind. Eng. Comput.