Applied Soft Computing

Volume 71, October 2018, Pages 165-182

Probability-directed random search algorithm for unconstrained optimization problem

https://doi.org/10.1016/j.asoc.2018.06.043

Highlights

  • The MDA-3 algorithm was able to discover some of the best solutions for a wide range of benchmark test problems.

  • MDA-3 was compared with some of the best-performing algorithms, such as DE, EDA, and PSO.

  • The strength of the algorithm comes from its simplicity and its ability to dig directly into the search space.

Abstract

Devising ways to handle problem optimization is an important yet challenging task. The aim is always to find methods that can effectively and quickly discover the global optimum of the rather complicated mathematical functions that model real-world settings. The global optima of these functions are typically difficult to discover because the functions may (1) lack continuity and differentiability, (2) have multiple local optima, and (3) have complex expressions. In this paper, we address this challenge by offering an algorithm that combines random search techniques with both an effective mapping and a dynamic adjustment of its search behavior. Our proposed algorithm automatically builds two types of triangular search directives over the unity interval: principal and marginal. These search directives guide the search within both the effective regions of the search domain, which most likely contain the optimum, and the marginal regions, which are less likely to contain it. During the search, the algorithm monitors the intermediate results and dynamically adjusts the directives' parameters to quickly move the search towards the optimum. Experiments with our prototype implementation showed that our method can effectively find the global optima of rather complicated mathematical functions chosen from well-known benchmarks, and that it performed better than other algorithms.

Introduction

The performance of many systems can be captured by mathematical functions that define the systems' important attributes. For example, the revenue of a company, which is a function of variables such as sales volume, indicates the performance of that company. Finding the optima of such functions is extremely important because it allows for better design decisions for the systems these functions capture. Unfortunately, a tremendous number of the mathematical functions that model real-world systems are difficult to optimize. They are often not continuous over their domains, not differentiable, have multiple local optima, and so on. This makes optimizing these functions with analytical techniques difficult, if not impossible.

Researchers have devised a large number of methods to optimize functions [[1], [2], [3]]. Some methods exploit the functions' properties [4]; they rely on a deterministic model, typically represented as an iterative formula. While these methods are effective, they depend heavily on the properties of the function to be optimized, mostly requiring the function to be continuous, differentiable, and to satisfy other mathematical conditions. The mainstream of optimization techniques, however, consists of those that use random search as their main mechanism [[5], [6], [7], [8]]. Examples include Tabu Search [9], Simulated Annealing [10], Genetic Algorithms [11,12], Scatter Search [13], particle swarm optimization [[14], [15], [16]], the estimation of distribution algorithm (EDA) [17], and the imperialist competitive algorithm [18,19]. These methods use random-based techniques and heuristics to guide the search within the domain towards the best values of the functions. Due to their search technique, random-based methods are less sensitive to the functions' properties: they assume little about the functions, and therefore offer better ways of optimizing functions that lack the properties needed by deterministic methods. Although the proposed random-based methods can be effective at finding the optimum, they are often difficult to use and computationally expensive [3].

This paper proposes an innovative technique for finding the global optimal values of hard functions [20], which typically lack continuity and differentiability and have many local optima. Our algorithm, which we call the Moving Directives Algorithm (MDA-3), uses an effective random search technique that quickly drives the search to the function's global best value. To accomplish this, MDA-3 builds triangular directives on the unity interval (i.e., [0, 1]) and employs mapping techniques that map random numbers generated from the unity interval to the corresponding points in the triangular directives. This mapping quickly leads the search to the global optima. Furthermore, MDA-3 can dynamically adjust the focus of the search by tuning the directives with knowledge gained during the search, thereby continually moving the directives, and consequently the mapping, towards the promising regions of the search domains.
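To make the mapping concrete: a triangular directive over [0, 1] can be realized as a triangular probability density whose peak marks the currently favoured point, and uniform random numbers can be pushed through its inverse CDF so that draws cluster near the peak while the tails are still visited. The Python sketch below illustrates only this general idea; the function names are ours, and the paper's actual directive construction (Section 3.1) may differ.

```python
import random

def triangular_map(mode: float, u: float) -> float:
    """Map a uniform draw u in [0, 1] through the inverse CDF of a
    triangular density on [0, 1] peaked at `mode`, so values near the
    peak are sampled more often than values in the tails."""
    if u <= mode:
        return (u * mode) ** 0.5
    return 1.0 - ((1.0 - u) * (1.0 - mode)) ** 0.5

def sample_coordinate(a: float, b: float, mode: float) -> float:
    """Draw one coordinate from the domain [a, b], biased towards the
    region the directive currently favours (`mode` is given on [0, 1])."""
    return a + triangular_map(mode, random.random()) * (b - a)

# Example: bias draws from [-10, 10] towards the point 2.5,
# whose position on the unity interval is (2.5 + 10) / 20 = 0.625.
print(sample_coordinate(-10.0, 10.0, 0.625))
```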

This work is a detailed extension of our previously published work [20]. The technique proposed in this paper adds several fundamental contributions to that work. First, we relaxed the termination conditions based on exhaustive simulations and found that a single condition is sufficient to guarantee convergence and, therefore, the termination of the algorithm. Second, we determined the best choice of the reduction factor d based on our simulations. Third, we estimated the best value for the number of experiments m that should be conducted. Fourth, we conducted exhaustive simulations on the effective selection of the parameters of the triangular directives. Fifth, this paper includes additional comparative performance studies against a large set of effective algorithms: comparisons with DE, EDA, EDA-1, EDA-2, SaJADE, PSO-3P, PSO-EO, SaDE, and many more algorithms on benchmark problems were added. Sixth, we enriched the paper with comparisons between our directives and fuzzy logic functions, since the two concepts bear a superficial similarity.

This paper, in general, makes three contributions. First, it presents a probability-guided approach that identifies and moves the search to the regions of the domain in which the optima most likely reside, without ignoring the regions in which they are less likely to reside. Second, the algorithm can dynamically adjust its search directives, thereby quickly finding the optima. Third, the algorithm performs better than other approaches in terms of both approximating the global optima and the execution time needed to find them.

The remainder of this paper is organized as follows. Section 2 formalizes the problem for which we define the solution. Section 3 describes our algorithm. Section 4 evaluates the performance of the algorithm. We conclude and give directions for future work in Section 5.

Section snippets

Problem formalization

Suppose $f(x_1, x_2, \ldots, x_n)$ is a function whose variables $x_i \in D_i$ ($i = 1, 2, \ldots, n$). Each domain $D_i$ is a bounded interval $[a_i, b_i]$. The goal is to find a solution to the following problem:

$$\operatorname*{Optimal}_{x_i \in D_i} \, f(x_1, x_2, \ldots, x_n) = f(x_1^*, x_2^*, \ldots, x_n^*)$$

More specifically, we should find the values $x_i^*$ belonging to the domains $D_i$ such that the function f attains its global optimal value. The optimal value of a function f can be either its minimum or its maximum.

We impose no restriction on the function $f(x_1, x_2, \ldots, x_n)$.
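For concreteness, a problem instance under this formulation is simply an objective together with its bounded domains. The sketch below uses the Rastrigin function, a standard multimodal benchmark of exactly the difficulty class described above (many local optima, global minimum 0 at the origin); we do not claim it belongs to the paper's own test suite.

```python
import math

def rastrigin(x: list[float]) -> float:
    """Rastrigin function: highly multimodal, with global minimum f = 0
    at the origin. Stands in for the hard functions f(x1, ..., xn)."""
    return 10.0 * len(x) + sum(v * v - 10.0 * math.cos(2.0 * math.pi * v)
                               for v in x)

# Bounded domains D_i = [a_i, b_i]; here D_1 = D_2 = [-5.12, 5.12].
bounds = [(-5.12, 5.12), (-5.12, 5.12)]
```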

Moving directives algorithm

This section introduces the fundamental components of the Moving Directives Algorithm (MDA-3). We particularly discuss the triangular directives in Section 3.1 and the mapping between random values generated from the unity interval and the triangular directives in Section 3.2. We discuss the termination conditions in Section 3.3. Finally, we present the algorithmic steps of MDA-3 in Section 3.4; a rough outline of such a loop is sketched below.
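Since only the section headings survive in this preview, the following outline shows one plausible reading of the loop those sections describe: sample m candidates per round through the triangular mapping, recentre the directives on the incumbent best point, shrink the active intervals by the reduction factor d, and stop under a single convergence condition. It is a sketch under these assumptions, not the published MDA-3 pseudocode.

```python
import random

def tri_unit(mode: float, u: float) -> float:
    """Inverse-CDF triangular mapping on [0, 1] (same idea as above)."""
    if u <= mode:
        return (u * mode) ** 0.5
    return 1.0 - ((1.0 - u) * (1.0 - mode)) ** 0.5

def mda3_sketch(f, bounds, d=0.9, m=50, tol=1e-9, max_iter=1000):
    """Hypothetical outline of a moving-directives search (minimization).
    The roles of d, m, and the single stop condition mirror parameters
    named in the paper; their exact use inside MDA-3 is our assumption."""
    lo = [a for a, _ in bounds]
    hi = [b for _, b in bounds]
    best_x = [random.uniform(a, b) for a, b in bounds]
    best_f = f(best_x)
    for _ in range(max_iter):
        for _ in range(m):                         # m experiments per round
            x = []
            for i in range(len(bounds)):
                span = hi[i] - lo[i]
                mode = (best_x[i] - lo[i]) / span  # peak on the incumbent
                x.append(lo[i] + tri_unit(mode, random.random()) * span)
            fx = f(x)
            if fx < best_f:
                best_f, best_x = fx, x
        for i in range(len(bounds)):               # shrink intervals by d
            half = (hi[i] - lo[i]) * d / 2.0
            lo[i] = max(bounds[i][0], best_x[i] - half)
            hi[i] = min(bounds[i][1], best_x[i] + half)
        if max(h - l for l, h in zip(lo, hi)) < tol:
            break                                  # single stop condition
    return best_x, best_f
```

Paired with the rastrigin instance above, `mda3_sketch(rastrigin, bounds)` exercises the sketch end to end.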

Performance analysis

We analyze in this section the performance of the proposed algorithm (MDA-3). We first determine its upper-bound time complexity. We then present and discuss the results of our experiments, which were conducted using a large number of challenging benchmark functions.

Conclusions and future work

The paper proposed a random search algorithm for finding the global optimal value of a function f. Our algorithm is effective in finding global optima, time efficient, and easy to implement. It uses probability-guided directives to create a dynamic coverage of the search space, in addition to effective mapping techniques that guide the searching process. The algorithm therefore quickly directs the search to the parts of the domain that most likely contain the global optima.

The algorithm can …

References (32)

  • I. Fister et al., On the randomized firefly algorithm, in: Cuckoo Search and Firefly Algorithm (2014)

  • S. Chapra et al., Numerical Methods for Engineers (2014)

  • L. Zhang et al., A novel hybrid firefly algorithm for global optimization, PLoS One (2016)

  • Y. Zheng et al., A new variant of the memory gradient method for unconstrained optimization, Optim. Lett. (2012)

  • A. Hashmi et al., Firefly algorithm for unconstrained optimization, J. Comput. Eng. (2013)

  • R. Rao, Jaya: a simple and new optimization algorithm for solving constrained and unconstrained optimization problems, Int. J. Ind. Eng. Comput. (2016)