Abstract
An important class of deterministic methods for global optimization is based on the theory of terminal attractors and repellers. Unfortunately, the use of scalar repellers becomes unsuitable when the dimension n of the problem takes values of operational interest. In previous papers, the author et al. showed that BFGS-type methods, approximating the Hessian of twice continuously differentiable functions with a structured matrix, are very efficient for computing local minima, particularly in the secant case. On the other hand, algorithms founded on the classical αBB technique are often ineffective for computational reasons. In order to increase the power of repellers in the tunneling phases, the use of repeller matrices with a proper structure is certainly promising and deserves investigation. In this work, it is shown that a BFGS-type method of low complexity, implemented in the local optimizations, can be effectively matched with proper repeller matrices in the tunneling phases. The novel algorithm FBαBB, which can be applied within the αBB computational scheme, is very efficient in terms of Number of Function Generations (NFG), Success Rate (SR) in the evaluation of the global minimum, and Number of Local Searches (NLS).
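To fix ideas, the two-phase structure underlying tunneling methods of this kind can be sketched as follows. This is only an illustrative skeleton, not the FBαBB algorithm: plain gradient descent stands in for the low-complexity BFGS-type local phase, a random search for a lower-valued point stands in for the repeller-driven tunneling phase, and the test function, step sizes, and stopping rules are all assumptions made for the example.

```python
import numpy as np

def local_descent(f, grad, x, lr=0.01, iters=500):
    # Stand-in for the BFGS-type local minimization phase:
    # plain gradient descent with a fixed step length.
    for _ in range(iters):
        x = x - lr * grad(x)
    return x

def tunnel(f, x_star, f_star, rng, radius=3.0, tries=200):
    # Stand-in for the tunneling phase: search near the current
    # minimizer x_star for a point with a strictly lower value,
    # i.e. a point outside the basin of attraction of x_star.
    for _ in range(tries):
        cand = x_star + rng.uniform(-radius, radius, size=x_star.shape)
        if f(cand) < f_star:
            return cand
    return None  # no lower point found: accept x_star as the estimate

def two_phase_minimize(f, grad, x0, phases=10, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x, f(x)
    for _ in range(phases):
        x = local_descent(f, grad, x)          # local phase
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
        nxt = tunnel(f, best_x, best_f, rng)   # tunneling phase
        if nxt is None:
            break
        x = nxt
    return best_x, best_f

# Illustrative multimodal test function (not from the paper).
f = lambda x: np.sum(x**2) + 2.0 * np.sin(5.0 * x).sum()
grad = lambda x: 2.0 * x + 10.0 * np.cos(5.0 * x)

x_best, f_best = two_phase_minimize(f, grad, np.array([2.5]))
```

Starting from x0 = 2.5, the first local phase is trapped in a shallow local minimum; the tunneling phase then supplies a lower-valued restart point, so the returned value improves on the first local minimum found. The paper's contribution lies precisely in replacing these two naive phases with a structured-matrix BFGS-type local solver and repeller matrices within the αBB scheme.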
Additional information
Communicated by F. Zirilli.
The author wishes to thank the students Valeria Tozzi and Francesco Tudisco for their kind support in the implementation of Algorithm FBαBB. The author is also indebted to an anonymous referee for some useful suggestions.
Cite this article
Fanelli, S. A New Algorithm for Box-Constrained Global Optimization. J Optim Theory Appl 149, 175–196 (2011). https://doi.org/10.1007/s10957-010-9780-4