Annealing evolutionary stochastic approximation Monte Carlo for global optimization

Statistics and Computing

Abstract

In this paper we propose a new algorithm, the annealing evolutionary stochastic approximation Monte Carlo (AESAMC) algorithm, as a general optimization technique, and study its convergence. AESAMC possesses a self-adjusting mechanism: its target distribution is adapted at each iteration according to the current samples, so AESAMC falls into the class of adaptive Monte Carlo methods. This mechanism also makes AESAMC less prone than nonadaptive MCMC algorithms to becoming trapped in local energy minima. Under mild conditions, we show that AESAMC converges weakly toward a neighboring set of the global minima in the space of energy. AESAMC is tested on a range of optimization problems. The numerical results indicate that AESAMC can outperform simulated annealing, the genetic algorithm, annealing stochastic approximation Monte Carlo, and several other metaheuristics in function optimization.
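
The abstract describes two ingredients of AESAMC at a high level: a stochastic-approximation update that adapts the target distribution at each iteration, and an annealing step that progressively truncates the search space toward low-energy regions. The sketch below is a minimal, illustrative Python implementation of these two ideas only, not the paper's implementation: the energy partition, gain sequence, proposal scale, truncation margin, and test function are all assumptions made for the example, and the population-based evolutionary (crossover/mutation) operators of AESAMC are omitted.

```python
import numpy as np

# Illustrative sketch only: an annealing SAMC-style minimizer capturing the
# self-adjusting target distribution and the shrinking search space described
# in the abstract. All constants and the test energy are assumptions.

def energy(x):
    # Rastrigin function, a standard multimodal test problem (assumption).
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def aesamc_sketch(dim=2, n_iter=50000, t0=1000, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 80.0, 41)   # energy subregion boundaries (assumption)
    m = len(grid)
    theta = np.zeros(m)                 # self-adjusted log-weights, one per subregion
    x = rng.uniform(-5.12, 5.12, size=dim)
    ex = energy(x)
    best_x, best_e = x.copy(), ex

    def region(e):
        # Index of the energy subregion containing e.
        return int(min(np.searchsorted(grid, e), m - 1))

    for t in range(1, n_iter + 1):
        # Annealing: reject proposals whose energy subregion lies too far
        # above the best energy found so far (margin of 5 is an assumption).
        cutoff = region(best_e) + 5
        y = x + rng.normal(scale=0.5, size=dim)   # random-walk proposal
        ey = energy(y)
        if region(ey) <= cutoff:
            # Metropolis ratio for the adapted target
            # pi_theta(x) proportional to exp(-U(x) - theta_{J(x)}).
            log_r = (ex - ey) + theta[region(ex)] - theta[region(ey)]
            if np.log(rng.uniform()) < log_r:
                x, ex = y, ey
        if ex < best_e:
            best_x, best_e = x.copy(), ex
        # Stochastic-approximation update: penalize the subregion just
        # visited; centering makes this equivalent to the usual SAMC update
        # with a uniform desired sampling distribution over subregions.
        gain = t0 / max(t0, t)                    # decreasing gain sequence
        theta[region(ex)] += gain
        theta -= theta.mean()
    return best_x, best_e

if __name__ == "__main__":
    x_best, e_best = aesamc_sketch()
    print("best energy found:", e_best)
```

A full AESAMC implementation would additionally evolve a population of such chains and augment the random-walk proposal with crossover and mutation moves, accepted under the same adapted target.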

Author information

Correspondence to Faming Liang.

Cite this article

Liang, F. Annealing evolutionary stochastic approximation Monte Carlo for global optimization. Stat Comput 21, 375–393 (2011). https://doi.org/10.1007/s11222-010-9176-1
