
Convergence Analysis of a New Self Organizing Map Based Optimization (SOMO) Algorithm

Published in: Cognitive Computation

Abstract

The self-organizing map (SOM) approach has been used to perform cognitive and biologically inspired computing in a growing range of cross-disciplinary fields. Recently, the SOM-based neural network framework was adapted to solve continuous derivative-free optimization problems through the development of a novel algorithm, termed SOM-based optimization (SOMO). However, formal convergence questions remained unanswered, which we aim to address in this paper. Specifically, convergence proofs are developed for the SOMO algorithm using a specific distance measure. Numerical simulation examples using two benchmark test functions support our theoretical findings, illustrating that the distance between neurons decreases at each iteration and finally converges to zero. We also prove that the function value of the winner in the network decreases after each iteration. The convergence performance of SOMO has been benchmarked against the conventional particle swarm optimization algorithm, with preliminary results showing that SOMO can provide a more accurate solution for large population sizes.
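The two convergence properties stated above can be illustrated with a minimal, hypothetical SOMO-style iteration. This is a sketch only, assuming a simplified update rule (each neuron is pulled toward the current winner, with a decaying random perturbation standing in for the SOM neighborhood update); it is not the paper's exact algorithm, and the function `somo_sketch` and all its parameters are illustrative assumptions.

```python
import numpy as np

def somo_sketch(f, dim=2, n_neurons=20, iters=100, lr=0.5, noise=0.1, seed=0):
    """Hypothetical SOMO-style iteration (not the paper's exact update rule).

    Each 'neuron' is a candidate solution. At every iteration the winner
    (lowest objective value) attracts the rest; a decaying perturbation
    plays the role of the shrinking SOM neighborhood.
    """
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(n_neurons, dim))
    history = []  # (winner value, population spread) per iteration
    for t in range(iters):
        vals = np.array([f(x) for x in pop])
        w = int(np.argmin(vals))                 # index of the winner neuron
        winner = pop[w].copy()
        decay = 1.0 - t / iters                  # shrinking noise schedule
        new_pop = pop + lr * (winner - pop)      # pull neurons toward winner
        new_pop += noise * decay * rng.standard_normal(pop.shape)
        if f(new_pop[w]) > vals[w]:              # never let the winner worsen
            new_pop[w] = winner
        pop = new_pop
        spread = float(np.max(np.linalg.norm(pop - pop.mean(axis=0), axis=1)))
        history.append((float(vals[w]), spread))
    best = pop[int(np.argmin([f(x) for x in pop]))]
    return best, history
```

Under this simplified rule the two qualitative behaviors the paper proves for SOMO are visible empirically: the recorded winner value is non-increasing (the winner is only replaced by a better candidate), and the population spread contracts toward zero as the attraction step dominates the decaying noise.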



Acknowledgments

The authors wish to thank the associate editor and the anonymous reviewers for their helpful comments. This work was supported by the National Science Foundation of China (11171367), the Fundamental Research Funds for the Central Universities of China, and the Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) [2012/23329-5], Brazil.

Author information


Corresponding author

Correspondence to Atlas Khan.


About this article


Cite this article

Khan, A., Xue, L.Z., Wei, W. et al. Convergence Analysis of a New Self Organizing Map Based Optimization (SOMO) Algorithm. Cogn Comput 7, 477–486 (2015). https://doi.org/10.1007/s12559-014-9315-7
