Abstract
The latest AI techniques are usually computer intensive, in contrast to traditional ones, which rely on the consistency of the logical principles on which they are based. Many algorithms of Computational Intelligence (CI) are, furthermore, meta-heuristics: methods in which the particular selection of parameters defines the details and characteristics of the heuristic proper. In this paper we discuss a method that allows us to ascertain, with high statistical significance, the relative performance of several meta-heuristics. To achieve our goal we must find a statistical goodness-of-fit (gof) test that allows us to determine the moment when a sample (here, of the algorithms' performance measures) becomes normal. Most statistical gof tests are designed to reject the null hypothesis, i.e. to show that a sample does NOT fit a given distribution, whereas here we wish to establish that the sample IS normal. Using a Monte Carlo simulation we are able to find a practical gof test to this effect. We discuss the methodology and describe its application to three case studies: the training of neural networks, genetic algorithms, and unsupervised clustering.
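As an illustration of the general idea (a minimal sketch, not the gof test derived in the paper; the function run_metaheuristic and all parameter values below are hypothetical placeholders), the following Python fragment uses a Monte Carlo loop to estimate the batch size at which the means of a meta-heuristic's performance values stop being rejected as non-normal by a standard test, here Shapiro-Wilk:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    def run_metaheuristic():
        # Hypothetical stand-in for one run of a GA, NN training, etc.;
        # raw performance values are typically skewed, i.e. non-normal.
        return rng.exponential(scale=1.0)

    def batch_means(batch_size, n_batches=50):
        # Means of n_batches batches of batch_size runs each.
        runs = np.array([[run_metaheuristic() for _ in range(batch_size)]
                         for _ in range(n_batches)])
        return runs.mean(axis=1)

    def rejection_rate(batch_size, n_trials=200, alpha=0.05):
        # Monte Carlo estimate of how often Shapiro-Wilk rejects
        # normality of the batch means at significance level alpha.
        rejections = sum(stats.shapiro(batch_means(batch_size)).pvalue < alpha
                         for _ in range(n_trials))
        return rejections / n_trials

    for m in (2, 5, 10, 20, 40):
        print("batch size %3d: rejection rate %.3f" % (m, rejection_rate(m)))

As the batch size grows, the rejection rate should fall toward the nominal significance level, signalling the point at which the sample of batch means may be treated as normal and parametric comparisons between meta-heuristics become defensible.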
Copyright information
© 2017 Springer International Publishing AG
Cite this paper
Kuri-Morales, A.F., López-Peña, I. (2017). Normality from Monte Carlo Simulation for Statistical Validation of Computer Intensive Algorithms. In: Pichardo-Lagunas, O., Miranda-Jiménez, S. (eds.) Advances in Soft Computing. MICAI 2016. Lecture Notes in Computer Science, vol. 10062. Springer, Cham. https://doi.org/10.1007/978-3-319-62428-0_1
Print ISBN: 978-3-319-62427-3
Online ISBN: 978-3-319-62428-0