Abstract
The N-dimensional parity problem is a classically difficult classification task for neural networks. We derive an expression for the minimum number of errors νf, as a function of N, that a single perceptron makes on this problem. We verified this quantity experimentally for N = 1, ..., 15 using an optimally trained perceptron. Using a constructive approach, we then solve the full N-dimensional parity problem with a minimal feedforward neural network having a single hidden layer of h = N units.
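A solution with h = N hidden units can be illustrated by the classic staircase construction: hidden unit k fires when at least k inputs are on, and alternating output weights recover the parity. This is a minimal sketch of one such construction for intuition; it is not the incremental algorithm developed in the paper, which learns its own weights.

```python
from itertools import product

def parity_net(x):
    """Compute the parity of a binary vector x with N hidden threshold units."""
    N = len(x)
    s = sum(x)
    # Hidden unit k (k = 1..N) fires when at least k inputs are on.
    hidden = [1 if s >= k else 0 for k in range(1, N + 1)]
    # Alternating +1/-1 output weights: the weighted sum is 1 when an odd
    # number of inputs are on, 0 otherwise.
    out = sum((-1) ** k * h for k, h in enumerate(hidden))
    return 1 if out > 0.5 else 0

# Check against the true parity for all 2^4 inputs with N = 4.
for x in product([0, 1], repeat=4):
    assert parity_net(list(x)) == sum(x) % 2
```

With s active inputs, exactly the first s hidden units fire, so the alternating sum equals s mod 2, which is why N hidden units suffice for N-parity.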
Cite this article
Torres-Moreno, J.M., Aguilar, J.C. & Gordon, M.B. The Minimum Number of Errors in the N-Parity and its Solution with an Incremental Neural Network. Neural Processing Letters 16, 201–210 (2002). https://doi.org/10.1023/A:1021726007566