
The Minimum Number of Errors in the N-Parity and its Solution with an Incremental Neural Network


Abstract

The N-dimensional parity problem is often a difficult classification task for neural networks. We found an expression for the minimum number of errors νf, as a function of N, that a perceptron makes on this problem. We verified this quantity experimentally for N = 1, ..., 15 using an optimally trained perceptron. With a constructive approach, we then solved the full N-dimensional parity problem using a minimal feedforward neural network with a single hidden layer of h = N units.
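To make the last claim concrete, the sketch below (in Python, with illustrative names) builds a single-hidden-layer network with h = N threshold units using the classical construction, not necessarily the incremental procedure of the paper: hidden unit k fires whenever at least k inputs are on, and the output unit sums the hidden activities with alternating signs. It then checks the network against the exact parity of every input pattern for small N.

```python
import itertools
import numpy as np

def build_parity_network(N):
    """Single hidden layer of h = N step units that computes N-parity.
    Hidden unit k (k = 1..N) fires when at least k of the binary inputs are on;
    the output unit adds the hidden activities with alternating signs +1, -1, ..."""
    W_hidden = np.ones((N, N))                        # every hidden unit sees all inputs with weight 1
    b_hidden = -(np.arange(1, N + 1) - 0.5)           # unit k has threshold k - 0.5
    w_out = np.array([(-1.0) ** k for k in range(N)]) # alternating output weights
    b_out = -0.5                                      # output fires when the alternating sum exceeds 1/2
    return W_hidden, b_hidden, w_out, b_out

def forward(x, params):
    W_hidden, b_hidden, w_out, b_out = params
    h = (W_hidden @ x + b_hidden > 0).astype(float)   # hidden step units
    return int(w_out @ h + b_out > 0)                 # output step unit

# Exhaustive check: the network reproduces the parity of every input pattern.
for N in range(1, 11):
    params = build_parity_network(N)
    ok = all(forward(np.array(x, dtype=float), params) == sum(x) % 2
             for x in itertools.product([0, 1], repeat=N))
    print(f"N = {N:2d}: correct on all 2^{N} patterns: {ok}")
```

If m of the N inputs are on, hidden units 1 through m are active and the rest are silent, so the alternating sum at the output is 1 when m is odd and 0 when m is even, which is exactly the parity of the input.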




Cite this article

Torres-Moreno, J.M., Aguilar, J.C. & Gordon, M.B. The Minimum Number of Errors in the N-Parity and its Solution with an Incremental Neural Network. Neural Processing Letters 16, 201–210 (2002). https://doi.org/10.1023/A:1021726007566
