Abstract
This article addresses symbolic processing with Multi-Layer Perceptrons (MLPs) through encoding. Given an encoding, we derive a lower bound on the number of parameters an MLP needs to realize an arbitrary mapping from its input symbolic space to its output symbolic space. In the case of what we call binary encoding, the required number of parameters can be computed theoretically. Combining these two results, we show that the most efficient encodings are those that use one input unit per value.
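The two encoding families the abstract contrasts can be illustrated concretely. The sketch below is ours, not the article's: the function names and the exact digit convention for "binary encoding" are assumptions, and "one input unit per value" is rendered as a one-hot vector.

```python
# Illustrative sketch (our assumptions, not the article's definitions):
# two ways to encode a symbolic variable with k possible values as MLP inputs.
import math

def binary_encoding(value_index: int, k: int) -> list[int]:
    """Encode a symbol as the base-2 digits of its index,
    using ceil(log2(k)) input units."""
    width = max(1, math.ceil(math.log2(k)))
    return [(value_index >> i) & 1 for i in reversed(range(width))]

def local_encoding(value_index: int, k: int) -> list[int]:
    """One input unit per value (one-hot) -- the encoding family the
    article argues is the most parameter-efficient."""
    return [1 if i == value_index else 0 for i in range(k)]

# A symbolic variable with 5 possible values, encoding its 4th value (index 3):
k = 5
print(binary_encoding(3, k))  # 3 input units: [0, 1, 1]
print(local_encoding(3, k))   # 5 input units: [0, 0, 0, 1, 0]
```

The trade-off the article quantifies is visible here: binary encoding uses fewer input units, but the one-unit-per-value encoding turns out to need fewer MLP parameters overall for an arbitrary symbolic mapping.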
Cite this article
Bonnet, D., Grumbach, A. & Labouisse, V. On the Parsimony of the Multi-Layer Perceptrons when Processing Encoded Symbolic Variables. Neural Processing Letters 8, 145–153 (1998). https://doi.org/10.1023/A:1009696629113