Abstract
Surface- and prototype-based models are often regarded as alternative paradigms for representing internal knowledge in trained neural networks. This paper analyses a network model (Circular Back-Propagation) that overcomes this dualism by choosing the best-fitting representation adaptively. The model involves a straightforward modification to classical feed-forward structures that lets neurons implement hyperspherical boundaries; as a result, it exhibits notable representational power while retaining the simplicity and effectiveness of classical back-propagation training. Artificial testbeds support the model definition by demonstrating its basic properties; an application to a real, complex problem in the clinical field shows the practical advantages of the approach.
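The modification the abstract refers to can be sketched as follows (a minimal illustration, not the authors' exact formulation: it assumes the feed-forward unit receives one extra input equal to the squared norm of the input vector, so that the zero-level set of its activation is a hypersphere when the extra weight is nonzero and degenerates to a hyperplane when it is zero — which is how a single unit can adapt between surface- and prototype-like representations):

```python
import numpy as np

def cbp_neuron(x, w, bias, w_sq):
    """Sketch of a Circular Back-Propagation unit: a standard linear
    neuron whose input is augmented with ||x||^2.  The decision surface
        w . x + bias + w_sq * ||x||^2 = 0
    is a hypersphere for w_sq != 0 and a hyperplane for w_sq == 0,
    so gradient training can select either representation adaptively."""
    z = x @ w + bias + w_sq * np.sum(x * x, axis=-1)
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid activation

# Toy check: with w = 0, bias = 1, w_sq = -1 the boundary is the unit
# circle ||x|| = 1: points inside give output > 0.5, outside < 0.5.
w = np.zeros(2)
inside = cbp_neuron(np.array([0.2, 0.3]), w, bias=1.0, w_sq=-1.0)
outside = cbp_neuron(np.array([2.0, 0.0]), w, bias=1.0, w_sq=-1.0)
```

Because the extra term is just one more input to an otherwise ordinary unit, standard back-propagation applies unchanged, which is the practical appeal the abstract highlights.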
Ridella, S., Rovetta, S. & Zunino, R. Adaptive internal representation in circular back-propagation networks. Neural Comput & Applic 3, 222–233 (1995). https://doi.org/10.1007/BF01414647