
Associative Neural Network

  • Published in: Neural Processing Letters

Abstract

An associative neural network (ASNN) combines an ensemble of feed-forward neural networks with the k-nearest neighbor technique. The introduced network uses the correlation between ensemble responses as a measure of distance among the analyzed cases for the nearest neighbor technique, and it provides an improved prediction, by bias correction of the neural network ensemble, for both function approximation and classification. In effect, the proposed method corrects the bias of a global model for a given data case by analyzing the biases of that case's nearest neighbors, determined in the space of the calculated models. An associative neural network has a memory that can coincide with the training set. If new data become available, the network can provide a reasonable approximation of such data without the need to retrain the neural network ensemble. Applications of ASNN to the prediction of lipophilicity of chemical compounds and to the classification of the UCI letter and satellite data sets are presented. The developed algorithm is available on-line at http://www.virtuallaboratory.org/lab/asnn.
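The procedure described above can be sketched in a few lines: train an ensemble, store its per-member responses on the training set as the network's memory, and correct each new prediction by the average bias of the k training cases whose ensemble-response vectors correlate best with the query's. The sketch below is a minimal illustration under stated assumptions, not the published implementation: for brevity it uses bootstrap-fitted linear models as stand-ins for the feed-forward networks, and all names (`fit_member`, `asnn_predict`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: y = sin(3x), which the linear members fit with a
# systematic, position-dependent bias that ASNN can correct locally.
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0])

def fit_member(X, y, rng):
    """Fit one ensemble member on a bootstrap sample.
    (A linear least-squares model stands in for a feed-forward network.)"""
    idx = rng.integers(0, len(X), len(X))
    A = np.hstack([X[idx], np.ones((len(idx), 1))])
    w, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return w

def predict_member(w, X):
    A = np.hstack([X, np.ones((len(X), 1))])
    return A @ w

members = [fit_member(X, y, rng) for _ in range(20)]

def ensemble_responses(X):
    """Individual member predictions, shape (n_cases, n_members)."""
    return np.column_stack([predict_member(w, X) for w in members])

R_train = ensemble_responses(X)          # the ASNN "memory"
bias_train = y - R_train.mean(axis=1)    # per-case bias of the ensemble mean

def asnn_predict(Xq, k=10):
    """Ensemble mean plus the mean bias of the k training cases whose
    ensemble-response vectors correlate best with the query's."""
    Rq = ensemble_responses(Xq)
    base = Rq.mean(axis=1)
    # Correlation between response vectors as the similarity measure,
    # i.e. distance is measured in the space of the calculated models.
    sim = np.corrcoef(np.vstack([Rq, R_train]))[:len(Xq), len(Xq):]
    out = np.empty(len(Xq))
    for i in range(len(Xq)):
        nn = np.argsort(-sim[i])[:k]              # k most correlated cases
        out[i] = base[i] + bias_train[nn].mean()  # local bias correction
    return out
```

Because the correction only reads `R_train` and `bias_train`, appending new cases to this memory updates the model's local behavior without retraining the ensemble, which is the property the abstract highlights.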




About this article

Cite this article

Tetko, I.V. Associative Neural Network. Neural Processing Letters 16, 187–199 (2002). https://doi.org/10.1023/A:1019903710291
