A Hierarchical Self-organizing Associative Memory for Machine Learning

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNTCS,volume 4491))

Abstract

This paper proposes a novel hierarchical self-organizing associative memory architecture for machine learning. The architecture is characterized by sparse, local interconnections, self-organizing processing elements (PEs), and probabilistic synaptic transmission. Each PE in the network dynamically estimates its output value from the observed input data distribution and remembers the statistical correlations between its inputs. Both feedforward and feedback signal propagation are used to transfer signals and form associations: feedforward processing discovers relationships in the input patterns, while feedback processing makes associations and predicts missing signal values. Classification and image-recovery applications demonstrate the effectiveness of the proposed memory for both hetero-associative and auto-associative learning.
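As an illustration of the associative mechanism the abstract describes, the sketch below shows a single processing element that accumulates pairwise input correlations during a feedforward learning pass and then, in a feedback pass, predicts a missing input component from the stored correlations. All names here are illustrative, and this simplified version uses deterministic Hebbian-style correlation storage on bipolar (±1) patterns; it omits the paper's hierarchy and probabilistic synaptic transmission.

```python
import numpy as np

class AssociativePE:
    """Illustrative processing element: stores pairwise input
    correlations and predicts missing inputs via feedback."""

    def __init__(self, n_inputs):
        # Correlation memory accumulated over observed patterns.
        self.W = np.zeros((n_inputs, n_inputs))

    def learn(self, x):
        """Feedforward pass: accumulate outer-product correlations
        of one bipolar (+1/-1) input pattern."""
        x = np.asarray(x, dtype=float)
        self.W += np.outer(x, x)

    def recall(self, x, missing):
        """Feedback pass: estimate each missing component from the
        stored correlations with the observed components."""
        x = np.asarray(x, dtype=float)
        out = x.copy()
        known = [i for i in range(len(x)) if i not in missing]
        for i in missing:
            # Weighted vote from the known inputs.
            s = sum(self.W[i, j] * x[j] for j in known)
            out[i] = 1.0 if s > 0 else -1.0
        return out

# Usage: store one pattern, then recover a zeroed-out component.
pe = AssociativePE(4)
for _ in range(3):
    pe.learn([1, -1, 1, -1])
recovered = pe.recall([1, -1, 0, -1], missing={2})
```

In this simplified setting, auto-associative recall (recovering a corrupted component of a stored pattern, as in the paper's image-recovery experiments) and hetero-associative recall (treating some components as inputs and others as outputs) differ only in which indices are marked as missing.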




Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Starzyk, J.A., He, H., Li, Y. (2007). A Hierarchical Self-organizing Associative Memory for Machine Learning. In: Liu, D., Fei, S., Hou, ZG., Zhang, H., Sun, C. (eds) Advances in Neural Networks – ISNN 2007. ISNN 2007. Lecture Notes in Computer Science, vol 4491. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72383-7_49

  • DOI: https://doi.org/10.1007/978-3-540-72383-7_49

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-72382-0

  • Online ISBN: 978-3-540-72383-7

  • eBook Packages: Computer Science (R0)
