
Algorithm and Architecture of Fully-Parallel Associative Memories Based on Sparse Clustered Networks

  • Published in: Journal of Signal Processing Systems

Abstract

Associative memories retrieve stored information from partial or erroneous input patterns. A family of associative memories based on Sparse Clustered Networks (SCNs) has recently been introduced that can store many more messages than classical Hopfield Neural Networks (HNNs). In this paper, we propose fully-parallel hardware architectures of such memories for partial or erroneous inputs. The proposed architectures eliminate winner-take-all modules, reducing hardware complexity by consuming 65% fewer FPGA lookup tables and increasing the operating frequency by approximately 1.9 times compared with previous work. Furthermore, the scaling behaviour of the implemented architectures under various design choices is investigated: we explore the effect of design variables such as the number of clusters, network nodes, and erased symbols on the error performance and the hardware resources.
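To make the retrieval rule concrete, the sketch below is a minimal software model of the baseline SCN associative memory that the proposed hardware parallelizes: messages are stored as cliques of binary connections across clusters, and erased symbols are recovered by scoring candidate neurons and applying a per-cluster winner-take-all step (the module the proposed architectures eliminate). The class and parameter names are hypothetical, and the model is an illustrative assumption about the underlying algorithm, not the paper's hardware architecture.

```python
import numpy as np


class SCNAssociativeMemory:
    """Minimal software model of a sparse clustered network (SCN)
    associative memory.  Hypothetical names; an illustrative sketch,
    not the paper's hardware architecture."""

    def __init__(self, num_clusters, cluster_size):
        self.c = num_clusters      # clusters = symbols per message
        self.l = cluster_size      # neurons (fanals) per cluster
        n = self.c * self.l
        self.W = np.zeros((n, n), dtype=bool)   # binary connections

    def _idx(self, cluster, symbol):
        # Flat index of the neuron representing `symbol` in `cluster`.
        return cluster * self.l + symbol

    def store(self, message):
        # Storing a message adds a clique of connections among the
        # neurons that represent its symbols, one neuron per cluster.
        idx = [self._idx(c, s) for c, s in enumerate(message)]
        for i in idx:
            for j in idx:
                if i != j:
                    self.W[i, j] = True

    def retrieve(self, partial, iterations=4):
        # `partial` is a list with None marking erased symbols.
        n = self.c * self.l
        active = np.zeros(n, dtype=bool)
        for c, s in enumerate(partial):
            if s is not None:
                active[self._idx(c, s)] = True
        for _ in range(iterations):
            # Score each neuron by the number of clusters from which it
            # receives at least one signal, plus a memory term.
            signals = (self.W & active).reshape(n, self.c, self.l)
            score = signals.any(axis=2).sum(axis=1) + active
            # Per-cluster winner-take-all: keep only the highest-scoring
            # neurons within every cluster.
            nxt = np.zeros(n, dtype=bool)
            for c in range(self.c):
                blk = score[c * self.l:(c + 1) * self.l]
                nxt[c * self.l:(c + 1) * self.l] = blk == blk.max()
            active = nxt
        # Read back one symbol per cluster (first winner on a tie).
        return [int(np.argmax(active[c * self.l:(c + 1) * self.l]))
                for c in range(self.c)]


# Example: 4 clusters of 16 neurons; recover two erased symbols.
mem = SCNAssociativeMemory(num_clusters=4, cluster_size=16)
mem.store([3, 7, 1, 12])
mem.store([5, 2, 9, 0])
print(mem.retrieve([3, None, 1, None]))   # expected: [3, 7, 1, 12]
```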



Author information

Corresponding author

Correspondence to Hooman Jarollahi.


About this article

Cite this article

Jarollahi, H., Onizawa, N., Gripon, V. et al. Algorithm and Architecture of Fully-Parallel Associative Memories Based on Sparse Clustered Networks. J Sign Process Syst 76, 235–247 (2014). https://doi.org/10.1007/s11265-014-0886-z
