
A 32-Bit Binary Floating Point Neuro-Chip

  • Conference paper
Advances in Natural Computation (ICNC 2005)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3612)

Abstract

The need for high-precision calculations in various scientific disciplines has led to the development of systems with solutions specific to the problem at hand. The complexity of such systems notwithstanding, a generic solution could be the use of neural networks. To get the best performance out of a neural network, hardware implementations are ideal, as they offer speed-ups of several orders of magnitude over software simulations. A simple architecture for such a neuro-chip is proposed in this paper. The neuro-chip supports the current draft version of the IEEE-754 standard for floating-point arithmetic. The synthesis results indicate an estimated speed of operation of 84 MCUPS (million connection updates per second).




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kala, K.L., Srinivas, M.B. (2005). A 32-Bit Binary Floating Point Neuro-Chip. In: Wang, L., Chen, K., Ong, Y.S. (eds) Advances in Natural Computation. ICNC 2005. Lecture Notes in Computer Science, vol 3612. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11539902_130


  • DOI: https://doi.org/10.1007/11539902_130

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-28320-1

  • Online ISBN: 978-3-540-31863-7

  • eBook Packages: Computer Science, Computer Science (R0)
