
Run-Time Reconfiguration: A method for enhancing the functional density of SRAM-based FPGAs

Published in the Journal of VLSI Signal Processing Systems for Signal, Image and Video Technology.

Abstract

One way to further exploit the reconfigurable resources of SRAM FPGAs and increase functional density is to reconfigure them during system operation. This process is referred to as Run-Time Reconfiguration (RTR). RTR is an approach to system implementation that divides an application or algorithm into time-exclusive operations that are implemented as separate configurations. The Run-Time Reconfiguration Artificial Neural Network (RRANN) is a proof-of-concept system that demonstrates the effectiveness of RTR for implementing neural networks. It implements the popular backpropagation training algorithm as three distinct time-exclusive FPGA configurations: feed-forward, backpropagation, and update. System operation consists of sequencing through these three reconfigurations at run-time, one configuration at a time. RRANN has been fully implemented with Xilinx FPGAs, tested, and shown to increase the functional density of a network by up to 500% when compared to FPGA-based implementations that do not use RTR.
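The three-phase sequencing the abstract describes can be illustrated as a simple control loop. This is a minimal sketch, not the authors' actual hardware interface: the `Fpga` class and the phase names used as bitstream identifiers are hypothetical stand-ins for RRANN's Xilinx configuration loading.

```python
# Sketch of the RTR control loop from the abstract. The Fpga class and
# phase/bitstream names are hypothetical; RRANN's real controller loads
# Xilinx configuration bitstreams between time-exclusive phases.

class Fpga:
    """Stand-in for an SRAM-based FPGA that can be reconfigured at run time."""

    def __init__(self):
        self.loaded = None

    def configure(self, bitstream):
        # In RRANN this step reconfigures the device for the next phase,
        # reusing the same silicon for each time-exclusive operation.
        self.loaded = bitstream

    def run(self, data):
        # Each configuration executes one stage of backpropagation training.
        return f"{self.loaded}({data})"


def train_step(fpga, sample):
    # Backpropagation split into three time-exclusive configurations,
    # sequenced one at a time as the abstract describes.
    result = sample
    for phase in ("feed_forward", "backpropagation", "update"):
        fpga.configure(phase)      # run-time reconfiguration
        result = fpga.run(result)  # execute the current phase
    return result


print(train_step(Fpga(), "x"))  # → update(backpropagation(feed_forward(x)))
```

The point of the sketch is the trade RTR makes: one device is configured three times per training step, so configuration time is paid repeatedly in exchange for fitting each phase's full circuitry onto the same chip.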


Cite this article

Eldredge, J.G., Hutchings, B.L. Run-Time Reconfiguration: A method for enhancing the functional density of SRAM-based FPGAs. J VLSI Sign Process Syst Sign Image Video Technol 12, 67–86 (1996). https://doi.org/10.1007/BF00936947
