
Minimising Contrastive Divergence with Dynamic Current Mirrors

  • Conference paper

In: Artificial Neural Networks – ICANN 2009

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 5768)

Abstract

Implementing probabilistic models in Very-Large-Scale Integration (VLSI) is attractive for implantable biomedical devices, where on-chip probabilistic models can improve sensor fusion. However, hardware non-idealities introduce training errors that hinder optimal modelling through on-chip adaptation. This paper investigates the feasibility of using dynamic current mirrors to implement a simple and precise training circuit. The precision required for training the Continuous Restricted Boltzmann Machine (CRBM) is first identified. A training circuit based on accumulators formed by dynamic current mirrors is then proposed. Measurements of the accumulators fabricated in VLSI demonstrate the feasibility of training the CRBM on chip according to its minimising-contrastive-divergence rule.
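The minimising-contrastive-divergence rule referred to in the abstract can be illustrated in software. The sketch below is a minimal, illustrative CD-1 update for a small network of continuous stochastic units, not the authors' analogue-VLSI circuit: the layer sizes, noise level, tanh activation, and learning rate are assumptions chosen for illustration, and the weight update (data-driven correlations minus one-step-reconstruction correlations) is the quantity that the paper's dynamic-current-mirror accumulators would integrate in hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_units(inputs, W, noise_sigma=0.2):
    """One stochastic layer update of a continuous-valued RBM:
    each unit applies tanh to its noisy net input, giving a
    continuous state in (-1, 1). noise_sigma is an assumed value."""
    net = inputs @ W + noise_sigma * rng.standard_normal(W.shape[1])
    return np.tanh(net)

def cd1_step(v_data, W, eta=0.01):
    """One CD-1 weight update: the difference between data-driven
    and one-step-reconstruction correlations, accumulated into the
    weights (the role played by the on-chip accumulators)."""
    h_data = sample_units(v_data, W)      # hidden states driven by data
    v_recon = sample_units(h_data, W.T)   # one-step reconstruction
    h_recon = sample_units(v_recon, W)    # hidden states driven by recon
    dW = eta * (np.outer(v_data, h_data) - np.outer(v_recon, h_recon))
    return W + dW

# Toy usage: 4 visible units, 3 hidden units (sizes are illustrative)
W = 0.1 * rng.standard_normal((4, 3))
v = np.array([0.8, -0.5, 0.3, 0.9])
W = cd1_step(v, W)
```

In hardware, each entry of `dW` corresponds to a charge increment stored by an accumulator, which is why the precision of the dynamic-current-mirror accumulators determines whether this rule converges on chip.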





Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Lu, CC., Chen, H. (2009). Minimising Contrastive Divergence with Dynamic Current Mirrors. In: Alippi, C., Polycarpou, M., Panayiotou, C., Ellinas, G. (eds) Artificial Neural Networks – ICANN 2009. ICANN 2009. Lecture Notes in Computer Science, vol 5768. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04274-4_43


  • DOI: https://doi.org/10.1007/978-3-642-04274-4_43

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04273-7

  • Online ISBN: 978-3-642-04274-4

