Towards Addressing Noise and Static Variations of Analog Computations Using Efficient Retraining

  • Conference paper
Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2021)

Abstract

One of the most promising technologies for solving the energy-efficiency problem of artificial neural networks on embedded systems is analog computing. It is, however, afflicted by noise, caused by summation of unwanted or disturbing energy, and by static variations originating from manufacturing. While these inaccuracies can degrade accuracy, in particular for naively deployed networks, the robustness of the networks can be significantly enhanced by a retraining procedure that takes the particular hardware instance into account. However, this hardware-in-the-loop retraining is very slow and therefore often the bottleneck hindering the development of larger networks. Furthermore, it is hardware-instance-specific and requires access to the instance in question.

Therefore, we propose a representation of a hardware instance in software, based on simple, parallelization-friendly software structures, which could replace the hardware for the major fraction of retraining. The representation is based on lookup tables, splines as interpolated functions, and additive Gaussian noise, which together cover the static variations as well as the electrical noise of the multiplier array and the column-wise integrators. The combined approach, using the proposed representation followed by a few final epochs of hardware-in-the-loop retraining, reduces the overall training time from over 10 h to less than 2 h compared to a full hardware-in-the-loop retraining, while notably increasing accuracy. This work highlights that including device-specific static variations and noise in the training process is essential for time-efficient hardware-aware network training for analog computations, and that major parts can be extracted from the hardware instance and represented with simple and efficient software structures. It is a first step towards hardware-specific but hardware-inaccessible training, addressing both speed and accuracy.
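As a concrete illustration, the following is a minimal sketch (in NumPy/SciPy, not the authors' implementation or the hxtorch API) of how such a representation could model a single analog matrix-vector multiplication: per-column transfer characteristics stored as lookup tables are interpolated with cubic splines, and additive Gaussian noise stands in for the electrical noise of the multiplier array and the column-wise integrators. All names (AnalogMacModel, lut_inputs, lut_outputs, noise_std) are hypothetical; during retraining, a layer of this kind would replace the hardware call in the forward pass.

```python
# Minimal sketch, assuming calibration data measured from one hardware instance.
# Names (AnalogMacModel, lut_inputs, lut_outputs, noise_std) are hypothetical,
# not the authors' API.
import numpy as np
from scipy.interpolate import CubicSpline


class AnalogMacModel:
    """Software stand-in for one analog matrix-vector multiplication."""

    def __init__(self, lut_inputs, lut_outputs, noise_std):
        # lut_inputs:  (P,)   ideal MAC values at which the hardware was characterised
        # lut_outputs: (N, P) measured response of each of the N output columns
        # noise_std:   (N,)   standard deviation of the electrical noise per column
        self.splines = [CubicSpline(lut_inputs, col) for col in lut_outputs]
        self.noise_std = np.asarray(noise_std)

    def forward(self, weights, x, rng):
        ideal = weights @ x                                  # ideal multiply-accumulate
        distorted = np.array([s(v) for s, v in zip(self.splines, ideal)])
        return distorted + rng.normal(0.0, self.noise_std)   # additive Gaussian noise


# Example with synthetic calibration data: 4 output columns, 8 inputs,
# saturating (tanh-shaped) column characteristics as a stand-in for measurements.
rng = np.random.default_rng(0)
model = AnalogMacModel(
    lut_inputs=np.linspace(-4.0, 4.0, 17),
    lut_outputs=np.tanh(np.tile(np.linspace(-4.0, 4.0, 17), (4, 1))),
    noise_std=0.02 * np.ones(4),
)
y = model.forward(rng.standard_normal((4, 8)) * 0.3, rng.standard_normal(8), rng)
```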

Acknowledgements

The development of BrainScaleS-2 has received funding from the German Federal Ministry of Education and Research under grant number 16ES1127, the EU (H2020/2014-2020: 720270, 785907, 945539 (HBP)) and the Lautenschläger-Forschungspreis 2018 for Karlheinz Meier. We also acknowledge the financial support from the COMET program within the K2 Center “Integrated Computational Material, Process and Product Engineering (IC-MPPE)” (Project No. 859480). This program is supported by the Austrian Federal Ministries for Transport, Innovation and Technology (BMVIT) and for Digital and Economic Affairs (BMDW), represented by the Austrian research funding association (FFG), and the federal states of Styria, Upper Austria and Tyrol.

Author information

Corresponding author

Correspondence to Bernhard Klein.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Klein, B. et al. (2021). Towards Addressing Noise and Static Variations of Analog Computations Using Efficient Retraining. In: Kamp, M., et al. Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2021. Communications in Computer and Information Science, vol 1524. Springer, Cham. https://doi.org/10.1007/978-3-030-93736-2_32

  • DOI: https://doi.org/10.1007/978-3-030-93736-2_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-93735-5

  • Online ISBN: 978-3-030-93736-2

  • eBook Packages: Computer Science, Computer Science (R0)
