
Design Space Exploration of Time, Energy, and Error Rate Trade-offs for CNNs Using Accuracy-Programmable Instruction Set Processors

  • Conference paper
Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2021)

Abstract

We propose the use of application-specific instruction set processors with programmable accuracy, called Anytime Instruction Processors (AIPs), for Convolutional Neural Network (CNN) inference. For a floating-point operation, the number of correctly computed mantissa result bits can be freely adjusted, allowing for a fine-grained trade-off analysis between accuracy, execution time, and energy. We propose a Design Space Exploration (DSE) technique in which the accuracy of CNN computations is determined layer by layer. As one result, we show that reductions of up to 62% in energy consumption are achievable for a representative ResNet-18 benchmark in comparison with a solution in which each layer is computed at full accuracy according to the IEEE 754 single-precision floating-point format.
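The programmable-accuracy idea can be emulated in software by masking low-order mantissa bits of an IEEE 754 single-precision value, so that only the first k mantissa result bits are kept. The sketch below is illustrative only (the function name and masking logic are our assumptions, not the authors' AIP implementation, which computes truncated results in hardware):

```python
import struct

def truncate_mantissa(x: float, k: int) -> float:
    """Keep only the k most significant of the 23 mantissa bits of a
    float32 value, zeroing the rest, to emulate a reduced-accuracy
    anytime floating-point result (illustrative sketch)."""
    assert 0 <= k <= 23
    # Reinterpret the float32 bit pattern as an unsigned integer.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    # Clear the (23 - k) low-order mantissa bits; sign and exponent stay.
    mask = 0xFFFFFFFF << (23 - k)
    return struct.unpack(">f", struct.pack(">I", bits & mask))[0]

# A layer-wise accuracy assignment would map each CNN layer to a mantissa
# width; the DSE then searches this space for time/energy/error trade-offs.
full  = truncate_mantissa(3.14159, 23)  # full float32 accuracy
rough = truncate_mantissa(3.14159, 8)   # only 8 correct mantissa bits
```

Applying such a truncation per layer during inference gives a software estimate of the classification error that a given layer-wise accuracy assignment would incur on an AIP.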


Notes

  1. Note that \(|V|=21\) for ResNet-18 when also counting three 1 \(\times \) 1 convolutional layers.


Acknowledgments

This work was partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—Project Number 146371743—TRR 89 Invasive Computing and the German Federal Ministry for Education and Research (BMBF) within project KISS (01IS19070B).

Author information

Corresponding author

Correspondence to Armin Schuster.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Schuster, A., Heidorn, C., Brand, M., Keszocze, O., Teich, J. (2021). Design Space Exploration of Time, Energy, and Error Rate Trade-offs for CNNs Using Accuracy-Programmable Instruction Set Processors. In: Kamp, M., et al. Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2021. Communications in Computer and Information Science, vol 1524. Springer, Cham. https://doi.org/10.1007/978-3-030-93736-2_29


  • DOI: https://doi.org/10.1007/978-3-030-93736-2_29


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-93735-5

  • Online ISBN: 978-3-030-93736-2

  • eBook Packages: Computer Science (R0)
