Research article · DOI: 10.1145/3489517.3530505

NN-LUT: neural approximation of non-linear operations for efficient transformer inference

Published: 23 August 2022

ABSTRACT

Non-linear operations such as GELU, layer normalization, and softmax are essential yet costly building blocks of Transformer models. Several prior works simplified these operations with look-up tables or integer computations, but such approximations either suffer from inferior accuracy or incur considerable hardware cost and long latency. This paper proposes an accurate and hardware-friendly approximation framework for efficient Transformer inference. The framework employs a simple neural network as a universal approximator, with its structure equivalently transformed into a look-up table (LUT). The proposed framework, called neural network generated LUT (NN-LUT), can accurately replace all the non-linear operations in popular BERT models with significant reductions in area, power consumption, and latency.
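As an illustration of the "equivalently transformed" step: a one-hidden-layer ReLU network with a scalar input is exactly piecewise linear, with one kink per hidden unit at x = -b/w, so it can be rewritten as a table of (breakpoint, slope, intercept) entries with zero conversion error. The NumPy sketch below fits such a network to GELU and performs that transform. The hidden width, the input range, the closed-form output-layer fit (the paper presumably trains the full network), and all function names (relu_net, net_to_lut, lut_eval) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gelu(x):
    # Target non-linearity (tanh approximation of GELU).
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def relu_net(x, w1, b1, w2, b2):
    # One-hidden-layer scalar ReLU MLP: y = w2 . relu(w1*x + b1) + b2.
    return np.maximum(np.outer(x, w1) + b1, 0.0) @ w2 + b2

def net_to_lut(w1, b1, w2, b2):
    # Equivalence transform: hidden unit i kinks at x = -b1[i]/w1[i], and the
    # network is exactly linear between kinks, so one (slope, intercept) entry
    # per segment reproduces it with no conversion error.
    bp = np.sort(-b1 / w1)                                  # segment breakpoints
    mids = np.concatenate(([bp[0] - 1.0],
                           (bp[:-1] + bp[1:]) / 2.0,
                           [bp[-1] + 1.0]))                 # one probe point per segment
    slopes, intercepts = [], []
    for m in mids:
        act = w1 * m + b1 > 0.0                             # units active in this segment
        slopes.append(np.sum(w2[act] * w1[act]))
        intercepts.append(np.sum(w2[act] * b1[act]) + b2)
    return bp, np.array(slopes), np.array(intercepts)

def lut_eval(x, bp, slope, intercept):
    # LUT inference: locate the segment, then a single multiply-add.
    idx = np.searchsorted(bp, x)
    return slope[idx] * x + intercept[idx]

# "Train" the approximator. For brevity the hidden layer is fixed (kinks spread
# evenly over the input range) and only the output layer is solved by least squares.
H = 16                                                      # hidden width -> H+1 LUT segments
w1 = np.ones(H)
b1 = -np.linspace(-5.0, 5.0, H)
xs = np.linspace(-6.0, 6.0, 4096)
A = np.hstack([np.maximum(np.outer(xs, w1) + b1, 0.0), np.ones((xs.size, 1))])
coef, *_ = np.linalg.lstsq(A, gelu(xs), rcond=None)
w2, b2 = coef[:-1], coef[-1]

bp, slope, intercept = net_to_lut(w1, b1, w2, b2)
xq = np.linspace(-6.0, 6.0, 10001)
print("net vs LUT, max |diff|:", np.abs(relu_net(xq, w1, b1, w2, b2)
                                        - lut_eval(xq, bp, slope, intercept)).max())
print("LUT vs GELU, max |err|:", np.abs(gelu(xq) - lut_eval(xq, bp, slope, intercept)).max())
```

At inference time, lut_eval reduces the non-linearity to a breakpoint search plus one multiply-add per input, which is what makes the LUT form attractive in hardware compared with evaluating erf or exp directly.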


Published in

DAC '22: Proceedings of the 59th ACM/IEEE Design Automation Conference
July 2022, 1462 pages
ISBN: 9781450391429
DOI: 10.1145/3489517

    Copyright © 2022 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall acceptance rate: 1,770 of 5,499 submissions, 32%
