
Low-Precision Hardware Architectures Meet Recommendation Model Inference at Scale


Abstract:

The tremendous success of machine learning (ML) and the unabated growth in model complexity have motivated many ML-specific hardware architecture designs to speed up model inference. While these architectures are diverse, highly optimized low-precision arithmetic is a component shared by most. Nevertheless, the recommender systems behind Facebook’s personalization services are demanding and complex: They must serve billions of users per month responsively with low latency while maintaining high prediction accuracy. Do these low-precision architectures work well with our production recommendation systems? They do. But not without significant effort. In this article, we share our search strategies to adapt reference recommendation models to low-precision hardware, our optimization of low-precision compute kernels, and the toolchain used to maintain our models’ accuracy throughout their lifespan. We believe our lessons from the trenches can promote better codesign between hardware architecture and software engineering, and advance the state of the art of ML in industry.
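To make the low-precision theme concrete, the sketch below illustrates one common technique for shrinking recommendation models: row-wise symmetric int8 quantization of an embedding table, with a separate scale per row and dequantization back to float32. This is an illustrative example only, not the specific scheme or kernels described in the article; the function names and the use of NumPy are assumptions for the sake of a self-contained demo.

```python
import numpy as np

def quantize_rowwise_int8(table: np.ndarray):
    """Quantize a float32 embedding table to int8, one scale per row (illustrative)."""
    # Symmetric quantization: the per-row max absolute value maps to 127.
    scales = np.abs(table).max(axis=1, keepdims=True) / 127.0
    scales = np.where(scales == 0, 1.0, scales)  # guard against all-zero rows
    q = np.clip(np.round(table / scales), -127, 127).astype(np.int8)
    return q, scales.astype(np.float32)

def dequantize_rowwise_int8(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Recover approximate float32 embeddings from int8 values and row scales."""
    return q.astype(np.float32) * scales

# Example: a small embedding table of 4 rows x 8 dimensions.
rng = np.random.default_rng(0)
table = rng.standard_normal((4, 8)).astype(np.float32)
q, scales = quantize_rowwise_int8(table)
approx = dequantize_rowwise_int8(q, scales)
print("max abs reconstruction error:", np.abs(table - approx).max())
```

Row-wise (rather than per-tensor) scaling is a typical choice for embedding tables because rows can differ widely in magnitude; production systems add further refinements to preserve prediction accuracy.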
Published in: IEEE Micro (Volume: 41, Issue: 5, Sept.-Oct. 2021)
Page(s): 93 - 100
Date of Publication: 19 May 2021

