Improved Models for Policy-Agent Learning of Compiler Directives in HLS | IEEE Conference Publication | IEEE Xplore

Abstract:

Acceleration by Field-Programmable Gate Array (FPGA) continues to be deployed in data center and edge computing hardware designs, and the tools and integration for accelerating computationally intensive tasks continue to grow in practicality. In this paper, we build on previous work in applying machine learning to automatically tune the transformation of high-level language (HLL) C code by a High-Level Synthesis (HLS) system to generate an FPGA hardware design that runs at high speed. This tuning is done primarily through the selection of code transformations (optimizations) and an ordering in which to apply them. We present more detailed results from the use of reinforcement learning (RL) and improve on previous results in several ways: by developing additional strategies that perform better and more consistently, by normalizing the learning rate to the frequency of new (as yet untried) action sequences, and by informing the model with aggregate statistics of optimization sub-orderings.
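The abstract's two key ideas — normalizing the learning rate to how often an action sequence has been tried, and crediting sub-orderings (prefixes) of a transformation sequence — can be illustrated with a minimal sketch. This is not the paper's implementation: the pass names, the toy reward function, and the tabular value-update scheme below are all illustrative assumptions.

```python
import random
from collections import defaultdict

# Hypothetical set of HLS code transformations (names are illustrative only).
PASSES = ["unroll", "pipeline", "inline", "array_partition"]
SEQ_LEN = 3  # length of the optimization ordering the agent builds

def estimate_quality(seq):
    """Stand-in for running HLS and measuring the design: a toy score in [0, 1)."""
    return sum((i + 1) * (hash(p) % 7) for i, p in enumerate(seq)) % 100 / 100.0

q_table = defaultdict(float)     # value estimate for (prefix_so_far, next_pass)
visit_counts = defaultdict(int)  # how often each full ordering has been tried

def choose(prefix, epsilon=0.2):
    """Epsilon-greedy choice of the next transformation given the prefix."""
    if random.random() < epsilon:
        return random.choice(PASSES)
    return max(PASSES, key=lambda p: q_table[(prefix, p)])

random.seed(0)
for episode in range(500):
    seq = []
    for _ in range(SEQ_LEN):
        seq.append(choose(tuple(seq)))
    reward = estimate_quality(seq)
    visit_counts[tuple(seq)] += 1
    # Normalize the learning rate to the visit count of this ordering, so
    # novel (yet-untried) sequences update the value estimates more strongly.
    alpha = 1.0 / visit_counts[tuple(seq)]
    # Credit every sub-ordering (prefix) of the sequence, so aggregate
    # statistics of sub-orderings inform later decisions.
    for i in range(SEQ_LEN):
        key = (tuple(seq[:i]), seq[i])
        q_table[key] += alpha * (reward - q_table[key])

# Best ordering seen, by the toy quality metric.
best = max(visit_counts, key=lambda s: estimate_quality(list(s)))
```

In a real flow, `estimate_quality` would invoke the HLS tool on the transformed C code and score the resulting design (e.g., by latency or clock period), which is far more expensive per episode; the count-based learning-rate normalization is one way to extract more value from each new sequence tried.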
Date of Conference: 25-29 September 2023
Date Added to IEEE Xplore: 25 December 2023
Conference Location: Boston, MA, USA
