DOI: 10.1145/3649329.3663500
Research article · Open access

Late Breaking Results: Language-level QoR modeling for High-Level Synthesis

Published: 07 November 2024

Abstract

This paper proposes a language-level modeling approach for High-Level Synthesis (HLS) based on the state-of-the-art Transformer architecture. Our approach estimates the performance and required resources of HLS applications directly from the source code when different synthesis directives, expressed as HLS #pragma annotations, are applied. Results show that the proposed architecture achieves 96.02% accuracy in predicting the feasibility class of applications, and average R² scores of 0.95 and 0.91 for predicting actual performance and required resources, respectively.
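The pipeline the abstract describes — encode HLS source code together with its #pragma directives using a Transformer, then predict a feasibility class and continuous performance/resource values — can be sketched very loosely as below. This is an illustrative toy, not the paper's model: the tokenizer, the single attention layer, the dimensions, and the random (untrained) weights are all assumptions made for the sketch.

```python
import numpy as np

# Toy sketch of a Transformer-style QoR predictor for HLS code.
# All weights here are random and untrained; a real model would be
# learned from synthesized (source + pragmas -> QoR) training pairs.

rng = np.random.default_rng(0)
D = 16  # embedding dimension (an arbitrary choice for the sketch)

VOCAB = {}

def tokenize(src):
    # Crude whitespace tokenizer; #pragma lines are tokenized like any code.
    return src.replace("(", " ( ").replace(")", " ) ").split()

def embed(tokens):
    # Assign each new token a random embedding vector.
    for t in tokens:
        if t not in VOCAB:
            VOCAB[t] = rng.normal(size=D)
    return np.stack([VOCAB[t] for t in tokens])   # shape (T, D)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    # Single-head scaled dot-product attention with random projections.
    Wq, Wk, Wv = (rng.normal(size=(D, D)) for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(D))             # (T, T) attention weights
    return A @ V

def predict(src):
    X = self_attention(embed(tokenize(src)))
    h = X.mean(axis=0)                            # mean-pooled code vector
    Wc = rng.normal(size=(D, 2))                  # head 1: feasibility logits
    Wr = rng.normal(size=(D, 2))                  # head 2: latency, resources
    feasible = int(np.argmax(softmax(h @ Wc)))    # classification output
    latency, resources = (h @ Wr).tolist()        # regression outputs
    return feasible, latency, resources

src = "#pragma HLS unroll factor=4\nfor (int i = 0; i < N; i++) acc += a[i];"
feasible, latency, resources = predict(src)
```

Changing the pragma text (e.g. a different unroll factor) changes the token sequence and therefore the pooled representation, which is how directive choices can influence the predicted QoR in this framing.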


Published In

DAC '24: Proceedings of the 61st ACM/IEEE Design Automation Conference
June 2024, 2159 pages
ISBN: 9798400706011
DOI: 10.1145/3649329
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States


Conference

DAC '24: 61st ACM/IEEE Design Automation Conference
June 23-27, 2024
San Francisco, CA, USA

Acceptance Rates

Overall acceptance rate: 1,770 of 5,499 submissions, 32%

