DOI: 10.1145/3649476.3658754
research-article
Open access

Deep-Learning-Based Pre-Layout Parasitic Capacitance Prediction on SRAM Designs

Published: 12 June 2024

Abstract

To achieve higher system energy efficiency, SRAM in SoCs is often customized. Parasitic effects cause notable discrepancies between pre-layout and post-layout circuit simulations, making design parameters hard to converge and forcing excessive design iterations. Can the parasitics be predicted accurately from the pre-layout circuit alone, enabling parasitic-aware pre-layout simulation? In this work, we propose a deep-learning-based two-stage model that accurately predicts these parasitics at the pre-layout stage. The model combines a Graph Neural Network (GNN) classifier with Multi-Layer Perceptron (MLP) regressors, effectively handling the class imbalance of net parasitics in SRAM circuits. We also employ Focal Loss to mitigate the impact of the abundant internal-net samples, and integrate subcircuit information into the graph to capture the hierarchical structure of schematics. Experiments on four real SRAM designs show that our approach not only surpasses the state-of-the-art model in parasitic prediction, reducing error by up to 19X, but also accelerates the simulation process by up to 598X.
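The two-stage idea in the abstract, classify each net's parasitic magnitude first, then regress within the predicted class, can be illustrated with a minimal sketch. This is not the paper's model: the GNN classifier is stood in for by an arbitrary callable, the per-class MLP regressors by plain functions, and only the focal loss follows the standard formula FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t) from Lin et al.; all names and thresholds below are illustrative assumptions.

```python
import numpy as np


def focal_loss(probs, labels, gamma=2.0, alpha=1.0, eps=1e-9):
    """Multi-class focal loss averaged over samples.

    probs:  (N, C) predicted class probabilities.
    labels: (N,)   integer class ids.
    With gamma=0 this reduces to plain cross-entropy; gamma>0
    down-weights easy (high-confidence) samples, which is how the
    abstract's abundant internal-net class can be de-emphasized.
    """
    p_t = probs[np.arange(len(labels)), labels]  # prob of the true class
    return float(np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t + eps)))


class TwoStageEstimator:
    """Illustrative two-stage predictor: route each sample through a
    classifier, then apply the regressor trained for that class."""

    def __init__(self, classifier, regressors):
        self.classifier = classifier    # features -> class id (stand-in for the GNN)
        self.regressors = regressors    # {class id: regression fn} (stand-in for MLPs)

    def predict(self, x):
        cls = self.classifier(x)
        return self.regressors[cls](x)


if __name__ == "__main__":
    # A confident correct prediction: focal loss (gamma=2) shrinks its
    # contribution relative to cross-entropy (gamma=0).
    probs = np.array([[0.9, 0.1]])
    labels = np.array([0])
    print("CE:", focal_loss(probs, labels, gamma=0.0))
    print("FL:", focal_loss(probs, labels, gamma=2.0))
```

Splitting regression by predicted class lets each regressor fit a narrow range of capacitance values instead of a heavy-tailed global distribution, which is the motivation the abstract gives for pairing the classifier with per-class regressors.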


Published In

GLSVLSI '24: Proceedings of the Great Lakes Symposium on VLSI 2024
June 2024
797 pages
ISBN:9798400706059
DOI:10.1145/3649476
This work is licensed under a Creative Commons Attribution International 4.0 License.

Publisher

Association for Computing Machinery

New York, NY, United States

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • National Science and Technology Major Project
  • National Natural Science Foundation of China

Conference

GLSVLSI '24: Great Lakes Symposium on VLSI 2024
June 12–14, 2024
Clearwater, FL, USA

Acceptance Rates

Overall Acceptance Rate 312 of 1,156 submissions, 27%


Article Metrics

  • Total Citations: 0
  • Total Downloads: 465
  • Downloads (last 12 months): 465
  • Downloads (last 6 weeks): 140

Reflects downloads up to 20 Jan 2025.
