Predicting Memory Compiler Performance Outputs Using Feed-forward Neural Networks
ACM Transactions on Design Automation of Electronic Systems (IF 2.2), Pub Date: 2020-07-06, DOI: 10.1145/3385262
Felix Last, Max Haeberlein, Ulf Schlichtmann

Typical semiconductor chips include thousands of mostly small memories. As memories contribute an estimated 25% to 40% to the overall power, performance, and area (PPA) of a product, they must be designed carefully to meet the system's requirements. Memory arrays are highly uniform and can be described by approximately 10 parameters, which mostly reflect the complexity of the periphery. To improve PPA utilization, memories are therefore typically generated by memory compilers. A key task in the design flow of a chip is to find memory compiler parametrizations that fulfill system requirements on the one hand and optimize PPA on the other. Although most compiler vendors provide optimizers for this task, these are often slow or inaccurate. To enable efficient optimization despite long compiler runtimes, we propose training fully connected feed-forward neural networks to predict PPA outputs given a memory compiler parametrization. Using an exhaustive-search-based optimizer framework that queries the neural network predictions, PPA-optimal parametrizations are found within seconds of chip designers specifying their requirements. Average model prediction errors of less than 3%, a decision reliability of over 99%, and productive usage of the optimizer in successful, large-volume chip design projects illustrate the effectiveness of the approach.
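The core idea of the abstract can be sketched in a few lines: a fully connected feed-forward network serves as a fast surrogate for the memory compiler's PPA outputs, and because the discrete parameter space is small (roughly 10 parameters), an exhaustive search over candidate parametrizations can simply query the surrogate for each one. The following is a minimal illustrative sketch, not the authors' implementation: the parameter names (words, bits per word, column mux), the network size, and the random weights are all hypothetical stand-ins for a trained model.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained surrogate: a small fully connected
# feed-forward network mapping a 3-dimensional memory-compiler
# parametrization to 3 PPA outputs (power, delay, area).
# Weights are random for illustration only; in practice they would be
# fitted to compiler-generated training data.
W1 = rng.normal(size=(3, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 3)); b2 = np.zeros(3)

def predict_ppa(params):
    """Forward pass of the feed-forward network (one ReLU hidden layer)."""
    x = np.asarray(params, dtype=float)
    h = np.maximum(0.0, x @ W1 + b1)   # hidden layer
    return h @ W2 + b2                  # [power, delay, area]

# Exhaustive search: evaluate every parametrization in the (small)
# discrete space with the fast surrogate and keep the one with the
# lowest predicted area. A real optimizer would also filter out
# candidates violating the designer's power/timing requirements.
space = itertools.product([256, 512, 1024],   # words (hypothetical)
                          [16, 32, 64],       # bits per word
                          [2, 4, 8])          # column mux
best = min(space, key=lambda p: predict_ppa(p)[2])
print(best)
```

Because each surrogate evaluation is just two matrix multiplications, sweeping even tens of thousands of parametrizations takes well under a second, which is what makes the "results within seconds" claim plausible compared to invoking the compiler itself per candidate.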
