Predicting Memory Compiler Performance Outputs using Feed-Forward Neural Networks
arXiv - CS - Other Computer Science. Pub Date: 2020-03-05, DOI: arxiv-2003.03269
Felix Last, Max Haeberlein, Ulf Schlichtmann

Typical semiconductor chips include thousands of mostly small memories. As memories contribute an estimated 25% to 40% to the overall power, performance, and area (PPA) of a chip, memories must be designed carefully to meet the system's requirements. Memory arrays are highly uniform and can be described by approximately 10 parameters depending mostly on the complexity of the periphery. Thus, to improve PPA utilization, memories are typically generated by memory compilers. A key task in the design flow of a chip is to find optimal memory compiler parametrizations which on the one hand fulfill system requirements while on the other hand optimize PPA. Although most compiler vendors also provide optimizers for this task, these are often slow or inaccurate. To enable efficient optimization in spite of long compiler run times, we propose training fully connected feed-forward neural networks to predict PPA outputs given a memory compiler parametrization. Using an exhaustive search-based optimizer framework which obtains neural network predictions, PPA-optimal parametrizations are found within seconds after chip designers have specified their requirements. Average model prediction errors of less than 3%, a decision reliability of over 99% and productive usage of the optimizer for successful, large volume chip design projects illustrate the effectiveness of the approach.
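To make the approach concrete, the following is a minimal sketch of the two ingredients described in the abstract: a fully connected feed-forward network that maps a memory compiler parametrization to predicted PPA outputs, and an exhaustive search-based optimizer that queries those predictions and returns the best parametrization satisfying the designer's requirements. This is not the authors' implementation; the parameter count, hidden layer sizes, PPA output definitions, requirement threshold, and cost function below are illustrative assumptions only.

# Minimal sketch (hypothetical names and ranges), assuming PyTorch:
# a feed-forward regressor from ~10 compiler parameters to PPA outputs,
# plus an exhaustive search over a discretized parameter grid.
import itertools
import torch
import torch.nn as nn

N_PARAMS = 10    # compiler parametrization size (assumed, e.g. words, bits, mux, ...)
N_OUTPUTS = 3    # PPA outputs (assumed: power, access time, area)

class PPAPredictor(nn.Module):
    """Fully connected feed-forward regressor: parameters -> PPA."""
    def __init__(self, n_in=N_PARAMS, n_out=N_OUTPUTS, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_out),
        )

    def forward(self, x):
        return self.net(x)

def exhaustive_search(model, param_grid, meets_requirements, cost):
    """Enumerate all parametrizations, predict PPA with the trained model,
    keep those that satisfy the requirements, and return the cheapest one."""
    candidates = torch.tensor(list(itertools.product(*param_grid)),
                              dtype=torch.float32)
    with torch.no_grad():
        ppa = model(candidates)
    best, best_cost = None, float("inf")
    for params, outputs in zip(candidates, ppa):
        if not meets_requirements(outputs):
            continue
        c = cost(outputs)
        if c < best_cost:
            best, best_cost = params, c
    return best, best_cost

# Hypothetical usage; in practice the model would be trained on
# compiler-generated (parametrization, PPA) samples first.
model = PPAPredictor()
grid = [range(2, 5)] * N_PARAMS                 # small discrete range per parameter
best, _ = exhaustive_search(
    model, grid,
    meets_requirements=lambda o: o[1] < 1.0,    # e.g. access time below a limit
    cost=lambda o: o[0] + o[2],                 # e.g. minimize power plus area
)

Exhaustive enumeration is viable in this setting because a forward pass through the network is orders of magnitude faster than invoking the memory compiler itself, which is what allows PPA-optimal parametrizations to be returned within seconds of the designer entering requirements.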

Updated: 2020-07-13