Machine Learning for Congestion Management and Routability Prediction within FPGA Placement
ACM Transactions on Design Automation of Electronic Systems (IF 2.2). Pub Date: 2020-07-07. DOI: 10.1145/3373269
Hannah Szentimrey, Abeer Al-Hyari, Jeremy Foxcroft, Timothy Martin, David Noel, Gary Grewal, Shawki Areibi

Placement for Field Programmable Gate Arrays (FPGAs) is one of the most important but time-consuming steps for achieving design closure. This article proposes the integration of three unique machine learning models into the state-of-the-art analytic placement tool GPlace3.0 with the aim of significantly reducing placement runtimes. The first model, MLCong, is based on linear regression and replaces the computationally expensive global router currently used in GPlace3.0 to estimate switch-level congestion. The second model, DLManage, is a convolutional encoder-decoder that uses heat maps based on the switch-level congestion estimates produced by MLCong to dynamically determine the amount of inflation to apply to each switch to resolve congestion. The third model, DLRoute, is a convolutional neural network that uses the previous heat maps to predict whether or not a placement solution is routable. Once a placement solution is determined to be routable, further optimization may be avoided, leading to improved runtimes. Experimental results obtained using 372 benchmarks provided by Xilinx Inc. show that when all three models are integrated into GPlace3.0, placement runtimes decrease by an average of 48%.
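For intuition only, the sketch below shows how the data flow described in the abstract might be wired together: a linear-regression estimator standing in for MLCong, a per-switch inflation step standing in for DLManage, and a routability check standing in for DLRoute. The function names, grid size, feature set, and the simple threshold-based stand-ins for the two convolutional models are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the MLCong -> DLRoute -> DLManage
# flow described in the abstract. All names, shapes, and thresholds are
# illustrative assumptions; the paper's DLManage and DLRoute are CNNs.
import numpy as np

GRID = 64  # hypothetical switch-grid resolution for the congestion heat maps

def mlcong_estimate(features, weights):
    """Linear-regression stand-in for MLCong: per-switch congestion
    from placement-derived features (features: [GRID*GRID, F], weights: [F])."""
    return (features @ weights).reshape(GRID, GRID)

def dlroute_is_routable(heat_map, overflow_frac=0.02):
    """Stand-in for DLRoute: call the placement routable if only a small
    fraction of switches exceed unit congestion (paper trains a CNN classifier)."""
    return np.mean(heat_map > 1.0) < overflow_frac

def dlmanage_inflation(heat_map, target=1.0, gain=0.5):
    """Stand-in for DLManage: map congestion overflow to a per-switch
    inflation factor (paper uses a convolutional encoder-decoder)."""
    overflow = np.clip(heat_map - target, 0.0, None)
    return 1.0 + gain * overflow  # >1.0 where congestion exceeds the target

# Toy usage with random features and weights in place of a real placement.
rng = np.random.default_rng(0)
feats, w = rng.random((GRID * GRID, 8)), rng.random(8)
heat = mlcong_estimate(feats, w)
if dlroute_is_routable(heat):
    pass  # skip further congestion optimization, saving placement runtime
else:
    inflation = dlmanage_inflation(heat)  # would drive cell inflation in the placer
```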

Updated: 2020-07-07