COIN: Communication-Aware In-Memory Acceleration for Graph Convolutional Networks
IEEE Journal on Emerging and Selected Topics in Circuits and Systems (IF 3.7). Pub Date: 2022-04-22. DOI: 10.1109/jetcas.2022.3169899
Sumit K. Mandal, Gokul Krishnan, A. Alper Goksoy, Gopikrishnan Ravindran Nair, Yu Cao, Umit Y. Ogras
Graph convolutional networks (GCNs) have shown remarkable learning capabilities when processing the graph-structured data found inherently in many application areas. GCNs distribute the outputs of neural networks embedded in each vertex over multiple iterations to take advantage of the relations captured by the underlying graphs. Consequently, they incur significant computation and irregular communication overheads, which call for GCN-specific hardware accelerators. To this end, this paper presents a communication-aware in-memory computing architecture (COIN) for GCN hardware acceleration. Besides accelerating the computation using custom compute elements (CEs) and in-memory computing, COIN aims at minimizing the intra- and inter-CE communication in GCN operations to optimize performance and energy efficiency. Experimental evaluations with widely used datasets show up to $105\times$ improvement in energy consumption compared to a state-of-the-art GCN accelerator.
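The mix of dense compute and irregular communication described above follows from the standard GCN layer (in the style of Kipf and Welling): each layer multiplies node features by a weight matrix (regular, dense compute) and then aggregates over graph neighbors via the normalized adjacency (irregular, graph-dependent communication). The sketch below illustrates this split; it shows the generic GCN propagation rule, not COIN's specific dataflow, and all names and values are illustrative.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN layer: H = ReLU(A_norm @ X @ W), where A_norm is the
    symmetrically normalized adjacency matrix with self-loops."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # neighbor aggregation operator
    # X @ W: dense per-vertex compute; A_norm @ (...): irregular communication
    return np.maximum(A_norm @ X @ W, 0.0)    # ReLU

# Tiny 3-node path graph (edges 0-1 and 1-2) as a usage example
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.random.rand(3, 4)  # node feature matrix (3 nodes, 4 features)
W = np.random.rand(4, 2)  # layer weights (4 -> 2 features)
H = gcn_layer(A, X, W)
print(H.shape)  # (3, 2): one 2-dim embedding per node
```

In an accelerator, the `A_norm @ (...)` aggregation is what generates the irregular, graph-dependent traffic between compute elements that COIN targets.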

Updated: 2022-04-22