Energy-efficient memcapacitor devices for neuromorphic computing
Nature Electronics (IF 33.7), Pub Date: 2021-10-11, DOI: 10.1038/s41928-021-00649-y
Kai-Uwe Demasius, Stuart Parkin, Aron Kirschen
Data-intensive computing operations, such as training neural networks, are essential for applications in artificial intelligence but are energy intensive. One solution is to develop specialized hardware onto which neural networks can be directly mapped, and arrays of memristive devices can, for example, be trained to enable parallel multiply–accumulate operations. Here we show that memcapacitive devices that exploit the principle of charge shielding can offer a highly energy-efficient approach for implementing parallel multiply–accumulate operations. We fabricate a crossbar array of 156 microscale memcapacitor devices and use it to train a neural network that could distinguish the letters ‘M’, ‘P’ and ‘I’. Modelling these arrays suggests that this approach could offer an energy efficiency of 29,600 tera-operations per second per watt, while ensuring high precision (6–8 bits). Simulations also show that the devices could potentially be scaled down to a lateral size of around 45 nm.
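The parallel multiply–accumulate described above can be pictured as a charge-domain operation: each crossbar device stores a weight as a capacitance, an input voltage drives each row, and the charge collected on each column sums the products. The following is a minimal numerical sketch of that idea, not the authors' actual device model; the array shape (13 × 12, giving 156 devices as in the paper) and the capacitance and voltage ranges are illustrative assumptions.

```python
import numpy as np

# Sketch of a charge-domain MAC on a memcapacitive crossbar (illustrative only):
# device (i, j) stores weight as capacitance C[i, j]; row i is driven with
# voltage V[i]; column j accumulates charge Q[j] = sum_i C[i, j] * V[i].
rng = np.random.default_rng(0)

C = rng.uniform(1e-15, 5e-15, size=(13, 12))  # 156 devices, femtofarad-scale (assumed values)
V = rng.uniform(0.0, 1.0, size=13)            # row input voltages (assumed range)

Q = V @ C  # all 12 column charges computed in one parallel MAC step

# Same result written out as explicit multiply-accumulate loops:
Q_loop = np.array([sum(C[i, j] * V[i] for i in range(13)) for j in range(12)])
```

Here `V @ C` is the single parallel step the crossbar performs physically, while `Q_loop` spells out the 156 individual multiplications it replaces.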



