An Overview of Processing-in-Memory Circuits for Artificial Intelligence and Machine Learning
IEEE Journal on Emerging and Selected Topics in Circuits and Systems (IF 4.6), Pub Date: 2022-03-17, DOI: 10.1109/jetcas.2022.3160455
Donghyuk Kim 1, Chengshuo Yu 2, Shanshan Xie 3, Yuzong Chen 4, Joo-Young Kim 1, Bongjin Kim 5, Jaydeep P. Kulkarni 3, Tony Tae-Hyoung Kim 2

Artificial intelligence (AI) and machine learning (ML) are revolutionizing many fields of study, such as visual recognition, natural language processing, autonomous vehicles, and prediction. The traditional von Neumann computing architecture, with separate processing elements and memory devices, has improved its computing performance rapidly with the scaling of process technology. However, in the era of AI and ML, data transfer between memory devices and processing elements has become the bottleneck of the system. To address this data movement issue, memory-centric computing merges memory devices with processing elements so that computations can be performed in place, without moving the data. Processing-in-Memory (PIM) has attracted the research community's attention because it can substantially improve the energy efficiency of memory-centric computing systems by minimizing data movement. Even though the benefits of PIM are well accepted, its limitations and challenges have not been investigated thoroughly. This paper presents a comprehensive investigation of state-of-the-art PIM research based on various memory device types, such as static random-access memory (SRAM), dynamic random-access memory (DRAM), and resistive memory (ReRAM). We present an overview of PIM designs for each memory type, covering bit cells, circuits, and architectures. Then, a new software stack standard and the challenges of incorporating PIM into the conventional computing architecture are discussed. Finally, we discuss future research directions in PIM for further reducing the data conversion overhead, improving test accuracy, and minimizing intra-memory data movement.
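
To make the in-place computation and the data-conversion overhead mentioned above concrete, the following is a minimal conceptual sketch, not taken from the paper: it models a PIM crossbar that performs a matrix-vector multiply inside the memory array, with an ADC quantizing the analog bitline sums. The array size, bit widths, ternary weight encoding, and uniform ADC model are illustrative assumptions, not specifications of any surveyed design.

```
# Conceptual PIM crossbar model (illustrative assumptions, not from the paper).
import numpy as np

def pim_crossbar_mvm(weights, inputs, adc_bits=6):
    """Matrix-vector multiply computed 'in the array': each column accumulates
    input * cell-weight products on its bitline, then an ADC digitizes the sum."""
    analog_sums = inputs @ weights                      # idealized in-array accumulation
    full_scale = np.abs(analog_sums).max() or 1.0       # assumed full-scale range of the ADC
    step = 2.0 * full_scale / (2 ** adc_bits)
    return np.round(analog_sums / step) * step          # quantization = data conversion overhead

rng = np.random.default_rng(0)
W = rng.integers(-1, 2, size=(128, 64)).astype(float)   # ternary weights stored in the cells
x = rng.integers(0, 16, size=128).astype(float)         # 4-bit activations driven on wordlines
exact = x @ W
approx = pim_crossbar_mvm(W, x, adc_bits=6)
print("max quantization error:", np.abs(exact - approx).max())
```

In this toy model, the weight matrix never leaves the array; only the low-dimensional, ADC-quantized column sums are read out, which is the mechanism by which PIM reduces data movement at the cost of conversion error.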

Updated: 2022-03-17