Energy-Efficient CNN Personalized Training by Adaptive Data Reformation
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (IF 2.7) Pub Date: 4-27-2022, DOI: 10.1109/tcad.2022.3170845
Youngbeom Jung, Hyeonuk Kim, Seungkyu Choi, Jaekang Shin, Lee-Sup Kim

To adopt deep neural networks on resource-constrained edge devices, various energy- and memory-efficient embedded accelerators have been proposed. However, although most off-the-shelf networks are well trained on vast amounts of data, unexplored user data or accelerator constraints can lead to unexpected accuracy loss. Therefore, network adaptation tailored to each user and device is essential for making high-confidence predictions in a given environment. We propose simple yet efficient data reformation methods that effectively reduce the communication cost to off-chip memory during adaptation. Our proposal exploits the data's zero-centered distribution and spatial correlation to concentrate sporadically spread bit-level zeros into units of value. Consequently, we reduce the communication volume by up to 55.6% per task, with an area overhead of 0.79%, during personalization training.
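The abstract's key idea — reforming zero-centered, spatially correlated data so that scattered bit-level zeros cluster into whole zero units that a compressor can skip — can be illustrated with a toy sketch. This is not the paper's actual scheme; the random-walk data, delta step, and zig-zag mapping are illustrative assumptions standing in for real feature-map traffic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate one row of spatially correlated, zero-centered 16-bit data:
# a random walk with small steps, so neighboring values are close.
steps = rng.integers(-3, 4, size=1024, dtype=np.int16)
row = np.cumsum(steps, dtype=np.int16)

def zero_bytes(a: np.ndarray) -> int:
    """Count all-zero bytes in the raw in-memory encoding."""
    return int(np.count_nonzero(a.view(np.uint8) == 0))

# Step 1 (spatial correlation): delta-encode against the left neighbor,
# so most values shrink to small magnitudes. Invertible via cumsum.
delta = np.empty_like(row)
delta[0] = row[0]
delta[1:] = row[1:] - row[:-1]

# Step 2 (zero-centered distribution): zig-zag-map signed deltas to
# small unsigned codes, so the high byte of nearly every 16-bit word
# becomes an all-zero byte instead of a sign-extension byte (0xFF).
zigzag = ((delta << 1) ^ (delta >> 15)).view(np.uint16)

print("zero bytes, raw data :", zero_bytes(row))
print("zero bytes, reformed :", zero_bytes(zigzag))
```

After reformation, the zero bits that were sporadically spread across words are concentrated into entire zero bytes, which simple zero-skipping or run-length hardware can elide on the off-chip link — the same effect the paper targets at the accelerator level.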

Updated: 2024-08-26