Enabling Large NNs on Tiny MCUs with Swapping
arXiv - CS - Hardware Architecture. Pub Date: 2021-01-14, arXiv: 2101.08744
Hongyu Miao, Felix Xiaozhu Lin

Running neural networks (NNs) on microcontroller units (MCUs) is becoming increasingly important, but it is very difficult due to MCUs' tiny SRAM. Prior work proposes many algorithm-level techniques to reduce NN memory footprints, but all of them sacrifice accuracy and generality, which disqualifies MCUs from many important use cases. We investigate a system solution for MCUs to execute NNs out of core: dynamically swapping NN data chunks between an MCU's tiny SRAM and its large, low-cost external flash. Out-of-core NNs on MCUs raise multiple concerns: execution slowdown, storage wear-out, energy consumption, and data security. We present a study showing that none of them is a showstopper; the key benefit -- MCUs being able to run large NNs with full accuracy and generality -- outweighs the overheads. Our findings suggest that MCUs can play a much greater role in edge intelligence.
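To make the swapping idea concrete, below is a minimal C sketch of executing one fully connected layer out of core: the full weight matrix stays in external flash and is streamed through a small SRAM tile buffer, so SRAM only ever holds one tile plus the activations. All names and sizes here (`flash_read`, `TILE_ROWS`, the simulated flash array) are illustrative assumptions, not the paper's implementation; real firmware would use the vendor's flash HAL and likely DMA double-buffering to overlap transfers with compute.

```c
/* Sketch of out-of-core NN execution on an MCU: weights for one fully
 * connected layer are streamed through a small SRAM tile buffer rather
 * than being SRAM-resident. Names are hypothetical, for illustration. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define IN_DIM    256          /* layer input size                      */
#define OUT_DIM   512          /* layer output size                     */
#define TILE_ROWS 16           /* output rows whose weights fit in SRAM */

/* Simulated external flash holding the full weight matrix (row-major).
 * On hardware this would sit behind the vendor's flash-read API.       */
static int8_t flash_weights[OUT_DIM][IN_DIM];

/* Hypothetical flash-read shim: copy `len` bytes at offset `off`.      */
static void flash_read(void *dst, size_t off, size_t len) {
    memcpy(dst, (const uint8_t *)flash_weights + off, len);
}

/* SRAM-resident working set: one weight tile plus I/O activations
 * (4 KB tile + 256 B input + 2 KB output, far below the full 128 KB
 * weight matrix).                                                       */
static int8_t  tile[TILE_ROWS][IN_DIM];
static int8_t  input[IN_DIM];
static int32_t output[OUT_DIM];

static void fc_layer_out_of_core(void) {
    for (int r0 = 0; r0 < OUT_DIM; r0 += TILE_ROWS) {
        /* Swap in: fetch the next weight tile from flash into SRAM.    */
        flash_read(tile, (size_t)r0 * IN_DIM, sizeof tile);

        /* Compute the partial matrix-vector product for this tile.     */
        for (int r = 0; r < TILE_ROWS; r++) {
            int32_t acc = 0;
            for (int c = 0; c < IN_DIM; c++)
                acc += (int32_t)tile[r][c] * input[c];
            output[r0 + r] = acc;
        }
        /* The tile is now dead; its buffer is reused on the next pass. */
    }
}

int main(void) {
    memset(flash_weights, 1, sizeof flash_weights);
    memset(input, 1, sizeof input);
    fc_layer_out_of_core();
    printf("output[0] = %ld\n", (long)output[0]);  /* prints 256 */
    return 0;
}
```

Note that weights are read-only, so swapping them in causes no flash wear; wear-out (one of the concerns above) arises mainly when intermediate activations too large for SRAM must be written back to flash.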

Updated: 2021-01-22