ItNet: iterative neural networks with tiny graphs for accurate and efficient anytime prediction
arXiv - CS - Computer Vision and Pattern Recognition. Pub Date: 2021-01-21, DOI: arxiv-2101.08685
Thomas Pfeil

Deep neural networks usually have to be compressed and accelerated for use on low-power, e.g. mobile, devices. Recently, massively parallel hardware accelerators have been developed that offer high throughput and low latency at low power by utilizing in-memory computation. However, to exploit these benefits, the computational graph of a neural network has to fit into the in-computation memory of these hardware systems, which is usually rather limited in size. In this study, we introduce a class of network models that have a tiny memory footprint in terms of their computational graphs. To this end, the graph is designed to contain loops by iteratively executing a single network building block. Furthermore, the trade-off between accuracy and latency of these so-called iterative neural networks is improved by adding multiple intermediate outputs, both during training and inference. We show state-of-the-art results for semantic segmentation on the CamVid and Cityscapes datasets, which are especially demanding in terms of computational resources. In ablation studies, we investigate the improvement of network training by intermediate network outputs as well as the trade-off between weight sharing over iterations and network size.
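The core idea, iteratively reusing a single weight-shared building block and emitting a prediction after each iteration, can be illustrated with a minimal sketch. The block, head, and function names below are hypothetical stand-ins (a toy linear-plus-ReLU block on 1-D features), not the paper's actual architecture or code:

```python
def block(x, w, b):
    """One shared building block: linear map + ReLU, reused every iteration."""
    return [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(w, b)]

def head(x):
    """Tiny output head: here simply the mean activation as a stand-in prediction."""
    return sum(x) / len(x)

def iterative_forward(x, w, b, iterations=4):
    """Apply the same block repeatedly; emit a prediction after each iteration.

    Because the weights (w, b) are shared across iterations, the computational
    graph contains only one copy of the block -- the "tiny graph" the paper
    targets -- and inference can stop after any intermediate output, which is
    what enables anytime prediction.
    """
    outputs = []
    for _ in range(iterations):
        x = block(x, w, b)
        outputs.append(head(x))  # intermediate output; also supervised during training
    return outputs

# Usage: identity weights and zero bias keep the demo deterministic.
w = [[1.0, 0.0], [0.0, 1.0]]
b = [0.0, 0.0]
preds = iterative_forward([1.0, 2.0], w, b, iterations=3)
print(preds)  # one prediction per iteration
```

During training, a loss would be attached to every element of `outputs`, so that early iterations are also pushed toward accurate predictions rather than only the final one.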

Updated: 2021-01-22