Study of Fault Tolerance Methods for Hardware Implementations of Convolutional Neural Networks
Optical Memory and Neural Networks Pub Date : 2019-07-01 , DOI: 10.3103/s1060992x19020103
R. A. Solovyev , A. L. Stempkovsky , D. V. Telpukhov

Abstract

The paper concentrates on fault-protection methods for neural networks implemented in hardware and operating in fixed-point mode. We explore the possible sources of errors as well as ways to eliminate them. For this purpose, networks with an identical architecture based on the VGG model have been studied. The VGG SIMPLE neural network chosen for the experiments is a simplified version (with a smaller number of layers) of the well-known VGG16 and VGG19 networks. To mitigate the effect of failures on network accuracy, we propose a method of training neural networks with additional dropout layers; this approach removes extra dependencies between neighboring perceptrons. We also investigate complicating the network architecture to reduce the probability of misclassification caused by failures in neurons. The experimental results show that adding dropout layers reduces the effect of failures on the classification ability of error-prone neural networks, while classification accuracy remains the same as that of the reference networks.
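As a rough illustration of the fault model discussed in the abstract, the sketch below injects a single bit flip into a weight stored in fixed-point format and compares the resulting output error for a high-order versus a low-order bit. The Q7.8 16-bit format, the toy fully connected layer, and the single-bit transient-fault model are assumptions for illustration; the abstract does not specify the paper's exact number format or fault-injection procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

FRAC_BITS = 8  # assumed Q7.8 format: 8 fractional bits in a 16-bit word

def to_fixed(x):
    """Quantize float weights to 16-bit fixed point (assumed Q7.8 format)."""
    return np.clip(np.round(x * (1 << FRAC_BITS)), -32768, 32767).astype(np.int16)

def from_fixed(q):
    """Convert fixed-point weights back to float for evaluation."""
    return q.astype(np.float32) / (1 << FRAC_BITS)

def flip_bit(q, idx, bit):
    """Inject a single transient fault: flip one bit of one stored weight."""
    q = q.copy()
    q.flat[idx] ^= np.int16(1 << bit)
    return q

# A toy fully connected layer standing in for one layer of a VGG-like network.
w = rng.normal(0.0, 0.5, size=(4, 4)).astype(np.float32)
x = rng.normal(0.0, 1.0, size=4).astype(np.float32)

q = to_fixed(w)
y_ref = from_fixed(q) @ x                    # fault-free fixed-point output
y_hi  = from_fixed(flip_bit(q, 0, 14)) @ x   # flip a high-order bit of w[0, 0]
y_lo  = from_fixed(flip_bit(q, 0, 0)) @ x    # flip the LSB of the same weight

print("high-bit error:", float(np.max(np.abs(y_hi - y_ref))))
print("low-bit error: ", float(np.max(np.abs(y_lo - y_ref))))
```

The asymmetry this exposes (a high-order bit flip perturbs the output by orders of magnitude more than an LSB flip) is what makes unprotected fixed-point hardware sensitive to individual faults, and motivates training-time countermeasures such as the extra dropout layers studied in the paper.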


Updated: 2019-07-01