Evolving artificial neural networks with feedback.
Neural Networks (IF 6.0) Pub Date: 2019-12-14, DOI: 10.1016/j.neunet.2019.12.004
Sebastian Herzog, Christian Tetzlaff, Florentin Wörgötter

Neural networks in the brain are dominated by feedback connections, which sometimes make up more than 60% of all connections and most often have small synaptic weights. In contrast, little is known about how to introduce feedback into artificial neural networks. Here we use transfer entropy in the feed-forward paths of deep networks to identify feedback candidates between the convolutional layers and determine their final synaptic weights using genetic programming. This adds about 70% more connections to these layers, all with very small weights. Nonetheless, performance improves substantially on different standard benchmark tasks and in different networks. To verify that this effect is generic, we use 36,000 configurations of small (2-10 hidden layers) conventional neural networks in a non-linear classification task and select the best-performing feed-forward nets. We then show that feedback reduces the total entropy in these networks, always leading to a performance increase. This method may thus supplement standard techniques (e.g., error backpropagation), adding a new quality to network learning.
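The first step of the method scores directed information flow between layers. As a rough, self-contained illustration of such a transfer-entropy measurement (a minimal sketch only: the histogram binning, the history length of one, and the name transfer_entropy are assumptions, not the authors' actual estimator), consider:

```python
import numpy as np

def transfer_entropy(source, target, bins=8):
    """Estimate TE(source -> target) in bits for two 1-D signals,
    using simple histogram binning and a history length of 1.
    A toy sketch; the paper's estimator may differ."""
    # Discretize both signals into `bins` states via interior bin edges.
    s = np.digitize(source, np.histogram_bin_edges(source, bins)[1:-1])
    t = np.digitize(target, np.histogram_bin_edges(target, bins)[1:-1])
    x_t, y_t, y_next = s[:-1], t[:-1], t[1:]

    # Joint distribution over (y_{t+1}, y_t, x_t).
    joint = np.zeros((bins, bins, bins))
    for a, b, c in zip(y_next, y_t, x_t):
        joint[a, b, c] += 1
    joint /= joint.sum()

    p_yy = joint.sum(axis=2)   # p(y_{t+1}, y_t)
    p_yx = joint.sum(axis=0)   # p(y_t, x_t)
    p_y = p_yy.sum(axis=0)     # p(y_t)

    # TE = sum p(y+, y, x) * log2[ p(y+ | y, x) / p(y+ | y) ]
    te = 0.0
    for a in range(bins):
        for b in range(bins):
            for c in range(bins):
                p = joint[a, b, c]
                if p > 0 and p_yy[a, b] > 0 and p_yx[b, c] > 0:
                    te += p * np.log2(p * p_y[b] / (p_yy[a, b] * p_yx[b, c]))
    return te
```

Applied to activation traces of two convolutional layers recorded over many inputs, layer pairs with high transfer entropy along the feed-forward path would be kept as candidates for a new feedback connection.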

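The second step, fixing the weights of the selected feedback connections, is done with genetic programming in the paper. The following toy evolutionary search is a stand-in sketch under stated assumptions (the (mu+lambda)-style loop, the population sizes, and the name evolve_feedback_weights are illustrative; the real fitness function would be the validation performance of the network with the candidate feedback connections inserted):

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve_feedback_weights(fitness, n_weights, pop_size=20,
                            generations=50, init_scale=0.01, mut_scale=0.005):
    """Evolve a vector of small feedback weights to maximize `fitness`.
    Simple (mu + lambda)-style search: keep the best half, mutate it."""
    pop = init_scale * rng.standard_normal((pop_size, n_weights))
    for _ in range(generations):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # best half
        children = parents + mut_scale * rng.standard_normal(parents.shape)
        pop = np.concatenate([parents, children])
    scores = np.array([fitness(w) for w in pop])
    return pop[np.argmax(scores)]

# Toy fitness: prefer weights close to a small hidden target, a stand-in
# for validation accuracy of the feedback-augmented network.
target = 0.01 * rng.standard_normal(5)
best = evolve_feedback_weights(lambda w: -np.sum((w - target) ** 2), n_weights=5)
```

Note that the initialization and mutation scales are deliberately kept small, consistent with the abstract's observation that the added feedback connections all carry very small weights.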
Updated: 2019-12-17