Excitation Dropout: Encouraging Plasticity in Deep Neural Networks
International Journal of Computer Vision (IF 11.6). Pub Date: 2021-01-09. DOI: 10.1007/s11263-020-01422-y
Andrea Zunino, Sarah Adel Bargal, Pietro Morerio, Jianming Zhang, Stan Sclaroff, Vittorio Murino

We propose a guided dropout regularizer for deep networks based on the evidence for a network prediction, defined as the firing of neurons along specific paths. Rather than dropping out neurons uniformly at random as in standard dropout, we use the evidence at each neuron to determine its dropout probability. In essence, we drop out with higher probability those neurons that contribute more to decision making at training time. This approach penalizes high-saliency neurons, i.e. those most relevant to the model's prediction and therefore carrying the strongest evidence. By dropping such high-saliency neurons, the network is forced to learn alternative paths in order to keep minimizing the loss, yielding a plasticity-like behavior that is also characteristic of human brains. Across four image/video recognition benchmarks, we demonstrate better generalization, increased utilization of network neurons, and higher resilience to network compression, measured with several metrics.
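As a rough illustration of the mechanism the abstract describes, the following PyTorch sketch assigns each neuron a dropout probability proportional to its evidence. It is a minimal sketch, not the paper's implementation: the class name EvidenceDropout and the parameter p_base are hypothetical, and evidence is approximated here by normalized activation magnitude rather than the Excitation Backprop signal the paper uses.

```python
import torch
import torch.nn as nn


class EvidenceDropout(nn.Module):
    """Illustrative sketch of evidence-guided dropout (hypothetical class).

    Evidence is approximated by each neuron's activation magnitude,
    normalized per sample; the paper instead derives it with Excitation
    Backprop. Neurons with more evidence get a proportionally higher
    dropout probability.
    """

    def __init__(self, p_base: float = 0.5):
        super().__init__()
        self.p_base = p_base  # target average dropout rate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return x  # inference: identity, as in standard dropout
        # Approximate evidence: activation magnitude, normalized per sample.
        evidence = x.abs()
        evidence = evidence / evidence.sum(dim=1, keepdim=True).clamp_min(1e-12)
        n = x.shape[1]
        # Scale so the mean dropout probability equals p_base; clamp so
        # no neuron is dropped with certainty.
        p_drop = (self.p_base * n * evidence).clamp(0.0, 0.95)
        keep = torch.bernoulli(1.0 - p_drop)
        # Inverted-dropout rescaling keeps the expected activation unchanged.
        return x * keep / (1.0 - p_drop)


# Usage on a batch of fully connected activations:
layer = EvidenceDropout(p_base=0.5)
layer.train()
x = torch.rand(8, 128)
y = layer(x)
```

Note that with uniform evidence (every neuron at 1/n) the per-neuron probability reduces to p_base, so the sketch degenerates to standard dropout; concentrated evidence shifts dropout toward the most salient neurons, which is the behavior the paper exploits to force alternative paths.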

Updated: 2021-01-10