Explaining the Behavior of Neuron Activations in Deep Neural Networks
Ad Hoc Networks (IF 4.8) Pub Date: 2020-10-23, DOI: 10.1016/j.adhoc.2020.102346
Longwei Wang, Chengfei Wang, Yupeng Li, Rui Wang

Deep neural networks have shown superior performance in various applications, but they are often treated as black boxes in real-world deployments, making their decisions difficult to explain from a human viewpoint. Understanding the behavior of deep neural networks is important both for trusting the decisions they make and for improving their classification accuracy. In this study, information-theoretic analysis is used to investigate the layer-wise behavior of neurons in deep neural networks. The activation patterns of individual neurons in fully connected layers can provide insight into the performance of the network model. Neuron activation behavior is investigated on state-of-the-art classification network models: we study and compare the layer-wise neuron activation patterns in fully connected layers given the same image input, with experiments conducted on various data sets. We find that, in a well-trained classification model, the randomness of the neuron activation pattern decreases with the depth of the fully connected layers; that is, the activation patterns of deeper layers are more stable than those of shallower layers. The results of this study can also help answer the question of how many layers are needed to avoid overfitting in deep neural networks. Corresponding experiments are conducted to validate these assumptions.
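The abstract does not specify the exact estimator the authors use, but the kind of analysis it describes can be illustrated with a short sketch. The Python/PyTorch snippet below (the toy MLP, the random input, and the binary-entropy estimator are all illustrative assumptions, not the paper's setup) measures the mean Shannon entropy of on/off activation patterns in each fully connected layer; on the paper's account, a well-trained classifier should show this entropy falling with layer depth.

# Minimal sketch, assuming binary (fires / does not fire) activation
# patterns and a per-neuron Shannon entropy estimate. Not the authors'
# exact procedure.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy MLP with three fully connected hidden layers (illustrative only).
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

x = torch.randn(1024, 784)  # stand-in for a batch of flattened images

def layer_entropies(model, x):
    """Mean entropy (bits) of binary neuron activations, per ReLU layer."""
    entropies = []
    h = x
    for module in model:
        h = module(h)
        if isinstance(module, nn.ReLU):
            on = (h > 0).float()                       # binary activation pattern
            p = on.mean(dim=0).clamp(1e-6, 1 - 1e-6)   # firing rate per neuron
            ent = -(p * p.log2() + (1 - p) * (1 - p).log2())
            entropies.append(ent.mean().item())
    return entropies

print(layer_entropies(model, x))

With random (untrained) weights, firing rates sit near 0.5, so all layers come out near 1 bit per neuron; the paper's finding is that after training, this randomness is progressively reduced in the deeper fully connected layers.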



Updated: 2020-10-30