Under the Hood of Neural Networks: Characterizing Learned Representations by Functional Neuron Populations and Network Ablations
arXiv - CS - Neural and Evolutionary Computing. Pub Date: 2020-04-02, DOI: arXiv-2004.01254
Richard Meyes, Constantin Waubert de Puiseau, Andres Posada-Moreno, Tobias Meisen

The need for more transparency in the decision-making processes of artificial neural networks steadily increases, driven by their applications in safety-critical and ethically challenging domains such as autonomous driving or medical diagnostics. We address today's lack of transparency of neural networks and shed light on the roles of single neurons and groups of neurons within the network in fulfilling a learned task. Inspired by research in the field of neuroscience, we characterize the learned representations by activation patterns and network ablations, revealing functional neuron populations that a) act jointly in response to specific stimuli or b) have a similar impact on the network's performance after being ablated. We find that neither a neuron's magnitude nor its selectivity of activation, nor its impact on network performance, is a sufficient stand-alone indicator of its importance for the overall task. We argue that such indicators are essential for future advances in transfer learning and modern neuroscience.
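The core technique the abstract describes, ablating individual neurons and measuring the effect on the network's output, can be illustrated with a minimal sketch. The toy two-layer MLP, its random weights, and the "fraction of changed predictions" impact score below are illustrative assumptions, not the paper's actual networks or metrics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer MLP with random (hypothetical) weights
W1 = rng.normal(size=(4, 8))   # input -> hidden
W2 = rng.normal(size=(8, 2))   # hidden -> output

def forward(X, ablate=None):
    """Forward pass; if `ablate` is set, that hidden neuron is silenced."""
    H = np.maximum(X @ W1, 0.0)          # ReLU hidden activations
    if ablate is not None:
        H[:, ablate] = 0.0               # ablation: zero the neuron's output
    return H @ W2

X = rng.normal(size=(100, 4))            # random stimuli
baseline = forward(X).argmax(axis=1)     # unablated predictions

# Impact of ablating each hidden neuron, measured here as the
# fraction of inputs whose predicted class changes
impact = [(forward(X, ablate=i).argmax(axis=1) != baseline).mean()
          for i in range(8)]
print(impact)
```

Grouping neurons by such ablation impact (or by correlated activation patterns over stimuli) is, in spirit, how the paper identifies functional neuron populations.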

Updated: 2020-05-12