Interpret Neural Networks by Extracting Critical Subnetworks.
IEEE Transactions on Image Processing (IF 10.6), Pub Date: 2020-05-14, DOI: 10.1109/tip.2020.2993098
Yulong Wang , Hang Su , Bo Zhang , Xiaolin Hu

In recent years, deep neural networks have achieved excellent performance in many fields of artificial intelligence, and the demands on their interpretability and robustness have grown accordingly. In this paper, we propose to understand the functional mechanism of neural networks by extracting critical subnetworks. Specifically, we define a critical subnetwork as a group of important channels across layers such that, if they were suppressed to zero, the final test performance would deteriorate severely. This perspective not only reveals the layerwise semantic behavior within the model but also yields more accurate visual explanations of the data through attribution methods. Moreover, we propose two adversarial example detection methods based on the properties of sample-specific and class-specific subnetworks, which provide a means of increasing model robustness.
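The core operation behind critical-subnetwork extraction, as described above, is to suppress selected channels to zero and measure how much the output degrades. The sketch below illustrates this on a hypothetical tiny two-layer network (the network, weights, and scoring rule are illustrative assumptions, not the authors' implementation, which operates on trained deep models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny two-layer network: x -> relu(W1 @ x) -> W2 @ h.
# The 8 hidden units play the role of "channels".
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(3, 8))

def forward(x, channel_mask=None):
    h = np.maximum(W1 @ x, 0.0)      # hidden channel activations
    if channel_mask is not None:
        h = h * channel_mask         # suppress selected channels to zero
    return W2 @ h

x = rng.normal(size=4)
full_output = forward(x)

# Score each channel by how much the output changes when that channel
# alone is zeroed; higher score = more critical to this sample.
scores = []
for c in range(8):
    mask = np.ones(8)
    mask[c] = 0.0
    scores.append(float(np.linalg.norm(full_output - forward(x, mask))))

most = int(np.argmax(scores))
least = int(np.argmin(scores))
# By construction, suppressing the most critical channel perturbs the
# output at least as much as suppressing the least critical one.
print(most, least, scores[most] >= scores[least])
```

In the paper's setting the same idea is applied per sample or per class across the layers of a trained network, and the resulting sample-specific and class-specific subnetworks are what the two adversarial-example detectors compare against.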

Updated: 2020-07-03