Extraction of an Explanatory Graph to Interpret a CNN
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 20.8). Pub Date: 2020-05-04. DOI: 10.1109/tpami.2020.2992207
Quanshi Zhang, Xin Wang, Ruiming Cao, Ying Nian Wu, Feng Shi, Song-Chun Zhu

This paper introduces an explanatory graph representation to reveal object parts encoded inside the convolutional layers of a CNN. Given a pre-trained CNN, each filter in a conv-layer usually represents a mixture of object parts. We develop a simple yet effective method to learn an explanatory graph, which automatically disentangles object parts from each filter without any part annotations. Specifically, given the feature map of a filter, we mine neural activations from the feature map that correspond to different object parts. The explanatory graph organizes each mined part as a graph node. Each edge connects two nodes whose corresponding object parts usually co-activate and keep a stable spatial relationship. Experiments show that each graph node consistently represents the same object part across different images, which boosts the transferability of CNN features. When features of object parts were transferred via the explanatory graph to the task of part localization, our method significantly outperformed other approaches.
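To make the pipeline in the abstract concrete, below is a minimal sketch of the intuition: mine peak activations from each filter's feature map as candidate part nodes, then connect filter pairs whose peaks co-activate across images with a stable spatial offset. This is only an illustration under simplifying assumptions, not the authors' learning algorithm; in particular, the paper disentangles multiple parts from a single filter, whereas this sketch keeps one candidate part per filter, and every function name and threshold below is hypothetical.

```python
# A rough, self-contained illustration (NumPy only). All function names,
# thresholds, and the one-part-per-filter simplification are assumptions
# made for this sketch, not the paper's method.
import numpy as np
from itertools import combinations

def mine_peaks(fmap, rel_thresh=0.5):
    """Return (row, col) positions of local maxima in a single filter's
    2-D feature map whose activation exceeds rel_thresh * global max."""
    h, w = fmap.shape
    cutoff = rel_thresh * fmap.max()
    peaks = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if fmap[i, j] >= cutoff and fmap[i, j] == fmap[i-1:i+2, j-1:j+2].max():
                peaks.append((i, j))
    return peaks

def build_explanatory_graph(feature_maps, offset_tol=1.5, min_coact=0.8):
    """feature_maps: array of shape (n_images, n_filters, H, W).
    Nodes: one candidate part per filter (its mean peak position).
    Edges: filter pairs that co-activate in >= min_coact of the images
    with a low-variance (i.e., stable) spatial offset."""
    n_images, n_filters = feature_maps.shape[:2]
    positions = {f: {} for f in range(n_filters)}
    for img in range(n_images):
        for f in range(n_filters):
            peaks = mine_peaks(feature_maps[img, f])
            if peaks:
                # Keep only the strongest peak per image for this sketch.
                best = max(peaks, key=lambda p: feature_maps[img, f][p])
                positions[f][img] = np.array(best, dtype=float)
    nodes = {f: np.mean(list(pos.values()), axis=0)
             for f, pos in positions.items() if pos}
    edges = []
    for f1, f2 in combinations(nodes, 2):
        shared = positions[f1].keys() & positions[f2].keys()
        if len(shared) / n_images < min_coact:
            continue  # the two candidate parts rarely co-activate
        offsets = np.array([positions[f1][i] - positions[f2][i]
                            for i in shared])
        # "Stable spatial relationship" here = low offset variance.
        if offsets.std(axis=0).max() <= offset_tol:
            edges.append((f1, f2, tuple(offsets.mean(axis=0))))
    return nodes, edges

# Toy usage: random maps, so expect many nodes but few stable edges.
fmaps = np.random.rand(16, 8, 14, 14)
nodes, edges = build_explanatory_graph(fmaps)
print(len(nodes), "nodes,", len(edges), "edges")
```

In the paper itself, node assignments and spatial relationships are learned jointly rather than fixed by thresholds; the variance test above merely mirrors the abstract's "stable spatial relationship" criterion.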
