Locality Guided Neural Networks for Explainable Artificial Intelligence
arXiv - CS - Artificial Intelligence. Pub Date: 2020-07-12, DOI: arxiv-2007.06131
Randy Tan, Naimul Khan, and Ling Guan

In current deep network architectures, deeper layers tend to contain hundreds of independent neurons, which makes it hard for humans to understand how they interact with each other. By organizing the neurons by correlation, humans can observe how clusters of neighbouring neurons interact with one another. In this paper, we propose a novel backpropagation algorithm, called Locality Guided Neural Network (LGNN), for training networks that preserves locality between neighbouring neurons within each layer of a deep network. Heavily motivated by the Self-Organizing Map (SOM), the goal is to enforce a local topology on each layer of a deep network such that neighbouring neurons are highly correlated with each other. This method contributes to the domain of Explainable Artificial Intelligence (XAI), which aims to alleviate the black-box nature of current AI methods and make them understandable by humans. Our method aims to achieve XAI in deep learning without changing the structure of current models and without requiring any post-processing. This paper focuses on Convolutional Neural Networks (CNNs), but the method can theoretically be applied to any type of deep learning architecture. In our experiments, we train various VGG and Wide ResNet (WRN) networks for image classification on CIFAR100. In-depth analyses presenting both qualitative and quantitative results demonstrate that our method is capable of enforcing a topology on each layer while achieving a small increase in classification accuracy.
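To make the core idea concrete, the following is a minimal, illustrative sketch of an SOM-inspired locality penalty on a layer's weight matrix: neighbouring units (here along a simple 1-D topology) are pulled toward similar weight vectors, and the penalty's gradient can be added to the ordinary task gradient during backpropagation. The function names and the exact penalty form are assumptions for illustration, not the paper's actual LGNN formulation.

```python
import numpy as np

def locality_penalty(W):
    """Sum of squared distances between weight vectors of neighbouring
    units (rows of W). Minimising this alongside the task loss pushes
    adjacent neurons toward correlated filters, in the spirit of an
    SOM-style local topology."""
    diffs = W[1:] - W[:-1]          # neighbour differences, shape (n-1, d)
    return float(np.sum(diffs ** 2))

def locality_grad(W):
    """Gradient of locality_penalty w.r.t. W, to be added to the task
    gradient during a backpropagation step."""
    diffs = W[1:] - W[:-1]
    g = np.zeros_like(W)
    g[1:] += 2 * diffs              # d/dw_i of ||w_i - w_{i-1}||^2
    g[:-1] -= 2 * diffs             # d/dw_i of ||w_{i+1} - w_i||^2
    return g

# Toy check: one small gradient step on random weights lowers the penalty.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
before = locality_penalty(W)
W -= 0.05 * locality_grad(W)
after = locality_penalty(W)
print(after < before)  # True
```

In a real training loop this penalty would be weighted against the classification loss, and the 1-D neighbourhood would be replaced by whatever topology (e.g. a 2-D grid over channels) the layer is given.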

Updated: 2020-07-14