Learning with Interpretable Structure from Gated RNN
arXiv - CS - Neural and Evolutionary Computing Pub Date : 2018-10-25 , DOI: arxiv-1810.10708
Bo-Jian Hou and Zhi-Hua Zhou

The interpretability of deep learning models has attracted increasing attention in recent years. It would be beneficial if we could learn an interpretable structure from deep learning models. In this paper, we focus on Recurrent Neural Networks (RNNs), especially gated RNNs, whose inner mechanism is still not clearly understood. We find that a Finite State Automaton (FSA), which processes sequential data, has a more interpretable inner mechanism according to the definition of interpretability, and can be learned from an RNN as the interpretable structure. We propose two methods to learn FSA from RNN, based on two different clustering methods. With the learned FSA, and via experiments on artificial and real datasets, we find that the FSA is more trustworthy than the RNN from which it is learned, which gives FSA a chance to substitute for RNNs in applications involving human lives or dangerous facilities. Besides, we analyze how the number of gates affects the performance of RNNs. Our results suggest that gates in RNNs are important, but that fewer is better, which could serve as guidance for designing other RNNs. Finally, we observe that the FSA learned from an RNN yields semantically aggregated states, and its transition graph shows a very interesting picture of how RNNs intrinsically handle text classification tasks.
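The core pipeline the abstract describes, extracting an FSA from a gated RNN by clustering hidden states and counting transitions, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the recurrent cell here is a toy randomly initialized single-gate cell (a stand-in for a trained GRU/LSTM), the clustering is a plain k-means (the paper proposes two specific clustering methods), and all names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-gate recurrent cell with fixed random weights.
# Hypothetical stand-in for a trained gated RNN.
HIDDEN, ALPHABET = 4, 2
W_h = rng.normal(scale=0.5, size=(HIDDEN, HIDDEN))
W_x = rng.normal(scale=0.5, size=(HIDDEN, ALPHABET))

def step(h, x):
    pre = W_h @ h + W_x @ x
    gate = 1.0 / (1.0 + np.exp(-pre))   # update gate
    cand = np.tanh(pre)                  # candidate state
    return gate * cand + (1.0 - gate) * h

def one_hot(symbol):
    v = np.zeros(ALPHABET)
    v[symbol] = 1.0
    return v

# 1) Run the RNN over input sequences, recording (h_prev, input, h_next).
sequences = rng.integers(0, ALPHABET, size=(20, 10))
records = []
for seq in sequences:
    h = np.zeros(HIDDEN)
    for symbol in seq:
        h_next = step(h, one_hot(symbol))
        records.append((h.copy(), int(symbol), h_next.copy()))
        h = h_next

# 2) Cluster the observed hidden states; each cluster becomes one FSA state.
def kmeans(points, k, iters=50):
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers

K = 3  # number of FSA states (hypothetical choice)
centers = kmeans(np.array([r[2] for r in records]), K)

def fsa_state(h):
    return int(((centers - h) ** 2).sum(-1).argmin())

# 3) Count (state, input) -> next-state transitions; majority vote gives edges.
counts = np.zeros((K, ALPHABET, K))
for h_prev, symbol, h_next in records:
    counts[fsa_state(h_prev), symbol, fsa_state(h_next)] += 1
fsa = counts.argmax(axis=2)  # fsa[s, x] = successor state of s on input x
print(fsa)
```

The resulting `fsa` table is a deterministic transition function over the clustered states; on a trained network, its transition graph is what can then be inspected for semantic structure.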

Updated: 2020-01-15