The Interpretable Dictionary in Sparse Coding
arXiv - CS - Artificial Intelligence, Pub Date: 2020-11-24, DOI: arxiv-2011.11805
Edward Kim, Connor Onweller, Andrew O'Brien, Kathleen McCoy

Artificial neural networks (ANNs), specifically deep learning networks, have often been labeled black boxes because the internal representation of the data is not easily interpretable. In our work, we show that an ANN trained using sparse coding under specific sparsity constraints yields a more interpretable model than a standard deep learning model. The dictionary learned by sparse coding can be understood more easily, and the activations of its elements create a selective feature output. We compare and contrast our sparse coding model with an equivalent feedforward convolutional autoencoder trained on the same data. Our results show both qualitative and quantitative benefits in the interpretation of the learned sparse coding dictionary as well as the internal activation representations.
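For context, sparse coding represents each input x as a sparse combination of learned dictionary atoms, typically by minimizing a reconstruction-plus-sparsity objective such as ||x - Da||^2 + lambda*||a||_1. The sketch below is a minimal, generic illustration of this idea using scikit-learn's DictionaryLearning on synthetic data; it is not the authors' implementation, and the data shape, n_components, and alpha values are illustrative assumptions only.

# Minimal sparse coding / dictionary learning sketch with an L1 sparsity penalty.
# Generic illustration only, not the paper's model; sizes and alpha are arbitrary.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(200, 64)              # 200 synthetic 64-dimensional "patches"

dl = DictionaryLearning(
    n_components=96,                # overcomplete dictionary: more atoms than input dims
    alpha=1.0,                      # weight of the L1 sparsity penalty
    transform_algorithm="lasso_lars",
    random_state=0,
)
codes = dl.fit_transform(X)         # sparse activations for each sample
D = dl.components_                  # learned dictionary atoms (one per row)

# Interpretability comes from each input being explained by only a few active atoms.
print("fraction of zero activations:", np.mean(codes == 0))
print("relative reconstruction error:",
      np.linalg.norm(X - codes @ D) / np.linalg.norm(X))

In this sketch, the fraction of zero activations stands in for the sparsity constraint discussed in the abstract, and the dictionary rows play the role of the interpretable elements whose selective activations the paper analyzes.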

Updated: 2020-11-25