A Deep Non-negative Matrix Factorization Model for Big Data Representation Learning
Frontiers in Neurorobotics ( IF 3.1 ) Pub Date : 2021-06-14 , DOI: 10.3389/fnbot.2021.701194
Zhikui Chen 1 , Shan Jin 1 , Runze Liu 1 , Jianing Zhang 1

Nowadays, deep representations have been attracting much attention owing to their strong performance in various tasks. However, the limited interpretability of deep representations poses a major challenge in real-world applications. To alleviate this challenge, a deep matrix factorization method with non-negative constraints is proposed in this paper to learn interpretable, part-based deep representations for big data. Specifically, a deep architecture is designed with a supervisor network that suppresses noise in the data and a student network that learns interpretable deep representations, forming an end-to-end framework for pattern mining. Furthermore, to train the deep matrix factorization architecture, an interpretability loss is defined, comprising a symmetric loss, an apposition loss, and a non-negative constraint loss, which ensures knowledge transfer from the supervisor network to the student network and enhances the robustness of the deep representations. Finally, extensive experimental results on two benchmark datasets demonstrate the superiority of the deep matrix factorization method.
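The abstract does not specify the supervisor-student architecture or the interpretability loss in detail, but the factorization it builds on is layer-wise deep non-negative matrix factorization, X ≈ W1 W2 … WL HL. As background, here is a minimal sketch of that layer-wise scheme using standard multiplicative updates; the helper names `nmf` and `deep_nmf` are illustrative, not from the paper:

```python
import numpy as np

def nmf(X, k, iters=200, eps=1e-9, seed=0):
    """Plain NMF via Lee-Seung multiplicative updates: X ≈ W @ H, all non-negative."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        # Elementwise updates; eps guards against division by zero.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

def deep_nmf(X, layer_sizes, iters=200):
    """Layer-wise pretraining: factor X ≈ W1 @ W2 @ ... @ WL @ HL
    by repeatedly factorizing the previous layer's coefficient matrix."""
    Ws, H = [], X
    for k in layer_sizes:
        W, H = nmf(H, k, iters=iters)
        Ws.append(W)
    return Ws, H

# Toy non-negative data: 30 samples of dimension 20, two layers of sizes 10 and 5.
rng = np.random.default_rng(1)
X = rng.random((30, 20))
Ws, H = deep_nmf(X, [10, 5])
X_hat = Ws[0] @ Ws[1] @ H          # reconstruction through both layers
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

The deepest coefficient matrix `H` is the part-based representation; non-negativity of every factor is what makes the parts additive and hence interpretable. The paper's method additionally trains such an architecture end to end with its interpretability loss, which this layer-wise sketch does not attempt to reproduce.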

Updated: 2021-06-14