Discriminative deep semi-nonnegative matrix factorization network with similarity maximization for unsupervised feature learning
Pattern Recognition Letters (IF 5.1), Pub Date: 2021-07-02, DOI: 10.1016/j.patrec.2021.06.013
Wei Wang 1, Feiyu Chen 2,3,4, Yongxin Ge 5, Sheng Huang 5, Xiaohong Zhang 5, Dan Yang 5

Deep Semi-NMF (DSN), which learns hierarchical representations by stacking multiple Semi-NMF layers, shows competitive performance in unsupervised data analysis. However, the features learned by DSN often lack representativeness and discriminability. In this paper, we build a novel Deep Semi-NMF network (DSNnet) to address these issues. Specifically, DSNnet consists of multiple fully-connected layers, each of which uses the Smoothly Clipped Absolute Deviation (SCAD) function as its activation. The non-negative hidden features are computed in a forward pass, while the network parameters are updated by stochastic gradient descent. Moreover, to enhance the discriminability of the features, we propose simultaneously minimizing the reconstruction error between the input and the output and maximizing the similarity between the input and the learned features. The proposed similarity measure, which consists of a global geometric similarity and a local pointwise similarity, encourages compactness between similar points and separation between dissimilar points in the feature space, and helps preserve the intrinsic information of the original data. Extensive experiments on several datasets demonstrate the superiority of the proposed approach over state-of-the-art methods.
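To make the described architecture concrete, below is a minimal sketch of a stacked fully-connected network with a SCAD-style non-negative activation, trained with SGD on a reconstruction term plus a similarity-alignment term. Everything here is an assumption for illustration only: the layer sizes, the exact SCAD parameterisation used as an activation, the single linear decoder, and the cosine-Gram reading of "global geometric similarity" are not taken from the paper.

```python
# Illustrative sketch only; layer sizes, the SCAD activation form, the decoder,
# and the similarity term are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def scad_activation(x, lam=0.1, a=3.7):
    """SCAD-style thresholding used as an activation (assumption): soft-threshold
    for small |x|, linear interpolation for medium |x|, identity for large |x|,
    then clamped to keep hidden features non-negative."""
    absx = x.abs()
    soft = torch.sign(x) * torch.clamp(absx - lam, min=0.0)      # |x| <= 2*lam
    mid = ((a - 1) * x - torch.sign(x) * a * lam) / (a - 2)      # 2*lam < |x| <= a*lam
    out = torch.where(absx <= 2 * lam, soft,
                      torch.where(absx <= a * lam, mid, x))
    return out.clamp(min=0.0)


class DSNnetSketch(nn.Module):
    """Stand-in for DSNnet: stacked fully-connected layers with SCAD activations
    and a linear head that reconstructs the input (hypothetical structure)."""
    def __init__(self, dims=(784, 512, 256, 128)):
        super().__init__()
        self.encoder = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1))
        self.decoder = nn.Linear(dims[-1], dims[0])

    def forward(self, x):
        h = x
        for layer in self.encoder:
            h = scad_activation(layer(h))     # non-negative hidden features
        return h, self.decoder(h)


def similarity_loss(x, h):
    """One plausible similarity-maximization term: align the cosine-similarity
    (Gram) structure of the input batch with that of the learned features."""
    sx = F.normalize(x, dim=1) @ F.normalize(x, dim=1).t()
    sh = F.normalize(h, dim=1) @ F.normalize(h, dim=1).t()
    return F.mse_loss(sh, sx)


# Usage sketch: reconstruction + similarity objective, optimized with SGD.
model = DSNnetSketch()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x = torch.rand(32, 784)                       # dummy batch
h, x_rec = model(x)
loss = F.mse_loss(x_rec, x) + 0.1 * similarity_loss(x, h)
loss.backward()
opt.step()
```

The weight on the similarity term (0.1 here) is an arbitrary placeholder; in practice it would trade off reconstruction fidelity against how strongly the feature-space geometry is pulled toward that of the input.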




Updated: 2021-07-12