Deep random walk of unitary invariance for large-scale data representation
Information Sciences Pub Date : 2020-12-01 , DOI: 10.1016/j.ins.2020.11.039
Shiping Wang , Zhaoliang Chen , William Zhu , Fei-Yue Wang

Data representation aims to learn an efficient low-dimensional representation of data, a persistently challenging task in machine learning and computer vision that can largely improve the performance of specific learning tasks. Unsupervised methods, which exploit the internal connections among data, are extensively applied to data representation. However, most existing unsupervised models use a specific norm that favors certain distributions of the input data, so their encouraging performance does not carry over consistently across learning tasks. In this paper, we propose an efficient data representation method for large-scale feature representation problems, in which a deep random walk with unitary invariance is exploited to learn discriminative features. First, data representation is formulated as a deep random walk problem in which unitarily invariant norms are employed to capture diverse beneficial perspectives hidden in the data; this formulation is embedded into a state transition matrix model, where an arbitrary number of transition steps is available for accurate affinity evaluation. Second, the data representation problem is transformed into a high-order matrix factorization task with unitary invariance. Third, a closed-form solution is proved for the formulated problem, which may provide a new perspective for solving high-order matrix factorization problems. Finally, extensive comparative experiments are conducted on publicly available real-world data sets, and the results demonstrate that the proposed method outperforms compared state-of-the-art approaches in data clustering.
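To make the multi-step random walk idea concrete, the following is a minimal sketch, not the paper's actual algorithm: it builds a row-stochastic transition matrix from a hypothetical RBF similarity, accumulates several walk steps with a geometric decay to form a multi-step affinity, and factorizes that affinity with a truncated SVD as a stand-in for the unitarily invariant matrix factorization. The kernel choice, decay weighting, and SVD embedding are illustrative assumptions.

```python
import numpy as np

def multi_step_affinity(X, n_steps=3, decay=0.5):
    """Multi-step random-walk affinity (illustrative sketch)."""
    # Pairwise similarity via an RBF kernel (hypothetical choice;
    # the paper's exact affinity construction may differ).
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0.0)
    S = np.exp(-d2)
    # Row-normalize to obtain a transition matrix P (rows sum to 1).
    P = S / S.sum(axis=1, keepdims=True)
    # Accumulate transitions over several steps: A = sum_t decay^t * P^t,
    # so longer walks contribute with geometrically decreasing weight.
    A = np.zeros_like(P)
    Pt = np.eye(P.shape[0])
    for t in range(1, n_steps + 1):
        Pt = Pt @ P
        A += (decay ** t) * Pt
    return A

def low_dim_representation(A, dim=2):
    """Embed the affinity via truncated SVD (a stand-in for the
    unitarily invariant high-order matrix factorization)."""
    U, s, _ = np.linalg.svd(A)
    return U[:, :dim] * s[:dim]
```

Because each power of a row-stochastic matrix is itself row-stochastic, every row of the accumulated affinity sums to the same geometric series of the decay weights, which makes the construction easy to sanity-check.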




Updated: 2020-12-31