The Structure Transfer Machine Theory and Applications.
IEEE Transactions on Image Processing (IF 10.6), Pub Date: 2019-11-25, DOI: 10.1109/tip.2019.2954178
Baochang Zhang, Wankou Yang, Ze Wang, Lian Zhuo, Jungong Han, Xiantong Zhen

Representation learning is a fundamental but challenging problem, especially when the distribution of the data is unknown. In this paper, we propose a new representation learning method, named the Structure Transfer Machine (STM), which enables the feature-learning process to converge to the representation expectation in a probabilistic way. We show theoretically that this expected value of the representation (the mean) is achievable if the manifold structure can be transferred from the data space to the feature space. The resulting structure regularization term, named the manifold loss, is incorporated into the loss function of a typical deep learning pipeline. The STM architecture is constructed to enforce that the learned deep representation satisfies the intrinsic manifold structure of the data, yielding robust features suited to various application scenarios such as digit recognition, image classification, and object tracking. Compared with state-of-the-art CNN architectures, we achieve better results on several commonly used public benchmarks.
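To make the idea of a structure-regularized objective concrete, the following is a minimal NumPy sketch, not the authors' actual implementation: it approximates "transferring manifold structure" by matching pairwise Gaussian affinities between the data space and the feature space, and adds that mismatch as a manifold-loss term to a task loss. The function names, the affinity choice, and the weighting parameter `lam` are all illustrative assumptions.

```python
import numpy as np

def pairwise_affinity(X, sigma=1.0):
    # Gaussian affinity matrix between the rows of X (one common way
    # to encode local manifold structure; an illustrative choice here).
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

def manifold_loss(features, inputs, sigma=1.0):
    # Penalize disagreement between input-space and feature-space
    # affinities, so learned features preserve the data's structure.
    W_data = pairwise_affinity(inputs.reshape(len(inputs), -1), sigma)
    W_feat = pairwise_affinity(features, sigma)
    return np.mean((W_data - W_feat) ** 2)

def total_loss(task_loss, features, inputs, lam=0.1):
    # Combined objective: task loss (e.g. cross-entropy) plus the
    # structure regularizer, weighted by a hypothetical lam.
    return task_loss + lam * manifold_loss(features, inputs)
```

If the features reproduce the input-space geometry exactly, the regularizer vanishes and only the task loss remains; otherwise the extra term pushes the representation back toward the data's intrinsic structure.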

Updated: 2020-04-22