Embedding Learning
Journal of the American Statistical Association (IF 3.7), Pub Date: 2020-07-20, DOI: 10.1080/01621459.2020.1775614
Ben Dai, Xiaotong Shen, Junhui Wang

Abstract

Numerical embedding has become a standard technique for processing and analyzing unstructured data that cannot be expressed in a predefined fashion. It stores the main characteristics of the data by mapping it onto a numerical vector. An embedding is often unsupervised and constructed by transfer learning from large-scale unannotated data. Given an embedding, a downstream learning method, referred to as a two-stage method, is applicable to unstructured data. In this article, we introduce a novel framework of embedding learning that delivers higher learning accuracy than the two-stage method while identifying an optimal learning-adaptive embedding. In particular, we propose the concept of U-minimal sufficient learning-adaptive embeddings, based on which we seek an optimal embedding that maximizes the learning accuracy subject to an embedding constraint. Moreover, when specializing the general framework to classification, we derive a graph embedding classifier based on a hyperlink tensor representing multiple hypergraphs, directed or undirected, that characterize multiway relations of unstructured data. Numerically, we design algorithms based on blockwise coordinate descent and projected gradient descent to implement linear and feed-forward neural network classifiers, respectively. Theoretically, we establish a learning theory to quantify the generalization error of the proposed method. Moreover, we show, in linear regression, that the one-hot encoder is preferable among two-stage methods, yet its dimension restriction hinders its predictive performance. For a graph embedding classifier, the generalization error matches the standard fast rate for linear classification or the parametric rate for nonlinear classification. Finally, we demonstrate the utility of the classifiers on two benchmarks in grammatical classification and sentiment analysis. Supplementary materials for this article are available online.
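To illustrate the contrast the abstract draws between two-stage methods (fix an embedding, then fit a classifier) and embedding learning (fit both jointly under an embedding constraint), the following is a minimal sketch of joint training by projected gradient descent. The squared-loss objective, the synthetic token data, and the unit-ball constraint on embedding rows are all illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

# Hypothetical sketch: jointly learn an embedding matrix E and linear weights w
# by projected gradient descent. Each row of E is projected onto the unit ball,
# standing in for a generic embedding constraint; the data and loss are toy.

rng = np.random.default_rng(0)
V, d, n = 20, 4, 200                          # vocabulary size, embedding dim, sample size
tokens = rng.integers(0, V, size=n)           # unstructured inputs encoded as token ids
y = np.where(tokens < V // 2, 1.0, -1.0)      # labels determined by token identity

E = rng.normal(scale=0.1, size=(V, d))        # embedding matrix, learned jointly
w = np.zeros(d)                               # linear classifier weights

lr = 0.1
for _ in range(300):
    X = E[tokens]                             # n x d embedded representation
    resid = X @ w - y                         # squared-loss residuals
    grad_w = X.T @ resid / n                  # gradient w.r.t. w
    grad_X = np.outer(resid, w) / n           # gradient w.r.t. each embedded row
    w -= lr * grad_w
    np.add.at(E, tokens, -lr * grad_X)        # scatter gradients back to E's rows
    norms = np.linalg.norm(E, axis=1, keepdims=True)
    E /= np.maximum(norms, 1.0)               # project each row onto the unit ball

acc = np.mean(np.sign(E[tokens] @ w) == y)
print(f"training accuracy: {acc:.2f}")
```

Because the embedding is updated together with the classifier, the token representations reorganize to serve the classification task, which is the accuracy advantage over freezing a pretrained embedding that the abstract describes.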




Updated: 2020-07-20