Trajectory-Based Meta-Learning for Out-Of-Vocabulary Word Embedding Learning
arXiv - CS - Computation and Language | Pub Date: 2021-02-24 | DOI: arXiv-2102.12266
Gordon Buck, Andreas Vlachos

Word embedding learning methods require a large number of occurrences of a word to accurately learn its embedding. However, out-of-vocabulary (OOV) words, which do not appear in the training corpus, frequently emerge in smaller downstream datasets. Recent work formulated OOV embedding learning as a few-shot regression problem and demonstrated that meta-learning can improve the results obtained. However, the algorithm used, model-agnostic meta-learning (MAML), is known to be unstable and to perform worse when a large number of gradient steps are used for parameter updates. In this work, we propose the use of Leap, a meta-learning algorithm which leverages the entire trajectory of the learning process instead of just the beginning and the end points, and thus ameliorates these two issues. In our experiments on a benchmark OOV embedding learning dataset and in an extrinsic evaluation, Leap performs comparably to or better than MAML. We go on to examine which contexts are most beneficial to learn an OOV embedding from, and propose that the choice of contexts may matter more than the meta-learning employed.
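As a rough illustration of the distinction the abstract draws, the sketch below builds a toy few-shot "embedding regression" task (the task construction, loss, and hyper-parameters are assumptions for illustration, not the paper's setup) and contrasts the information each meta-learner consumes: a MAML-style meta-update depends only on the end point of inner-loop adaptation, while a Leap-style meta-update shortens the length of the entire adaptation trajectory on the loss surface.

```python
# Minimal sketch (not the authors' code): contrast the meta-learning signal
# used by MAML (final inner-loop point only) with the one used by Leap
# (path length of the whole inner-loop trajectory on the loss surface).
import numpy as np

rng = np.random.default_rng(0)

def make_task(dim=5, shots=4, noise=0.1):
    """Toy stand-in for few-shot embedding regression: recover a hidden
    target vector from a handful of noisy context observations."""
    target = rng.normal(size=dim)
    contexts = target + noise * rng.normal(size=(shots, dim))
    return contexts

def loss_and_grad(theta, contexts):
    diff = theta - contexts.mean(axis=0)
    return 0.5 * float(diff @ diff), diff

def adapt(theta0, contexts, steps=5, lr=0.3):
    """Inner-loop gradient descent; record the full (theta, loss) trajectory."""
    theta = theta0.copy()
    traj = [(theta.copy(), loss_and_grad(theta, contexts)[0])]
    for _ in range(steps):
        _, g = loss_and_grad(theta, contexts)
        theta = theta - lr * g
        traj.append((theta.copy(), loss_and_grad(theta, contexts)[0]))
    return traj

theta0 = rng.normal(size=5)   # shared initialization (the meta-parameters)
contexts = make_task()
traj = adapt(theta0, contexts)

# MAML-style meta-signal: only the loss at the final adapted point.
final_theta, final_loss = traj[-1]

# Leap-style meta-signal: the length of the adaptation path on the loss
# surface, accumulated over every inner step; Leap's meta-update moves the
# initialization so that this expected path length shrinks across tasks.
path_length = sum(
    np.sqrt(np.sum((t1 - t0) ** 2) + (l1 - l0) ** 2)
    for (t0, l0), (t1, l1) in zip(traj[:-1], traj[1:])
)

print(f"final inner-loop loss (MAML's signal): {final_loss:.4f}")
print(f"trajectory path length (Leap's signal): {path_length:.4f}")
```

Because the Leap-style signal sums contributions from every inner step, taking more gradient steps adds information rather than destabilizing the meta-update, which is the intuition behind preferring it when many inner-loop updates are needed.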

Updated: 2021-02-25