Augmented low-rank methods for Gaussian process regression
Applied Intelligence (IF 5.3), Pub Date: 2021-05-19, DOI: 10.1007/s10489-021-02481-5
Emil Thomas , Vivek Sarin

This paper presents techniques to improve the prediction accuracy of approximation methods used in Gaussian process regression models. Conventional methods such as the Nyström and subset-of-data methods rely on low-rank approximations to the kernel matrix derived from a set of representative data points. Prediction accuracy suffers when the number of representative points is small or when the length scale is small. The techniques proposed here augment the set of representative points with neighbors of each test input to improve accuracy. Our approach leverages the general structure of the problem through the low-rank approximation and further improves accuracy by exploiting locality at each test input. Computations involving the neighbor points are cast as updates to the base approximation, resulting in significant computational savings. To ensure numerical stability, prediction is done via orthogonal projection onto the subspace of the kernel approximation derived from the augmented set. Experiments on synthetic and real datasets show that our approach is robust with respect to changes in length scale and matches the prediction accuracy of the full kernel matrix while using fewer points for kernel approximation. This results in faster and more accurate predictions compared to conventional methods.
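To make the idea concrete, below is a minimal NumPy sketch of the two ingredients the abstract describes: a Nyström-style (subset-of-regressors) predictive mean built from a set of representative points, and a variant that augments that set with the nearest training neighbors of each test input. This is not the authors' implementation; the RBF kernel, the neighbor count q, the noise and jitter values, and the plain recomputation of the augmented prediction are illustrative assumptions. The paper instead applies the neighbor points as low-rank updates to the base approximation and predicts by orthogonal projection onto the augmented subspace for numerical stability.

```python
# Minimal sketch (illustrative only, not the paper's algorithm):
# Nystrom / subset-of-regressors GP prediction, plus a per-test-point variant
# that augments the representative set with the test input's nearest neighbors.
import numpy as np


def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel matrix k(A, B) for row-wise inputs."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)


def nystrom_predict(X, y, Z, x_star, noise=1e-2, length_scale=1.0, jitter=1e-6):
    """Subset-of-regressors predictive mean at x_star using representative points Z:
    mu(x*) = k(x*, Z) (K_mn K_nm + noise * K_mm)^{-1} K_mn y."""
    Knm = rbf_kernel(X, Z, length_scale)                   # n x m cross-kernel
    Kmm = rbf_kernel(Z, Z, length_scale)                   # m x m
    A = Knm.T @ Knm + noise * Kmm + jitter * np.eye(len(Z))
    w = np.linalg.solve(A, Knm.T @ y)                      # m weights
    return rbf_kernel(x_star[None, :], Z, length_scale) @ w


def augmented_predict(X, y, Z, x_star, q=10, **kw):
    """Augment Z with the q nearest training points to x_star, then predict.
    The paper casts this as a low-rank update and projects orthogonally;
    here we simply recompute with the augmented set for clarity."""
    dist = np.linalg.norm(X - x_star, axis=1)
    neighbors = X[np.argsort(dist)[:q]]
    Z_aug = np.vstack([Z, neighbors])
    return nystrom_predict(X, y, Z_aug, x_star, **kw)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(500)
    Z = X[rng.choice(500, size=20, replace=False)]         # representative points
    x_star = np.array([0.7])
    print("base:     ", nystrom_predict(X, y, Z, x_star, length_scale=0.3))
    print("augmented:", augmented_predict(X, y, Z, x_star, length_scale=0.3))
    print("truth:    ", np.sin(3 * 0.7))
```

In this sketch the augmented prediction rebuilds the m x m system for every test point, which is the expensive step the paper avoids by expressing the neighbor contribution as an update to the base low-rank factorization.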



Updated: 2021-05-19