Function Approximation via Sparse Random Features
arXiv - CS - Machine Learning Pub Date: 2021-03-04, DOI: arxiv-2103.03191
Abolfazl Hashemi, Hayden Schaeffer, Robert Shi, Ufuk Topcu, Giang Tran, Rachel Ward

Random feature methods have been successful in various machine learning tasks, are easy to compute, and come with theoretical accuracy bounds. They serve as an alternative approach to standard neural networks since they can represent similar function spaces without a costly training phase. However, for accuracy, random feature methods require more measurements than trainable parameters, limiting their use for data-scarce applications or problems in scientific machine learning. This paper introduces the sparse random feature method, which learns parsimonious random feature models using techniques from compressive sensing. We provide uniform bounds on the approximation error for functions in a reproducing kernel Hilbert space depending on the number of samples and the distribution of features. The error bounds improve with additional structural conditions, such as coordinate sparsity, compact clusters of the spectrum, or rapid spectral decay. We show that the sparse random feature method outperforms shallow networks for well-structured functions and in applications to scientific machine learning tasks.
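The idea described in the abstract — fit a random feature expansion with far fewer measurements than features by enforcing sparsity on the coefficients — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the target function, the Gaussian frequency distribution, and the use of ISTA (proximal gradient for the LASSO) in place of a compressive-sensing basis-pursuit solver are all assumptions made here for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D target function on [0, 1] (an assumption for this sketch).
f = lambda x: np.sin(2 * np.pi * x) + 0.5 * np.cos(6 * np.pi * x)

# Data-scarce regime the abstract refers to: m measurements << N features.
m, N = 40, 200
x = rng.uniform(0.0, 1.0, size=m)
y = f(x)

# Random Fourier features: frequencies drawn from a Gaussian, random phases.
omega = rng.normal(0.0, 8.0, size=N)
b = rng.uniform(0.0, 2 * np.pi, size=N)
A = np.cos(np.outer(x, omega) + b)          # m x N feature matrix

# Sparse coefficients via ISTA: gradient step on the least-squares loss,
# then soft-thresholding, which drives most coefficients to zero.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / spectral-norm^2 step size
c = np.zeros(N)
for _ in range(2000):
    c = c - step * (A.T @ (A @ c - y))      # gradient step
    c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)  # soft threshold

# The learned model keeps only a small subset of the N random features.
active = int(np.sum(np.abs(c) > 1e-6))
print("active features:", active, "of", N)
print("max training residual:", float(np.max(np.abs(A @ c - y))))
```

The sparsity-promoting step is what distinguishes this from ordinary random feature regression: with `lam = 0` the loop reduces to plain least squares over all N features, which is ill-posed when m < N.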

Updated: 2021-03-05