Statistical generalization performance guarantee for meta-learning with data dependent prior
Neurocomputing (IF 6) Pub Date: 2021-09-13, DOI: 10.1016/j.neucom.2021.09.018
Tianyu Liu, Jie Lu, Zheng Yan, Guangquan Zhang

Meta-learning aims to leverage experience from previous tasks to achieve effective and fast adaptation when encountering new tasks. However, it is unclear how well such learned knowledge generalizes to new tasks. Probably approximately correct (PAC) Bayes bound theory provides a theoretical framework for analyzing the generalization performance of meta-learning, with an explicit numerical upper bound on the generalization error. A tighter upper bound may yield better generalization performance. However, in existing PAC-Bayes meta-learning bounds, the prior distribution is selected randomly, which results in poor generalization performance.
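For context, one standard single-task form of the bound family the paper builds on is the McAllester-style PAC-Bayes bound (a general illustration; the specific meta-learning bounds derived in the paper extend beyond this form). For any prior $P$ fixed before observing the $m$-sample $S$, with probability at least $1-\delta$, every posterior $Q$ satisfies:

```latex
\mathbb{E}_{h \sim Q}\!\left[L(h)\right]
\;\le\;
\mathbb{E}_{h \sim Q}\!\left[\hat{L}_S(h)\right]
+ \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{m}}{\delta}}{2m}}
```

The complexity term grows with $\mathrm{KL}(Q \,\|\, P)$, which is why the choice of prior directly controls how tight the bound is.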

In this paper, we derive three novel generalization error upper bounds for meta-learning based on the PAC-Bayes relative entropy bound. Furthermore, to avoid a randomly selected prior distribution, a data-dependent prior for the PAC-Bayes meta-learning bound algorithm is developed based on the empirical risk minimization (ERM) method, and its sample complexity and computational complexity are analyzed. Experiments illustrate that the three proposed PAC-Bayes bounds for meta-learning achieve a competitive generalization guarantee, and that the extended PAC-Bayes bound with a data-dependent prior achieves rapid convergence.
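To see why an ERM-based data-dependent prior can tighten a PAC-Bayes bound, consider this minimal sketch. It is not the paper's algorithm: the Gaussian posterior/prior families, the McAllester-style bound form, the data split, and all variable names are assumptions made purely for illustration. The idea shown is that a prior centred at an ERM solution (fit on a separate data split) sits much closer to the posterior than a data-free prior, shrinking the KL term and hence the bound.

```python
import numpy as np

def kl_gaussians(mu_q, sig_q, mu_p, sig_p):
    """KL(Q || P) between diagonal Gaussians N(mu_q, sig_q^2) and N(mu_p, sig_p^2)."""
    return 0.5 * np.sum(
        (sig_q**2 + (mu_q - mu_p)**2) / sig_p**2 - 1.0
        + 2.0 * np.log(sig_p / sig_q)
    )

def pac_bayes_bound(emp_risk, kl, m, delta=0.05):
    """McAllester-style bound: empirical risk + complexity term from KL and m."""
    return emp_risk + np.sqrt((kl + np.log(2.0 * np.sqrt(m) / delta)) / (2.0 * m))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)

# Data-dependent prior: fit the prior mean by ERM on one half of the data;
# the bound is then evaluated on the other half only (m = 100), so the
# prior remains "fixed" with respect to the bound's sample.
X_prior, y_prior = X[:100], y[:100]
w_erm, *_ = np.linalg.lstsq(X_prior, y_prior, rcond=None)

sig = np.full(5, 0.1)
mu_q = w_erm + 0.01            # hypothetical posterior mean, near the ERM solution

kl_data = kl_gaussians(mu_q, sig, w_erm, sig)         # ERM-centred prior
kl_fixed = kl_gaussians(mu_q, sig, np.zeros(5), sig)  # data-free prior at the origin

emp_risk = 0.1                 # placeholder empirical risk in [0, 1]
b_data = pac_bayes_bound(emp_risk, kl_data, m=100)
b_fixed = pac_bayes_bound(emp_risk, kl_fixed, m=100)
# The ERM-centred prior gives a much smaller KL term, hence a tighter bound.
```

The same mechanism motivates the paper's data-dependent prior at the meta level: spending part of the data to position the prior buys a smaller KL term in the bound evaluated on the rest.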




Updated: 2021-09-22