Risk Bounds for Robust Deep Learning
arXiv - CS - Neural and Evolutionary Computing Pub Date : 2020-09-14 , DOI: arxiv-2009.06202
Johannes Lederer

It has been observed that certain loss functions can render deep-learning pipelines robust against flaws in the data. In this paper, we support these empirical findings with statistical theory. We especially show that empirical-risk minimization with unbounded, Lipschitz-continuous loss functions, such as the least-absolute deviation loss, Huber loss, Cauchy loss, and Tukey's biweight loss, can provide efficient prediction under minimal assumptions on the data. More generally speaking, our paper provides theoretical evidence for the benefits of robust loss functions in deep learning.
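The robust losses named in the abstract can be sketched in NumPy. This is an illustrative implementation, not code from the paper; the tuning parameters `delta` and `c` (and their defaults) are conventional choices, not values the paper prescribes. Each function maps a residual `r` to a loss value; the least-absolute-deviation, Huber, and Cauchy losses grow at most linearly, which is what makes them Lipschitz-continuous.

```python
import numpy as np

def lad(r):
    # least-absolute-deviation loss |r|; 1-Lipschitz
    return np.abs(r)

def huber(r, delta=1.0):
    # quadratic near zero, linear in the tails; Lipschitz constant delta
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def cauchy(r, c=1.0):
    # (c^2 / 2) * log(1 + (r/c)^2); derivative r / (1 + (r/c)^2) is bounded by c/2
    return 0.5 * c**2 * np.log1p((r / c) ** 2)

def tukey_biweight(r, c=4.685):
    # (c^2 / 6) * (1 - (1 - (r/c)^2)^3) for |r| <= c, constant c^2/6 beyond
    a = np.minimum(np.abs(r) / c, 1.0)
    return (c**2 / 6.0) * (1.0 - (1.0 - a**2) ** 3)
```

Compared with the squared-error loss, whose gradient grows without bound, each of these losses caps the influence a single corrupted observation can exert on the empirical-risk gradient, which is the mechanism behind the robustness the paper analyzes.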

Updated: 2020-09-15