Why Does Regularization Help with Mitigating Poisoning Attacks?
Neural Processing Letters (IF 3.1). Pub Date: 2021-05-25. DOI: 10.1007/s11063-021-10539-1
Farhad Farokhi

We use distributionally-robust optimization for machine learning to mitigate the effect of data-poisoning attacks. We provide performance guarantees for the trained model on the original data (not including the poisoned records) by training the model against the worst-case distribution in a neighbourhood, defined via the Wasserstein distance, of the empirical distribution extracted from the training dataset corrupted by a poisoning attack. We relax the distributionally-robust machine-learning problem by upper-bounding the worst-case fitness with the empirical sample-averaged fitness plus the Lipschitz constant of the fitness function (with respect to the data, for given model parameters), which acts as a regularizer. For regression models, we prove that this regularizer equals the dual norm of the model parameters.
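
A minimal worked version of the bound may help; this is our reconstruction from the abstract, not the paper's exact statement, and the radius $\varepsilon$, the generic loss $\ell$, and the absolute-error regression loss below are illustrative assumptions. If the fitness $\ell(\theta; z)$ is $L(\theta)$-Lipschitz in the data point $z$ for fixed model parameters $\theta$, then Kantorovich–Rubinstein duality for the type-1 Wasserstein distance $W_1$ gives

$$\sup_{Q:\, W_1(Q,\widehat{P}_n)\le\varepsilon}\ \mathbb{E}_{z\sim Q}\bigl[\ell(\theta;z)\bigr] \;\le\; \frac{1}{n}\sum_{i=1}^{n}\ell(\theta;z_i) \;+\; \varepsilon\, L(\theta),$$

where $\widehat{P}_n$ is the empirical distribution of the (possibly poisoned) training data. For a regression loss such as $\ell(\theta;(x,y)) = \lvert y - \theta^\top x\rvert$, with data perturbations measured by a norm $\lVert\cdot\rVert$ on $x$, the Lipschitz constant is $L(\theta) = \lVert\theta\rVert_*$, the dual norm, so the relaxed training problem becomes

$$\min_{\theta}\ \frac{1}{n}\sum_{i=1}^{n}\bigl\lvert y_i - \theta^\top x_i\bigr\rvert \;+\; \varepsilon\,\lVert\theta\rVert_*.$$

Note that the choice of ambient norm dictates the penalty: measuring perturbations in the $\ell_2$ norm yields an $\ell_2$ (ridge-like) regularizer, while the $\ell_\infty$ norm yields an $\ell_1$ (lasso-like) regularizer, since the dual of $\ell_p$ is $\ell_q$ with $1/p + 1/q = 1$.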


