Learning under p-tampering poisoning attacks
Annals of Mathematics and Artificial Intelligence (IF 1.2), Pub Date: 2019-12-03, DOI: 10.1007/s10472-019-09675-1
Saeed Mahloujifar, Dimitrios I. Diochnos, Mohammad Mahmoody

Recently, Mahloujifar and Mahmoody (Theory of Cryptography Conference '17) studied attacks against learning algorithms using a special case of Valiant's malicious noise, called p-tampering, in which the adversary gets to change any training example with independent probability p but is limited to choosing 'adversarial' examples with correct labels. They obtained p-tampering attacks that increase the error probability in the so-called 'targeted' poisoning model, in which the adversary's goal is to increase the loss of the trained hypothesis over a particular test example. At the heart of their attack was an efficient algorithm for biasing the expected value of any bounded real-output function through p-tampering. In this work, we present new biasing attacks for increasing the expected value of bounded real-valued functions. Our improved biasing attacks directly imply improved p-tampering attacks against learners in the targeted poisoning model. As a bonus, our attacks come with a considerably simpler analysis. We also study the possibility of PAC learning under p-tampering attacks in the non-targeted (a.k.a. indiscriminate) setting, where the adversary's goal is to increase the risk of the generated hypothesis (for a random test example). We show that PAC learning is possible under p-tampering poisoning attacks essentially whenever it is possible in the realizable setting without the attacks. We further show that PAC learning under 'no-mistake' adversarial noise is not possible if the adversary can choose which examples to tamper with (while still limited to a p fraction of them) and substitute them with adversarially chosen ones. Our formal model for such 'bounded-budget' tampering attackers is inspired by the notions of adaptive corruption in cryptography.
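To make the tampering model concrete, the following minimal Python sketch simulates a p-tampering channel over n uniform bits together with a naive greedy tamperer that tries to bias E[f] upward for a bounded function f. Everything here (the choice of f, the uniform honest distribution, the Monte-Carlo greedy rule) is an illustrative assumption; it is not the paper's biasing algorithm, which uses a more careful strategy and comes with provable bias guarantees.

```python
import random


def f(bits):
    """A toy bounded function f: {0,1}^n -> [0,1] (here: fraction of ones)."""
    return sum(bits) / len(bits)


def estimate_cond_exp(prefix, n, num_samples=100, rng=random):
    """Monte-Carlo estimate of E[f] over uniform completions of `prefix`."""
    total = 0.0
    for _ in range(num_samples):
        completion = [rng.randint(0, 1) for _ in range(n - len(prefix))]
        total += f(prefix + completion)
    return total / num_samples


def p_tampered_sample(n, p, rng=random):
    """Draw one n-bit sample through a p-tampering channel.

    Each coordinate is independently handed to the tamperer with
    probability p; otherwise it is drawn from the honest (uniform)
    distribution. This toy tamperer greedily picks the bit value that
    maximizes the estimated conditional expectation of f.
    """
    bits = []
    for _ in range(n):
        if rng.random() < p:  # tampering opportunity for this coordinate
            e0 = estimate_cond_exp(bits + [0], n, rng=rng)
            e1 = estimate_cond_exp(bits + [1], n, rng=rng)
            bits.append(0 if e0 > e1 else 1)
        else:  # honest coordinate
            bits.append(rng.randint(0, 1))
    return bits


if __name__ == "__main__":
    n, p, trials = 10, 0.2, 500
    rng = random.Random(0)
    honest = sum(f([rng.randint(0, 1) for _ in range(n)])
                 for _ in range(trials)) / trials
    tampered = sum(f(p_tampered_sample(n, p, rng=rng))
                   for _ in range(trials)) / trials
    print(f"E[f] honest   ~ {honest:.3f}")    # close to 0.5
    print(f"E[f] tampered ~ {tampered:.3f}")  # pushed above 0.5
```

With p = 0.2 this greedy channel pushes the empirical mean of f from about 0.5 toward 0.6, i.e., roughly the p/2 bias one would expect when a p fraction of the coordinates are fixed to 1; the point of the paper's attacks is to achieve such bias efficiently and provably for arbitrary bounded functions.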

Updated: 2019-12-03