Legal Risks of Adversarial Machine Learning Research
arXiv - CS - Computers and Society. Pub Date: 2020-06-29, DOI: arxiv-2006.16179
Ram Shankar Siva Kumar, Jonathon Penney, Bruce Schneier, Kendra Albert

Adversarial machine learning is booming, with ML researchers increasingly targeting commercial ML systems such as those used by Facebook, Tesla, Microsoft, IBM, and Google to demonstrate vulnerabilities. In this paper, we ask, "What are the potential legal risks to adversarial ML researchers when they attack ML systems?" Studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act (CFAA), the primary United States federal statute that creates liability for hacking. We claim that adversarial ML research is likely no different. Our analysis shows that because there is a split in how the CFAA is interpreted, aspects of adversarial ML attacks, such as model inversion, membership inference, model stealing, reprogramming the ML system, and poisoning attacks, may be sanctioned in some jurisdictions and not penalized in others. We conclude with an analysis predicting how the US Supreme Court may resolve some of the present inconsistencies in the CFAA's application in Van Buren v. United States, an appeal expected to be decided in 2021. We argue that the court is likely to adopt a narrow construction of the CFAA, and that this will actually lead to better adversarial ML security outcomes in the long term.

Updated: 2020-06-30