Machine Learning Security in Industry: A Quantitative Survey
IEEE Transactions on Information Forensics and Security (IF 6.3) Pub Date: 2023-03-02, DOI: 10.1109/tifs.2023.3251842
Kathrin Grosse, Lukas Bieringer, Tarek R. Besold, Battista Biggio, Katharina Krombholz

Despite the large body of academic work on machine learning security, little is known about the occurrence of attacks on machine learning systems in the wild. In this paper, we report on a quantitative study with 139 industrial practitioners. We analyze attack occurrence and concern, and evaluate statistical hypotheses on factors influencing threat perception and exposure. Our results shed light on real-world attacks on deployed machine learning. On the organizational level, while we find no predictors for threat exposure in our sample, the number of implemented defenses depends on exposure to threats or on the expected likelihood of becoming a target. We also provide a detailed analysis of practitioners’ replies on the relevance of individual machine learning attacks, unveiling complex concerns like unreliable decision making, business information leakage, and bias introduction into models. Finally, we find that on the individual level, prior knowledge about machine learning security influences threat perception. Our work paves the way for more research on adversarial machine learning in practice, and also yields insights for regulation and auditing.
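As an illustration of the kind of statistical hypothesis evaluation the abstract describes, the sketch below tests whether prior machine-learning-security knowledge is associated with threat perception, using a chi-square test of independence. This is a hypothetical reconstruction: the counts are invented and the choice of test is an assumption, not necessarily the procedure used in the paper.

    # Hypothetical sketch of a hypothesis test in the spirit of the study:
    # does prior knowledge about ML security relate to threat perception?
    # The contingency table is invented for illustration and does NOT
    # reproduce the study's data.
    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: no / some prior ML-security knowledge.
    # Columns: does not / does perceive ML attacks as a relevant threat.
    table = np.array([
        [40, 25],   # no prior knowledge
        [20, 54],   # prior knowledge
    ])

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
    # A small p-value would suggest rejecting independence, i.e. that
    # knowledge and threat perception are associated in this sample.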

Updated: 2024-08-28