A Critical Evaluation of Open-World Machine Learning
arXiv - CS - Cryptography and Security | Pub Date: 2020-07-08 | DOI: arxiv-2007.04391
Liwei Song, Vikash Sehwag, Arjun Nitin Bhagoji, Prateek Mittal

Open-world machine learning (ML) combines closed-world models trained on in-distribution data with out-of-distribution (OOD) detectors, which aim to detect and reject OOD inputs. Previous work on open-world ML systems usually fails to test their reliability under diverse, and possibly adversarial, conditions. Therefore, in this paper, we seek to understand how resilient state-of-the-art open-world ML systems are to changes in system components. With our evaluation across 6 OOD detectors, we find that the choice of in-distribution data, model architecture, and OOD data has a strong impact on OOD detection performance, inducing false positive rates in excess of $70\%$. We further show that OOD inputs with 22 unintentional corruptions or adversarial perturbations render open-world ML systems unusable, with false positive rates of up to $100\%$. To increase the resilience of open-world ML, we combine robust classifiers with OOD detection techniques and uncover a new trade-off between OOD detection and robustness.
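As an illustration of the open-world pipeline described in the abstract, the sketch below wraps a closed-world classifier with a simple confidence-threshold OOD detector based on the maximum softmax probability. This particular detector, the PyTorch framing, and the threshold value are illustrative assumptions for exposition, not the paper's exact setup (the paper evaluates 6 different OOD detectors).

# Minimal sketch of an open-world ML pipeline: a closed-world classifier
# combined with an OOD detector that rejects low-confidence inputs.
# The maximum-softmax-probability score and the threshold value are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn.functional as F

def open_world_predict(model: torch.nn.Module,
                       x: torch.Tensor,
                       threshold: float = 0.9):
    """Classify a single input (batch of size 1); return the predicted
    class index, or None if the input is rejected as OOD."""
    model.eval()
    with torch.no_grad():
        logits = model(x)                      # closed-world classifier
        probs = F.softmax(logits, dim=-1)
        confidence, pred = probs.max(dim=-1)   # max softmax probability as OOD score
    if confidence.item() < threshold:
        return None                            # reject: flagged as out-of-distribution
    return pred.item()

In practice, the rejection threshold is usually calibrated on held-out in-distribution data (for example, so that roughly 95% of in-distribution inputs are accepted) rather than fixed by hand as in this sketch.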

Updated: 2020-07-10