Explainable AI methods in cyber risk management
Quality and Reliability Engineering International (IF 2.2), Pub Date: 2021-06-15, DOI: 10.1002/qre.2939
Paolo Giudici, Emanuela Raffinetti

Artificial intelligence (AI) methods are becoming widespread, especially when data are not sufficient to build classical statistical models, as is the case for cyber risk management. However, when applied to regulated industries, such as energy, finance, and health, AI methods lack explainability. Authorities in charge of validating machine learning models in regulated fields will not accept black-box models unless they are supplemented with methods that explain why certain predictions were obtained and which variables contribute most to those predictions. Recently, Shapley values have been introduced for this purpose: they are model agnostic and powerful, but they are not normalized and, therefore, cannot become a standardized procedure. In this paper, we provide an explainable AI model that embeds Shapley values within a statistical normalization based on Lorenz Zonoids, particularly suited for the ordinal measurement variables that can be collected to assess cyber risk.
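To make the idea concrete, the sketch below illustrates how a Shapley-style decomposition can use the Lorenz Zonoid of a model's fitted values as its value function, so that each feature's contribution is expressed on a normalized scale. This is a minimal illustration under stated assumptions, not the authors' implementation: it uses a linear model as the predictor, computes the univariate Lorenz Zonoid with a Gini-type covariance formula, assumes a nonnegative response, and the toy data at the end are hypothetical.

from itertools import combinations
from math import factorial

import numpy as np
from sklearn.linear_model import LinearRegression

def lorenz_zonoid(values):
    # Gini-type measure: twice the area between the Lorenz curve of
    # `values` and the egalitarian line, via the covariance formula
    # 2 * cov(values, ranks) / (n * mean). Assumes nonnegative values.
    n = len(values)
    ranks = np.argsort(np.argsort(values)) + 1.0  # ranks 1..n
    return 2.0 * np.cov(values, ranks, bias=True)[0, 1] / (n * values.mean())

def shapley_lorenz(X, y):
    # Shapley decomposition in which the value of a feature coalition is
    # the Lorenz Zonoid of the fitted values of a model using only that
    # coalition's features.
    n, p = X.shape

    def coalition_value(subset):
        if not subset:
            return 0.0  # a constant prediction has zero Lorenz Zonoid
        fitted = LinearRegression().fit(X[:, subset], y).predict(X[:, subset])
        return lorenz_zonoid(fitted)

    contributions = np.zeros(p)
    for k in range(p):
        rest = [j for j in range(p) if j != k]
        for size in range(p):
            # Standard Shapley weight for coalitions of this size
            weight = factorial(size) * factorial(p - size - 1) / factorial(p)
            for S in combinations(rest, size):
                marginal = coalition_value(list(S) + [k]) - coalition_value(list(S))
                contributions[k] += weight * marginal
    return contributions

# Hypothetical toy data: feature 0 should receive the largest share.
rng = np.random.default_rng(0)
X = rng.uniform(1.0, 10.0, size=(200, 3))
y = 5.0 + 2.0 * X[:, 0] + X[:, 1] + rng.normal(0.0, 0.5, size=200)
print(shapley_lorenz(X, y))

By the efficiency property of Shapley values, the contributions sum exactly to the Lorenz Zonoid of the full model's fitted values, which is what makes the decomposition comparable across models. Exact computation enumerates all coalitions of the remaining features for each feature, so this brute-force form is only practical for a handful of predictors.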
