“Explaining” machine learning reveals policy challenges
Science (IF 41.845), Pub Date: 2020-06-26, DOI: 10.1126/science.aba9647
Diane Coyle, Adrian Weller

There is a growing demand to be able to “explain” machine learning (ML) systems' decisions and actions to human users, particularly when used in contexts where decisions have substantial implications for those affected and where there is a requirement for political accountability or legal compliance (1). Explainability is often discussed as a technical challenge in designing ML systems and decision procedures, to improve understanding of what is typically a “black box” phenomenon. But some of the most difficult challenges are nontechnical and raise questions about the broader accountability of organizations using ML in their decision-making. One reason for this is that many decisions by ML systems may exhibit bias, as systemic biases in society lead to biases in data used by the systems (2). But there is another reason, less widely appreciated. Because the quantities that ML systems seek to optimize have to be specified by their users, explainable ML will force policy-makers to be more explicit about their objectives, and thus about their values and political choices, exposing policy trade-offs that may have previously only been implicit and obscured. As the use of ML in policy spreads, there may have to be public debate that makes explicit the value judgments or weights to be used. Merely technical approaches to “explaining” ML will often only be effective if the systems are deployed by trustworthy and accountable organizations.
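To make the point about explicit objectives concrete, here is a minimal sketch (not from the article; the function name and the `fairness_weight` parameter are illustrative assumptions) of how the quantity a deployed system optimizes might combine predictive error with a group-parity penalty. Writing the objective down makes the trade-off an explicit, debatable parameter rather than an implicit choice buried in the system.

```python
# Minimal sketch: an explicitly specified objective exposes a policy trade-off.
# `fairness_weight` is a hypothetical policy parameter; choosing its value is a
# value judgment, not a technical detail.
import numpy as np

def policy_objective(predictions, labels, group, fairness_weight=0.5):
    """Combine predictive error with a demographic-parity penalty.

    predictions, labels: arrays of 0/1 outcomes
    group: array of 0/1 group membership
    fairness_weight: explicit weight on the parity term relative to accuracy
    """
    error = np.mean((predictions - labels) ** 2)
    # Parity gap: difference in positive-prediction rates between the two groups
    parity_gap = abs(predictions[group == 1].mean() - predictions[group == 0].mean())
    return error + fairness_weight * parity_gap

# The same predictions score differently under different weights, so the chosen
# weight is where the value judgment becomes visible.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 100)
group = rng.integers(0, 2, 100)
predictions = rng.integers(0, 2, 100)
print(policy_objective(predictions, labels, group, fairness_weight=0.1))
print(policy_objective(predictions, labels, group, fairness_weight=1.0))
```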
Updated: 2020-06-26

 
