Software-Supported Audits of Decision-Making Systems: Testing Google and Facebook's Political Advertising Policies
arXiv - CS - Human-Computer Interaction, Pub Date: 2021-02-26, DOI: arxiv-2103.00064
J. Nathan Matias, Austin Hounsel, Nick Feamster

How can society understand and hold accountable complex human and algorithmic decision-making systems whose systematic errors are opaque to the outside? These systems routinely make decisions on individual rights and well-being, and on protecting society and the democratic process. Practical and statistical constraints on external audits can lead researchers to miss important sources of error in these complex decision-making systems. In this paper, we design and implement a software-supported approach to audit studies that auto-generates audit materials and coordinates volunteer activity. We implemented this software in the case of political advertising policies enacted by Facebook and Google during the 2018 U.S. election. Guided by this software, a team of volunteers posted 477 auto-generated ads and analyzed the companies' actions, finding systematic errors in how companies enforced policies. We find that software can overcome some common constraints of audit studies, within limitations related to sample size and volunteer capacity.
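The abstract describes software that auto-generates audit materials (ads) and coordinates volunteer activity. A minimal sketch of what such a pipeline might look like is below; the templates, item categories, volunteer names, and assignment scheme are all illustrative assumptions, not the authors' actual implementation.

```python
import itertools

# Hypothetical sketch: auto-generate audit ads by crossing content templates
# with subject items, then distribute the resulting ads among volunteers.
# All names and categories here are illustrative, not from the paper.

TEMPLATES = [
    "Buy {item} today!",               # non-political control framing
    "Vote for better {item} policy!",  # politically-framed variant
]
ITEMS = ["coffee", "bicycles", "healthcare"]
VOLUNTEERS = ["vol_a", "vol_b", "vol_c"]

def generate_ads():
    """Cross every template with every item to produce the ad variants."""
    return [t.format(item=i) for t, i in itertools.product(TEMPLATES, ITEMS)]

def assign_round_robin(ads, volunteers):
    """Coordinate volunteers by distributing ads evenly among them."""
    assignments = {v: [] for v in volunteers}
    for n, ad in enumerate(ads):
        assignments[volunteers[n % len(volunteers)]].append(ad)
    return assignments

ads = generate_ads()
plan = assign_round_robin(ads, VOLUNTEERS)
```

A design like this lets the software, rather than each volunteer, control which ad variants exist and who posts them, which is what makes systematic comparison across platforms and policies feasible.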

Updated: 2021-03-02