Disinformation, Stochastic Harm, and Costly Filtering: A Principal-Agent Analysis of Regulating Social Media Platforms
arXiv - CS - Computer Science and Game Theory. Pub Date: 2021-06-17, DOI: arxiv-2106.09847
Shehroze Khan, James R. Wright

The spread of disinformation on social media platforms such as Facebook is harmful to society. This harm can take the form of a gradual degradation of public discourse; but it can also take the form of sudden dramatic events such as the recent insurrection on Capitol Hill. The platforms themselves are in the best position to prevent the spread of disinformation, as they have the best access to the relevant data and the expertise to use it. However, filtering disinformation is costly, not only because of the expense of implementing filtering algorithms or employing manual filtering effort, but also because removing such highly viral content impacts user growth and thus potential advertising revenue. Since the costs of harmful content are borne by other entities, the platform has no incentive to filter at a socially optimal level. This problem is similar to the problem of environmental regulation, in which the costs of adverse events are not directly borne by a firm, the mitigation effort of a firm is not observable, and the causal link between a harmful consequence and a specific failure is difficult to prove. In the environmental regulation domain, one solution to this issue is to perform costly monitoring to ensure that the firm takes adequate precautions according to a specified rule. However, classifying disinformation is performative, and thus a fixed rule becomes less effective over time. Encoding our domain as a Markov decision process, we demonstrate that no penalty based on a static rule, no matter how large, can incentivize adequate filtering by the platform. Penalties based on an adaptive rule can incentivize optimal effort, but, counterintuitively, only if the regulator sufficiently overreacts to harmful events by requiring a greater-than-optimal level of filtering.
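To make the incentive failure concrete, here is a minimal toy simulation in Python. It is a sketch under assumed dynamics, not the paper's actual MDP: a platform best-responds each period to a penalty rule; the classifier a static rule is defined against decays in effectiveness (performativity); an adaptive rule re-trains the classifier and tightens the required effort after each harmful event. All names and parameter values (COST, BASE_HARM, PENALTY, DECAY, overreact) are illustrative assumptions, not quantities from the paper.

```python
# Toy simulation, NOT the paper's actual model: a platform chooses filtering
# effort each period to minimize its own cost under a regulator's penalty
# rule. All parameter names and values are illustrative assumptions.
import random

COST = 1.0       # platform's marginal cost of filtering effort
BASE_HARM = 0.5  # per-period harm probability with no effective filtering
PENALTY = 50.0   # fine when a harmful event occurs and effort fell below the rule
DECAY = 0.9      # per-period effectiveness decay of the rule's classifier
                 # (performativity: producers adapt to whatever rule is in force)
T = 30           # number of simulated periods
GRID = [i / 20 for i in range(21)]  # candidate effort levels in [0, 1]

def harm_prob(effort, effectiveness):
    """Probability of a harmful event, decreasing in effective filtering."""
    return BASE_HARM * (1.0 - effort * effectiveness)

def best_response(rule, effectiveness):
    """Effort minimizing the platform's expected cost: filtering cost plus
    the expected fine for falling short of the regulator's rule."""
    def expected_cost(e):
        fine = PENALTY if e < rule else 0.0
        return COST * e + harm_prob(e, effectiveness) * fine
    return min(GRID, key=expected_cost)

def simulate(adaptive, overreact=0.0, seed=0):
    rng = random.Random(seed)
    rule, effectiveness, harms = 0.5, 1.0, 0
    for _ in range(T):
        effort = best_response(rule, effectiveness)
        harmful = rng.random() < harm_prob(effort, effectiveness)
        harms += harmful
        # The classifier underlying the current rule goes stale over time.
        effectiveness *= DECAY
        if adaptive and harmful:
            # Adaptive rule: after a harmful event the regulator re-trains
            # the classifier (restoring effectiveness) and "overreacts" by
            # requiring more filtering than before.
            effectiveness = 1.0
            rule = min(1.0, round(rule + overreact, 2))
    return harms

print("harmful events under static rule:  ", simulate(adaptive=False))
print("harmful events under adaptive rule:", simulate(adaptive=True, overreact=0.1))
```

In this sketch, the static-rule platform complies exactly with the stale rule, since the fine never binds once it complies, no matter how large PENALTY is, yet realized harm grows as the classifier decays; under the adaptive, overreacting rule the required effort ratchets up after each harmful event and harm falls. This mirrors, in caricature, the abstract's qualitative claim about static versus adaptive penalties.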

Updated: 2021-06-25