Automated Trouble: The Role of Algorithmic Selection in Harms on Social Media Platforms
Media and Communication (IF 2.7), Pub Date: 2021-11-18, DOI: 10.17645/mac.v9i4.4062
Florian Saurwein, Charlotte Spencer-Smith

Social media platforms like Facebook, YouTube, and Twitter have become major objects of criticism for reasons such as privacy violations, anticompetitive practices, and interference in public elections. Some of these problems have been associated with algorithms, but the roles that algorithms play in the emergence of different harms have not yet been systematically explored. This article contributes to closing this research gap with an investigation of the link between algorithms and harms on social media platforms. Evidence of harms involving social media algorithms was collected from media reports and academic papers within a two-year timeframe from 2018 to 2019, covering Facebook, YouTube, Instagram, and Twitter. Harms with similar causal mechanisms were grouped together to inductively develop a typology of algorithmic harm based on the mechanisms involved in their emergence: (1) algorithmic errors, undesirable, or disturbing selections; (2) manipulation by users to achieve algorithmic outputs that harass other users or disrupt public discourse; (3) algorithmic reinforcement of pre-existing harms and inequalities in society; (4) enablement of harmful practices that are opaque and discriminatory; and (5) strengthening of platform power over users, markets, and society. Although the analysis emphasizes the role of algorithms as a cause of online harms, it also demonstrates that harms do not arise from the application of algorithms alone. Instead, harms can best be conceived of as socio-technical assemblages, composed of the use and design of algorithms, platform design, commercial interests, social practices, and context. The article concludes with reflections on possible governance interventions in response to the identified socio-technical mechanisms of harm. Notably, while algorithmic errors may be fixed by platforms themselves, growing platform power calls for external oversight.
