Resistance and refusal to algorithmic harms: Varieties of ‘knowledge projects’
Media International Australia (IF 1.5) Pub Date: 2022-02-07, DOI: 10.1177/1329878x221076288
Maya Indira Ganesh, Emanuel Moss

Industrial, academic, activist, and policy research and advocacy movements formed around resisting ‘machine bias’ and promoting ‘ethical AI’ and ‘fair ML’ have discursive implications for what constitutes harm and for what resistance to algorithmic influence itself means, and are deeply connected to which actors make epistemic claims about harm and resistance. We present a loose categorization of kinds of resistance to algorithmic systems: a dominant mode in which resistance ‘filters up’ and is translated into design fixes by Big Tech; and advocacy and scholarship that bring a critical frame of lived experience to algorithmic systems as socio-technical entities. Three recent cases delve into how Big Tech responds to harms documented by marginalized groups, highlighting how harms are valued differently. Finally, we identify modes of refusal that recognize the limits of Big Tech's resistance; built on practices of feminist organizing, decoloniality, and New Luddism, they encourage a rethinking of the place and value of technologies in mediating human social and personal life, not just of how technologies can deterministically ‘improve’ social relations.



Updated: 2022-02-07