Studying Up Machine Learning Data: Why Talk About Bias When We Mean Power?
arXiv - CS - Human-Computer Interaction. Pub Date: 2021-09-16, DOI: arXiv:2109.08131
Milagros Miceli, Julian Posada, Tianling Yang

Research in machine learning (ML) has primarily argued that models trained on incomplete or biased datasets can lead to discriminatory outputs. In this commentary, we propose moving the research focus beyond bias-oriented framings by adopting a power-aware perspective to "study up" ML datasets. This means accounting for historical inequities, labor conditions, and epistemological standpoints inscribed in data. We draw on HCI and CSCW work to support our argument, critically analyze previous research, and point to two co-existing lines of work within our community -- one bias-oriented, the other power-aware. In this way, we highlight the need for dialogue and cooperation in three areas: data quality, data work, and data documentation. In the first area, we argue that reducing societal problems to "bias" misses the context-based nature of data. In the second, we highlight the corporate forces and market imperatives involved in the labor of data workers, which subsequently shape ML datasets. Finally, we propose expanding current transparency-oriented efforts in dataset documentation to reflect the social contexts of data design and production.

Updated: 2021-09-17