Designing Toxic Content Classification for a Diversity of Perspectives
arXiv - CS - Computers and Society Pub Date : 2021-06-04 , DOI: arxiv-2106.04511
Deepak Kumar, Patrick Gage Kelley, Sunny Consolvo, Joshua Mason, Elie Bursztein, Zakir Durumeric, Kurt Thomas, Michael Bailey

In this work, we demonstrate how existing classifiers for identifying toxic comments online fail to generalize to the diverse concerns of Internet users. We survey 17,280 participants to understand how user expectations for what constitutes toxic content differ across demographics, beliefs, and personal experiences. We find that groups historically at risk of harassment, such as people who identify as LGBTQ+ or young adults, are more likely to flag a random comment drawn from Reddit, Twitter, or 4chan as toxic, as are people who have personally experienced harassment in the past. Based on our findings, we show how current one-size-fits-all toxicity classification algorithms, like the Perspective API from Jigsaw, can improve in accuracy by 86% on average through personalized model tuning. Ultimately, we highlight current pitfalls and new design directions that can improve the equity and efficacy of toxic content classifiers for all users.

Updated: 2021-06-09