What are you optimizing for? Aligning Recommender Systems with Human Values
arXiv - CS - Information Retrieval. Pub Date: 2021-07-22, DOI: arxiv-2107.10939
Jonathan Stray, Ivan Vendrov, Jeremy Nixon, Steven Adler, Dylan Hadfield-Menell

We describe cases where real recommender systems were modified in the service of various human values such as diversity, fairness, well-being, time well spent, and factual accuracy. From this we identify the current practice of values engineering: the creation of classifiers from human-created data with value-based labels. This has worked in practice for a variety of issues, but problems are addressed one at a time, and users and other stakeholders have seldom been involved. Instead, we look to AI alignment work for approaches that could learn complex values directly from stakeholders, and identify four major directions: useful measures of alignment, participatory design and operation, interactive value learning, and informed deliberative judgments.

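As a purely illustrative sketch (not from the paper), the "values engineering" pattern described above can be read as: humans label items against a value, a classifier is trained on those labels, and its score is folded into the ranking objective. The dataset, the hypothetical "clickbait" label standing in for factual accuracy, and the `rerank_score` penalty term are all assumptions introduced here for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-created data with value-based labels (1 = violates the value).
titles = [
    "You won't believe what happened next",
    "Doctors hate this one weird trick",
    "City council approves new transit budget",
    "Study finds moderate exercise lowers blood pressure",
]
labels = [1, 1, 0, 0]

# Train a simple text classifier on the value-based labels.
value_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
value_clf.fit(titles, labels)

def rerank_score(engagement_score: float, title: str, penalty_weight: float = 0.5) -> float:
    """Combine the usual engagement prediction with a value-based penalty."""
    p_violation = value_clf.predict_proba([title])[0, 1]
    return engagement_score - penalty_weight * p_violation

# Example: an engaging but clickbait-like item is demoted relative to a neutral one.
print(rerank_score(0.9, "You won't believe what happened next"))
print(rerank_score(0.7, "City council approves new transit budget"))
```

This matches the paper's observation that such fixes address problems one value at a time: each new value requires its own labels, classifier, and hand-tuned weight in the objective.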
Updated: 2021-07-26