Scaling up Psychology via Scientific Regret Minimization: A Case Study in Moral Decisions
arXiv - CS - Computers and Society. Pub Date: 2019-10-16. arXiv:1910.07581
Mayank Agrawal, Joshua C. Peterson, Thomas L. Griffiths

Do large datasets provide value to psychologists? Without a systematic methodology for working with such datasets, there is a valid concern that analyses will produce noise artifacts rather than true effects. In this paper, we offer a way to enable researchers to systematically build models and identify novel phenomena in large datasets. One traditional approach is to analyze the residuals of models---the biggest errors they make in predicting the data---to discover what might be missing from those models. However, once a dataset is sufficiently large, machine learning algorithms approximate the true underlying function better than the data, suggesting instead that the predictions of these data-driven models should be used to guide model-building. We call this approach "Scientific Regret Minimization" (SRM) as it focuses on minimizing errors for cases that we know should have been predictable. We demonstrate this methodology on a subset of the Moral Machine dataset, a public collection of roughly forty million moral decisions. Using SRM, we found that incorporating a set of deontological principles that capture dimensions along which groups of agents can vary (e.g. sex and age) improves a computational model of human moral judgment. Furthermore, we were able to identify and independently validate three interesting moral phenomena: criminal dehumanization, age of responsibility, and asymmetric notions of responsibility.
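The core idea of SRM in the abstract above — rank a theory-driven model's errors not by raw residual size but by how predictable each case was for a flexible, data-driven model — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the function name, the weighting scheme, and the toy cases are all hypothetical.

```python
# Illustrative sketch of Scientific Regret Minimization (SRM).
# Rather than ranking cases by the theory model's raw residuals, SRM
# prioritizes errors on cases a data-driven model predicts well --
# cases that "should have been predictable." Everything here is a
# hypothetical stand-in: predictions are probabilities of one choice.

def scientific_regret(theory_pred, ml_pred, outcome):
    """Theory model's error, down-weighted when the data-driven
    model also failed (suggesting the case is mostly noise)."""
    theory_error = abs(outcome - theory_pred)
    ml_error = abs(outcome - ml_pred)
    # Weight the theory error by how predictable the case was
    # according to the flexible ML model.
    return theory_error * (1.0 - ml_error)

# Hypothetical cases: (name, theory prediction, ML prediction, observed choice)
cases = [
    ("case_a", 0.9, 0.95, 1.0),  # both models right -> low regret
    ("case_b", 0.2, 0.90, 1.0),  # theory wrong, ML right -> high regret
    ("case_c", 0.2, 0.30, 1.0),  # both wrong -> down-weighted as likely noise
]

ranked = sorted(cases, key=lambda c: scientific_regret(*c[1:]), reverse=True)
for name, t, m, y in ranked:
    print(name, round(scientific_regret(t, m, y), 3))
```

Under this scheme, case_b rises to the top of the list: the data-driven model shows the case was predictable, so the theory model's failure there points to a genuinely missing mechanism rather than noise.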

Updated: 2020-01-10