Value Alignment Equilibrium in Multiagent Systems
arXiv - CS - Multiagent Systems Pub Date : 2020-09-16 , DOI: arxiv-2009.07619
Nieves Montes, Carles Sierra

Value alignment has emerged in recent years as a basic principle for producing beneficial and mindful Artificial Intelligence systems. It states, in essence, that autonomous entities should behave in a way that is aligned with our human values. In this work, we summarize a previously developed model that treats values as preferences over states of the world and defines alignment between the governing norms and those values. We provide a use case for this framework based on the Iterated Prisoner's Dilemma, which we use to exemplify the definitions we review. We take advantage of this use case to introduce new concepts to be integrated with the established framework: alignment equilibrium and Pareto optimal alignment. These are inspired by the classical Nash equilibrium and Pareto optimality, but are designed to account for any value we wish to model in the system.
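The abstract's core ideas can be sketched concretely. The following is a minimal illustrative sketch, not the paper's exact formalism: all function names and the specific alignment measure are assumptions. It models a one-shot Prisoner's Dilemma, treats a value as a preference function over outcome states, and checks a Nash-style "alignment equilibrium": a profile from which no agent can unilaterally deviate to an outcome that is better aligned with the value.

```python
# Illustrative sketch only: the preference functions and the alignment
# measure below are hypothetical, not the definitions from the paper.
from itertools import product

# Standard Prisoner's Dilemma payoffs: (row player, column player),
# actions "C" (cooperate) and "D" (defect).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def equality_preference(state):
    """Toy value 'equality': prefer states with similar payoffs."""
    a, b = PAYOFFS[state]
    return -abs(a - b)

def alignment(profile, preference):
    """Alignment of a strategy profile with a value: here simply the
    preference assigned to the resulting state (a simplification)."""
    return preference(profile)

def is_alignment_equilibrium(profile, preference):
    """No agent can unilaterally deviate to a profile that is better
    aligned with the value (analogue of a Nash equilibrium)."""
    base = alignment(profile, preference)
    for i in range(2):
        for action in ("C", "D"):
            deviated = list(profile)
            deviated[i] = action
            if alignment(tuple(deviated), preference) > base:
                return False
    return True

profiles = list(product("CD", repeat=2))
eq_equality = [p for p in profiles
               if is_alignment_equilibrium(p, equality_preference)]
# Under this toy 'equality' value, both mutual cooperation and mutual
# defection are alignment equilibria, since any unilateral deviation
# produces an unequal payoff split.
```

Swapping in a different preference function (e.g. one favoring personal gain) changes which profiles are equilibria, which is the point of the framework: the same machinery works for any value one wishes to model.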

Updated: 2020-11-03