Emergent Reciprocity and Team Formation from Randomized Uncertain Social Preferences
arXiv - CS - Multiagent Systems. Pub Date: 2020-11-10. DOI: arxiv-2011.05373
Bowen Baker

Multi-agent reinforcement learning (MARL) has shown recent success in increasingly complex fixed-team zero-sum environments. However, the real world is not zero-sum nor does it have fixed teams; humans face numerous social dilemmas and must learn when to cooperate and when to compete. To successfully deploy agents into the human world, it may be important that they be able to understand and help in our conflicts. Unfortunately, selfish MARL agents typically fail when faced with social dilemmas. In this work, we show evidence of emergent direct reciprocity, indirect reciprocity and reputation, and team formation when training agents with randomized uncertain social preferences (RUSP), a novel environment augmentation that expands the distribution of environments agents play in. RUSP is generic and scalable; it can be applied to any multi-agent environment without changing the original underlying game dynamics or objectives. In particular, we show that with RUSP these behaviors can emerge and lead to higher social welfare equilibria in both classic abstract social dilemmas like Iterated Prisoner's Dilemma as well as in more complex intertemporal environments.
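A minimal sketch of what such an augmentation could look like in code, based only on the abstract's description (per-episode randomized social preferences, observed with uncertainty, applied as a reward transformation that leaves the game itself untouched). The sampling distribution, row normalization, and Gaussian noise model below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def sample_rusp(n_agents, rng, noise_scale=0.1):
    # Sample a random social-preference matrix T, where T[i, j] is how much
    # agent i's training reward weights agent j's environment reward.
    # (Hypothetical parameterization; the paper's distribution may differ.)
    T = rng.uniform(0.0, 1.0, size=(n_agents, n_agents))
    T /= T.sum(axis=1, keepdims=True)  # row-normalize so weights sum to 1
    # Each agent observes only a private, noisy estimate of T -- the
    # "uncertain" part of RUSP: agents must infer relationships in-episode.
    noisy_views = T[None, :, :] + rng.normal(
        0.0, noise_scale, size=(n_agents, n_agents, n_agents))
    return T, noisy_views

def rusp_rewards(T, env_rewards):
    # Reward transformation only: the environment's dynamics and underlying
    # payoffs are untouched, consistent with the claim that RUSP does not
    # change the original game.
    return T @ np.asarray(env_rewards, dtype=float)

# Per-episode usage: resample T each episode so a single policy is trained
# across a broad distribution of social relationships.
rng = np.random.default_rng(0)
T, noisy_views = sample_rusp(n_agents=4, rng=rng)
shaped = rusp_rewards(T, env_rewards=[1.0, 0.0, -1.0, 0.5])
```

Resampling the preference matrix every episode, and revealing it only noisily, would force agents to identify allies and reciprocators from behavior within an episode rather than memorizing a fixed team structure, which is the plausible mechanism behind the emergent reciprocity and team formation the abstract reports.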

Updated: 2020-11-12