When to (or not to) trust intelligent machines: Insights from an evolutionary game theory analysis of trust in repeated games
Cognitive Systems Research (IF 3.9), Pub Date: 2021-04-08, DOI: 10.1016/j.cogsys.2021.02.003
The Anh Han, Cedric Perret, Simon T. Powers

The actions of intelligent agents, such as chatbots, recommender systems, and virtual assistants, are typically not fully transparent to the user. Consequently, users take the risk that such agents act in ways opposed to their preferences or goals. It is often argued that people use trust as a cognitive shortcut to reduce the complexity of such interactions. Here we formalise this by using the methods of evolutionary game theory to study the viability of trust-based strategies in repeated games. These are reciprocal strategies that cooperate as long as the other player is observed to be cooperating. Unlike classic reciprocal strategies, once mutual cooperation has been observed for a threshold number of rounds they stop checking their co-player’s behaviour every round, and instead only check it with some probability. By doing so, they reduce the opportunity cost of verifying whether the action of their co-player was actually cooperative. We demonstrate that these trust-based strategies can outcompete strategies that are always conditional, such as Tit-for-Tat, when the opportunity cost is non-negligible. We argue that this cost is likely to be greater when the interaction is between people and intelligent agents, because of the reduced transparency of the agent. Consequently, we expect people to use trust-based strategies more frequently in interactions with intelligent agents. Our results provide new, important insights into the design of mechanisms for facilitating interactions between humans and intelligent agents, where trust is an essential factor.
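The checking mechanism described in the abstract can be made concrete with a small simulation. The following is a minimal Python sketch, not the authors' model or code, contrasting a trust-based strategy with Tit-for-Tat in a repeated Prisoner's Dilemma where verifying the co-player's action carries an explicit per-round cost; the payoff values, the cooperation-streak threshold, and the checking probability check_prob are illustrative assumptions.

import random

# Minimal, illustrative sketch (not the paper's actual model) of a repeated
# Prisoner's Dilemma in which observing the co-player's action costs
# CHECK_COST per round. Payoff values and parameter names are assumptions.
R, S, T, P = 3.0, 0.0, 5.0, 1.0   # reward, sucker, temptation, punishment
CHECK_COST = 0.5                  # opportunity cost of verifying the co-player


def tit_for_tat(observed, _state):
    """Checks (pays the cost) every round and copies the co-player's last move."""
    last = observed[-1] if observed else "C"
    return last, True              # (my move, did I check this round?)


def trust_based(observed, state, threshold=4, check_prob=0.2):
    """Plays like Tit-for-Tat until `threshold` rounds of mutual cooperation
    have been observed, then only checks with probability `check_prob` and
    otherwise cooperates on trust, saving the checking cost."""
    if state["coop_streak"] < threshold or random.random() < check_prob:
        last = observed[-1] if observed else "C"
        return last, True
    return "C", False              # trust without verifying


def play(strat_a, strat_b, rounds=50):
    """Returns the total net payoff of each strategy over `rounds` interactions."""
    moves_a, moves_b = [], []
    state_a, state_b = {"coop_streak": 0}, {"coop_streak": 0}
    pay_a = pay_b = 0.0
    payoff = {("C", "C"): (R, R), ("C", "D"): (S, T),
              ("D", "C"): (T, S), ("D", "D"): (P, P)}
    for _ in range(rounds):
        move_a, checked_a = strat_a(moves_b, state_a)
        move_b, checked_b = strat_b(moves_a, state_b)
        pa, pb = payoff[(move_a, move_b)]
        pay_a += pa - (CHECK_COST if checked_a else 0.0)
        pay_b += pb - (CHECK_COST if checked_b else 0.0)
        moves_a.append(move_a)
        moves_b.append(move_b)
        mutual = move_a == "C" and move_b == "C"
        for state in (state_a, state_b):
            state["coop_streak"] = state["coop_streak"] + 1 if mutual else 0
    return pay_a, pay_b


if __name__ == "__main__":
    print("TFT   vs TFT:", play(tit_for_tat, tit_for_tat))
    print("Trust vs TFT:", play(trust_based, tit_for_tat))

In this sketch both pairings settle into mutual cooperation, but the trust-based player skips most verification after the initial streak and therefore keeps a higher net payoff whenever CHECK_COST is non-negligible, which mirrors the qualitative claim of the abstract.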



Updated: 2021-04-22