Learning with minimal information in continuous games
Theoretical Economics (IF 1.2), Pub Date: 2020-01-01, DOI: 10.3982/te3435
Sebastian Bervoets, Mario Bravo, Mathieu Faure

We introduce a stochastic learning process called the dampened gradient approximation process. While learning models have almost exclusively focused on finite games, in this paper we design a learning process for games with continuous action sets. It is payoff-based and thus requires from players no sophistication and no knowledge of the game. We show that despite such limited information, players will converge to Nash in large classes of games. In particular, convergence to a Nash equilibrium which is stable is guaranteed in all games with strategic complements as well as in concave games; convergence to Nash often occurs in all locally ordinal potential games; convergence to a stable Nash occurs with positive probability in all games with isolated equilibria.
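As a purely illustrative sketch (not the paper's construction), the Python snippet below simulates a generic payoff-based scheme in the same spirit: each player perturbs its own action, observes only the payoff it receives, forms a one-point gradient estimate, and moves with a dampened (decreasing) step size. The Cournot game, the step-size and exploration schedules, and all parameter values are assumptions chosen for illustration.

import numpy as np

# Purely illustrative: a generic one-point, payoff-based gradient scheme with
# dampened (decreasing) step sizes, run on a two-player Cournot duopoly.
# This is not the paper's dampened gradient approximation process; the game
# and the schedules below are assumptions.

def payoffs(x, a=10.0, b=1.0, c=1.0):
    # Concave Cournot game: u_i(x) = x_i * (a - b*(x_1 + x_2)) - c * x_i.
    # Its unique Nash equilibrium is x_i = (a - c) / (3*b) = 3 here.
    price = a - b * (x[0] + x[1])
    return np.array([x[0] * price - c * x[0],
                     x[1] * price - c * x[1]])

rng = np.random.default_rng(0)
x = np.array([0.5, 5.0])                     # arbitrary initial actions
for n in range(1, 200_001):
    gamma = n ** -0.9                        # dampened step size
    delta = n ** -0.2                        # shrinking exploration radius
    z = rng.choice([-1.0, 1.0], size=2)      # independent +/-1 perturbation per player
    u = payoffs(x + delta * z)               # each player sees only its own realized payoff
    grad_est = u * z / delta                 # one-point payoff-based gradient estimate
    x = np.clip(x + gamma * grad_est, 0.0, 10.0)   # keep actions in the set [0, 10]

print(np.round(x, 2))                        # should settle near the Nash equilibrium (3, 3)

Because the payoffs here are quadratic, the one-point estimate is unbiased, and the decreasing step sizes tame its variance, so the actions should drift toward and then hover around the equilibrium at (3, 3).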

Updated: 2020-01-01