Discrete‐time dynamic principal–agent models: Contraction mapping theorem and computational treatment
Quantitative Economics (IF 2.190), Pub Date: 2020-11-20, DOI: 10.3982/qe960
Philipp Renner, Karl Schmedders
We consider discrete‐time dynamic principal–agent problems with continuous choice sets and potentially multiple agents. Assuming only continuity of the model functions and compactness of the choice sets, we prove the existence of a unique solution for the principal's value function. We do so via a contraction mapping theorem, which also yields a convergence result for value function iteration. To compute a solution numerically, we must solve a collection of static principal–agent problems at each iteration; as a result, in the discrete‐time setting solving the static problem is the difficult step. If the agent's expected utility is a rational function of his action, then we can transform the bi‐level optimization problem into a standard nonlinear program. The final results of our solution method are numerical approximations of the policy and value functions for the dynamic principal–agent model. We illustrate our solution method by solving variations of two prominent social planning models from the economics literature.
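The contraction property behind the convergence result can be illustrated on a much simpler problem than the paper's principal–agent model. The sketch below runs value function iteration for a standard single-agent cake-eating problem (all primitives — log utility, the grid, the discount factor — are assumptions for illustration, not the authors' specification); the Bellman operator is a contraction with modulus equal to the discount factor, so the sup-norm error shrinks geometrically.

```python
import numpy as np

# Toy illustration of value function iteration as a contraction mapping.
# This is NOT the paper's principal-agent model: a single decision maker
# eats a cake of size w on a grid, with discount factor beta < 1.
beta = 0.9
grid = np.linspace(1e-3, 1.0, 101)   # wealth grid (assumed for the sketch)
u = np.log                           # utility function (assumed)

def bellman(V):
    """Apply the Bellman operator: (TV)(w) = max_{0 < c <= w} u(c) + beta * V(w - c)."""
    TV = np.empty_like(V)
    for i, w in enumerate(grid):
        c = grid[grid <= w]          # feasible consumption choices
        # interpolate V at the remaining wealth w - c
        TV[i] = np.max(u(c) + beta * np.interp(w - c, grid, V))
    return TV

V = np.zeros_like(grid)
for it in range(500):
    TV = bellman(V)
    err = np.max(np.abs(TV - V))     # sup-norm distance ||TV - V||
    V = TV
    if err < 1e-8:                   # contraction => geometric convergence
        break
```

In the paper's setting each evaluation of the operator additionally requires solving a static principal–agent (bi-level) problem at every grid point, which is why that static step carries the computational burden.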
