Policy iteration for Hamilton–Jacobi–Bellman equations with control constraints
Computational Optimization and Applications (IF 1.6) Pub Date: 2021-04-24, DOI: 10.1007/s10589-021-00278-3
Sudeep Kundu, Karl Kunisch

Policy iteration is a widely used technique for solving the Hamilton–Jacobi–Bellman (HJB) equation, which arises in nonlinear optimal feedback control theory. Its convergence analysis has attracted much attention in the unconstrained case. Here we analyze the case with control constraints, both for the HJB equations arising in deterministic and in stochastic control. The linear equation in each iteration step is solved by an implicit upwind scheme. Numerical experiments solve the HJB equation with control constraints, and comparisons with the unconstrained case are presented.
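The two ingredients named in the abstract — policy evaluation as a *linear* equation discretized with an upwind scheme, and policy improvement with the control projected onto the constraint set — can be sketched on a toy one-dimensional problem. This is only an illustration under assumed dynamics, cost, and discount rate; it is not the paper's scheme:

```python
import numpy as np

# Toy 1D constrained HJB:  lam*V(x) = min_{|u|<=1} { 4x^2 + u^2 + u*V'(x) }
# on x in [-1, 1]. Policy evaluation solves a *linear* equation for V with
# an upwind discretization of V'; policy improvement clamps the
# unconstrained minimizer u = -V'(x)/2 onto the constraint set [-1, 1].
N, lam = 201, 0.5
x = np.linspace(-1.0, 1.0, N)
h = x[1] - x[0]

def evaluate(u):
    """Solve lam*V - u*V' = 4x^2 + u^2, upwinding V' by the sign of u."""
    A = lam * np.eye(N)
    for i in range(N):
        if u[i] > 0 and i < N - 1:       # trajectory moves right: forward diff
            A[i, i] += u[i] / h
            A[i, i + 1] -= u[i] / h
        elif u[i] < 0 and i > 0:         # trajectory moves left: backward diff
            A[i, i] -= u[i] / h
            A[i, i - 1] += u[i] / h
    return np.linalg.solve(A, 4 * x**2 + u**2)

u = np.zeros(N)                           # initial feasible policy
for _ in range(100):
    V = evaluate(u)                       # policy evaluation (linear solve)
    dV = np.gradient(V, h)
    u_new = np.clip(-0.5 * dV, -1.0, 1.0)  # constrained policy improvement
    if np.max(np.abs(u_new - u)) < 1e-8:
        break
    u = u_new
```

The upwind discretization yields a diagonally dominant M-matrix, so each evaluation step is uniquely solvable and the computed value function stays nonnegative; for this cost the projected control saturates at the constraint near the boundary of the domain.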



Updated: 2021-04-24