Policy iteration for Hamilton-Jacobi-Bellman equations with control constraints
arXiv - CS - Numerical Analysis. Pub Date: 2020-04-07, DOI: arXiv:2004.03558
Sudeep Kundu and Karl Kunisch

Policy iteration is a widely used technique for solving the Hamilton-Jacobi-Bellman (HJB) equation, which arises in nonlinear optimal feedback control theory. Its convergence analysis has attracted much attention in the unconstrained case. Here we analyze the case with control constraints, both for the HJB equations arising in deterministic and in stochastic control. The linear equation in each iteration step is solved by an implicit upwind scheme. Numerical examples illustrate the solution of the HJB equation with control constraints, and comparisons with the unconstrained case are shown.

Last updated: 2020-05-19