Hamiltonian-Driven Adaptive Dynamic Programming With Approximation Errors
IEEE Transactions on Cybernetics (IF 9.4), Pub Date: 2021-09-08, DOI: 10.1109/tcyb.2021.3108034
Yongliang Yang, Hamidreza Modares, Kyriakos G. Vamvoudakis, Wei He, Cheng-Zhong Xu, Donald C. Wunsch

In this article, we consider an iterative adaptive dynamic programming (ADP) algorithm within the Hamiltonian-driven framework to solve the Hamilton–Jacobi–Bellman (HJB) equation for the infinite-horizon optimal control problem for continuous-time nonlinear systems. First, a novel function, the "min-Hamiltonian," is defined to capture the fundamental properties of the classical Hamiltonian. It is shown that both the HJB equation and the policy iteration (PI) algorithm can be formulated in terms of the min-Hamiltonian within the Hamiltonian-driven framework. Moreover, we develop an iterative ADP algorithm that accounts for approximation errors during the policy evaluation step. We then derive a sufficient condition on the iterative value gradient that guarantees closed-loop stability of the equilibrium point as well as convergence to the optimal value. A model-free extension based on an off-policy reinforcement learning (RL) technique is also provided. Finally, numerical results illustrate the efficacy of the proposed framework.
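For orientation, here is a minimal sketch of the standard continuous-time setting this framework builds on (the paper's exact definitions may differ): for dynamics $\dot{x} = f(x) + g(x)u$ with cost $\int_0^\infty \big( Q(x) + u^\top R u \big)\,dt$, the classical Hamiltonian is

\[ H(x, u, \nabla V) = \nabla V(x)^\top \big( f(x) + g(x)u \big) + Q(x) + u^\top R u, \]

and the min-Hamiltonian is its pointwise minimum over controls,

\[ \mathcal{H}(x, \nabla V) = \min_{u} H(x, u, \nabla V) = \nabla V(x)^\top f(x) + Q(x) - \tfrac{1}{4} \nabla V(x)^\top g(x) R^{-1} g(x)^\top \nabla V(x). \]

In this notation the HJB equation reads $\mathcal{H}(x, \nabla V^*) = 0$ with $V^*(0) = 0$, the minimizer being $u^*(x) = -\tfrac{1}{2} R^{-1} g(x)^\top \nabla V^*(x)$. Policy iteration alternates policy evaluation, solving $H(x, u_i(x), \nabla V_i) = 0$ for $V_i$, with policy improvement $u_{i+1}(x) = -\tfrac{1}{2} R^{-1} g(x)^\top \nabla V_i(x)$; the iterative ADP algorithm studied here allows the evaluation step to be satisfied only up to a bounded approximation error.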

Last updated: 2021-09-08