Recurrent Model Predictive Control
arXiv - CS - Artificial Intelligence Pub Date : 2021-02-23 , DOI: arxiv-2102.11736
Zhengyu Liu, Jingliang Duan, Wenxuan Wang, Shengbo Eben Li, Yuming Yin, Ziyu Lin, Qi Sun, Bo Cheng

This paper proposes an offline algorithm, called Recurrent Model Predictive Control (RMPC), to solve general nonlinear finite-horizon optimal control problems. Unlike traditional Model Predictive Control (MPC) algorithms, it can make full use of the available computing resources and adaptively select the longest feasible model prediction horizon. Our algorithm employs a recurrent function to approximate the optimal policy, mapping the system states and reference values directly to the control inputs. The number of prediction steps equals the number of recurrent cycles of the learned policy function. Starting from an arbitrary initial policy function, the proposed RMPC algorithm converges to the optimal policy by directly minimizing the designed loss function. We further prove the convergence and optimality of the RMPC algorithm through the Bellman optimality principle, and demonstrate its generality and efficiency on two numerical examples.
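The core idea above — a recurrent policy whose number of recurrent cycles plays the role of the prediction horizon, so more available compute yields a longer effective horizon — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the cell structure (`recurrent_cell`), parameter shapes, and `rmpc_policy` are all hypothetical stand-ins for the learned policy function described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def recurrent_cell(hidden, state, reference, W_h, W_x, b):
    """One recurrent cycle: update the hidden vector from the current
    system state and reference value (a stand-in for the learned cell)."""
    x = np.concatenate([state, reference])
    return np.tanh(W_h @ hidden + W_x @ x + b)

def rmpc_policy(state, reference, n_cycles, params):
    """Map (state, reference) to a control input by running the recurrent
    cell n_cycles times; n_cycles corresponds to the number of prediction
    steps of the learned policy."""
    W_h, W_x, b, W_out = params
    hidden = np.zeros(W_h.shape[0])
    for _ in range(n_cycles):
        hidden = recurrent_cell(hidden, state, reference, W_h, W_x, b)
    return W_out @ hidden  # control input u

# Randomly initialized parameters (illustration only, not a trained policy).
h_dim, s_dim, r_dim, u_dim = 8, 4, 2, 1
params = (rng.standard_normal((h_dim, h_dim)) * 0.1,
          rng.standard_normal((h_dim, s_dim + r_dim)) * 0.1,
          np.zeros(h_dim),
          rng.standard_normal((u_dim, h_dim)) * 0.1)

state = rng.standard_normal(s_dim)
reference = rng.standard_normal(r_dim)

# With a larger computing budget, run more cycles for a longer horizon;
# the same policy parameters serve every horizon length.
u_short = rmpc_policy(state, reference, n_cycles=3, params=params)
u_long = rmpc_policy(state, reference, n_cycles=10, params=params)
```

The point of the sketch is the shared parameterization: one set of weights serves every horizon length, so at run time the controller simply unrolls as many cycles as the current computing resources allow.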

Updated: 2021-02-24