Model-Reference Reinforcement Learning Control of Autonomous Surface Vehicles with Uncertainties
arXiv - CS - Robotics. Pub Date: 2020-03-30, DOI: arxiv-2003.13839
Qingrui Zhang and Wei Pan and Vasso Reppa

This paper presents a novel model-reference reinforcement learning control method for autonomous surface vehicles with uncertainties. The proposed control combines a conventional control method with deep reinforcement learning. With the conventional control, we can ensure that the learning-based control law provides closed-loop stability for the overall system and potentially increases the sample efficiency of the deep reinforcement learning. With the reinforcement learning, we can directly learn a control law that compensates for modeling uncertainties. In the proposed control, a nominal system is employed to design a baseline control law using a conventional control approach. The nominal system also defines the desired performance that the uncertain autonomous vehicles should follow. In comparison with traditional deep reinforcement learning methods, our proposed learning-based control provides stability guarantees and better sample efficiency. We demonstrate the performance of the new algorithm via extensive simulation results.
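The model-reference structure described above splits the total command into a baseline law designed on the nominal system plus a learned correction for the uncertainty. A minimal sketch of that split is below; the PD gains, the linear stand-in for the RL policy, and the weight values are illustrative assumptions, not details from the paper.

```python
import numpy as np


def baseline_control(state, ref, kp=2.0, kd=1.0):
    """PD-style baseline law designed for the nominal (uncertainty-free) model.

    `state` and `ref` are (position, velocity) pairs; gains are hypothetical.
    """
    pos_err = ref[0] - state[0]
    vel_err = ref[1] - state[1]
    return kp * pos_err + kd * vel_err


def learned_compensation(state, ref, weights):
    """Stand-in for the RL policy: a linear map from the tracking error.

    In the paper this term would be a deep RL policy trained to cancel
    modeling uncertainty; a linear map keeps the sketch self-contained.
    """
    err = np.asarray(ref, dtype=float) - np.asarray(state, dtype=float)
    return float(weights @ err)


def total_control(state, ref, weights):
    # Model-reference decomposition: u = u_baseline + u_learned
    return baseline_control(state, ref) + learned_compensation(state, ref, weights)


if __name__ == "__main__":
    state = [0.0, 0.0]           # current position, velocity
    ref = [1.0, 0.0]             # reference trajectory from the nominal system
    w = np.array([0.1, 0.05])    # hypothetical learned weights
    print(total_control(state, ref, w))  # baseline 2.0 + compensation 0.1
```

Because the baseline term is fixed by conventional design, the learner only has to fit the residual uncertainty, which is the mechanism the abstract credits for both the stability guarantee and the improved sample efficiency.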

Updated: 2020-04-01