Everyone in Control, Everywhere [About This Issue]
IEEE Control Systems (IF 5.7) Pub Date: 2023-01-11, DOI: 10.1109/mcs.2022.3216648
Rodolphe Sepulchre

The theme of this month’s issue of IEEE Control Systems is “Everyone in Control, Everywhere.” The magazine presents two features and one control education article.

The first feature is an example-driven tutorial introduction to quantum control. It is the product of a three-year multidisciplinary collaboration between a team of control engineers and a team of quantum scientists: Marco M. Nicotra, Jieqiu Shao, Joshua Combes, Anne Cross Theurkauf, Penina Axelrad, Liang-Ying Chih, Murray Holland, Alex A. Zozulya, Catie K. LeDesma, Kendall Mehling, and Dana Z. Anderson. In the authors’ experience, the greatest challenge one faces when entering the field of quantum control is the language barrier between the two communities. The aim of the article is to lower this barrier by showing how familiar control strategies (that is, Lyapunov-based control, optimal control, and learning) can be applied in the unfamiliar setting of a quantum system (that is, a cloud of trapped, ultracold atoms); a minimal sketch of the Lyapunov idea on a toy system follows this overview. Particular emphasis is placed on the derivation of the model and the description of its structural properties. Sidebars throughout the article provide a brief overview of the essential notions and notation required to establish an effective communication channel with quantum physicists and quantum engineers. In essence, the article is a collection of everything that this control team wished they had known at the beginning of the project. The authors hope that it may be of assistance to members of this community wanting to embark on their first quantum control project.

The second feature proposes a model-free deep reinforcement learning strategy for shared control of robot manipulators with obstacle avoidance. It is coauthored by Matteo Rubagotti, Bianca Sangiovanni, Aigerim Nurbayeva, Gian Paolo Incremona, Antonella Ferrara, and Almas Shintemirov. The proposed strategy is tested in simulation and experimentally on a UR5 manipulator and is compared with a model predictive control approach. The article shows that deep reinforcement learning exhibits better performance than model predictive control, but only if the provided reference falls within the training distribution of the deep reinforcement learning policy. Indeed, the model-based nature of model predictive control allows it to address unforeseen situations that are compatible with the process model, whereas deep reinforcement learning performs poorly in situations not experienced, even minimally, during training; the second sketch below illustrates this contrast on a toy example.
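The following is a minimal sketch, not taken from the article, of how a Lyapunov control law can steer a quantum state. The article treats a cloud of trapped, ultracold atoms; here the system, the Hamiltonians H0 and H1, the gain K, and all numerical values are illustrative assumptions chosen only to show the structure of the approach on the simplest possible system, a single qubit.

```python
# Lyapunov-based control of a qubit (illustrative toy model, hbar = 1):
#   i * d|psi>/dt = (H0 + u(t) * H1) |psi>
import numpy as np

H0 = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)   # drift: sigma_z
H1 = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)    # control: sigma_x

psi_d = np.array([0.0, 1.0], dtype=complex)   # target: eigenstate |1> of H0
# Start near (not at) the orthogonal eigenstate |0>: starting exactly on an
# eigenstate makes the control law below vanish identically (an invariant set).
psi = np.array([np.sqrt(0.95), np.sqrt(0.05)], dtype=complex)

K, dt, steps = 2.0, 1e-3, 20_000

def control(psi):
    # Lyapunov function V = 1 - |<psi_d|psi>|^2. With psi_d an eigenstate of
    # H0, Vdot = -2*u*Im(conj(c) * w), where c = <psi_d|psi>, w = <psi_d|H1|psi>,
    # so the choice u = K * Im(conj(c) * w) guarantees Vdot <= 0.
    c = np.vdot(psi_d, psi)
    w = np.vdot(psi_d, H1 @ psi)
    return K * np.imag(np.conj(c) * w)

for _ in range(steps):
    u = control(psi)                      # u held constant over the step
    H = H0 + u * H1
    # One 4th-order Runge-Kutta step of the Schroedinger equation.
    f = lambda p: -1j * (H @ p)
    k1 = f(psi); k2 = f(psi + 0.5*dt*k1); k3 = f(psi + 0.5*dt*k2); k4 = f(psi + dt*k3)
    psi = psi + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    psi /= np.linalg.norm(psi)            # re-normalize against integration drift

print("final fidelity |<psi_d|psi>|^2 =", abs(np.vdot(psi_d, psi))**2)
```

Because Vdot is nonpositive, the fidelity to the target state is nondecreasing along the closed-loop trajectory; running the script shows it approaching one.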
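The next sketch is a toy numerical analogue, not the authors’ UR5/DRL setup, of the in-distribution versus out-of-distribution contrast described above. A deadbeat controller derived from a known scalar model stands in for the model-based (MPC-like) approach, and radial-basis-function regression fitted to that controller stands in for the learned (DRL-like) policy; all names and values are illustrative assumptions.

```python
# Toy contrast: model-based control vs. a policy learned only on references
# in [-1, 1], applied to the scalar integrator x+ = x + u.
import numpy as np

rng = np.random.default_rng(0)

def model_based(x, r):
    # Deadbeat controller from the known model x+ = x + u:
    # valid for any reference the model admits.
    return r - x

# --- "Training": imitate the expert on (x, r) pairs with r in [-1, 1] only ---
X = rng.uniform(-2.0, 2.0, 500)          # states seen in training
R = rng.uniform(-1.0, 1.0, 500)          # references seen in training
U = model_based(X, R)                    # expert actions to imitate

centers_x = np.linspace(-2.0, 2.0, 9)
centers_r = np.linspace(-1.0, 1.0, 9)
CX, CR = np.meshgrid(centers_x, centers_r)

def features(x, r):
    # Gaussian RBFs centered on the training region; they decay to ~0 outside
    # it, which is what makes the learned policy fail off-distribution.
    return np.exp(-((x - CX.ravel())**2 + (r - CR.ravel())**2) / 0.5)

Phi = np.stack([features(x, r) for x, r in zip(X, R)])
theta = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(Phi.shape[1]), Phi.T @ U)

def learned(x, r):
    return features(x, r) @ theta

def track(policy, r, steps=20):
    x = 0.0
    for _ in range(steps):
        x = x + policy(x, r)
    return abs(r - x)                    # final tracking error

for r in (0.5, 3.0):                     # in-distribution vs. out-of-distribution
    print(f"r = {r}: learned err = {track(learned, r):.3f}, "
          f"model-based err = {track(model_based, r):.3f}")
```

For r = 0.5 both controllers track well; for r = 3.0 the learned policy’s features are essentially zero, so it outputs no useful action and the tracking error stays near 3, while the model-based controller still succeeds.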

Updated: 2023-01-13