Deep reinforcement learning for modeling human locomotion control in neuromechanical simulation
Journal of NeuroEngineering and Rehabilitation (IF 5.1). Pub Date: 2021-08-16. DOI: 10.1186/s12984-021-00919-y
Seungmoon Song 1, Łukasz Kidziński 2, Xue Bin Peng 3, Carmichael Ong 2, Jennifer Hicks 2, Sergey Levine 3, Christopher G. Atkeson 4, Scott L. Delp 1,2

Modeling human motor control and predicting how humans will move in novel environments is a grand scientific challenge. Researchers in biomechanics and motor control have proposed and evaluated motor control models via neuromechanical simulations, which produce physically correct motions of a musculoskeletal model. Typically, researchers develop control models that encode physiologically plausible motor control hypotheses and compare the resulting simulated behaviors to measured human motion data. While such models have been able to simulate and explain many basic locomotion behaviors (e.g., walking, running, and climbing stairs), modeling higher-layer control (e.g., processing environment cues, planning long-term motion strategies, and coordinating basic motor skills to navigate dynamic and complex environments) remains a challenge. Recent advances in deep reinforcement learning lay a foundation for modeling these complex control processes and controlling a diverse repertoire of human movement; however, reinforcement learning has rarely been applied in neuromechanical simulation to model human control. In this paper, we review the current state of neuromechanical simulations, along with the fundamentals of reinforcement learning as it applies to human locomotion. We also present a scientific competition and accompanying software platform, which we organized to accelerate the use of reinforcement learning in neuromechanical simulations. This "Learn to Move" competition was an official competition at the NeurIPS conference from 2017 to 2019 and attracted over 1300 teams from around the world. Top teams adapted state-of-the-art deep reinforcement learning techniques and produced motions, such as quick turning and walk-to-stand transitions, that had not previously been demonstrated in neuromechanical simulations without the use of reference motion data.
We close with a discussion of future opportunities at the intersection of human movement simulation and reinforcement learning, and our plans to extend the Learn to Move competition to further facilitate interdisciplinary collaboration in modeling human motor control for biomechanics and rehabilitation research.
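The reinforcement learning setup the abstract refers to follows the standard agent–environment loop: at each control step, a policy maps an observation (e.g., joint states, ground contacts) to an action (e.g., muscle excitations), and the simulation returns a reward that encodes the locomotion objective. The sketch below illustrates that loop with a toy one-dimensional "locomotion" environment; the environment dynamics, reward, and names here are illustrative assumptions, not the actual musculoskeletal simulation or competition API.

```python
import random

class ToyLocomotionEnv:
    """Toy 1-D stand-in for a locomotion task: reward tracking a target velocity."""

    def __init__(self, target_velocity=1.0, horizon=100):
        self.target_velocity = target_velocity
        self.horizon = horizon
        self.position = 0.0
        self.velocity = 0.0
        self.steps = 0

    def reset(self):
        self.position, self.velocity, self.steps = 0.0, 0.0, 0
        return (self.position, self.velocity)

    def step(self, action):
        # action: scalar "muscle excitation" in [0, 1]
        self.velocity += 0.1 * action - 0.05 * self.velocity  # crude dynamics
        self.position += self.velocity
        self.steps += 1
        # reward is highest (zero) when velocity matches the target
        reward = -abs(self.velocity - self.target_velocity)
        done = self.steps >= self.horizon
        return (self.position, self.velocity), reward, done


def run_episode(env, policy):
    """Roll out one episode and return the total (undiscounted) reward."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total


random.seed(0)
env = ToyLocomotionEnv()
# A trivial random policy; a deep-RL agent would replace this with a
# trained neural network mapping observations to muscle excitations.
episode_return = run_episode(env, lambda obs: random.random())
print(f"episode return: {episode_return:.2f}")
```

A deep reinforcement learning algorithm optimizes the policy to maximize this episode return; in the Learn to Move setting, the environment is the neuromechanical simulation and the reward encodes target velocity and effort terms rather than the toy penalty used here.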

Updated: 2021-08-16