A Novel Behavioral Strategy for RoboCode Platform Based on Deep Q-Learning
Complexity (IF 1.7), Pub Date: 2021-07-16, DOI: 10.1155/2021/9963018
Hakan Kayakoku, Mehmet Serdar Guzel, Erkan Bostanci, Ihsan Tolga Medeni, Deepti Mishra
This paper presents a new machine-learning-based behavioral strategy for the RoboCode simulation platform using the deep Q-learning algorithm. Following this strategy, a new model is proposed for RoboCode, which provides an environment of simulated robots that can be programmed to battle other robots. Compared with Atari games, RoboCode involves a considerably larger set of actions and situations. Because training a CNN model for such a continuous action-space problem is challenging, the inputs obtained from the simulation environment were generated dynamically, and the proposed model was trained on these inputs. The trained model battled the environment's predefined rival robots (standard robots), cumulatively benefiting from the experience gained against them. The comparison between the proposed model and the standard robots of the RoboCode platform was statistically verified. Finally, the performance of the proposed model was compared with machine-learning-based customized robots (community robots). Experimental results reveal that the proposed model is superior to most community robots. The deep Q-learning-based model has therefore proven successful in this complex simulation environment. It should also be noted that the new model improves simulation performance in adaptive and partially cluttered environments.
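To make the deep Q-learning strategy concrete, the following is a minimal sketch of the core Q-learning update an agent in a RoboCode-like setting would perform. The state features, action set, and the linear approximator (standing in for the paper's CNN) are illustrative assumptions, not the authors' actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 4   # e.g. enemy bearing, distance, own energy, enemy energy (assumed)
N_ACTIONS = 5    # e.g. ahead, back, turn left, turn right, fire (assumed)
GAMMA = 0.95     # discount factor
LR = 0.1         # learning rate
EPSILON = 0.1    # exploration rate

# Single linear layer Q(s) = W @ s, a lightweight stand-in for the CNN
W = rng.normal(scale=0.01, size=(N_ACTIONS, N_FEATURES))

def q_values(state):
    """Estimated Q-value for every action in the given state."""
    return W @ state

def select_action(state, epsilon=EPSILON):
    """Epsilon-greedy action selection over the estimated Q-values."""
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(state)))

def td_update(state, action, reward, next_state, done):
    """One Q-learning step: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    target = reward + (0.0 if done else GAMMA * np.max(q_values(next_state)))
    td_error = target - q_values(state)[action]
    # Gradient step on the squared TD error, for the taken action only
    W[action] += LR * td_error * state
    return td_error
```

In a full implementation the update would typically be combined with experience replay and a separate target network, and the rewards would come from RoboCode battle events (hits, damage taken, survival).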

Updated: 2021-07-16