Integrating human experience in deep reinforcement learning for multi-UAV collision detection and avoidance

Guanzheng Wang (College of Intelligence Science and Technology, National University of Defense Technology, Changsha, China)
Yinbo Xu (College of Intelligence Science and Technology, National University of Defense Technology, Changsha, China)
Zhihong Liu (College of Intelligence Science and Technology, National University of Defense Technology, Changsha, China)
Xin Xu (College of Intelligence Science and Technology, National University of Defense Technology, Changsha, China)
Xiangke Wang (College of Intelligence Science and Technology, National University of Defense Technology, Changsha, China)
Jiarun Yan (College of Intelligence Science and Technology, National University of Defense Technology, Changsha, China)

Industrial Robot

ISSN: 0143-991x

Article publication date: 24 September 2021

Issue publication date: 11 February 2022

Abstract

Purpose

This paper aims to realize fully distributed multi-UAV collision detection and avoidance based on deep reinforcement learning (DRL), to address the low sample efficiency of DRL and speed up training, and to improve the applicability and reliability of the DRL-based approach in multi-UAV control problems.

Design/methodology/approach

In this paper, a fully distributed collision detection and avoidance approach for multiple UAVs based on DRL is proposed. Human experience is integrated into policy training via a human experience-based adviser. The authors further propose a hybrid control method that combines the learning-based policy with traditional model-based control. Extensive experiments, including simulations, real flights and comparative experiments, are conducted to evaluate the performance of the approach.
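The abstract does not detail the algorithm, but the two ideas it names (a human experience-based adviser during training and a hybrid learned/model-based controller) can be illustrated with a minimal sketch. Everything below is an assumption-based illustration in Python: the observation fields, thresholds and the simple rules are hypothetical and are not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): one way a human
# experience-based adviser and a model-based fallback could wrap a
# learned collision-avoidance policy. All names and thresholds are
# illustrative assumptions.
import random


def adviser_action(obs):
    """Hypothetical human-experience rule: yaw away from the nearest
    detected obstacle (obs['bearing'] in radians, positive = right)."""
    return -0.5 if obs["bearing"] > 0.0 else 0.5


def model_based_action(obs):
    """Hypothetical traditional controller: a simple proportional
    heading command toward the goal."""
    return 0.2 * obs["goal_heading_error"]


def learned_action(obs):
    """Stand-in for the DRL policy's output (yaw-rate command)."""
    return 0.0  # placeholder for the trained network


def select_action(obs, epsilon=0.2, safe_distance=2.0):
    """Action selection: fall back to the model-based controller when
    the nearest obstacle is closer than safe_distance (hybrid control);
    otherwise, with probability epsilon follow the human-experience
    adviser instead of the learned policy (adviser-guided training)."""
    if obs["min_obstacle_distance"] < safe_distance:
        return model_based_action(obs)   # hybrid-control safety fallback
    if random.random() < epsilon:
        return adviser_action(obs)       # human-experience guidance
    return learned_action(obs)           # learned DRL policy


# Example observation (fields are assumptions for illustration)
obs = {"bearing": 0.4, "goal_heading_error": -0.1,
       "min_obstacle_distance": 1.5}
print(select_action(obs))
```

In this sketch, the adviser only biases exploration during training, while the model-based fallback remains active at execution time, which is one plausible reading of how the two mechanisms described above could coexist.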

Findings

A fully distributed multi-UAV collision detection and avoidance method based on DRL is realized. The reward curves show that integrating human experience significantly accelerates training and yields a higher mean episode reward than the pure DRL method. The experimental results show that the DRL method with human experience integration achieves a significant improvement over the pure DRL method for multi-UAV collision detection and avoidance. Moreover, the improved flight safety provided by the hybrid control method is also validated.

Originality/value

The fully distributed architecture is suitable for large-scale unmanned aerial vehicle (UAV) swarms and real applications. The DRL method with human experience integration significantly accelerates training compared with the pure DRL method. The proposed hybrid control strategy compensates for the limitations of two-dimensional light detection and ranging and other practical issues encountered in real applications.

Acknowledgements

This work is supported by the Science and Technology Innovation 2030 Key Project of "New Generation Artificial Intelligence" under Grant 2020AAA0108200, the National Natural Science Foundation of China under Grants 61906209, 61973309 and 61825305, and the Hunan Provincial Natural Science Foundation of China under Grant 2020JJ5668.

Citation

Wang, G., Xu, Y., Liu, Z., Xu, X., Wang, X. and Yan, J. (2022), "Integrating human experience in deep reinforcement learning for multi-UAV collision detection and avoidance", Industrial Robot, Vol. 49 No. 2, pp. 256-270. https://doi.org/10.1108/IR-06-2021-0116

Publisher

Emerald Publishing Limited

Copyright © 2021, Emerald Publishing Limited
