A Study of Modified Infotaxis Algorithms in 2D and 3D Turbulent Environments.
Computational Intelligence and Neuroscience Pub Date : 2020-08-25 , DOI: 10.1155/2020/4159241
Shurui Fan 1 , Dongxia Hao 1 , Xudong Sun 1 , Yusuf Mohamed Sultan 1 , Zirui Li 1 , Kewen Xia 1

Emergency response to hazardous gases in the environment is an important research field in environmental monitoring. In recent years, with the rapid development of sensor and mobile-device technology, a growing number of autonomous search algorithms for hazardous gas emission sources have been proposed for uncertain environments; such algorithms spare emergency personnel from having to approach the hazardous gas at close range. Infotaxis is an autonomous search strategy that does not rely on a concentration gradient; instead, it uses sparse, intermittent sensor detections to localize the release source in a turbulent environment. This paper addresses the imbalance between exploitation and exploration in the reward function of the Infotaxis algorithm and proposes a movement strategy for three-dimensional scenes. In both two-dimensional and three-dimensional scenes, the average number of steps needed to complete the search task is used as the evaluation criterion to compare Infotaxis-type algorithms combined with different reward functions and movement strategies. The results show that the reward function proposed in this paper achieves a better balance between the exploitation term and the exploration term than the original Infotaxis reward function, in both two-dimensional and three-dimensional scenes.
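
For reference, the exploitation-exploration structure discussed above is that of the standard Infotaxis reward, which scores a candidate move by the expected reduction in entropy of the source-location posterior: one term covers the chance that the source is at the candidate cell itself (exploitation), the other the expected information gained from detections made there (exploration). The sketch below is a minimal Python illustration of that standard criterion, not the authors' modified reward; the names posterior, cell, and hit_likelihood are illustrative assumptions rather than code from the paper.

    import numpy as np

    def entropy(p):
        # Shannon entropy of the source-location posterior (zero cells contribute nothing).
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def infotaxis_reward(posterior, cell, hit_likelihood):
        # Expected entropy reduction for moving the searcher to `cell` (illustrative sketch).
        #   posterior      : 1D array, current belief over candidate source cells
        #   cell           : index of the candidate next position
        #   hit_likelihood : hit_likelihood(cell, n_hits) -> array of P(n_hits | source at each cell)
        S = entropy(posterior)
        p_found = posterior[cell]           # exploitation term: the source sits at `cell` itself
        gain = p_found * S                  # finding the source collapses the entropy to zero

        # Exploration term: expected entropy drop from the detections (here 0 or 1 hit) made at `cell`.
        for n_hits in (0, 1):
            like = hit_likelihood(cell, n_hits)      # P(n_hits | source at each candidate cell)
            p_obs = float(np.sum(like * posterior))  # marginal probability of observing n_hits
            if p_obs <= 0.0:
                continue
            updated = like * posterior / p_obs       # Bayesian update of the posterior
            gain += (1.0 - p_found) * p_obs * (S - entropy(updated))
        return gain

At each step the searcher evaluates this reward for the admissible moves and takes the one with the largest expected entropy reduction; the modification studied in the paper re-balances the two terms of this reward and changes how the candidate moves are generated in the three-dimensional case.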

Updated: 2020-08-26