Gaussian process decentralized data fusion meets transfer learning in large-scale distributed cooperative perception
Autonomous Robots ( IF 3.7 ) Pub Date : 2019-01-28 , DOI: 10.1007/s10514-018-09826-z
Ruofei Ouyang , Bryan Kian Hsiang Low

This paper presents novel Gaussian process decentralized data fusion algorithms exploiting the notion of agent-centric support sets for distributed cooperative perception of large-scale environmental phenomena. To overcome the limitations of scale in existing works, our proposed algorithms allow every mobile sensing agent to utilize a different support set and to dynamically switch to another during execution for encapsulating its own data into a local summary that, perhaps surprisingly, can still be assimilated with the other agents’ local summaries (i.e., based on their current support sets) into a globally consistent summary to be used for predicting the phenomenon. To achieve this, we propose a novel transfer learning mechanism for a team of agents capable of sharing and transferring information encapsulated in a summary based on one support set to a summary utilizing a different support set, with a loss that can be theoretically bounded and analyzed. To alleviate the issue of information loss accumulating over multiple instances of transfer learning, we propose a new information sharing mechanism to be incorporated into our algorithms in order to achieve memory-efficient lazy transfer learning. Empirical evaluation on three real-world datasets with up to 128 agents shows that our algorithms outperform the state-of-the-art methods.
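To make the support-set idea concrete, the following is a minimal sketch (not the authors' GP-DDF or transfer learning algorithm itself) of how each agent can encapsulate its local observations into a summary against a shared support set, and how those summaries fuse additively into a globally consistent predictor. It uses a Subset-of-Regressors-style Gaussian process approximation; the kernel, lengthscale, and agent data here are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, lengthscale=0.5):
    """Squared-exponential kernel matrix between row-stacked inputs A and B."""
    sq_dist = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dist / lengthscale**2)

def local_summary(S, X_i, y_i):
    """Agent-side step: summarize own data (X_i, y_i) against support set S.

    Returns the two sufficient statistics K_SX K_XS and K_SX y, whose sizes
    depend only on |S|, not on the agent's amount of data.
    """
    K_SX = rbf(S, X_i)
    return K_SX @ K_SX.T, K_SX @ y_i

def fuse_and_predict(S, summaries, X_test, noise=0.1):
    """Fusion step: local summaries combine by simple addition, so any agent
    holding the global summary can predict the phenomenon at X_test."""
    M = sum(A for A, _ in summaries)   # sum of K_SX K_XS over agents
    b = sum(v for _, v in summaries)   # sum of K_SX y over agents
    K_SS = rbf(S, S)
    mid = noise**2 * K_SS + M + 1e-8 * np.eye(len(S))  # jitter for stability
    w = np.linalg.solve(mid, b)
    return rbf(X_test, S) @ w          # SoR predictive mean

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x[:, 0])                    # latent phenomenon
S = np.linspace(-2, 2, 15)[:, None]                  # shared support set
agents = [rng.uniform(-2, 2, (40, 1)) for _ in range(4)]
summaries = [local_summary(S, X, f(X) + 0.1 * rng.standard_normal(40))
             for X in agents]
X_test = np.linspace(-2, 2, 50)[:, None]
mu = fuse_and_predict(S, summaries, X_test)
print("max abs error:", np.abs(mu - f(X_test)).max())
```

Because the summaries add up commutatively, agents can exchange and merge them in any order; the paper's contribution is, roughly, allowing each agent its own (switchable) support set and transferring a summary between different support sets with bounded loss, rather than requiring the single shared S assumed in this sketch.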

Updated: 2019-01-28