Defending non-Bayesian learning against adversarial attacks
Distributed Computing (IF 1.3), Pub Date: 2018-06-20, DOI: 10.1007/s00446-018-0336-4
Lili Su , Nitin H. Vaidya

This paper addresses the problem of non-Bayesian learning over multi-agent networks, where agents repeatedly collect partially informative observations about an unknown state of the world and try to collaboratively learn the true state out of m alternatives. We focus on the impact of adversarial agents on the performance of consensus-based non-Bayesian learning, in which non-faulty agents combine local learning updates with consensus primitives. In particular, we consider the scenario where an unknown subset of agents suffer Byzantine faults; agents suffering Byzantine faults may behave arbitrarily. We propose two learning rules. In both learning rules, each non-faulty agent keeps a local variable that is a stochastic vector over the m possible states. The entries of this stochastic vector can be viewed as the scores the agent assigns to the corresponding states. We say a non-faulty agent learns the underlying truth if, asymptotically, it assigns a score of one to the true state and zero to the wrong states.

In our first update rule, each agent updates its local score vector as (up to normalization) the product of (1) the likelihood of the cumulative private signals and (2) the weighted geometric average of the score vectors of its incoming neighbors and itself. Under reasonable assumptions on the underlying network structure and the global identifiability of the network, we show that all non-faulty agents asymptotically learn the true state almost surely.

We also propose a modified variant of our first learning rule whose complexity per iteration per agent is $O(m^2 n \log n)$, where n is the number of agents in the network. In addition, we show that this modified learning rule works under a less restrictive network identifiability condition.
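To make the first update rule concrete, the following is a minimal sketch of one iteration for a single non-faulty agent, assuming a fixed set of incoming score vectors and an i.i.d. private signal. The Byzantine-resilient filtering that the paper layers on top of this step is omitted, and the names (update_score, weights, likelihood) are illustrative, not the authors' notation.

```python
import numpy as np

def update_score(scores, weights, likelihood):
    """One iteration of the (simplified) update rule for one agent.

    scores:     (d, m) array; row j is the score vector of the j-th
                incoming neighbor (the agent's own vector included),
                each a stochastic vector over the m states.
    weights:    (d,) nonnegative weights summing to 1 over the d
                incoming score vectors.
    likelihood: (m,) likelihood of the agent's new private signal
                under each of the m candidate states.
    """
    # (2) Weighted geometric average of the incoming score vectors:
    # exp(sum_j w_j * log s_j), computed in log space for stability.
    log_avg = weights @ np.log(np.clip(scores, 1e-300, None))
    # (1) Multiply by the likelihood of the private signal, then
    # normalize so the result is again a stochastic vector.
    unnormalized = likelihood * np.exp(log_avg)
    return unnormalized / unnormalized.sum()

# Toy usage: three incoming score vectors over m = 2 states.
scores = np.array([[0.6, 0.4], [0.5, 0.5], [0.7, 0.3]])
weights = np.array([0.5, 0.25, 0.25])
likelihood = np.array([0.8, 0.2])   # the new signal favors state 0
print(update_score(scores, weights, likelihood))
```

Iterating this step drives the score of the true state toward one when the network is globally identifiable, which is the almost-sure learning guarantee stated in the abstract.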
