Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions
Evolutionary Computation (IF 6.8), Pub Date: 2021-09-01, DOI: 10.1162/evco_a_00286
Anil Yaman, Giovanni Iacca, Decebal Constantin Mocanu, Matt Coler, George Fletcher, Mykola Pechenizkiy
A fundamental aspect of learning in biological neural networks is plasticity, the property that allows them to modify their configurations during their lifetime. Hebbian learning is a biologically plausible mechanism for modeling the plasticity property in artificial neural networks (ANNs), based on the local interactions of neurons. However, how a coherent global learning behavior emerges from local Hebbian plasticity rules is not well understood. The goal of this work is to discover interpretable local Hebbian learning rules that can provide autonomous global learning. To achieve this, we use a discrete representation to encode the learning rules in a finite search space. These rules are then used to perform synaptic changes, based on the local interactions of the neurons. We employ genetic algorithms to optimize these rules to allow learning on two separate tasks (a foraging and a prey-predator scenario) in online lifetime learning settings. The evolved rules converged into a set of well-defined, interpretable types, which are thoroughly discussed. Notably, the performance of these rules, while adapting the ANNs during the learning tasks, is comparable to that of offline learning methods such as hill climbing.
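To illustrate the kind of mechanism the abstract describes, the Python sketch below shows one way a discretely encoded local Hebbian rule could be represented as a small lookup table over binarized pre- and post-synaptic activations and applied to a weight matrix. The encoding, learning rate, clipping, and genome layout here are illustrative assumptions, not the authors' exact formulation; a genetic algorithm would evolve populations of such genomes by evaluating each rule's task fitness.

import numpy as np

ETA = 0.05  # illustrative learning rate (assumption, not from the paper)

def apply_discrete_hebbian_rule(weights, pre, post, rule_table):
    # Update synaptic weights in place using only local information.
    # weights    : (n_post, n_pre) weight matrix
    # pre, post  : activation vectors, binarized to {0, 1}
    # rule_table : dict mapping (pre_bit, post_bit) -> {-1, 0, +1}
    pre_bits = (pre > 0.5).astype(int)
    post_bits = (post > 0.5).astype(int)
    for i in range(weights.shape[0]):       # post-synaptic neuron
        for j in range(weights.shape[1]):   # pre-synaptic neuron
            delta = rule_table[(pre_bits[j], post_bits[i])]
            weights[i, j] += ETA * delta
    np.clip(weights, -1.0, 1.0, out=weights)  # keep weights bounded
    return weights

# Hypothetical genome: one discrete action per (pre, post) activation pair.
genome = [1, -1, -1, 1]
rule = {(0, 0): genome[0], (0, 1): genome[1],
        (1, 0): genome[2], (1, 1): genome[3]}

w = np.random.uniform(-0.1, 0.1, size=(3, 4))
apply_discrete_hebbian_rule(w, pre=np.random.rand(4),
                            post=np.random.rand(3), rule_table=rule)

Because each update depends only on the activity of the two neurons a synapse connects, the rule can adapt the network online during its "lifetime" on the task, which is the setting the abstract refers to.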



Updated: 2021-09-12