Learning hierarchically-structured concepts
Neural Networks ( IF 6.0 ) Pub Date : 2021-08-16 , DOI: 10.1016/j.neunet.2021.07.033
Nancy Lynch 1 , Frederik Mallmann-Trenn 2

We use a recently developed synchronous Spiking Neural Network (SNN) model to study the problem of learning hierarchically-structured concepts. We introduce an abstract data model that describes simple hierarchical concepts. We define a feed-forward layered SNN model, with learning modeled using Oja's local learning rule, a well-known, biologically plausible rule for adjusting synapse weights. We define what it means for such a network to recognize hierarchical concepts; our notion of recognition is robust, in that it tolerates a bounded amount of noise.
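Oja's rule updates a neuron's weight vector w toward the presented input x while keeping ||w|| bounded, so weights concentrate on the inputs that are consistently active. The rule itself is standard; the toy setup below (input dimension, which inputs are "relevant", the learning rate) is invented for illustration and is not the paper's network.

```python
import numpy as np

def oja_update(w, x, eta):
    """One application of Oja's rule: w <- w + eta * y * (x - y * w),
    where y = w . x is the (linear) neuron output."""
    y = np.dot(w, x)
    return w + eta * y * (x - y * w)

# Toy run: one neuron is repeatedly shown the same pattern, in which the
# first two of four inputs are "relevant" (active).
rng = np.random.default_rng(0)
w = rng.normal(size=4)                             # random initial weights
x = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2.0)  # unit-norm input pattern
for _ in range(500):
    w = oja_update(w, x, eta=0.05)
# Oja's rule self-normalizes: w converges to (plus or minus) x, i.e. a
# unit-norm weight vector concentrated on the relevant inputs, with the
# weights on the irrelevant inputs driven to zero.
print(np.round(w, 3))
```

The self-normalizing term −η·y²·w is what distinguishes Oja's rule from plain Hebbian learning, whose weights would grow without bound under repeated presentation.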

Then, we present a learning algorithm by which a layered network may learn to recognize hierarchical concepts according to our robust definition. We analyze correctness and performance rigorously; the amount of time required to learn each concept, after learning all of the sub-concepts, is approximately O((1/(ηk)) · (ℓmax·log(k) + 1/ɛ + b·log(k))), where k is the number of sub-concepts per concept, ℓmax is the maximum hierarchical depth, η is the learning rate, ɛ describes the amount of uncertainty allowed in robust recognition, and b describes the amount of weight decrease for "irrelevant" edges. An interesting feature of this algorithm is that it allows the network to learn sub-concepts in a highly interleaved manner. This algorithm assumes that the concepts are presented in a noise-free way; we also extend these results to accommodate noise in the learning process. Finally, we give a simple lower bound showing that, in order to recognize concepts of hierarchical depth two with noise-tolerance, a neural network must have at least two layers.
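As a rough illustration of how the bound scales, the sketch below evaluates the expression inside the O(·) with its hidden constant set to 1; the parameter values are invented for the example and are not taken from the paper.

```python
import math

def learning_time_bound(eta, k, l_max, eps, b):
    """Evaluate (1/(eta*k)) * (l_max*log(k) + 1/eps + b*log(k)),
    the expression inside the O(.) bound, with its constant set to 1."""
    return (1.0 / (eta * k)) * (l_max * math.log(k) + 1.0 / eps + b * math.log(k))

# Halving the learning rate eta doubles the estimate, while the 1/eps term
# shows the cost of demanding tighter (more noise-tolerant) recognition.
t1 = learning_time_bound(eta=0.1, k=4, l_max=2, eps=0.1, b=1.0)
t2 = learning_time_bound(eta=0.05, k=4, l_max=2, eps=0.1, b=1.0)
print(t1, t2)
```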

The results in this paper represent first steps in the theoretical study of hierarchical concepts using SNNs. The cases studied here are basic, but they suggest many directions for extensions to more elaborate and realistic cases.



