Iterative Retrieval and Block Coding in Autoassociative and Heteroassociative Memory
Neural Computation (IF 2.7), Pub Date: 2020-01-01, DOI: 10.1162/neco_a_01247
Andreas Knoblauch, Günther Palm

Neural associative memories (NAM) are perceptron-like single-layer networks with fast synaptic learning, typically storing discrete associations between pairs of neural activity patterns. Gripon and Berrou (2011) investigated NAM employing block coding, a particular sparse coding method, and reported a significant increase in storage capacity. Here we verify and extend their results for both heteroassociative and recurrent autoassociative networks. For this we provide a new analysis of iterative retrieval in finite autoassociative and heteroassociative networks that allows estimating storage capacity for random and block patterns. Furthermore, we have implemented various retrieval algorithms for block coding and compared them in simulations to our theoretical results and previous simulation data. In good agreement between theory and experiments, we find that finite networks employing block coding can store significantly more memory patterns. However, due to the reduced information per block pattern, it is not possible to significantly increase the stored information per synapse. Asymptotically, the information retrieval capacity converges to the known limits C = ln 2 ≈ 0.69 and C = (ln 2)/4 ≈ 0.17 also for block coding. We have also implemented very large recurrent networks of up to n = 2·10^6 neurons, showing that the maximal capacity C ≈ 0.2 bit per synapse occurs for finite networks of size n ≈ 10^5, similar to cortical macrocolumns.
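The setting described above can be illustrated with a minimal sketch of a Willshaw-type binary autoassociative memory using block coding and iterative retrieval. All sizes, the number of stored patterns, and helper names below are illustrative choices, not parameters from the paper; learning is one-shot binary Hebbian storage (an OR of outer products), and retrieval applies block-wise winner-take-all, one winner per block, as block coding prescribes.

```python
import numpy as np

rng = np.random.default_rng(0)

n_blocks, block_size = 8, 16          # illustrative: n = 128 neurons, 8 blocks
n = n_blocks * block_size
n_patterns = 20                       # well below capacity for this small n

def random_block_pattern():
    """Binary pattern with exactly one active neuron per block (block coding)."""
    x = np.zeros(n, dtype=np.uint8)
    for b in range(n_blocks):
        x[b * block_size + rng.integers(block_size)] = 1
    return x

patterns = [random_block_pattern() for _ in range(n_patterns)]

# One-shot Hebbian learning with binary (clipped) synapses:
# a weight is 1 iff its pre- and postsynaptic units were coactive in some pattern.
W = np.zeros((n, n), dtype=np.uint8)
for p in patterns:
    W |= np.outer(p, p)

def retrieve(cue, steps=3):
    """Iterative retrieval: feed the state back through W, then apply
    winner-take-all separately within each block."""
    x = cue.copy()
    for _ in range(steps):
        s = W @ x                     # dendritic potentials
        x = np.zeros(n, dtype=np.uint8)
        for b in range(n_blocks):
            seg = s[b * block_size:(b + 1) * block_size]
            x[b * block_size + int(seg.argmax())] = 1
    return x

# Cue: a stored pattern with half of its blocks erased.
target = patterns[0]
cue = target.copy()
cue[: (n_blocks // 2) * block_size] = 0
recalled = retrieve(cue)
print("recalled blocks matching target:", int((recalled & target).sum()), "/", n_blocks)
```

At this low memory load the erased blocks are almost always completed correctly, since a spurious unit would need binary connections from every remaining cue unit. The paper's capacity results concern how far n_patterns can be pushed before such spurious coincidences dominate.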
