On Neural Associative Memory Structures: Storage and Retrieval of Sequences in a Chain of Tournaments
Neural Computation (IF 2.9) Pub Date: 2021-08-19, DOI: 10.1162/neco_a_01417
Asieh Abolpour Mofrad, Samaneh Abolpour Mofrad, Anis Yazidi, Matthew Geoffrey Parker

Associative memories enjoy many interesting properties in terms of error-correction capability, robustness to noise, storage capacity, and retrieval performance, and they are used in a wide range of applications. In this letter, we investigate and extend tournament-based neural networks, originally proposed by Jiang, Gripon, Berrou, and Rabbat (2016): a novel sequence-storage associative memory architecture with high memory efficiency and accurate sequence retrieval. We propose a more general method for learning the sequences, which we call feedback tournament-based neural networks. The retrieval process is also extended to both directions, forward and backward; in other words, any sufficiently large segment of a sequence can reproduce the whole sequence. Furthermore, two retrieval algorithms, cache-winner and explore-winner, are introduced to improve retrieval performance. Through simulation results, we shed light on the strengths and weaknesses of each algorithm.
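To make the idea of tournament-based sequence storage concrete, the following is a toy sketch, not the authors' implementation: symbols of a sequence are mapped to neurons in a chain of clusters, storage adds directed edges from each symbol's recent predecessors to it, and retrieval completes a partial sequence by winner-take-all over incoming-edge counts. All class, method, and parameter names (`ToySequenceMemory`, `degree`, `retrieve_forward`) are ours, chosen for illustration only; the paper's actual tournament construction and the feedback, cache-winner, and explore-winner variants differ in detail.

```python
from collections import defaultdict


class ToySequenceMemory:
    """Toy tournament-style sequence memory: a chain of clusters,
    with directed edges linking each stored symbol's predecessors
    to it. Retrieval is winner-take-all on incoming-edge counts."""

    def __init__(self, num_clusters=8, degree=2):
        self.c = num_clusters          # clusters in the chain
        self.d = degree                # predecessors wired to each symbol
        self.fwd = defaultdict(set)    # forward edges: node -> set of nodes
        self.bwd = defaultdict(set)    # reverse edges (would support the
                                       # backward-retrieval direction)

    def node(self, t, symbol):
        # A "neuron" is identified by (cluster index, symbol value).
        return (t % self.c, symbol)

    def store(self, seq):
        # Wire each of the d previous symbols to the current one.
        for t, sym in enumerate(seq):
            for k in range(1, self.d + 1):
                if t - k >= 0:
                    src = self.node(t - k, seq[t - k])
                    dst = self.node(t, sym)
                    self.fwd[src].add(dst)
                    self.bwd[dst].add(src)

    def retrieve_forward(self, prefix, length, alphabet):
        # Extend a known prefix one position at a time, keeping the
        # candidate symbol with the most edges from the context.
        seq = list(prefix)
        for t in range(len(prefix), length):
            scores = {}
            for sym in alphabet:
                dst = self.node(t, sym)
                scores[sym] = sum(
                    dst in self.fwd[self.node(t - k, seq[t - k])]
                    for k in range(1, self.d + 1) if t - k >= 0
                )
            seq.append(max(scores, key=scores.get))  # winner-take-all
        return seq
```

A backward pass would score candidates symmetrically using `bwd`, which is the intuition behind retrieving a sequence from any large-enough segment rather than only from its start.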




Updated: 2021-09-12