Variable Binding for Sparse Distributed Representations: Theory and Applications
arXiv - CS - Neural and Evolutionary Computing. Pub Date: 2020-09-14. arXiv: 2009.06734
E. Paxon Frady, Denis Kleyko, Friedrich T. Sommer

Symbolic reasoning and neural networks are often considered incompatible approaches. Connectionist models known as Vector Symbolic Architectures (VSAs) can potentially bridge this gap. However, classical VSAs and neural networks are still considered incompatible. VSAs encode symbols by dense pseudo-random vectors, where information is distributed throughout the entire neuron population. Neural networks encode features locally, often forming sparse vectors of neural activation. Following Rachkovskij (2001) and Laiho et al. (2015), we explore symbolic reasoning with sparse distributed representations. The core operations in VSAs are dyadic operations between vectors that express variable binding and the representation of sets. Thus, algebraic manipulations enable VSAs to represent and process data structures in a vector space of fixed dimensionality. Using techniques from compressed sensing, we first show that variable binding between dense vectors in VSAs is mathematically equivalent to tensor product binding between sparse vectors, an operation that increases dimensionality. This result implies that dimensionality-preserving binding for general sparse vectors must include a reduction of the tensor matrix to a single sparse vector. We investigate two options for sparsity-preserving variable binding. The first, for general sparse vectors, extends earlier proposals that reduce the tensor product to a vector, such as circular convolution. The second, block-wise circular convolution, is defined only for sparse block-codes. Our experiments reveal that variable binding for block-codes has ideal properties, whereas binding for general sparse vectors also works, but is lossy, similar to previous proposals. We demonstrate a VSA with sparse block-codes in example applications (cognitive reasoning and classification) and discuss its relevance for neuroscience and neural networks.
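To make the two binding styles named in the abstract concrete, here is a minimal sketch (not the paper's reference implementation) in numpy: classic circular-convolution binding for dense vectors, and block-wise circular convolution for sparse block-codes. The dimensions, the one-hot encoding of each block, and all function names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def circ_conv(a, b):
    """Circular convolution via FFT: the classic dense VSA binding (HRR-style)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def involution(a):
    """Approximate inverse for circular convolution: reverse all entries but index 0."""
    return np.concatenate([a[:1], a[:0:-1]])

def random_block_code(num_blocks, block_len, rng):
    """Sparse block-code: exactly one active unit per block."""
    v = np.zeros((num_blocks, block_len))
    v[np.arange(num_blocks), rng.integers(0, block_len, size=num_blocks)] = 1.0
    return v

def block_bind(a, b):
    """Block-wise circular convolution: bind within each block separately,
    so the result is again a block-code with one active unit per block."""
    return np.stack([circ_conv(a[k], b[k]) for k in range(a.shape[0])])

def block_unbind(c, a):
    """Recover the filler by binding with the block-wise involution of a."""
    return block_bind(c, np.stack([involution(a[k]) for k in range(a.shape[0])]))

# Binding for sparse block-codes is lossless: the filler is recovered exactly.
K, L = 10, 32                                  # 10 blocks of length 32 (assumed sizes)
role = random_block_code(K, L, rng)
filler = random_block_code(K, L, rng)
bound = block_bind(role, filler)               # still one active unit per block
assert np.allclose(block_unbind(bound, role), filler)

# Dense circular-convolution binding, by contrast, is only approximately invertible:
N = K * L
x = rng.standard_normal(N) / np.sqrt(N)
y = rng.standard_normal(N) / np.sqrt(N)
y_hat = circ_conv(circ_conv(x, y), involution(x))
print(np.dot(y_hat, y))                        # close to 1, but with crosstalk noise

For one-hot blocks, block-wise circular convolution reduces to modular addition of the active indices within each block, which is why the bound vector stays a valid block-code and unbinding is exact; dense circular convolution instead compresses the full tensor product into one vector, consistent with the lossy behavior the abstract reports for general sparse vectors.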

Updated: 2020-09-16