Adding a bias to vector models of association memory provides item memory for free
Journal of Mathematical Psychology (IF 1.8). Pub Date: 2020-08-01. DOI: 10.1016/j.jmp.2020.102358
Jeremy B. Caplan, Kaiyuan Xu, Sucheta Chakravarty, Kelvin E. Jones

Abstract

Anderson (1970) introduced two models that are at the core of artificial neural network models as well as cognitive mathematical models of memory. The first, a simple summation of items represented as vectors, can support rudimentary item-recognition. The second, a heteroassociative model consisting of a summation of outer products between paired item vectors, can support cued recall of associations. Anderson recommended fixing the element-value mean to zero, for tractability and with minimal loss of generality. However, in a neural network model in which element values are represented by firing rates, this mean-centering is violated, because firing rates cannot be negative. We show, analytically, that adding a bias to item representations produces interference from other studied list items. Although this worsens cued recall, it also tempts the model to make intrusion responses to other studied items, not unlike human participants. Moreover, an unexpected feature appears: when probed with a constant vector containing no “information,” the model retrieves a weighted sum of studied items, formally equivalent to Anderson’s item-memory model. This speaks to Hockley and Cristi’s (1996) findings that associative study strategies led to high item-recognition, but not vice versa. We show that such a model can achieve high levels of performance (d′) when the bias is greater than zero but not too large relative to the standard deviation of element values. We demonstrate these effects in a two-layer spiking-neuron network model. Thus, when modelers have striven for realism and relaxed mean-centering, such models may not only still function at adequate levels, but also acquire a spin-off functionality that can actually be used, without the need for additional encoding terms specific to item-memory.
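The two effects the abstract describes follow from the algebra of the outer-product model: writing each item vector as a constant bias plus zero-mean noise, the memory matrix M = Σᵢ aᵢbᵢᵀ acquires bias-driven cross-terms that every probe picks up, so a studied cue retrieves its target plus interference from other pairs, and an information-free constant probe retrieves a weighted sum of the studied items. The following NumPy sketch is illustrative only, not the authors' simulation; the dimensionality, list length, bias, and standard deviation below are assumed values chosen for demonstration.

```python
# Minimal sketch of a heteroassociative outer-product memory with a
# nonzero element-value mean ("bias"). Illustrative parameters, not
# taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

n = 1000          # vector dimensionality (assumed)
list_length = 10  # number of studied pairs (assumed)
bias = 0.2        # element-value mean; Anderson's recommendation is 0
sd = 1.0          # element-value standard deviation (assumed)

# Studied pairs: each element ~ Normal(bias, sd**2)
A = rng.normal(bias, sd, size=(list_length, n))
B = rng.normal(bias, sd, size=(list_length, n))

# Heteroassociative memory: sum of outer products over studied pairs
M = sum(np.outer(a, b) for a, b in zip(A, B))

# Cued recall: probing with studied cue A[0] returns B[0] plus
# bias-driven interference from the other studied items.
retrieved = A[0] @ M
match = [np.dot(retrieved, b) / (np.linalg.norm(retrieved) * np.linalg.norm(b))
         for b in B]
print("cosine with each studied target:", np.round(match, 2))
# With bias > 0, targets other than B[0] also match appreciably
# (intrusion-like responses); with bias = 0 they hover near chance.

# Item memory "for free": a constant, information-free probe retrieves a
# weighted sum of the studied B items, formally Anderson's summed-vector
# item-memory model.
echo = np.ones(n) @ M

new_items = rng.normal(bias, sd, size=(list_length, n))
old_strength = [np.dot(echo, b) for b in B]
new_strength = [np.dot(echo, x) for x in new_items]

# d' for old/new recognition based on matched-filter strength
d_prime = (np.mean(old_strength) - np.mean(new_strength)) / np.sqrt(
    0.5 * (np.var(old_strength) + np.var(new_strength)))
print("item-recognition d' from constant-vector probe:", round(d_prime, 2))
```

Setting bias = 0 in this sketch makes the constant-probe echo a randomly weighted sum, and d′ collapses toward zero while cued recall is cleanest; raising the bias strengthens item-recognition at the cost of cued-recall fidelity, consistent with the trade-off the abstract reports.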

Updated: 2020-08-01