Extending a misallocation model to children's choice behavior.
Journal of Experimental Psychology: Animal Learning and Cognition (IF 1.3). Pub Date: 2021-07-01. DOI: 10.1037/xan0000299
Sarah Cowie 1 , Javier Virués-Ortega 1 , Jessica McCormack 1 , Paula Hogg 1 , Christopher A Podlesnik 2

Nonhuman animal models show that reinforcers control behavior through what they signal about the likelihood of future events, but such control is generally imperfect. Imperfect control by the relation between past and likely future events may result from imperfect detection of those events as they occur, which results in imperfect detection of the relation between events. Such an approach would suggest the involvement of more complex psychological processes, like memory, in simple operant learning. We extended a research paradigm previously examined with nonhuman animals to test a quantitative model that assumes imperfect control by the relation between events arises because of (a) occasional misallocation of reinforcers to the wrong response, and (b) a tendency to explore or exploit that is independent of the relation between events. Children played a game in which one of two different responses could produce a reinforcer. The likelihood of a reinforcer for the same response that produced the last one varied across three conditions (.1, .5, .9). As with nonhuman animal models, children's choices followed these probabilities closely but not perfectly, suggesting strong control by what one reinforcer signals about subsequent reinforcers. Choice was well described by the quantitative model. This same model also provides a good description of nonhuman animal-model data, suggesting fundamentally similar mechanisms of control across species. These findings suggest reinforcers control behavior to the extent the relation between reinforcers can be detected; that is, simple operant learning may be more complex than is typically assumed. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
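The two mechanisms the model assumes can be illustrated with a toy simulation of the paradigm: a reinforcer is arranged on the same response that produced the last one with probability .1, .5, or .9, and a simulated chooser tracks that relation imperfectly. This is only a minimal sketch, not the paper's fitted model; the parameter values (`p_misallocate`, `p_explore`) and the probability-matching choice rule are illustrative assumptions.

```python
import random

def simulate(p_same, n_trials=100_000, p_misallocate=0.1, p_explore=0.1, seed=1):
    """Toy simulation of choice between two responses (0 and 1).

    p_same: probability the next reinforcer is arranged on the same
        response that produced the last one (condition: .1, .5, or .9).
    p_misallocate: chance a delivered reinforcer is attributed to the
        wrong response (hypothetical value).
    p_explore: chance of choosing at random, independent of the
        reinforcer relation (hypothetical value).
    Returns the proportion of trials on which the chooser repeated the
    response that actually produced the last reinforcer.
    """
    rng = random.Random(seed)
    last = 0        # response that actually produced the last reinforcer
    remembered = 0  # response the reinforcer was attributed to (may be wrong)
    stays = 0
    for _ in range(n_trials):
        # Arrange the next reinforcer relative to the last one delivered.
        arranged = last if rng.random() < p_same else 1 - last
        if rng.random() < p_explore:
            choice = rng.randrange(2)        # explore, ignoring the relation
        elif rng.random() < p_same:
            choice = remembered              # exploit: repeat the remembered response
        else:
            choice = 1 - remembered          # exploit: switch away from it
        if choice == last:
            stays += 1
        if choice == arranged:               # reinforcer delivered
            last = choice
            # Occasionally misallocate the reinforcer to the wrong response.
            remembered = choice if rng.random() > p_misallocate else 1 - choice
    return stays / n_trials

for p in (0.1, 0.5, 0.9):
    print(f"p_same = {p}: proportion of repeat choices = {simulate(p):.2f}")
```

Because misallocation and exploration both dilute control by the arranged probabilities, the simulated repeat proportions track the three conditions in order but undershoot the extremes (above .1 in the .1 condition, below .9 in the .9 condition), qualitatively matching the close-but-imperfect control described above.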
