Reward and punishment reversal-learning in major depressive disorder.
Journal of Psychopathology and Clinical Science (IF 3.1), Pub Date: 2020-10-01, DOI: 10.1037/abn0000641
Dahlia Mukherjee, Alexandre L. S. Filipowicz, Khoi Vo, Theodore D. Satterthwaite, Joseph W. Kable

Depression has been associated with impaired reward and punishment processing, but the specific nature of these deficits is still widely debated. We analyzed reinforcement-based decision making in individuals with major depressive disorder (MDD) to identify the specific decision mechanisms contributing to poorer performance. Individuals with MDD (n = 64) and matched healthy controls (n = 64) performed a probabilistic reversal-learning task in which they used feedback to identify which of two stimuli had the highest probability of reward (reward condition) or lowest probability of punishment (punishment condition). Learning differences were characterized using a hierarchical Bayesian reinforcement learning model. Depressed individuals made fewer optimal choices and adjusted more slowly to reversals in both the reward and punishment conditions. Computational modeling revealed that depressed individuals showed lower learning-rates and, to a lesser extent, lower value sensitivity in both the reward and punishment conditions. Learning-rates also predicted depression more accurately than simple performance metrics. These results demonstrate that depression is characterized by a hyposensitivity to positive outcomes, but not a hypersensitivity to negative outcomes. Additionally, we demonstrate that computational modeling provides a more precise characterization of the dynamics contributing to these learning deficits, offering stronger insights into the mechanistic processes affected by depression. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
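To make the modeling approach concrete, below is a minimal illustrative sketch of a probabilistic reversal-learning simulation with a standard Rescorla-Wagner learner and softmax choice rule. It is not the authors' hierarchical Bayesian model; the function name, trial counts, reward probabilities, and the specific alpha/beta values are assumptions chosen only to show how a lower learning rate (alpha) and lower value sensitivity (beta) would slow adjustment to reversals and reduce the proportion of optimal choices.

```python
import numpy as np

def simulate_reversal_learning(alpha, beta, n_trials=200, p_reward=(0.8, 0.2),
                               reversal_trial=100, seed=0):
    """Simulate a two-stimulus probabilistic reversal-learning task
    with a Rescorla-Wagner learner and softmax choice (illustrative only).

    alpha : learning rate -- how strongly each feedback updates the value
    beta  : value sensitivity (inverse temperature) -- how deterministically
            the higher-valued stimulus is chosen
    """
    rng = np.random.default_rng(seed)
    q = np.zeros(2)                  # learned values for the two stimuli
    probs = np.array(p_reward)
    optimal_choices = []
    for t in range(n_trials):
        if t == reversal_trial:      # reward contingencies reverse mid-task
            probs = probs[::-1]
        # softmax choice: probability of picking stimulus 0 given value difference
        p_choose_0 = 1.0 / (1.0 + np.exp(-beta * (q[0] - q[1])))
        choice = 0 if rng.random() < p_choose_0 else 1
        reward = float(rng.random() < probs[choice])
        # prediction-error update: value moves toward the observed outcome
        q[choice] += alpha * (reward - q[choice])
        optimal_choices.append(choice == int(np.argmax(probs)))
    return np.mean(optimal_choices)

# A lower learning rate yields slower post-reversal adjustment and fewer optimal choices
print("alpha=0.40:", simulate_reversal_learning(alpha=0.40, beta=5.0))
print("alpha=0.05:", simulate_reversal_learning(alpha=0.05, beta=5.0))
```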

Chinese translation:

Reward and punishment reversal-learning in major depressive disorder.

Depression has been associated with impaired reward and punishment processing, but the specific nature of these deficits remains widely debated. We analyzed reinforcement-based decision making in individuals with major depressive disorder (MDD) to identify the specific decision mechanisms contributing to poorer performance. Individuals with MDD (n = 64) and matched healthy controls (n = 64) performed a probabilistic reversal-learning task in which they used feedback to identify which of two stimuli had the highest probability of reward (reward condition) or the lowest probability of punishment (punishment condition). Learning differences were characterized with a hierarchical Bayesian reinforcement learning model. Depressed individuals made fewer optimal choices and adjusted more slowly to reversals in both the reward and punishment conditions. Computational modeling showed that depressed individuals had lower learning rates and, to a lesser extent, lower value sensitivity in both conditions. Learning rates also predicted depression more accurately than simple performance metrics. These results indicate that depression is characterized by hyposensitivity to positive outcomes rather than hypersensitivity to negative outcomes. In addition, computational modeling provides a more precise characterization of the dynamics underlying these learning deficits, offering deeper insight into the mechanistic processes affected by depression. (PsycInfo Database Record (c) 2020 APA, all rights reserved.)
Last updated: 2020-10-01