Cautions, concerns, and future directions for using machine learning in relation to mental health problems and clinical and forensic risks: A brief comment on "Model complexity improves the prediction of nonsuicidal self-injury" (Fox et al., 2019).
Journal of Consulting and Clinical Psychology (IF 7.156). Pub Date: 2020-04-01. DOI: 10.1037/ccp0000485
Andy P. Siddaway, Leah Quinlivan, Nav Kapur, Rory C. O'Connor, Derek de Beurs

Machine learning (ML) is an increasingly popular approach/technique for analyzing "Big Data" and predicting risk behaviors and psychological problems. However, few published critiques of ML as an approach currently exist. We discuss some fundamental cautions and concerns with ML that are relevant when attempting to predict all clinical and forensic risk behaviors (risk to self, risk to others, risk from others) and mental health problems. We hope to provoke a healthy scientific debate to ensure that ML's potential is realized and to highlight issues and directions for future risk prediction, assessment, management, and prevention research. ML, by definition, does not require the model to be specified by the researcher. This is both its key strength and its key weakness. We argue that it is critical that the ML algorithm (the model or models) and the results are both presented and that ML needs to become machine-assisted learning like other statistical techniques; otherwise, we run the risk of becoming slaves to our machines. Emerging evidence potentially challenges the superiority of ML over other approaches, and we argue that ML's complexity significantly limits its clinical utility. Based on the available evidence, we believe that researchers and clinicians should emphasize identifying, understanding, and explaining (formulating) individual clinical needs and risks and providing individualized management and treatment plans, rather than trying to predict or putting too much trust in predictions that will inevitably be wrong some of the time (and we do not know when). (PsycINFO Database Record (c) 2020 APA, all rights reserved).
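A minimal sketch of the kind of transparency check the comment argues for: comparing a complex, data-driven model against a simple, researcher-specified baseline and presenting what drives each model's predictions, rather than treating the algorithm as a black box. This is purely illustrative, assuming a scikit-learn workflow and synthetically generated stand-in data; it is not taken from the commentary or from Fox et al. (2019).

```python
# Illustrative sketch only: hypothetical synthetic data standing in for
# clinical risk predictors and a rare binary outcome.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Simple, explicitly specified baseline vs. a more complex, data-driven model.
baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

# Does the added complexity actually buy predictive performance?
print("Logistic regression AUC:", roc_auc_score(y_test, baseline.predict_proba(X_test)[:, 1]))
print("Random forest AUC:      ", roc_auc_score(y_test, forest.predict_proba(X_test)[:, 1]))

# "Presenting the model": report what each model relies on, so the
# analysis stays machine-assisted rather than machine-driven.
print("Baseline coefficients:", baseline.coef_.round(2))
print("Forest feature importances:", forest.feature_importances_.round(2))
```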

Updated: 2020-04-01