How humans impair automated deception detection performance
Acta Psychologica (IF 2.1), Pub Date: 2021-01-13, DOI: 10.1016/j.actpsy.2020.103250
Bennett Kleinberg, Bruno Verschuere

Background

Deception detection is a prevalent problem for security practitioners. Given the need for more scalable approaches, automated methods using machine learning have gained traction. However, detection performance still entails considerable error rates. Findings from other domains suggest that hybrid human-machine integration could offer a viable path for detection tasks.

Method

We collected a corpus of truthful and deceptive answers about participants' autobiographical intentions (n = 1640) and tested whether a combination of supervised machine learning and human judgment could improve deception detection accuracy. Human judges were presented with the outcome of the automated credibility judgment of truthful or deceptive statements. They could either fully overrule it (hybrid-overrule condition) or adjust it within a given boundary (hybrid-adjust condition).
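
To make the two hybrid conditions concrete, below is a minimal Python sketch of how a classifier's credibility score could be combined with a human judgment. It is an illustration under assumptions, not the authors' implementation: the function names, the 0.2 adjustment boundary, and the 0.5 decision threshold are hypothetical.

    # Illustrative sketch only; names, the boundary (0.2) and the threshold (0.5) are assumptions.
    def hybrid_overrule(machine_label: str, human_label: str) -> str:
        """Hybrid-overrule: the judge sees the machine's verdict and may replace it entirely."""
        return human_label  # the human's final call stands, even against the machine

    def hybrid_adjust(machine_score: float, human_delta: float, bound: float = 0.2) -> str:
        """Hybrid-adjust: the judge may shift the machine's credibility score, but only within a boundary."""
        delta = max(-bound, min(bound, human_delta))          # clamp the requested adjustment
        adjusted = min(1.0, max(0.0, machine_score + delta))  # keep the score in [0, 1]
        return "truthful" if adjusted >= 0.5 else "deceptive"

    # Example: the machine scores a statement 0.75 (leaning truthful); the judge tries to lower it
    # by 0.5, but the capped adjustment only reaches 0.55, so the verdict stays "truthful".
    print(hybrid_overrule("deceptive", "truthful"))  # -> truthful
    print(hybrid_adjust(0.75, -0.5))                 # -> truthful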

Results

The data suggest that in neither hybrid condition did human judgment make a meaningful contribution. Machine learning in isolation identified truth-tellers and liars with an overall accuracy of 69%. Human involvement through hybrid-overrule decisions brought the accuracy back down to chance level, and the hybrid-adjust condition did not improve deception detection performance either. The judges' decision-making strategies suggest that the truth bias (the tendency to assume that the other person is telling the truth) could explain the detrimental effect.

Conclusions

The current study does not support the notion that humans can meaningfully add to the deception detection performance of a machine learning system. All data are available at https://osf.io/45z7e/.



