TruthfulQA: Measuring How Models Mimic Human Falsehoods
arXiv - CS - Computation and Language. Pub Date: 2021-09-08, DOI: arxiv-2109.07958
Stephanie Lin, Jacob Hilton, Owain Evans

We propose a benchmark to measure whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. We crafted questions that some humans would answer falsely due to a false belief or misconception. To perform well, models must avoid generating false answers learned from imitating human texts. We tested GPT-3, GPT-Neo/J, GPT-2 and a T5-based model. The best model was truthful on 58% of questions, while human performance was 94%. Models generated many false answers that mimic popular misconceptions and have the potential to deceive humans. The largest models were generally the least truthful. For example, the 6B-parameter GPT-J model was 17% less truthful than its 125M-parameter counterpart. This contrasts with other NLP tasks, where performance improves with model size. However, this result is expected if false answers are learned from the training distribution. We suggest that scaling up models alone is less promising for improving truthfulness than fine-tuning using training objectives other than imitation of text from the web.
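The abstract describes a generation-style evaluation: pose each of the 817 questions to a model and judge whether its free-form answer is truthful. Below is a minimal sketch of that question-answering loop, assuming the benchmark is available on the Hugging Face Hub as the "truthful_qa" dataset (config "generation", split "validation", with "question", "category", and "best_answer" fields) and using GPT-2 as a stand-in for the larger models tested in the paper; the paper's actual truthfulness scoring (human evaluation, with automated judges as a proxy) is not reproduced here.

# Minimal sketch, not the authors' evaluation code: query a small causal LM
# with TruthfulQA-style questions. Dataset identifier and field names are
# assumptions based on the public Hugging Face release of the benchmark.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

dataset = load_dataset("truthful_qa", "generation", split="validation")

model_name = "gpt2"  # stand-in; the paper evaluates GPT-3, GPT-Neo/J, GPT-2, T5
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

for example in dataset.select(range(3)):  # a few questions, for illustration only
    prompt = f"Q: {example['question']}\nA:"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=50,
        do_sample=False,  # deterministic greedy decoding for reproducibility
        pad_token_id=tokenizer.eos_token_id,
    )
    # keep only the newly generated tokens, and only the first answer line
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    answer = tokenizer.decode(new_tokens, skip_special_tokens=True)
    print(example["category"], "|", example["question"])
    print("model answer:", answer.split("\n")[0].strip())
    print("reference (best answer):", example["best_answer"])
    print()

In the paper, generated answers like these are then judged against sets of true and false reference answers; the reported 58% (best model) and 94% (human) truthfulness figures come from that judgment step, not from simple string matching.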

Last updated: 2021-09-17