CodeBLEU: a Method for Automatic Evaluation of Code Synthesis
arXiv - CS - Software Engineering. Pub Date: 2020-09-22, DOI: arxiv-2009.10297
Shuo Ren, Daya Guo, Shuai Lu, Long Zhou, Shujie Liu, Duyu Tang, Neel Sundaresan, Ming Zhou, Ambrosio Blanco, Shuai Ma

Evaluation metrics play a vital role in the growth of an area, as they define the standard for distinguishing between good and bad models. In the area of code synthesis, the commonly used evaluation metrics are BLEU and perfect accuracy, but neither is well suited to evaluating code: BLEU was originally designed to evaluate natural language and neglects important syntactic and semantic features of code, while perfect accuracy is too strict and thus underestimates different outputs that share the same semantic logic. To remedy this, we introduce a new automatic evaluation metric, dubbed CodeBLEU. It absorbs the strength of BLEU in n-gram matching and further injects code syntax via abstract syntax trees (AST) and code semantics via data flow. We conduct experiments by evaluating the correlation coefficient between CodeBLEU and quality scores assigned by programmers on three code synthesis tasks: text-to-code, code translation, and code refinement. Experimental results show that CodeBLEU achieves a better correlation with programmer-assigned scores than BLEU and accuracy do.
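For context, the paper combines four component scores: the standard n-gram match (BLEU), a keyword-weighted n-gram match, an AST match, and a data-flow match. Below is a minimal Python sketch of that weighted combination, assuming equal default weights of 0.25; the parameter names and the scoring functions that would produce the four component values are hypothetical placeholders, not the authors' API.

# Minimal sketch of the CodeBLEU combination described in the abstract.
# The four component scores are assumed to be computed elsewhere; the
# names below are illustrative placeholders, not the authors' code.

def code_bleu(ngram_score: float,
              weighted_ngram_score: float,
              ast_match_score: float,
              dataflow_match_score: float,
              weights: tuple = (0.25, 0.25, 0.25, 0.25)) -> float:
    """Combine the four component scores as a weighted sum:

    CodeBLEU = alpha * BLEU        (token n-gram match)
             + beta  * BLEU_weight (keyword-weighted n-gram match)
             + gamma * Match_ast   (abstract-syntax-tree match)
             + delta * Match_df    (data-flow match)
    """
    alpha, beta, gamma, delta = weights
    return (alpha * ngram_score
            + beta * weighted_ngram_score
            + gamma * ast_match_score
            + delta * dataflow_match_score)

# Example: a candidate that matches tokens fairly well but diverges in data flow.
score = code_bleu(0.62, 0.58, 0.71, 0.40)
print(f"CodeBLEU = {score:.3f}")  # CodeBLEU = 0.578

The weighted-sum design lets each signal compensate for the others: two programs with different identifiers but identical structure and data flow can still score highly even when plain n-gram overlap is low.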

Updated: 2020-09-29