Pre-trained Contextual Embedding of Source Code
arXiv - CS - Software Engineering. Pub Date: 2019-12-21, DOI: arxiv-2001.00059
Aditya Kanade; Petros Maniatis; Gogul Balakrishnan; Kensen Shi

The source code of a program serves not only as a formal description of an executable task, but also as a means of communicating developer intent in human-readable form. To facilitate this, developers use meaningful identifier names and natural-language documentation, which makes it possible to apply sequence-modeling approaches, shown to be effective in natural-language processing, to source code. A major advancement in natural-language understanding has been the use of pre-trained token embeddings; BERT and related work have further shown that pre-trained contextual embeddings can be extremely powerful and can be fine-tuned effectively for a variety of downstream supervised tasks. Inspired by these developments, we present the first attempt to replicate this success on source code. We curate a massive corpus of Python programs from GitHub to pre-train a BERT model, which we call Code Understanding BERT (CuBERT). We also pre-train Word2Vec embeddings on the same dataset. We create a benchmark of five classification tasks and compare fine-tuned CuBERT against sequence models trained with and without the Word2Vec embeddings. Our results show that CuBERT outperforms the baseline methods by a margin of 2.9-22%. We also show its superiority when fine-tuned on smaller datasets and for fewer epochs. We further evaluate CuBERT's effectiveness on a joint classification, localization, and repair task involving the prediction of two pointers.
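As a rough illustration of the fine-tuning recipe the abstract describes, the sketch below adapts a pre-trained BERT-style encoder to a binary code-classification task using the Hugging Face transformers library. The checkpoint name bert-base-uncased, the code snippet, and the label are placeholders for illustration only; they are not the paper's CuBERT release or its benchmark data.

```python
# Hypothetical sketch: fine-tuning a BERT-style encoder to classify a
# Python snippet, in the spirit of the paper's classification benchmark.
# "bert-base-uncased" stands in for a CuBERT checkpoint; the snippet and
# label are made up for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # e.g., buggy vs. correct
)

# Tokenize one source-code snippet as a plain string.
snippet = "def swap(a, b):\n    return b, a"
inputs = tokenizer(snippet, truncation=True, max_length=512,
                   return_tensors="pt")

# One supervised fine-tuning step on a single labeled example.
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
labels = torch.tensor([1])  # placeholder class label
optimizer.zero_grad()
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
optimizer.step()
```

In the paper, such fine-tuned models are compared against sequence-model baselines trained with and without Word2Vec embeddings pre-trained on the same corpus.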
Updated: 2020-01-04