Database Tuning using Natural Language Processing
ACM SIGMOD Record ( IF 1.1 ) Pub Date : 2021-12-02 , DOI: 10.1145/3503780.3503788
Immanuel Trummer

Introduction. We have seen significant advances in the state of the art in natural language processing (NLP) over the past few years [20]. These advances have been driven by new neural network architectures, in particular the Transformer model [19], as well as the successful application of transfer learning approaches to NLP [13]. Typically, training for specific NLP tasks starts from large language models that have been pre-trained on generic tasks (e.g., predicting obfuscated words in text [5]) for which large amounts of training data are available. Using such models as a starting point reduces task-specific training cost as well as the number of required training samples by orders of magnitude [7]. These advances motivate new use cases for NLP methods in the context of databases.
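The generic pre-training task mentioned above, predicting obfuscated (masked) words from their context, can be illustrated with a toy sketch. The example below is not a Transformer or a real language model; it stands in for the objective using simple bigram context counts over a hypothetical mini-corpus, purely to make the "cloze" prediction idea concrete.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which words appear between each (left, right) context pair."""
    model = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.lower().split()
        for i in range(1, len(tokens) - 1):
            model[(tokens[i - 1], tokens[i + 1])][tokens[i]] += 1
    return model

def predict_masked(model, left, right):
    """Predict the masked word as the most frequent filler for this context."""
    candidates = model[(left.lower(), right.lower())]
    return candidates.most_common(1)[0][0] if candidates else None

# Hypothetical mini-corpus; real pre-training uses web-scale text.
corpus = [
    "the query optimizer chooses a plan",
    "the query optimizer estimates cost",
    "a database tuner chooses a configuration",
]
model = train_bigrams(corpus)
# Fill in the blank: "the query [MASK] chooses ..."
print(predict_masked(model, "query", "chooses"))
```

A neural language model replaces the count table with a network that generalizes across contexts, which is what makes the pre-trained weights a useful starting point for downstream tasks.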
