A generic LSTM neural network architecture to infer heterogeneous model transformations
Software and Systems Modeling (IF 2.0) Pub Date: 2021-05-31, DOI: 10.1007/s10270-021-00893-y
Loli Burgueño, Jordi Cabot, Shuai Li, Sébastien Gérard

Models capture relevant properties of systems. During the models' life-cycle, they are subjected to manipulations with different goals, such as managing software evolution, performing analysis, increasing developers' productivity, and reducing human errors. Typically, these manipulation operations are implemented as model transformations. Examples of these transformations are (i) model-to-model transformations for model evolution, model refactoring, model merging, model migration, model refinement, etc., (ii) model-to-text transformations for code generation, and (iii) text-to-model transformations for reverse engineering. These operations are usually implemented manually, using general-purpose languages such as Java or domain-specific languages (DSLs) such as ATL or Acceleo. Even when using such DSLs, transformations are still time-consuming and error-prone. We propose using advances in artificial intelligence techniques to learn these manipulation operations on models and automate the process, freeing the developer from building specific pieces of code. In particular, our proposal is a generic neural network architecture suitable for heterogeneous model transformations. Our architecture comprises an encoder–decoder long short-term memory (LSTM) network with an attention mechanism. It is fed with pairs of input–output examples and, once trained, given an input, automatically produces the expected output. We present the architecture and illustrate the feasibility and potential of our approach through its application in two main operations on models: model-to-model transformations and code generation. The results confirm that neural networks are able to faithfully learn how to perform these tasks as long as enough data are provided and no contradictory examples are given.
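The core of the attention mechanism mentioned in the abstract can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration of dot-product attention over encoder hidden states — the paper's exact scoring function and LSTM details are not specified here, and all values are toy data, not taken from the work:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(encoder_states, decoder_state):
    """Dot-product attention: score each encoder time step against the
    current decoder state, normalize the scores into a distribution,
    and return the weighted sum of encoder states (the context vector)."""
    scores = encoder_states @ decoder_state   # shape (T,)
    weights = softmax(scores)                 # shape (T,), sums to 1
    context = weights @ encoder_states        # shape (d,)
    return context, weights

# Toy example: 4 encoder time steps, hidden size 3 (illustrative values only).
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))   # encoder hidden states, one row per input token
s = rng.normal(size=3)        # current decoder hidden state
ctx, w = attention_context(H, s)
print(w.sum())                # the weights form a distribution over input tokens
```

At each decoding step, the context vector lets the decoder focus on the parts of the input model (or input token sequence) most relevant to the output element being produced, which is why attention helps on long, structured transformation examples.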



