The Train Benchmark: cross-technology performance evaluation of continuous model queries.
Software and Systems Modeling (IF 2), Pub Date: 2017-01-17, DOI: 10.1007/s10270-016-0571-8
Gábor Szárnyas, Benedek Izsó, István Ráth, Dániel Varró

In the model-driven development of safety-critical systems (such as automotive, avionics, or railway systems), the well-formedness of models is validated repeatedly in order to detect design flaws as early as possible. In many industrial tools, validation rules are still often implemented by large amounts of imperative model traversal code, which makes those rule implementations complicated and hard to maintain. Additionally, as models rapidly increase in size and complexity, efficient execution of validation rules is challenging for currently available tools. Well-formedness constraints can be captured as declarative queries over graph models, while model update operations can be specified as model transformations. This paper presents a benchmark for systematically assessing the scalability of validating and revalidating well-formedness constraints over large graph models. The benchmark defines well-formedness validation scenarios in the railway domain: a metamodel, an instance model generator, and a set of well-formedness constraints captured by queries, together with fault injection and repair operations (imitating the work of systems engineers by model transformations). The benchmark focuses on the performance of query evaluation, i.e., execution time and memory consumption, with a particular emphasis on reevaluation. We demonstrate that the benchmark can be adapted to various technologies and query engines, including modeling tools as well as relational, graph, and semantic databases. The Train Benchmark is available as an open-source project with continuous builds at https://github.com/FTSRG/trainbenchmark.
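To illustrate the kind of validation code the abstract contrasts with declarative queries, the following minimal Java sketch checks a simple railway constraint ("every switch must be monitored by at least one sensor") by imperative model traversal. The class and method names are hypothetical and do not come from the actual Train Benchmark metamodel or API; a query engine evaluated by the benchmark would instead express this constraint as a declarative graph pattern and incrementally reevaluate it after each model update.

import java.util.List;

// Hypothetical model classes, loosely inspired by the railway domain described above.
class Sensor { }

class Switch {
    private final List<Sensor> monitoredBy;
    Switch(List<Sensor> monitoredBy) { this.monitoredBy = monitoredBy; }
    List<Sensor> getMonitoredBy() { return monitoredBy; }
}

class RailwayModel {
    private final List<Switch> switches;
    RailwayModel(List<Switch> switches) { this.switches = switches; }
    List<Switch> getSwitches() { return switches; }
}

public class SwitchMonitoredCheck {
    // Imperative traversal: visit every switch and collect those without a sensor.
    // A declarative engine would express the same constraint as a graph pattern
    // (switches with no monitoring sensor) and reevaluate it after each model change.
    static List<Switch> findUnmonitoredSwitches(RailwayModel model) {
        return model.getSwitches().stream()
                .filter(sw -> sw.getMonitoredBy().isEmpty())
                .toList();
    }

    public static void main(String[] args) {
        RailwayModel model = new RailwayModel(List.of(
                new Switch(List.of(new Sensor())),
                new Switch(List.of())));           // this switch violates the constraint
        System.out.println("Violations: " + findUnmonitoredSwitches(model).size());
    }
}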
