RePOR: Mimicking humans on refactoring tasks. Are we there yet?
Empirical Software Engineering (IF 3.5) Pub Date: 2020-06-07, DOI: 10.1007/s10664-020-09826-7
Rodrigo Morales, Foutse Khomh, Giuliano Antoniol

Refactoring is a maintenance activity that aims to improve design quality while preserving the behavior of a system. Several (semi-)automated approaches have been proposed to support developers in this maintenance activity, based on the correction of anti-patterns, which are "poor" solutions to recurring design problems. However, little quantitative evidence exists about the impact of automatically refactored code on program comprehension, or about the contexts in which automated refactoring can be as effective as manual refactoring. Leveraging RePOR, an automated refactoring approach based on partial order reduction techniques, we performed an empirical study to investigate whether automatically refactored code structure affects the understandability of systems during comprehension tasks. (1) We surveyed 80 developers, asking them to identify, from a set of 20 refactoring changes, whether each was generated by developers or by a tool, and to rate the refactoring changes according to their design quality; (2) we asked 30 developers to complete code comprehension tasks on 10 systems that were refactored by either a freelancer or an automated refactoring tool. To make the comparison fair, for a subset of refactoring actions that introduce new code entities, only synthetic identifiers were presented to practitioners. We measured developers' performance using the NASA task load index (NASA-TLX) for their effort, the time that they spent performing the tasks, and their percentage of correct answers. Our findings show that, despite the limitations of current technology, it is reasonable to expect a refactoring tool to match developer code. Indeed, results show that for 3 out of the 5 anti-pattern types studied, developers could not recognize the origin of the refactoring (i.e., whether it was performed by a human or an automated tool).
We also observed that developers do not prefer human refactorings over automated refactorings, except when refactoring Blob classes, and that there is no statistically significant difference between the impact on code understandability of human refactorings and automated refactorings. We conclude that automated refactorings can be as effective as manual refactorings. However, for complex anti-pattern types like the Blob, the perceived quality achieved by developers is slightly higher.
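To make the Blob anti-pattern and its correction concrete: a Blob (or God class) concentrates many unrelated responsibilities in one class, and a common correction is an Extract Class refactoring that moves one responsibility into its own collaborator. The following is a minimal illustrative sketch in Python (for brevity; the systems studied in the paper are refactored by RePOR, and the `OrderBlob`, `Order`, and `ReceiptFormatter` names here are hypothetical, not drawn from those systems):

```python
# Before: a miniature Blob mixing domain logic with presentation.
class OrderBlob:
    def __init__(self):
        self.items = []

    def add_item(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

    def format_receipt(self):  # presentation concern mixed into the domain class
        return "\n".join(f"{name}: {price}" for name, price in self.items)


# After Extract Class: the presentation concern lives in its own collaborator.
class Order:
    def __init__(self):
        self.items = []

    def add_item(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


class ReceiptFormatter:
    def format(self, order):
        return "\n".join(f"{name}: {price}" for name, price in order.items)
```

The refactored version preserves behavior (the study's core constraint) while separating responsibilities, which is exactly the property participants were asked to judge without knowing whether a human or a tool produced the change.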

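The developer-effort measure mentioned above, the NASA task load index, rates six standard subscales (mental, physical, and temporal demand, performance, effort, frustration) from 0 to 100; the raw (unweighted) TLX score is their mean. A minimal sketch of that computation, assuming raw TLX without the pairwise weighting step (the `raw_tlx` function and the example ratings are illustrative, not the paper's instrument):

```python
# The six standard NASA-TLX subscales, each rated 0-100.
SUBSCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def raw_tlx(ratings):
    """Raw (unweighted) NASA-TLX: the mean of the six subscale ratings."""
    missing = set(SUBSCALES) - ratings.keys()
    if missing:
        raise ValueError(f"missing subscales: {sorted(missing)}")
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

# Hypothetical ratings from one comprehension task:
score = raw_tlx({"mental": 70, "physical": 10, "temporal": 55,
                 "performance": 30, "effort": 65, "frustration": 40})
print(score)  # 45.0
```

In the study this workload score is analyzed alongside task completion time and the percentage of correct answers, so no single measure alone decides whether a refactoring helped or hindered comprehension.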
Updated: 2020-06-07