Special issue on new generations of UI testing
Software Testing, Verification and Reliability (IF 1.5), Pub Date: 2021-03-17, DOI: 10.1002/stvr.1770
Emil Alégroth, Luca Ardito, Riccardo Coppola, Robert Feldt

Market demands for faster delivery and higher software quality are becoming progressively more stringent. A key hindrance for software companies in meeting those demands is how to test the software, given the intrinsic costs of developing, maintaining and evolving testware, especially since testware should be defined and kept aligned with all layers of the system under test (SUT), including all user interface (UI) abstraction levels. UI-based test approaches are forms of end-to-end (E2E) testing: the interaction with the system is carried out by mimicking the operations that a human user would perform. Regarding graphical user interfaces (GUIs), different GUI-based test approaches exist according to the layer of abstraction of the GUI that is considered for creating test locators and oracles: first-generation, or coordinate-based, tests use the exact position on the screen to identify the elements to interact with; second-generation, or layout-based, tests leverage GUI properties as locators; and third-generation, or visual, tests make use of image recognition.
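The three generations can be contrasted on a toy in-memory model of a screen. This is only a sketch with hypothetical widget and function names; real tools drive an actual GUI through screen coordinates, property queries or image recognition:

```python
from dataclasses import dataclass

# Hypothetical toy model of a GUI: each widget has screen bounds,
# declared properties (id, text) and a rendered bitmap signature.
@dataclass
class Widget:
    id: str
    text: str
    bounds: tuple  # (x, y, width, height)
    bitmap: str    # stand-in for the widget's rendered pixels

screen = [
    Widget("btn_ok", "OK", (100, 200, 80, 30), "px:ok-button"),
    Widget("btn_cancel", "Cancel", (200, 200, 80, 30), "px:cancel-button"),
]

# 1st generation (coordinate-based): locate by exact screen position.
def find_by_coordinates(widgets, x, y):
    for w in widgets:
        wx, wy, ww, wh = w.bounds
        if wx <= x < wx + ww and wy <= y < wy + wh:
            return w
    return None

# 2nd generation (layout-based): locate by a declared GUI property.
def find_by_property(widgets, **props):
    for w in widgets:
        if all(getattr(w, k) == v for k, v in props.items()):
            return w
    return None

# 3rd generation (visual): locate by matching the rendered image.
def find_by_image(widgets, bitmap):
    for w in widgets:
        if w.bitmap == bitmap:
            return w
    return None

# All three strategies resolve to the same widget here, but they break
# differently: moving the button breaks (1), renaming a property breaks (2),
# and restyling its appearance breaks (3).
assert find_by_coordinates(screen, 120, 210).id == "btn_ok"
assert find_by_property(screen, text="OK").id == "btn_ok"
assert find_by_image(screen, "px:ok-button").id == "btn_ok"
```

The differing failure modes sketched in the final comment are precisely why the approaches are considered complementary.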

The three approaches provide various benefits and drawbacks. They are seldom used together because of the costs mentioned above, despite growing academic evidence of their complementary benefits. User interfaces are, however, not limited to GUIs, especially with the recent diffusion of innovative types of user interfaces (e.g., conversational, voice-recognition, gesture-based and textual UIs) that are still rarely tested by developers. Testing techniques can also be distinguished by the way the test scripts are obtained: written manually as JUnit-like test scripts, recorded by capturing interactions with the SUT, or generated automatically by traversing a model of the user interface, as modern model-based testing tools do. Test automation is a well-rooted practice in the industrial environment. However, there are software development domains, e.g., web and mobile apps, where UI testing is still not adopted on a systematic basis. The results of many investigations in the literature highlight several reasons for this lack of penetration of the most evolved UI testing techniques among developers:

  1. Scarce documentation of the available testing tools;
  2. Significant maintenance effort to keep the test scripts aligned with the evolution of the application under test (AUT), e.g., for performing regression testing;
  3. Limited perception of the benefits that advanced UI testing techniques yield compared with traditional manual testing.
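The model-based script-generation mode mentioned earlier can be illustrated with a minimal sketch: given a (hypothetical) model of the UI as a graph of screens and interactions, test sequences are obtained by traversing the model rather than being hand-written or recorded. This is an idealized toy, not any specific tool's algorithm:

```python
from collections import deque

# Hypothetical UI model: screen -> {user action: resulting screen}
ui_model = {
    "login":    {"submit_valid": "home", "submit_invalid": "login"},
    "home":     {"open_settings": "settings", "logout": "login"},
    "settings": {"back": "home"},
}

def generate_tests(model, start, max_len):
    """Breadth-first traversal of the UI model; every path from the
    start screen becomes a sequence of UI actions, i.e., a test script."""
    tests = []
    queue = deque([(start, [])])
    while queue:
        screen, path = queue.popleft()
        if path:
            tests.append(path)
        if len(path) == max_len:
            continue
        for action, next_screen in model.get(screen, {}).items():
            queue.append((next_screen, path + [action]))
    return tests

tests = generate_tests(ui_model, "login", max_len=2)
assert ["submit_valid"] in tests
assert ["submit_valid", "open_settings"] in tests
```

A maintenance consequence follows directly: when the AUT evolves, only the model needs updating, and the whole test suite is regenerated, which addresses barrier 2 above.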

The present special issue focuses on UI-based testing; it does not take into account other forms of testing (e.g., unit, integration or performance testing) that sit at lower layers than E2E testing and that, in general, do not involve the final UI of the application under test. On the other hand, the special issue does not focus on a specific application domain: the goal is to provide the reader with a broad perspective on UI-based E2E test automation regardless of the domain in which an application falls.

The received papers went through the same reviewing process as regular papers. At least three reviewers reviewed each paper, and after a rigorous selection process, five papers were selected for publication.

The first paper, ‘Functional Test Generation from UI Test Scenarios using Reinforcement Learning for Android Applications’ by Yavuz Koroglu and Alper Sen, presents a methodology to generate GUI test scenarios for Android applications, named FARLEAD-Android (Fully Automated Reinforcement LEArning-Driven Specification-Based Test Generator). The test generator defines test sequences based on formal specifications expressed as linear-time temporal logic (LTL) formulas. The authors' evaluation showed the technique to be more efficient than three other state-of-the-art testing engines: Random, Monkey and QBEa.
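As a rough illustration of how a temporal-logic formula can serve as a test oracle over a GUI event trace, consider a minimal checker for two LTL patterns under simplified finite-trace semantics. This is only a sketch of the underlying idea, not FARLEAD-Android's actual machinery:

```python
# A minimal finite-trace checker for two LTL patterns used as GUI oracles:
# F p ("eventually p") and G p ("always p").

def eventually(trace, prop):
    """F prop: prop holds in at least one state of the finite trace."""
    return any(prop(state) for state in trace)

def always(trace, prop):
    """G prop: prop holds in every state of the finite trace."""
    return all(prop(state) for state in trace)

# Hypothetical GUI event trace produced by executing a generated test:
# each state records the visible screen and whether an error dialog is shown.
trace = [
    {"screen": "login", "error_shown": False},
    {"screen": "home",  "error_shown": False},
]

# Oracle: the app eventually reaches the home screen, and no error dialog
# is ever shown along the way.
assert eventually(trace, lambda s: s["screen"] == "home")
assert always(trace, lambda s: not s["error_shown"])
```

In a reinforcement-learning setting such as FARLEAD's, progress toward satisfying the formula can be turned into a reward signal that guides the generator toward relevant action sequences.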

The second paper, ‘TESTAR – Scriptless Testing through Graphical User Interface’ by Tanja Vos, Pekka Aho, Fernando Pastor Ricos, Olivia Rodriguez Valdes and Ad Mulders, describes an open‐source tool, TESTAR, which complements scripted testing with scriptless test automation. The manuscript provides a comprehensive description of the characteristics and features of TESTAR, along with an overview of the state of the research and experimentation agenda with the tool.

The third paper, ‘Comparing the Effectiveness of Capture and Replay against Automatic Input Generation for Android GUI Testing,’ by Porfirio Tramontana, Sergio Di Martino, Anna Rita Fasolino and Luigi Starace, describes two experiments conducted to compare the effectiveness of capture & replay against freely available automated testing tools. The experiments, which involved a sample of computer engineering students, showed that test cases generated with capture & replay techniques outperformed the automated tools in terms of achieved coverage, especially for complex execution scenarios.

The fourth paper, ‘Generating and selecting resilient and maintainable locators for Web automated testing’ by Vu Nguyen, Than To and Gia-Han Diep, defines an approach to generate and select resilient, maintainable locators for automated web GUI testing, relying on the semantic structure of web pages. The approach outperformed state-of-the-art tools (namely, Selenium IDE and Robula+) in its capability of locating target elements and avoiding wrong locators.

Finally, the fifth paper, ‘SIDEREAL: Statistical Adaptive Generation of Robust Locators for End-to-End Web Testing,’ by Maurizio Leotta, Filippo Ricca and Paolo Tonella, tackles the issue of generating robust XPath locators by interpreting it as a graph exploration problem, instead of relying on ad-hoc heuristics as the state-of-the-art tool Robula+ does. The authors describe SIDEREAL, a tool that outperforms Robula+ in the robustness of the generated XPath locators.
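The kind of fragility that robust-locator generators address can be reproduced with Python's standard xml.etree on a toy page. This is a simplified sketch of the problem; neither Robula+'s heuristics nor SIDEREAL's graph-exploration algorithm is shown here:

```python
import xml.etree.ElementTree as ET

# Original page: a single div wrapping the Save button.
page_v1 = ET.fromstring(
    "<html><body><div><button id='save'>Save</button></div></body></html>"
)
# After a redesign, a banner div is inserted before the original one.
page_v2 = ET.fromstring(
    "<html><body><div class='banner' />"
    "<div><button id='save'>Save</button></div></body></html>"
)

# Fragile, structure-based locator: absolute path with positional indices.
fragile = "./body/div[1]/button"
# Robust, attribute-based locator of the kind robust-locator tools aim for.
robust = ".//button[@id='save']"

assert page_v1.find(fragile) is not None    # works on the old page
assert page_v2.find(fragile) is None        # breaks after the redesign
assert page_v2.find(robust).text == "Save"  # still resolves
```

Generating the robust variant automatically, for arbitrary pages and without stable `id` attributes to lean on, is exactly the hard part that these papers tackle.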

We would like to thank all the authors who submitted their research articles to the special issue on New Generations of UI Testing in the Software Testing, Verification and Reliability journal, as well as all the reviewers whose comments and recommendations helped us ensure the quality of the special issue and helped the authors improve their papers. Special thanks go to the STVR editors-in-chief, Robert M. Hierons and Tao Xie, for their guidance and support throughout this process.



