The IEEE 12th International Conference on Software Testing, Verification & Validation
Software Testing, Verification and Reliability. Pub Date: 2021-04-28. DOI: 10.1002/stvr.1773
Atif M. Memon, Myra B. Cohen

The IEEE 12th International Conference on Software Testing, Verification & Validation (ICST 2019) was held in Xi'an, China. The aim of the ICST conference is to bring together researchers and practitioners who study the theory, techniques, technologies, and applications concerning all aspects of the testing, verification, and validation of software systems. The program committee rigorously reviewed 110 full papers under a double-blind reviewing policy. Each paper received at least three regular reviews and then went through a discussion phase, led by a meta-reviewer, in which the reviewers reached a final decision on the paper. Out of this process, the committee selected 31 full-length papers for the conference. These were presented in the main conference track over nine sessions, ranging from classical topics such as test generation and test coverage to emerging topics such as machine learning and security.

Based on the original reviewers' feedback, we selected five papers for consideration for this special issue of STVR. These papers were extended from their conference versions by the authors and were reviewed according to the standard STVR reviewing process. We thank all the ICST and STVR reviewers for their hard work. Three papers successfully completed the review process and appear in this special issue. The rest of this editorial provides a brief overview of these three papers.

The first paper, Automated Visual Classification of DOM-based Presentation Failure Reports for Responsive Web Pages, by Ibrahim Althomali, Gregory Kapfhammer, and Phil McMinn, introduces VERVE, a tool that automatically classifies hard-to-detect responsive layout failures (RLFs) in web pages. An empirical study reveals that VERVE's classification of all five types of RLFs frequently agrees with classifications produced manually by humans.

The second paper, BugsJS: A Benchmark and Taxonomy of JavaScript Bugs, by Péter Gyimesi, Béla Vancsics, Andrea Stocco, Davood Mazinanian, Árpád Beszédes, Rudolf Ferenc, and Ali Mesbah, presents BugsJS, a benchmark of 453 real, manually validated JavaScript bugs from 10 popular server-side JavaScript programs comprising 444k LOC in total. Each bug is accompanied by its bug report, the test cases that expose it, and the patch that fixes it. BugsJS can facilitate reproducible empirical studies and comparisons of JavaScript analysis and testing tools.

The third paper, Statically Driven Generation of Concurrent Tests for Thread-Safe Classes, by Valerio Terragni and Mauro Pezzè, presents DEPCON+, a novel approach that reduces the search space of concurrent tests by leveraging statically computed dependencies among public methods. DEPCON+ exploits the intuition that concurrent tests can expose thread-safety violations that manifest as exceptions or deadlocks only if they exercise specific method dependencies. The results show that DEPCON+ is more effective than state-of-the-art approaches at exposing concurrency faults.
