The IEEE 12th International Conference on Software Testing, Verification & Validation

IF 1.5 | CAS Tier 4, Computer Science | JCR Q3, COMPUTER SCIENCE, SOFTWARE ENGINEERING
A. Memon, Myra B. Cohen
{"title":"第十二届IEEE软件测试、验证与确认国际会议","authors":"A. Memon, Myra B. Cohen","doi":"10.1002/stvr.1773","DOIUrl":null,"url":null,"abstract":"The IEEE 12th International Conference on Software Testing, Verification & Validation (ICST 2019) was held in Xi’an, China. The aim of the ICST conference is to bring together researchers and practitioners who study the theory, techniques, technologies, and applications that concern all aspects of software testing, verification, and validation of software systems. The program committee rigorously reviewed 110 full papers using a double-blind reviewing policy. Each paper received at least three regular reviews and went through a discussion phase where the reviewers made final decisions on each paper, each discussion being led by a meta-reviewer. Out of this process, the committee selected 31 full-length papers that appeared in the conference. These were presented over nine sessions ranging from classical topics such as test generation and test coverage to emerging topics such as machine learning and security during the main conference track. Based on the original reviewers’ feedback, we selected five papers for consideration for this special issue of STVR. These papers were extended from their conference version by the authors and were reviewed according to the standard STVR reviewing process. We thank all the ICST and STVR reviewers for their hardwork. Three papers successfully completed the reviewprocess and are contained in this special issue. The rest of this editorial provides a brief overview of these three papers. The first paper, Automated Visual Classification of DOM-based Presentation Failure Reports for Responsive Web Pages, by Ibrahim Althomali, Gregory Kapfhammer, and Phil McMinn, introduces VERVE, a tool that automatically classifies all hard to detect response layout failures (RLFs) in web applications. An empirical study reveals that VERVE’s classification of all five types of RLFs frequently agrees with classifications produced manually by humans. The second paper, BugsJS: A Benchmark and Taxonomy of JavaScript Bugs, by Péter Gyimesi, Béla Vancsics, Andrea Stocco, Davood Mazinanian, Árpád Beszédes, Rudolf Ferenc, and Ali Mesbah, presents, BugsJS, a benchmark of 453 real, manually validated JavaScript bugs from 10 popular JavaScript server-side programs, comprising 444 k LOC in total. Each bug is accompanied by its bug report, the test cases that expose it, as well as the patch that fixes it. BugJS can help facilitate reproducible empirical studies and comparisons of JavaScript analysis and testing tools. The third paper, Statically Driven Generation of Concurrent Tests for Thread-Safe Classes, by Valerio Terragni andMauro Pezzè presentsDEPCON, a novel approach that reduces the search space ofconcurrent tests by leveraging statically computeddependencies amongpublicmethods.DEPCON exploits the intuition that concurrent tests can expose thread-safety violations thatmanifest exceptions or deadlocks, only if they exercise some specific method dependencies. 
The results show that DEPCON is more effective than state-of-the-art approaches in exposing concurrency faults.","PeriodicalId":49506,"journal":{"name":"Software Testing Verification & Reliability","volume":"28 1","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The IEEE 12th International Conference on Software Testing, Verification & Validation\",\"authors\":\"A. Memon, Myra B. Cohen\",\"doi\":\"10.1002/stvr.1773\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The IEEE 12th International Conference on Software Testing, Verification & Validation (ICST 2019) was held in Xi’an, China. The aim of the ICST conference is to bring together researchers and practitioners who study the theory, techniques, technologies, and applications that concern all aspects of software testing, verification, and validation of software systems. The program committee rigorously reviewed 110 full papers using a double-blind reviewing policy. Each paper received at least three regular reviews and went through a discussion phase where the reviewers made final decisions on each paper, each discussion being led by a meta-reviewer. Out of this process, the committee selected 31 full-length papers that appeared in the conference. These were presented over nine sessions ranging from classical topics such as test generation and test coverage to emerging topics such as machine learning and security during the main conference track. Based on the original reviewers’ feedback, we selected five papers for consideration for this special issue of STVR. These papers were extended from their conference version by the authors and were reviewed according to the standard STVR reviewing process. We thank all the ICST and STVR reviewers for their hardwork. Three papers successfully completed the reviewprocess and are contained in this special issue. The rest of this editorial provides a brief overview of these three papers. The first paper, Automated Visual Classification of DOM-based Presentation Failure Reports for Responsive Web Pages, by Ibrahim Althomali, Gregory Kapfhammer, and Phil McMinn, introduces VERVE, a tool that automatically classifies all hard to detect response layout failures (RLFs) in web applications. An empirical study reveals that VERVE’s classification of all five types of RLFs frequently agrees with classifications produced manually by humans. The second paper, BugsJS: A Benchmark and Taxonomy of JavaScript Bugs, by Péter Gyimesi, Béla Vancsics, Andrea Stocco, Davood Mazinanian, Árpád Beszédes, Rudolf Ferenc, and Ali Mesbah, presents, BugsJS, a benchmark of 453 real, manually validated JavaScript bugs from 10 popular JavaScript server-side programs, comprising 444 k LOC in total. Each bug is accompanied by its bug report, the test cases that expose it, as well as the patch that fixes it. BugJS can help facilitate reproducible empirical studies and comparisons of JavaScript analysis and testing tools. The third paper, Statically Driven Generation of Concurrent Tests for Thread-Safe Classes, by Valerio Terragni andMauro Pezzè presentsDEPCON, a novel approach that reduces the search space ofconcurrent tests by leveraging statically computeddependencies amongpublicmethods.DEPCON exploits the intuition that concurrent tests can expose thread-safety violations thatmanifest exceptions or deadlocks, only if they exercise some specific method dependencies. 
The results show that DEPCON is more effective than state-of-the-art approaches in exposing concurrency faults.\",\"PeriodicalId\":49506,\"journal\":{\"name\":\"Software Testing Verification & Reliability\",\"volume\":\"28 1\",\"pages\":\"\"},\"PeriodicalIF\":1.5000,\"publicationDate\":\"2021-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Software Testing Verification & Reliability\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1002/stvr.1773\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Software Testing Verification & Reliability","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1002/stvr.1773","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0

Abstract

The IEEE 12th International Conference on Software Testing, Verification & Validation (ICST 2019) was held in Xi'an, China. The aim of the ICST conference is to bring together researchers and practitioners who study the theory, techniques, technologies, and applications that concern all aspects of software testing, verification, and validation of software systems. The program committee rigorously reviewed 110 full papers using a double-blind reviewing policy. Each paper received at least three regular reviews and then went through a discussion phase, led by a meta-reviewer, in which the reviewers made the final decision on each paper. From this process, the committee selected 31 full-length papers that appeared at the conference. These were presented in the main conference track over nine sessions, ranging from classical topics such as test generation and test coverage to emerging topics such as machine learning and security. Based on the original reviewers' feedback, we selected five papers for consideration for this special issue of STVR. These papers were extended from their conference versions by the authors and were reviewed according to the standard STVR reviewing process. We thank all the ICST and STVR reviewers for their hard work. Three papers successfully completed the review process and are contained in this special issue. The rest of this editorial provides a brief overview of these three papers.

The first paper, Automated Visual Classification of DOM-based Presentation Failure Reports for Responsive Web Pages, by Ibrahim Althomali, Gregory Kapfhammer, and Phil McMinn, introduces VERVE, a tool that automatically classifies hard-to-detect responsive layout failures (RLFs) in web applications. An empirical study reveals that VERVE's classification of all five types of RLFs frequently agrees with classifications produced manually by humans.

The second paper, BugsJS: A Benchmark and Taxonomy of JavaScript Bugs, by Péter Gyimesi, Béla Vancsics, Andrea Stocco, Davood Mazinanian, Árpád Beszédes, Rudolf Ferenc, and Ali Mesbah, presents BugsJS, a benchmark of 453 real, manually validated JavaScript bugs from 10 popular JavaScript server-side programs, comprising 444k LOC in total. Each bug is accompanied by its bug report, the test cases that expose it, and the patch that fixes it. BugsJS can help facilitate reproducible empirical studies and comparisons of JavaScript analysis and testing tools.

The third paper, Statically Driven Generation of Concurrent Tests for Thread-Safe Classes, by Valerio Terragni and Mauro Pezzè, presents DEPCON, a novel approach that reduces the search space of concurrent tests by leveraging statically computed dependencies among public methods. DEPCON exploits the intuition that concurrent tests can expose thread-safety violations that manifest as exceptions or deadlocks only if they exercise some specific method dependencies. The results show that DEPCON is more effective than state-of-the-art approaches in exposing concurrency faults.
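To make the DEPCON intuition concrete, the following is a minimal, self-contained sketch, not taken from the paper: the NameRegistry class and the test loop are hypothetical. The thread-safety violation manifests as an exception only when a concurrent test exercises methods that depend on the same shared state, which is the kind of method dependency the abstract says DEPCON computes statically to prune the search space.

import java.util.ArrayList;
import java.util.List;

// Hypothetical class for illustration: NOT thread-safe, because last() reads
// the list size and an element in two steps that can interleave with clear().
class NameRegistry {
    private final List<String> names = new ArrayList<>();

    public void add(String name) { names.add(name); }

    public void clear() { names.clear(); }

    public String last() {
        if (names.isEmpty()) {
            return null;
        }
        return names.get(names.size() - 1); // can throw IndexOutOfBoundsException
    }
}

public class DepConIntuitionDemo {
    public static void main(String[] args) throws InterruptedException {
        NameRegistry registry = new NameRegistry();

        // A concurrent test that exercises the dependent methods add(), clear(),
        // and last(). A test calling only add() in both threads would never raise
        // an exception; the failure needs this specific dependency on the shared list.
        Thread writer = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++) {
                registry.add("user-" + i);
                registry.clear();
            }
        });

        writer.start();
        try {
            for (int i = 0; i < 1_000_000; i++) {
                registry.last(); // races with clear(): occasionally throws
            }
            System.out.println("No violation observed on this run (the race is nondeterministic).");
        } catch (IndexOutOfBoundsException e) {
            System.out.println("Thread-safety violation exposed: " + e);
        } finally {
            writer.join();
        }
    }
}

As described in the abstract, DEPCON uses statically computed dependencies among public methods to focus test generation on method combinations like add()/clear()/last() above, rather than exploring all possible combinations.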
Source journal
Software Testing Verification & Reliability (Engineering & Technology - Computer Science: Software Engineering)
CiteScore: 3.70
Self-citation rate: 0.00%
Articles per year: 34
Review time: >12 weeks
Journal description: The journal is the premier outlet for research results on the subjects of testing, verification and reliability. Readers will find useful research on issues pertaining to building better software and evaluating it. The journal is unique in its emphasis on theoretical foundations and applications to real-world software development. The balance of theory, empirical work, and practical applications provides readers with better techniques for testing, verifying and improving the reliability of software. The journal targets researchers, practitioners, educators and students that have a vested interest in results generated by high-quality testing, verification and reliability modeling and evaluation of software. Topics of special interest include, but are not limited to:
- New criteria for software testing and verification
- Application of existing software testing and verification techniques to new types of software, including web applications, web services, embedded software, aspect-oriented software, and software architectures
- Model-based testing
- Formal verification techniques such as model-checking
- Comparison of testing and verification techniques
- Measurement of and metrics for testing, verification and reliability
- Industrial experience with cutting-edge techniques
- Descriptions and evaluations of commercial and open-source software testing tools
- Reliability modeling, measurement and application
- Testing and verification of software security
- Automated test data generation
- Process issues and methods
- Non-functional testing