Open-source Defect Injection Benchmark Testbed for the Evaluation of Testing

Miroslav Bures, P. Herout, Bestoun S. Ahmed
{"title":"Open-source Defect Injection Benchmark Testbed for the Evaluation of Testing","authors":"Miroslav Bures, P. Herout, Bestoun S. Ahmed","doi":"10.1109/ICST46399.2020.00059","DOIUrl":null,"url":null,"abstract":"A natural method to evaluate the effectiveness of a testing technique is to measure the defect detection rate when applying the created test cases. Here, real or artificial software defects can be injected into the source code of software. For a more extensive evaluation, injection of artificial defects is usually needed and can be performed via mutation testing using code mutation operators. However, to simulate complex defects arising from a misunderstanding of design specifications, mutation testing might reach its limit in some cases. In this paper, we present an open-source benchmark testbed application that employs a complement method of artificial defect injection. The application is compiled after artificial defects are injected into its source code from predefined building blocks. The majority of the functions and user interface elements are covered by creating front-end-based automated test cases that can be used in experiments.","PeriodicalId":235967,"journal":{"name":"2020 IEEE 13th International Conference on Software Testing, Validation and Verification (ICST)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE 13th International Conference on Software Testing, Validation and Verification (ICST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICST46399.2020.00059","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

A natural method to evaluate the effectiveness of a testing technique is to measure the defect detection rate when applying the created test cases. Here, real or artificial software defects can be injected into the source code of software. For a more extensive evaluation, injection of artificial defects is usually needed and can be performed via mutation testing using code mutation operators. However, to simulate complex defects arising from a misunderstanding of design specifications, mutation testing might reach its limit in some cases. In this paper, we present an open-source benchmark testbed application that employs a complementary method of artificial defect injection. The application is compiled after artificial defects are injected into its source code from predefined building blocks. The majority of the functions and user interface elements are covered by front-end-based automated test cases that can be used in experiments.
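The difference between operator-level mutations and defects injected from predefined building blocks can be pictured with a short sketch. The Java fragment below is not taken from the paper's testbed; the class, method, and flag names are hypothetical, and it only shows, under those assumptions, how a compile-time switch could activate an injected defect that mimics a misunderstood specification rather than a single mutated operator.

// Hypothetical illustration only: class, method, and flag names are not taken
// from the paper's testbed.
public class OrderService {

    // Assumption for this sketch: a switch like this would be set by the
    // injection tooling before the application is compiled.
    private static final boolean DEFECT_D17_ACTIVE = true;

    /** Returns the total price of an order, applying a volume discount. */
    public double totalPrice(double unitPrice, int quantity) {
        double total = unitPrice * quantity;
        if (DEFECT_D17_ACTIVE) {
            // Injected building block: the discount threshold is misread from
            // the specification (5 items instead of 50). A single mutation
            // operator (e.g. flipping ">=" to ">") would not reproduce this
            // kind of defect.
            if (quantity >= 5) {
                total *= 0.9;
            }
        } else {
            // Intended behaviour per the (hypothetical) design specification.
            if (quantity >= 50) {
                total *= 0.9;
            }
        }
        return total;
    }
}

A front-end test that compares the displayed total against the specified discount rule would catch this defect only if it exercises quantities between 5 and 49, which is the kind of detection-rate measurement the testbed is intended to support.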