The Impact of Test Case Summaries on Bug Fixing Performance: An Empirical Investigation

Sebastiano Panichella, Annibale Panichella, M. Beller, A. Zaidman, H. Gall
{"title":"测试用例总结对Bug修复性能的影响:一项实证调查","authors":"Sebastiano Panichella, Annibale Panichella, M. Beller, A. Zaidman, H. Gall","doi":"10.1145/2884781.2884847","DOIUrl":null,"url":null,"abstract":"Automated test generation tools have been widely investigated with the goal of reducing the cost of testing activities. However, generated tests have been shownnot to help developers in detecting and finding more bugs even though they reach higher structural coverage compared to manual testing. The main reason is that generated tests are difficult to understand and maintain. Our paper proposes an approach, coined TestDescriber, which automatically generates test case summaries of the portion of code exercised by each individual test, thereby improving understandability. We argue that this approach can complement the current techniques around automated unit test generation or search-based techniques designed to generate a possibly minimal set of test cases. In evaluating our approach we found that (1) developers find twice as many bugs, and (2) test case summaries significantly improve the comprehensibility of test cases, which is considered particularly useful by developers.","PeriodicalId":6485,"journal":{"name":"2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE)","volume":"42 1","pages":"547-558"},"PeriodicalIF":0.0000,"publicationDate":"2016-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"107","resultStr":"{\"title\":\"The Impact of Test Case Summaries on Bug Fixing Performance: An Empirical Investigation\",\"authors\":\"Sebastiano Panichella, Annibale Panichella, M. Beller, A. Zaidman, H. Gall\",\"doi\":\"10.1145/2884781.2884847\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Automated test generation tools have been widely investigated with the goal of reducing the cost of testing activities. However, generated tests have been shownnot to help developers in detecting and finding more bugs even though they reach higher structural coverage compared to manual testing. The main reason is that generated tests are difficult to understand and maintain. Our paper proposes an approach, coined TestDescriber, which automatically generates test case summaries of the portion of code exercised by each individual test, thereby improving understandability. We argue that this approach can complement the current techniques around automated unit test generation or search-based techniques designed to generate a possibly minimal set of test cases. 
In evaluating our approach we found that (1) developers find twice as many bugs, and (2) test case summaries significantly improve the comprehensibility of test cases, which is considered particularly useful by developers.\",\"PeriodicalId\":6485,\"journal\":{\"name\":\"2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE)\",\"volume\":\"42 1\",\"pages\":\"547-558\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-05-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"107\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2884781.2884847\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2884781.2884847","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 107

Abstract

Automated test generation tools have been widely investigated with the goal of reducing the cost of testing activities. However, generated tests have been shown not to help developers in detecting and finding more bugs even though they reach higher structural coverage compared to manual testing. The main reason is that generated tests are difficult to understand and maintain. Our paper proposes an approach, coined TestDescriber, which automatically generates test case summaries of the portion of code exercised by each individual test, thereby improving understandability. We argue that this approach can complement the current techniques around automated unit test generation or search-based techniques designed to generate a possibly minimal set of test cases. In evaluating our approach we found that (1) developers find twice as many bugs, and (2) test case summaries significantly improve the comprehensibility of test cases, which is considered particularly useful by developers.
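
To make the idea concrete, the sketch below shows what a generated unit test annotated with a TestDescriber-style summary might look like. It is an illustrative assumption only: the choice of java.util.Stack as the class under test, the test body, and the wording of the summary comment are hypothetical and not taken from the paper or from the tool's actual output.

    import static org.junit.Assert.assertEquals;

    import java.util.Stack;
    import org.junit.Test;

    public class StackTest {

        // Hypothetical TestDescriber-style summary: this test exercises
        // push(E) and peek() of Stack. It creates an empty stack, pushes
        // the string "item", and then checks that the stack contains one
        // element and that the pushed string is on top.
        @Test
        public void testPushPlacesElementOnTop() {
            Stack<String> stack = new Stack<>();
            stack.push("item");

            assertEquals(1, stack.size());      // exactly one element after the push
            assertEquals("item", stack.peek()); // the pushed element is on top
        }
    }

Per the abstract, the summary is generated automatically and describes the portion of code exercised by each individual test; that is the role the comment above the test method plays in this sketch.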