{"title":"Scalable isolation of failure-inducing changes via version comparison","authors":"M. Ghanavati, A. Andrzejak, Zhen Dong","doi":"10.1109/ISSREW.2013.6688895","DOIUrl":null,"url":null,"abstract":"Despite of indisputable progress, automated debugging methods still face difficulties in terms of scalability and runtime efficiency. To reach large-scale projects, we propose an approach which reports small sets of suspicious code changes. Its essential strength is that size of these reports is proportional to the amount of changes between code commits, and not the total project size. In our method we combine version comparison and information on failed tests with static and dynamic analysis. We evaluate our method on real bugs from Apache Hadoop, an open source project with over 2 million LOC1. In 2 out of 4 cases, the set of suspects produced by our approach contains exactly the location of the defective code (and no false positives). Another defect could be pinpointed by small approach extensions. Moreover, the time overhead of our approach is moderate, namely 3-4 times the duration of a failed software test.","PeriodicalId":332420,"journal":{"name":"2013 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISSREW.2013.6688895","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
Despite indisputable progress, automated debugging methods still face difficulties in terms of scalability and runtime efficiency. To reach large-scale projects, we propose an approach which reports small sets of suspicious code changes. Its essential strength is that the size of these reports is proportional to the amount of changes between code commits, not the total project size. In our method we combine version comparison and information on failed tests with static and dynamic analysis. We evaluate our method on real bugs from Apache Hadoop, an open source project with over 2 million LOC. In 2 out of 4 cases, the set of suspects produced by our approach contains exactly the location of the defective code (and no false positives). Another defect could be pinpointed by small extensions of the approach. Moreover, the time overhead of our approach is moderate, namely 3-4 times the duration of a failed software test.
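The abstract describes combining version comparison with dynamic information from failed tests. A minimal sketch of that idea, assuming method-level diff hunks and method-level coverage of the failing test (the class, function, and method names below are illustrative assumptions, not the paper's implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChangeHunk:
    """One changed region from a syntactic diff of two commits."""
    file: str
    enclosing_method: str  # fully-qualified method containing the change

def suspicious_changes(diff_hunks, failing_test_coverage):
    """Return changed methods that the failing test actually executes.

    diff_hunks: ChangeHunk objects obtained by comparing two versions.
    failing_test_coverage: set of fully-qualified methods executed by the failing test.
    The report size scales with the commit diff, not with total project size.
    """
    changed = {h.enclosing_method for h in diff_hunks}
    return changed & set(failing_test_coverage)

# Example: two methods changed between commits; only one is covered by the failing test.
hunks = [ChangeHunk("Foo.java", "org.example.Foo.parse"),
         ChangeHunk("Bar.java", "org.example.Bar.render")]
coverage = {"org.example.Foo.parse", "org.example.Util.log"}
print(suspicious_changes(hunks, coverage))  # {'org.example.Foo.parse'}
```

Intersecting the two sets is why the report stays small: only code that both changed recently and was exercised by the failing test is considered suspicious.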