{"title":"What to Fix? Distinguishing between Design and Non-design Rules in Automated Tools","authors":"Neil A. Ernst, S. Bellomo, I. Ozkaya, R. Nord","doi":"10.1109/ICSA.2017.25","DOIUrl":null,"url":null,"abstract":"Design problems, frequently the result of optimizing for delivery speed, are a critical part of long-term software costs. Automatically detecting such design issues is a high priority for software practitioners. Software quality tools promise automatic detection of common software quality rule violations. However, since these tools bundle a number of rules, including rules for code quality, it is hard for users to understand which rules identify design issues in particular. Research has focused on comparing these tools on open source projects, but these comparisons have not looked at whether the rules were relevant to design. We conducted an empirical study using a structured categorization approach, and manually classified 466 software quality rules from three industry tools—CAST, SonarQube, and NDepend. We found that most of these rules were easily labeled as either non-design (55%) or design (19%). The remainder (26%) resulted in disagreements among the labelers. Our results are a first step in formalizing a definition of a design rule, to support automatic detection.","PeriodicalId":6599,"journal":{"name":"2017 IEEE International Conference on Software Architecture (ICSA)","volume":"12 1","pages":"165-168"},"PeriodicalIF":0.0000,"publicationDate":"2017-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE International Conference on Software Architecture (ICSA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSA.2017.25","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10
Abstract
Design problems, frequently the result of optimizing for delivery speed, are a critical part of long-term software costs. Automatically detecting such design issues is a high priority for software practitioners. Software quality tools promise automatic detection of common software quality rule violations. However, since these tools bundle a number of rules, including rules for code quality, it is hard for users to understand which rules identify design issues in particular. Research has focused on comparing these tools on open source projects, but these comparisons have not looked at whether the rules were relevant to design. We conducted an empirical study using a structured categorization approach, and manually classified 466 software quality rules from three industry tools—CAST, SonarQube, and NDepend. We found that most of these rules were easily labeled as either non-design (55%) or design (19%). The remainder (26%) resulted in disagreements among the labelers. Our results are a first step in formalizing a definition of a design rule, to support automatic detection.