Static deep error checking in large system applications using Parfait
C. Cifuentes, Nathan Keynes, Lian Li, Nathan Hawes, Manuel Valdiviezo, Andrew Browne, J. Zimmermann, Andrew Craik, Douglas Teoh, Christian Hoermann
ESEC/FSE '11 · Pub Date: 2011-09-05 · DOI: 10.1145/2025113.2025183
Abstract: In this paper, we introduce Parfait, a static bug-checking tool for C/C++ applications. Parfait achieves precision and scalability at the same time by employing a layered program analysis framework. In Parfait, different analyses varying in precision and runtime expense can be invoked on demand to detect defects of a specific type, effectively achieving higher precision with smaller runtime overheads. Several production organizations within Oracle have started to integrate Parfait into their development process. Feedback from various production teams suggests that it is precise and scalable: the tool is able to analyze the OpenSolaris™ operating system and network consolidation (ON), with more than 6 million lines of code, in 1 hour, and report thousands of defects with a false positive rate of close to 10%.
The onion patch: migration in open source ecosystems
C. Jergensen, A. Sarma, Patrick Wagstrom
ESEC/FSE '11 · Pub Date: 2011-09-05 · DOI: 10.1145/2025113.2025127
Abstract: Past research established that individuals joining an Open Source community typically follow a socialization process called "the onion model": newcomers join a project by first contributing at the periphery through mailing list discussions and bug trackers, and as they develop skill and reputation within the community they advance to central roles of contributing code and making design decisions. However, the modern Open Source landscape has fewer projects that operate independently; many projects now sit under the umbrella of software ecosystems that bring together projects with common underlying components, technology, and social norms. Participants in such ecosystems may be able to draw on a significant amount of transferrable knowledge when moving between projects in the ecosystem and thereby skip steps in the onion model. In this paper, we examine whether the onion model of joining and progressing in a standalone Open Source project still holds true in large project ecosystems, and how the model might change in such settings.
A software lifecycle process for context-aware adaptive systems
Marco Mori
ESEC/FSE '11 · Pub Date: 2011-09-05 · DOI: 10.1145/2025113.2025177
Abstract: It is increasingly important for computing systems to evolve their behavior at run-time because of resource uncertainty, system failures, and emerging user needs. Our approach supports software engineers in analyzing and developing context-aware adaptive applications. The software lifecycle process we propose supports static and dynamic decision-making mechanisms and consistent run-time evolution, and it is amenable to automation.
Fuzzy set and cache-based approach for bug triaging
Ahmed Tamrawi, T. Nguyen, Jafar M. Al-Kofahi, T. Nguyen
ESEC/FSE '11 · Pub Date: 2011-09-05 · DOI: 10.1145/2025113.2025163
Abstract: Bug triaging aims to assign a bug to the most appropriate fixer. That task is crucial in reducing the time and effort of the bug-fixing process. In this paper, we propose Bugzie, a novel approach for automatic bug triaging based on fuzzy set and cache-based modeling of the bug-fixing expertise of developers. Bugzie considers a software system to have multiple technical aspects, each of which is associated with technical terms. For each technical term, it uses a fuzzy set to represent the developers capable of fixing the bugs relevant to the corresponding aspect. The fixing correlation of a developer with a technical term is represented by his/her membership score in the corresponding fuzzy set. The score is calculated from the bug reports that (s)he has fixed and is updated as newly fixed bug reports become available. For a new bug report, Bugzie combines the fuzzy sets corresponding to its terms and ranks the developers by their membership scores in the combined fuzzy set to find the most capable fixers. Our empirical results show that Bugzie achieves significantly higher accuracy and time efficiency than existing state-of-the-art approaches.
Inferring test results for dynamic software product lines
B. Cafeo, J. Noppen, F. Ferrari, R. Chitchyan, A. Rashid
ESEC/FSE '11 · Pub Date: 2011-09-05 · DOI: 10.1145/2025113.2025203
Abstract: Due to the very large number of configurations that can typically be derived from a Dynamic Software Product Line (DSPL), efficient and effective testing of such systems has become a major challenge for software developers. In particular, when a configuration needs to be deployed quickly due to rapid contextual changes (e.g., in an unfolding crisis), time constraints hinder the proper testing of such a configuration. In this paper, we propose to reduce the testing required of such DSPLs to a relevant subset of configurations. Whenever a need to adapt to an untested configuration is encountered, our approach determines the most similar tested configuration and reuses its test results to either obtain a coverage measure or infer a confidence degree for the new, untested configuration. We focus on providing these techniques for inference of structural testing results for DSPLs, supported by an early prototype implementation.
Automatic structural testing with abstraction refinement and coarsening
M. Baluda
ESEC/FSE '11 · Pub Date: 2011-09-05 · DOI: 10.1145/2025113.2025173
Abstract: White box testing, also referred to as structural testing, can be used to assess the validity of test suites with respect to the implementation. The applicability of white box testing and structural coverage is limited by the difficulty and the cost of inspecting the uncovered code elements, either to generate test cases that cover elements not yet executed or to prove the infeasibility of the elements not yet covered.
My research targets the problem of increasing code coverage by automatically generating test cases that augment the coverage of the code or proving the infeasibility of uncovered elements, thus eliminating them from the coverage measure to obtain more realistic values. Although the problem is undecidable in general, the results achieved so far during my PhD indicate that it is possible to extend test suites and identify many infeasible elements by suitably combining static and dynamic analysis techniques, and that it is possible to manage the combinatorial explosion of execution models by identifying and removing elements of the execution models when they are no longer needed.
Boosting the performance of flow-sensitive points-to analysis using value flow
Lian Li, C. Cifuentes, Nathan Keynes
ESEC/FSE '11 · Pub Date: 2011-09-05 · DOI: 10.1145/2025113.2025160
Abstract: Points-to analysis is a fundamental static analysis technique which computes the set of memory objects that a pointer may point to. Many different applications, such as security-related program analyses, bug checking, and analyses of multi-threaded programs, require precise points-to information to be effective. Recent work has focused on improving the precision of points-to analysis through flow-sensitivity, and great progress has been made. However, even with all recent progress, flow-sensitive points-to analysis can still be much slower than a flow-insensitive analysis.
In this paper, we propose a novel method that simplifies flow-sensitive points-to analysis to a general graph reachability problem in a value flow graph. The value flow graph summarizes dependencies between pointer variables, including memory dependencies via pointer dereferences. The points-to set for each pointer variable can then be computed as the set of memory objects that can reach it in the graph. We develop an algorithm to build the value flow graph efficiently by examining the pointed-to-by set of a memory object, i.e., the set of pointers that point to an object. The pointed-to-by information of memory objects is very useful for applications such as escape analysis and information flow analysis.
Our approach is intuitive, easy to implement, and very efficient. The implementation is around 2000 lines of code and is more efficient than existing flow-sensitive points-to analyses. The runtime is comparable with the state-of-the-art flow-insensitive points-to analysis.
High-impact defects: a study of breakage and surprise defects
Emad Shihab, A. Mockus, Yasutaka Kamei, Bram Adams, A. Hassan
ESEC/FSE '11 · Pub Date: 2011-09-05 · DOI: 10.1145/2025113.2025155
Abstract: The relationship between various software-related phenomena (e.g., code complexity) and post-release software defects has been thoroughly examined. However, to date these predictions have seen limited adoption in practice. The most commonly cited reason is that the prediction identifies too much code to review without distinguishing the impact of these defects. Our aim is to address this drawback by focusing on high-impact defects for customers and practitioners. Customers are highly impacted by defects that break pre-existing functionality (breakage defects), whereas practitioners are caught off guard by defects in files that had relatively few pre-release changes (surprise defects). The large commercial software system that we study already had an established concept of breakages as the highest-impact defects; however, the concept of surprises is novel and not as well established. We find that surprise defects are related to incomplete requirements and that the common assumption that a fix is caused by a previous change does not hold in this project. We then fit prediction models that are effective at identifying files containing breakages and surprises. The number of pre-release defects and file size are good indicators of breakages, whereas the number of co-changed files and the amount of time between the latest pre-release change and the release date are good indicators of surprises. Although our prediction models are effective at identifying files that have breakages and surprises, we learn that the prediction should also identify the nature or type of defects, with each type being specific enough to be easily identified and repaired.
8th international workshop on software quality (WoSQ)
S. Wagner, S. Chulani, B. Wong
ESEC/FSE '11 · Pub Date: 2011-09-05 · DOI: 10.1145/2025113.2025212
Abstract: Software becomes ever more feature-rich and thereby harder to distinguish based on its functionality. Instead, quality is starting to differentiate between similar software products. Specifying, constructing, and assuring quality has been under research for several decades and continues to be a long-term research area because of its many facets and its complexity. Current national and international initiatives show that there is an active research community in academia and industry. This workshop builds on the rich experiences of a series of previous workshops and aims to bring this community together to discuss current issues and future developments.
SCORE: a scalable concolic testing tool for reliable embedded software
Yunho Kim, Moonzoo Kim
ESEC/FSE '11 · Pub Date: 2011-09-05 · DOI: 10.1145/2025113.2025180
Abstract: Current industrial testing practices often generate test cases manually, which degrades both the effectiveness and the efficiency of testing. To alleviate this problem, concolic testing generates test cases that can achieve high coverage in an automated fashion. One main task of concolic testing is to extract symbolic information from a concrete execution of a target program at runtime. Thus, a design decision on how to extract symbolic information affects the efficiency, effectiveness, and applicability of concolic testing. We have developed SCORE, a Scalable COncolic testing tool for REliable embedded software, which targets embedded C programs. SCORE instruments a target C program to extract symbolic information and applies concolic testing to a target program in a scalable manner by utilizing a large number of distributed computing nodes. In this paper, we describe the design decisions implemented in SCORE and demonstrate its performance through experiments on the SIR benchmarks.