"Synthesizing Framework Models for Symbolic Execution"
Jinseong Jeon, Xiaokang Qiu, Jonathan Fetter-Degges, J. Foster, Armando Solar-Lezama
In 2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE), pp. 156-167. DOI: 10.1145/2884781.2884856
Abstract: Symbolic execution is a powerful program analysis technique, but it is difficult to apply to programs built using frameworks such as Swing and Android, because the framework code itself is hard to symbolically execute. The standard solution is to manually create a framework model that can be symbolically executed, but developing and maintaining a model is difficult and error-prone. In this paper, we present Pasket, a new system that takes a first step toward automatically generating Java framework models to support symbolic execution. Pasket's focus is on creating models by instantiating design patterns. Pasket takes as input class, method, and type information from the framework API, together with tutorial programs that exercise the framework. From these artifacts and Pasket's internal knowledge of design patterns, Pasket synthesizes a framework model whose behavior on the tutorial programs matches that of the original framework. We evaluated Pasket by synthesizing models for subsets of Swing and Android. Our results show that the models derived by Pasket are sufficient to allow us to use off-the-shelf symbolic execution tools to analyze Java programs that rely on frameworks.
"Code Review Quality: How Developers See It"
Oleksii Kononenko, Olga Baysal, Michael W. Godfrey
In 2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE), pp. 1028-1038. DOI: 10.1145/2884781.2884840
Abstract: In a large, long-lived project, an effective code review process is key to ensuring the long-term quality of the code base. In this work, we study code review practices of a large, open source project, and we investigate how the developers themselves perceive code review quality. We present a qualitative study that summarizes the results from a survey of 88 Mozilla core developers. The results provide developer insights into how they define review quality, what factors contribute to how they evaluate submitted code, and what challenges they face when performing review tasks. We found that the review quality is primarily associated with the thoroughness of the feedback, the reviewer's familiarity with the code, and the perceived quality of the code itself. Also, we found that while different factors are perceived to contribute to the review quality, reviewers often find it difficult to keep their technical skills up-to-date, manage personal priorities, and mitigate context switching.
"RETracer: Triaging Crashes by Reverse Execution from Partial Memory Dumps"
Weidong Cui, Marcus Peinado, S. Cha, Y. Fratantonio, V. Kemerlis
In 2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE), pp. 820-831. DOI: 10.1145/2884781.2884844
Abstract: Many software providers operate crash reporting services to automatically collect crashes from millions of customers and file bug reports. Precisely triaging crashes is necessary and important for software providers because the millions of crashes that may be reported every day are critical in identifying high impact bugs. However, the triaging accuracy of existing systems is limited, as they rely only on the syntactic information of the stack trace at the moment of a crash without analyzing program semantics. In this paper, we present RETracer, the first system to triage software crashes based on program semantics reconstructed from memory dumps. RETracer was designed to meet the requirements of large-scale crash reporting services. RETracer performs binary-level backward taint analysis without a recorded execution trace to understand how functions on the stack contribute to the crash. The main challenge is that the machine state at an earlier time cannot be recovered completely from a memory dump, since most instructions are information destroying. We have implemented RETracer for x86 and x86-64 native code, and compared it with the existing crash triaging tool used by Microsoft. We found that RETracer eliminates two thirds of triage errors based on a manual analysis of 140 bugs fixed in Microsoft Windows and Office. RETracer has been deployed as the main crash triaging system on Microsoft's crash reporting service.
"On the Limits of Mutation Reduction Strategies"
Rahul Gopinath, Mohammad Amin Alipour, Iftekhar Ahmed, Carlos Jensen, Alex Groce
In 2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE), pp. 511-522. DOI: 10.1145/2884781.2884787
Abstract: Although mutation analysis is considered the best way to evaluate the effectiveness of a test suite, hefty computational cost often limits its use. To address this problem, various mutation reduction strategies have been proposed, all seeking to reduce the number of mutants while maintaining the representativeness of an exhaustive mutation analysis. While research has focused on the reduction achieved, the effectiveness of these strategies in selecting representative mutants, and the limits in doing so, have not been investigated, either theoretically or empirically. We investigate the practical limits to the effectiveness of mutation reduction strategies, and provide a simple theoretical framework for thinking about the absolute limits. Our results show that the limit in improvement of effectiveness over random sampling for real-world open source programs is a mean of only 13.078%. Interestingly, there is no limit to the improvement that can be made by addition of new mutation operators. Given that this is the maximum that can be achieved with perfect advance knowledge of mutation kills, what can be practically achieved may be much worse. We conclude that more effort should be focused on enhancing mutations than removing operators in the name of selective mutation for questionable benefit.
{"title":"Multi-objective Software Effort Estimation","authors":"Federica Sarro, Alessio Petrozziello, M. Harman","doi":"10.1145/2884781.2884830","DOIUrl":"https://doi.org/10.1145/2884781.2884830","url":null,"abstract":"We introduce a bi-objective effort estimation algorithm that combines Confidence Interval Analysis and assessment of Mean Absolute Error. We evaluate our proposed algorithm on three different alternative formulations, baseline comparators and current state-of-the-art effort estimators applied to five real-world datasets from the PROMISE repository, involving 724 different software projects in total. The results reveal that our algorithm outperforms the baseline, state-of-the-art and all three alternative formulations, statistically significantly (p","PeriodicalId":6485,"journal":{"name":"2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE)","volume":"72 5 1","pages":"619-630"},"PeriodicalIF":0.0,"publicationDate":"2016-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87755207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Optimizing Selection of Competing Services with Probabilistic Hierarchical Refinement"
Tian Huat Tan, Manman Chen, Jun Sun, Yang Liu, É. André, Yinxing Xue, J. Dong
In 2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE), pp. 85-95. DOI: 10.1145/2884781.2884861
Abstract: Recently, many large enterprises (e.g., Netflix, Amazon) have decomposed their monolithic applications into services, and composed them to fulfill their business functionalities. Many hosting services on the cloud, with different Quality of Service (QoS) (e.g., availability, cost), can be used to host the services. This is an example of competing services. QoS is crucial for the satisfaction of users. It is important to choose a set of services that maximize the overall QoS and satisfy all QoS requirements for the service composition. This problem, known as optimal service selection, is NP-hard. Therefore, an effective method for reducing the search space and guiding the search process is highly desirable. To this end, we introduce a novel technique, called Probabilistic Hierarchical Refinement (PROHR). PROHR effectively reduces the search space by removing competing services that cannot be part of the selection. PROHR provides two methods, probabilistic ranking and hierarchical refinement, that enable smart exploration of the reduced search space. Unlike existing approaches that perform poorly when QoS requirements become stricter, PROHR maintains high performance and accuracy, independent of the strictness of the QoS requirements. PROHR has been evaluated on a publicly available dataset, and has shown significant improvement over existing approaches.
"Decoupling Level: A New Metric for Architectural Maintenance Complexity"
Ran Mo, Yuanfang Cai, R. Kazman, Lu Xiao, Qiong Feng
In 2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE), pp. 499-510. DOI: 10.1145/2884781.2884825
Abstract: Despite decades of research on software metrics, we still cannot reliably measure if one design is more maintainable than another. Software managers and architects need to understand whether their software architecture is "good enough", whether it is decaying over time and, if so, by how much. In this paper, we contribute a new architecture maintainability metric---Decoupling Level (DL)---derived from Baldwin and Clark's option theory. Instead of measuring how coupled an architecture is, we measure how well the software can be decoupled into small and independently replaceable modules. We measured the DL for 108 open source projects and 21 industrial projects, each of which has multiple releases. Our main result shows that the larger the DL, the better the architecture. By "better" we mean: the more likely bugs and changes can be localized and separated, and the more likely that developers can make changes independently. The DL metric also opens the possibility of quantifying canonical principles of single responsibility and separation of concerns, aiding cross-project comparison and architecture decay monitoring.
"Building a Theory of Job Rotation in Software Engineering from an Instrumental Case Study"
Ronnie E. S. Santos, F. Silva, C. Magalhães, Cleviton V. F. Monteiro
In 2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE), pp. 971-981. DOI: 10.1145/2884781.2884837
Abstract: Job rotation is an organizational practice in which individuals are frequently moved from one job (or project) to another within the same organization. Studies in other areas have found that this practice has both negative and positive effects on individuals' work. However, few studies have addressed this issue in software engineering so far. The goal of our study is to investigate the effects of job rotation on work-related factors in software engineering by performing a qualitative case study of a large software organization that uses job rotation as an organizational practice. We interviewed senior managers, project managers, and software engineers who had experienced this practice. Altogether, 48 participants were involved in all phases of this research. The collected data were analyzed using qualitative coding techniques, and the results were checked and validated with participants through member checking. Our findings suggest that it is necessary to find a balance between the positive effects on work variety and learning opportunities and the negative effects on cognitive workload and performance. Further, the lack of feedback resulting from constant movement among projects and teams may have a negative impact on performance feedback. We conclude that job rotation is an important organizational practice with important positive results. However, managers must be aware of potential negative effects and deploy tactics to balance them. We discuss such tactics in this article.
"Automated Energy Optimization of HTTP Requests for Mobile Applications"
Ding Li, Yingjun Lyu, Jiaping Gui, William G. J. Halfond
In 2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE), pp. 249-260. DOI: 10.1145/2884781.2884867
Abstract: Energy is a critical resource for apps that run on mobile devices. Among all operations, making HTTP requests is one of the most energy consuming. Previous studies have shown that bundling smaller HTTP requests into a single larger HTTP request can be an effective way to improve the energy efficiency of network communication, but have not defined an automated way to detect when requests can be bundled, nor to transform the apps to perform this bundling. In this paper we propose an approach to reduce the energy consumption of HTTP requests in Android apps by automatically detecting and then bundling multiple HTTP requests. Our approach first detects HTTP requests that can be bundled using static analysis, then uses a proxy-based technique to bundle HTTP requests at runtime. We evaluated our approach on a set of real-world marketplace Android apps. In this evaluation, our approach achieved an average energy reduction of 15% for the subject apps and did not impose a significant runtime overhead on the optimized apps.
"AntMiner: Mining More Bugs by Reducing Noise Interference"
Bin Liang, Pan Bian, Yan Zhang, Wenchang Shi, Wei You, Yan Cai
In 2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE), pp. 333-344. DOI: 10.1145/2884781.2884870
Abstract: Detecting bugs with code mining has proven to be an effective approach. However, existing methods suffer from serious false positives and false negatives. In this paper, we developed an approach called AntMiner to improve the precision of code mining by carefully preprocessing the source code. Specifically, we employ the program slicing technique to decompose the original source repository into independent sub-repositories, taking critical operations (automatically extracted from source code) as slicing criteria. In this way, the statements irrelevant to a critical operation are excluded from the corresponding sub-repository. Besides, various semantics-equivalent representations are normalized into a canonical form. Eventually, the mining process can be performed on a refined code database, and false positives and false negatives can be significantly pruned. We have implemented AntMiner and applied it to detect bugs in the Linux kernel. It reported 52 violations that have been either confirmed as real bugs by the kernel development community or fixed in new kernel versions. Among them, 41 cannot be detected by Coverity, a widely used representative analysis tool. In addition, the result of a comparative analysis shows that our approach can effectively improve the precision of code mining and detect subtle bugs that have previously been missed.