2012 IEEE 12th International Working Conference on Source Code Analysis and Manipulation: Latest Publications

Dynamic Trace-Based Data Dependency Analysis for Parallelization of C Programs
M. Lazarescu, L. Lavagno
DOI: 10.1109/SCAM.2012.15 (https://doi.org/10.1109/SCAM.2012.15)
Abstract: Writing parallel code is traditionally considered a difficult task, even when it is tackled from the beginning of a project. In this paper, we demonstrate an innovative toolset that faces this challenge directly. It provides software developers with profile data and directs them to possible top-level, pipeline-style parallelization opportunities for an arbitrary sequential C program. This approach is complementary to methods based on static code analysis and automatic code rewriting, and it does not impose restrictions on the structure of the sequential code or on the parallelization style, even though it is mostly aimed at coarse-grained task-level parallelization. The proposed toolset has been used to define parallel code organizations for a number of real-world representative applications, and it is based on, and provided as, free source code.
Citations: 12
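The dependency detection this paper builds on can be illustrated with a toy sketch: replay a dynamic memory-access trace and record read-after-write dependencies that cross task boundaries, since chains of such dependencies suggest pipeline-style decompositions. This is not the authors' toolset; the trace format and task names below are invented for the example.

```python
def find_dependencies(trace):
    """trace: list of (task, op, address) records with op in {'R', 'W'}.
    Returns the set of cross-task read-after-write dependencies."""
    last_writer = {}  # address -> task that last wrote it
    deps = set()
    for task, op, addr in trace:
        if op == 'W':
            last_writer[addr] = task
        elif op == 'R':
            writer = last_writer.get(addr)
            if writer is not None and writer != task:
                deps.add((writer, task))  # data flows writer -> reader
    return deps

# Hypothetical trace of a three-stage sequential program.
trace = [
    ('read_input',   'W', 0x100),
    ('transform',    'R', 0x100),
    ('transform',    'W', 0x200),
    ('write_output', 'R', 0x200),
]
# The chain read_input -> transform -> write_output hints at a pipeline.
print(sorted(find_dependencies(trace)))
```

A real tool works at the level of instrumented loads and stores and must also handle anti- and output dependencies; the sketch only shows the core bookkeeping.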
InputTracer: A Data-Flow Analysis Tool for Manual Program Comprehension of x86 Binaries
Ulf Kargén, N. Shahmehri
DOI: 10.1109/SCAM.2012.16 (https://doi.org/10.1109/SCAM.2012.16)
Abstract: Third-party security analysis of closed-source programs has become an important part of a defense-in-depth approach to software security for many companies. In the absence of efficient tools, the analysis has generally been performed through manual reverse engineering of the machine code. As reverse engineering is an extremely time-consuming and costly task, much research has been devoted to developing more powerful methods for analyzing program binaries. One such popular method is dynamic taint analysis (DTA), a type of runtime data-flow analysis in which certain input data is marked as tainted. By tracking the flow of tainted data, DTA can, for instance, be used to determine which computations in a program are affected by a certain part of the input. In this paper we present InputTracer, a tool that uses DTA to aid manual program comprehension and analysis of unmodified x86 executables running on Linux. A brief overview of dynamic taint analysis is given, followed by a description of the tool and its implementation. We also demonstrate the tool's ability to provide exact information on the origin of tainted data through a detailed use case, in which the tool is used to find the root cause of a memory corruption bug.
Citations: 6
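The taint-tracking idea behind DTA can be sketched in a few lines (this is an illustration, not InputTracer, which operates on x86 machine code): each input byte carries a taint label identifying its input offset, and every operation propagates the union of its operands' labels, so any result can be traced back to the exact input bytes that influenced it.

```python
class Tainted:
    """A value paired with the set of input offsets that influenced it."""
    def __init__(self, value, labels=frozenset()):
        self.value = value
        self.labels = frozenset(labels)

    def __add__(self, other):
        # Propagate taint: the result is influenced by both operands.
        labels = self.labels | getattr(other, 'labels', frozenset())
        val = self.value + (other.value if isinstance(other, Tainted) else other)
        return Tainted(val, labels)

# Mark each input byte with its offset as a taint label.
data = [Tainted(b, {i}) for i, b in enumerate(b'\x02\x03')]

# A computation over tainted input: the constant 10 carries no taint.
checksum = data[0] + data[1] + 10
print(checksum.value, sorted(checksum.labels))  # 15, influenced by offsets 0 and 1
```

Binary-level DTA does the same bookkeeping per byte of registers and memory, across all instructions, which is what makes it expensive but precise about data origin.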
Compatibility Prediction of Eclipse Third-Party Plug-ins in New Eclipse Releases
John Businge, Alexander Serebrenik, M. Brand
DOI: 10.1109/SCAM.2012.10 (https://doi.org/10.1109/SCAM.2012.10)
Abstract: Incompatibility between applications developed on top of frameworks and new versions of those frameworks is a nightmare for both developers and users of the applications. Understanding the factors that cause incompatibilities is a step toward solving them. One such direction is to analyze and identify the parts of a framework's reusable code that are prone to change. In this study we carried out an empirical investigation of 11 Eclipse SDK releases (1.0 to 3.7) and 288 Eclipse third-party plug-ins (ETPs), with two main goals. First, to determine the relationship between the age of the Eclipse non-APIs (internal implementations) used by an ETP and the compatibility of that ETP. We found that third-party plug-ins that use only old non-APIs have a high chance of remaining compatible in new SDK releases compared to those that use at least one newly introduced non-API. Second, to build and test a predictive model for whether an ETP supported in a given SDK release will remain compatible in a newer SDK release. Our findings produced 23 statistically significant prediction models with strong relationships between the predictors and the prediction (logistic regression R2 of up to 0.810). In addition, the results of model testing indicate precision and recall of up to 100% and prediction accuracy of up to 98%. Finally, although the SDK releases with API-breaking changes (1.0, 2.0, and 3.0) have nothing to do with non-APIs, our findings reveal that non-APIs introduced in these releases have a significant impact on the compatibility of the ETPs that use them.
Citations: 13
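A logistic regression model of the kind the study fits can be sketched as follows. The single feature (how many newly introduced non-APIs a plug-in uses) and the coefficients are made up for illustration; the paper's actual models use their own predictors fitted to the 288-plug-in dataset.

```python
import math

def predict_compatible(new_nonapi_uses, coef=-1.2, intercept=2.0):
    """Logistic regression: P(compatible) = sigmoid(intercept + coef * x).
    Coefficients here are illustrative, not fitted values from the paper."""
    z = intercept + coef * new_nonapi_uses
    return 1.0 / (1.0 + math.exp(-z))

# A plug-in using no newly introduced non-APIs is predicted compatible;
# one using many is predicted incompatible.
print(predict_compatible(0) > 0.5)
print(predict_compatible(5) < 0.5)
```

With a negative coefficient on the "new non-API uses" feature, the model encodes exactly the paper's first finding: more reliance on freshly introduced internals lowers the predicted probability of compatibility.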
Cooperative Testing and Analysis: Human-Tool, Tool-Tool and Human-Human Cooperations to Get Work Done
Tao Xie
DOI: 10.1109/SCAM.2012.31 (https://doi.org/10.1109/SCAM.2012.31)
Abstract: Tool automation to reduce manual effort has been an active research area in various subfields of software engineering, such as software testing and analysis. To maximize the value of software testing and analysis, effective support for cooperation between engineers and tools is greatly needed, yet lacking in state-of-the-art research and practice. In particular, testing and analysis are in great need of (1) effective ways for engineers to communicate their testing or analysis goals and guidance to tools, and (2) tools with capabilities strong enough to accomplish the given goals and with effective ways to communicate the challenges they face back to engineers, enabling a feedback loop between engineers and tools to refine and accomplish the testing or analysis goals. In addition, different tools have their respective strengths and weaknesses, and there is a great need to allow these tools to cooperate with each other. Similarly, there is a great need to allow engineers (or even users) to cooperate to help tools, such as in the form of crowdsourcing. A new research frontier on synergistic cooperations between humans and tools, tools and tools, and humans and humans is yet to be explored. This paper presents recent example advances in cooperative testing and analysis.
Citations: 15
Evolution of Near-Miss Clones
Saman Bazrafshan
DOI: 10.1109/SCAM.2012.18 (https://doi.org/10.1109/SCAM.2012.18)
Abstract: It is often claimed that duplicated source code fragments increase the maintenance effort in software systems. To investigate the impact of so-called clones, it is useful to analyze how they evolve. A previous study analyzed several aspects of the evolution of identical clones in nine open-source systems and found that the peculiarities of clone evolution differ significantly across systems, which makes a general conclusion difficult. In this paper we investigate in which ways the evolution of near-miss clones differs from the evolution of identical clones. By analyzing seven open-source systems, we draw comparisons between identical and near-miss clones. Based on the findings, we conclude that near-miss clones require more attention from clone management techniques than identical clones do.
Citations: 16
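The distinction between identical and near-miss clones can be made concrete with a small sketch: a near-miss (type-2) clone matches after identifiers and literals are normalized away, even though the raw text differs. The tokenizer and keyword list below are simplified illustrations, not a real clone detector.

```python
import re

KEYWORDS = {'for', 'if', 'return', 'int', 'while'}  # tiny illustrative set

def normalize(code):
    """Tokenize and replace identifiers with ID and numbers with LIT."""
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|\S", code)
    out = []
    for t in tokens:
        if t.isdigit():
            out.append('LIT')
        elif re.match(r"[A-Za-z_]", t) and t not in KEYWORDS:
            out.append('ID')
        else:
            out.append(t)
    return out

a = "int total = 0; for (i = 0; i < n; i++) total += a[i];"
b = "int sum = 0; for (j = 0; j < len; j++) sum += xs[j];"

# Not textually identical, but identical after normalization:
# a near-miss clone that an exact-match detector would miss.
print(a == b, normalize(a) == normalize(b))
```

This is precisely why near-miss clones need separate study: they drift apart textually while staying structurally duplicated, which complicates consistent maintenance.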
Cross-Language Code Analysis and Refactoring
Philip Mayer, Andreas Schroeder
DOI: 10.1109/SCAM.2012.11 (https://doi.org/10.1109/SCAM.2012.11)
Abstract: Software composed of artifacts written in multiple (programming) languages is pervasive in today's enterprise, desktop, and mobile applications. Since they form one system, artifacts from different languages reference one another, creating what we call semantic cross-language links. By their very nature, such links are out of the scope of any individual programming language; they are ignored by most language-specific tools and are often only established, and checked for errors, at runtime. This is unfortunate, since it requires additional testing, leads to brittle code, and lessens maintainability. In this paper, we advocate a generic approach to understanding, analyzing, and refactoring cross-language code by explicitly specifying and exploiting semantic links, with the aim of giving developers the same amount of control over, and confidence in, multi-language programs that they have for single-language code today.
Citations: 35
Combining Conceptual and Domain-Based Couplings to Detect Database and Code Dependencies
Malcom Gethers, Amir Aryani, D. Poshyvanyk
DOI: 10.1109/SCAM.2012.27 (https://doi.org/10.1109/SCAM.2012.27)
Abstract: Knowledge of software dependencies plays an important role in program comprehension and other maintenance activities. Traditionally, dependencies are derived by source code analysis; however, such an approach can be difficult to apply in multi-tier hybrid software systems or legacy applications, where conventional code analysis tools simply do not work as is. In this paper, we propose a hybrid approach to detecting software dependencies by combining conceptual and domain-based coupling metrics. In recent years, a great deal of research has focused on deriving various coupling metrics from these sources of information with the aim of assisting software maintainers. Conceptual metrics capture underlying relationships encoded by developers in the identifiers and comments of source code classes, whereas domain metrics exploit coupling manifested in domain-level information about software components and are independent of the software implementation. The proposed approach is independent of the programming language, so it can be used in multi-tier hybrid systems or legacy applications. We report the results of an empirical case study on a large-scale enterprise system in which we demonstrate that the combined approach detects database and source code dependencies with higher precision and recall than its standalone constituents.
Citations: 11
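Conceptual coupling of the kind the abstract describes is commonly computed as textual similarity over the terms in identifiers and comments. A minimal stand-in (not the paper's actual IR pipeline, which involves corpus preprocessing and more sophisticated models) is cosine similarity over term-frequency vectors; the two "artifacts" below are invented bags of identifier terms.

```python
import math
from collections import Counter

def cosine(doc_a, doc_b):
    """Cosine similarity between two whitespace-separated term documents."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical term bags extracted from two source artifacts.
customer_dao  = "customer account balance load save customer"
invoice_view  = "invoice render total customer print"

# Shared domain vocabulary ("customer") yields nonzero conceptual coupling.
print(round(cosine(customer_dao, invoice_view), 3))
```

Because the measure looks only at vocabulary, it applies equally to Java classes, stored procedures, or config files, which is what makes it language-independent in the sense the paper exploits.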
Optimizing Expression Selection for Lookup Table Program Transformation
C. Wilcox, M. Strout, J. Bieman
DOI: 10.1109/SCAM.2012.12 (https://doi.org/10.1109/SCAM.2012.12)
Abstract: Scientific programmers can speed up function evaluation by precomputing and storing function results in lookup tables (LUTs), thereby replacing costly evaluation code with an inexpensive memory access. A code transform that replaces computation with LUT code can improve performance; however, accuracy is reduced because of the error inherent in reconstructing values from LUT data. LUT transforms are commonly used to approximate expensive elementary functions. The current practice is for software developers to (1) manually identify expressions that can benefit from a LUT transform, (2) modify the code by hand to implement the transform, and (3) run experiments to determine whether the resulting error is within application requirements. This approach reduces productivity, obfuscates code, and limits programmer control over accuracy and performance. We propose source code analysis and program transformation to substantially automate the application of LUT transforms. Our approach uses a novel optimization algorithm that selects Pareto-optimal sets of expressions that benefit most from LUT transformation, based on error and performance estimates. We demonstrate our methodology with the Mesa tool, which achieves speedups of 1.4-6.9× on scientific codes while managing the introduced error. Our tool makes the programmer more productive and improves the chances of finding an effective solution.
Citations: 5
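The basic LUT transform the paper automates looks like this by hand: precompute an elementary function over its used domain, then replace each call with a table lookup, accepting a reconstruction error bounded by the table granularity. The table size and domain below are arbitrary illustrative choices.

```python
import math

# Precompute sin(x) over [0, pi] at table-build time (illustrative sizes).
DOMAIN, SIZE = (0.0, math.pi), 1024
STEP = (DOMAIN[1] - DOMAIN[0]) / (SIZE - 1)
TABLE = [math.sin(DOMAIN[0] + i * STEP) for i in range(SIZE)]

def sin_lut(x):
    """Nearest-entry lookup replacing math.sin(x) on the covered domain.
    Reconstruction error is bounded by the table granularity STEP."""
    i = int(round((x - DOMAIN[0]) / STEP))
    i = min(SIZE - 1, max(0, i))  # clamp to the table bounds
    return TABLE[i]

x = 1.0
print(abs(sin_lut(x) - math.sin(x)) < STEP)  # error within table granularity
```

The accuracy/performance trade-off the paper's optimizer navigates is visible here: doubling SIZE halves the error bound but doubles memory traffic, which is why expression selection needs error and performance estimates rather than guesswork.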
Collections Frameworks for Points-To Analysis
T. Gutzmann, Jonas Lundberg, Welf Löwe
DOI: 10.1109/SCAM.2012.24 (https://doi.org/10.1109/SCAM.2012.24)
Abstract: Points-to information is the basis for many analyses and transformations, e.g., for program understanding and optimization. Collections frameworks are part of most modern programming languages' infrastructures and are used by many applications. The richness of features and the inherent structure of collection classes negatively affect both the performance and the precision of points-to analysis. In this paper, we discuss how to replace original collections frameworks with versions specialized for points-to analysis. We implement such a replacement for the Java Collections Framework and demonstrate its benefits for points-to analysis by applying it to three different points-to analysis implementations. In experiments, context-sensitive points-to analyses require, on average, 16-24% less time while at the same time being more precise. Context-insensitive analysis in conjunction with inlining also benefits in both precision and analysis cost.
Citations: 1
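To see what points-to analysis computes, and why extra layers of library code inflate its cost, here is a deliberately tiny flow-insensitive sketch (Andersen-style propagation over plain assignments, with no fields or calls): every internal helper object a collection class allocates adds variables and edges to exactly this kind of fixed-point computation.

```python
def points_to(news, assigns):
    """news: {var: object} allocation sites; assigns: [(dst, src)] for dst = src.
    Propagates points-to sets along assignments to a fixed point."""
    pts = {v: {o} for v, o in news.items()}
    changed = True
    while changed:
        changed = False
        for dst, src in assigns:
            before = set(pts.setdefault(dst, set()))
            pts[dst] |= pts.get(src, set())
            if pts[dst] != before:
                changed = True
    return pts

# a = new o1; b = new o2; c = a; c = b; d = c
pts = points_to({'a': 'o1', 'b': 'o2'},
                [('c', 'a'), ('c', 'b'), ('d', 'c')])
print(sorted(pts['d']))  # d may point to both objects
```

A specialized collections framework shrinks this graph, fewer internal variables and assignments to iterate over, which is one intuition behind the 16-24% savings the paper reports.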
What Does Control Flow Really Look Like? Eyeballing the Cyclomatic Complexity Metric
J. Vinju, Michael W. Godfrey
DOI: 10.1109/SCAM.2012.17 (https://doi.org/10.1109/SCAM.2012.17)
Abstract: Assessing the understandability of source code remains an elusive yet highly desirable goal for software developers and their managers. While many metrics have been suggested and investigated empirically, the McCabe cyclomatic complexity metric (CC), which is based on control flow complexity, seems to hold an enduring fascination for both industry and the research community despite its known limitations. In this work, we introduce the ideas of Control Flow Patterns (CFPs) and Compressed Control Flow Patterns (CCFPs), which eliminate some repetitive structure from control flow graphs in order to emphasize high-entropy graphs. We examine eight well-known open-source Java systems by grouping the CFPs of their methods into equivalence classes and exploring the results. We observed several surprising outcomes: first, the number of unique CFPs is relatively low; second, CC often does not accurately reflect the intricacies of Java control flow; and third, methods with high CC often have very low entropy, suggesting that they may be relatively easy to understand. These findings challenge the widely held belief that there is a clear-cut causal relationship between CC and understandability, and suggest that CC and similar measures need to be reconsidered as metrics for code understandability.
Citations: 29
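For reference, the metric under scrutiny is easy to compute: McCabe's CC equals one plus the number of decision points in a method. A simplified sketch over Python code (counting branch nodes via the ast module, ignoring boolean operators and match statements) shows the mechanics; the paper's point is that this single count hides how repetitive or entropic the underlying control flow graph is.

```python
import ast

def cyclomatic_complexity(source):
    """1 + number of decision points (simplified McCabe CC; boolean
    operators inside conditions are not counted in this sketch)."""
    tree = ast.parse(source)
    decisions = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(tree))

src = """
def classify(n):
    if n < 0:
        return 'neg'
    for d in range(2, n):
        if n % d == 0:
            return 'composite'
    return 'prime-ish'
"""
print(cyclomatic_complexity(src))  # 1 + if + for + if = 4
```

Two methods with CC = 4 can still differ wildly in shape (one a flat chain of repeated checks, the other deeply nested), which is exactly the distinction the CFP/CCFP equivalence classes are designed to surface.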