{"title":"An Empirical Study of Function Overloading in C++","authors":"Cheng Wang, Daqing Hou","doi":"10.1109/SCAM.2008.25","DOIUrl":"https://doi.org/10.1109/SCAM.2008.25","url":null,"abstract":"The usefulness and usability of programming tools (for example, languages, libraries, and frameworks) may greatly impact programmer productivity and software quality. Ideally, these tools should be designed to be both useful and usable. But in reality, there always exist some tools or features whose essential characteristics can be fully understood only after they have been extensively used. The study described in this paper is focused on discovering how C++'s function overloading is used in production code using an instrumented g++ compiler. Our principal findings for the system studied are that the most 'advanced' subset of function overloading tends to be defined in only a few utility modules, which are probably developed and maintained by a small number of programmers; that the majority of application modules use only the 'easy' subset of function overloading when overloading names; and that most overloaded names are used locally within, rather than across, module interfaces. We recommend these as guidelines to software designers.","PeriodicalId":433693,"journal":{"name":"2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115716302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modular Decompilation of Low-Level Code by Partial Evaluation","authors":"M. Gómez-Zamalloa, E. Albert, G. Puebla","doi":"10.1109/SCAM.2008.35","DOIUrl":"https://doi.org/10.1109/SCAM.2008.35","url":null,"abstract":"Decompiling low-level code to a high-level intermediate representation facilitates the development of analyzers, model checkers, etc. which reason about properties of the low-level code (e.g., bytecode, .NET). Interpretive decompilation consists in partially evaluating an interpreter for the low-level language (written in the high-level language) w.r.t. the code to be decompiled. There have been proofs-of-concept that interpretive decompilation is feasible, but there remain important open issues when it comes to decompiling a real language: Does the approach scale up? Is the quality of decompiled programs comparable to that obtained by ad-hoc decompilers? Do decompiled programs preserve the structure of the original programs? This paper addresses these issues by presenting, to the best of our knowledge, the first modular scheme to enable interpretive decompilation of low-level code to a high-level representation; namely, we decompile bytecode into PROLOG. We introduce two notions of optimality. The first one requires that each method/block is decompiled just once. The second one requires that each program point is traversed at most once during decompilation. We demonstrate the impact of our modular approach and optimality issues on a series of realistic benchmarks. Decompilation times and decompiled program sizes are linear with the size of the input bytecode program. This empirically demonstrates the scalability of modular decompilation of low-level code by partial evaluation.","PeriodicalId":433693,"journal":{"name":"2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124517212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Some Assembly Required - Program Analysis of Embedded System Code","authors":"A. Fehnker, Ralf Huuck, F. Rauch, Sean Seefried","doi":"10.1109/SCAM.2008.15","DOIUrl":"https://doi.org/10.1109/SCAM.2008.15","url":null,"abstract":"Programming embedded system software typically involves more than one programming language. Normally, a high-level language such as C/C++ is used for application-oriented tasks and a low-level assembly language for direct interaction with the underlying hardware. In most cases those languages are closely interwoven and the assembly is embedded in the C/C++ code. Verification of such programs requires the integrated analysis of both languages at the same time. However, common algorithmic verification tools fail to address this issue. In this work we present a model-checking-based static analysis approach which seamlessly integrates the analysis of embedded ARM assembly with C/C++ code analysis. In particular, we show how to automatically check that the ARM code complies with its interface descriptions. Given interface compliance, we then provide an extended analysis framework for checking general properties of ARM code. We implemented this analysis in our source code analysis tool Goanna, and applied it to the source code of an L4 microkernel implementation.","PeriodicalId":433693,"journal":{"name":"2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133180797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Semantics of Abstract Program Slicing","authors":"D. Zanardini","doi":"10.1109/SCAM.2008.19","DOIUrl":"https://doi.org/10.1109/SCAM.2008.19","url":null,"abstract":"The present paper introduces the semantic basis for abstract slicing. This notion is more general than standard, concrete slicing, in that slicing criteria are abstract, i.e., defined on properties of data, rather than concrete values. This approach is based on abstract interpretation: properties are abstractions of data. Many properties can be investigated; e.g., the nullity of a program variable. Standard slicing is a special case, where properties are exactly the concrete values. As a practical outcome, abstract slices are likely to be smaller than standard ones, since commands which are relevant at the concrete level can be removed if only some abstract property is supposed to be preserved. This can make debugging and program understanding tasks easier, since a smaller portion of code must be inspected when searching for undesired behavior. The framework also includes the possibility to restrict the input states of the program, in the style of conditioned slicing, thus lying between static and dynamic slicing.","PeriodicalId":433693,"journal":{"name":"2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122274865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Program Transformations to Add Structure to a Legacy Data Model","authors":"M. Ceccato, T. Dean, P. Tonella","doi":"10.1109/SCAM.2008.9","DOIUrl":"https://doi.org/10.1109/SCAM.2008.9","url":null,"abstract":"An appropriate translation of the data model is central to any language migration effort. Finding a mapping between original and target data models may be challenging for legacy languages (e.g., Assembly) which lack a structured data model and rely instead on explicit programmer control of the overlay of variables. Before legacy applications written in languages with an unstructured data model can be migrated to modern languages, a structured data model must be inferred. This paper describes a set of source transformations used to create such a model as part of a migration of eight million lines of code to Java. The original application is written in a proprietary language supporting variable layout by memory relocation.","PeriodicalId":433693,"journal":{"name":"2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127119220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Use of Data Flow Analysis in Static Profiling","authors":"C. Boogerd, L. Moonen","doi":"10.1109/SCAM.2008.18","DOIUrl":"https://doi.org/10.1109/SCAM.2008.18","url":null,"abstract":"Static profiling is a technique that produces estimates of execution likelihoods or frequencies based on source code analysis only. It is frequently used in determining cost/benefit ratios for certain compiler optimizations. In previous work, we introduced a simple algorithm to compute execution likelihoods, based on a control flow graph and heuristic branch prediction. In this paper we examine the benefits of using more involved analysis techniques in such a static profiler. In particular, we explore the use of value range propagation to improve the accuracy of the estimates, and we investigate the differences in estimating execution likelihoods and frequencies.","PeriodicalId":433693,"journal":{"name":"2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126716694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parfait - A Scalable Bug Checker for C Code","authors":"C. Cifuentes","doi":"10.1109/SCAM.2008.21","DOIUrl":"https://doi.org/10.1109/SCAM.2008.21","url":null,"abstract":"Parfait is a bug checker for C code that has been designed to address developers' requirements of scalability (support millions of lines of code in a reasonable amount of time), precision (report few false positives) and reporting of bugs that may be exploitable from a security vulnerability point of view. For large code bases, performance is at stake if the bug checking tool is to be integrated into the software development process, and so is precision, as each false alarm (i.e., false positive) costs developer time to track down. Further, false negatives give a false sense of security to developers and testers, as it is not obvious or clear what other bugs were not reported by the tool. A common criticism of existing bug checking tools is the lack of reported metrics on the use of the tool. To a developer it is unclear how accurate the tool is, how many bugs it does not find, how many bugs get reported that are not actual bugs, whether the tool understands when a bug has been fixed, and what the performance is for the reported bugs. In this tool demonstration we show how Parfait fares in the area of buffer overflow checking against the various requirements of scalability and precision.","PeriodicalId":433693,"journal":{"name":"2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129579063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rejuvenate Pointcut: A Tool for Pointcut Expression Recovery in Evolving Aspect-Oriented Software","authors":"Raffi Khatchadourian, A. Rashid","doi":"10.1109/SCAM.2008.32","DOIUrl":"https://doi.org/10.1109/SCAM.2008.32","url":null,"abstract":"Aspect-oriented programming (AOP) strives to localize the scattered and tangled implementations of crosscutting concerns (CCCs) by allowing developers to declare that certain actions (advice) should be taken at specific points (join points) during the execution of software where a CCC (an aspect) is applicable. However, it is non-trivial to construct optimal pointcut expressions (a collection of join points) that capture the true intentions of the programmer and, upon evolution, maintain these intentions. We demonstrate an AspectJ source-level inferencing tool called Rejuvenate Pointcut, which helps developers maintain pointcut expressions over the lifetime of a software product. A key insight into the tool's construction is that the problem of maintaining pointcut expressions bears strong similarity to the requirements traceability problem in software engineering; hence, the underlying algorithm was devised by adapting existing approaches for requirements traceability to pointcut maintenance. The Eclipse IDE-based tool identifies intention graph patterns pertaining to a pointcut and, based on these patterns, uncovers other potential join points that may fall within the scope of the pointcut with a given confidence. This work represents a significant step towards providing tool-supported maintainability for evolving aspect-oriented software.","PeriodicalId":433693,"journal":{"name":"2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129758317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DTS - A Software Defects Testing System","authors":"Zhaohong Yang, Yunzhan Gong, Qing Xiao, Yawen Wang","doi":"10.1109/SCAM.2008.12","DOIUrl":"https://doi.org/10.1109/SCAM.2008.12","url":null,"abstract":"This demo presents DTS (software defects testing system), a tool to catch defects in source code using static testing techniques. In DTS, various defect patterns are defined using defect-pattern state machines and tested by a unified testing framework. Since DTS externalizes all the defect patterns it checks, defect patterns can be added, removed, or altered without having to modify the tool itself. Moreover, interval computation is extended and applied in DTS to reduce false positives and to compute the states of the defect state machines. In order to validate its usefulness, we perform experiments on a suite of open-source software; the results are briefly presented in the last part of the demo.","PeriodicalId":433693,"journal":{"name":"2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation","volume":"188 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116097288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Precise Analysis of Java Programs Using JOANA","authors":"Dennis Giffhorn, Christian Hammer","doi":"10.1109/SCAM.2008.17","DOIUrl":"https://doi.org/10.1109/SCAM.2008.17","url":null,"abstract":"The JOANA project (Java Object-sensitive ANAlysis) is a program analysis infrastructure for the Java language. It contains a wide range of analysis techniques such as dependence graph computation, slicing and chopping for sequential and concurrent programs, computation of path conditions and algorithms for software security. This demonstration presents the JOANA plugin for the Eclipse framework. In the current version, a user can compute and navigate through dependence graphs for full Java bytecode, analyze Java programs with a broad range of slicing and chopping algorithms, and use precise algorithms for language-based security to check programs for information leaks.","PeriodicalId":433693,"journal":{"name":"2008 Eighth IEEE International Working Conference on Source Code Analysis and Manipulation","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126164287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}