Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis: Latest Publications

Tests from traces: automated unit test extraction for R
Filip Krikava, J. Vitek
{"title":"Tests from traces: automated unit test extraction for R","authors":"Filip Krikava, J. Vitek","doi":"10.1145/3213846.3213863","DOIUrl":"https://doi.org/10.1145/3213846.3213863","url":null,"abstract":"Unit tests are labor-intensive to write and maintain. This paper looks into how well unit tests for a target software package can be extracted from the execution traces of client code. Our objective is to reduce the effort involved in creating test suites while minimizing the number and size of individual tests, and maximizing coverage. To evaluate the viability of our approach, we select a challenging target for automated test extraction, namely R, a programming language that is popular for data science applications. The challenges presented by R are its extreme dynamism, coerciveness, and lack of types. This combination decrease the efficacy of traditional test extraction techniques. We present Genthat, a tool developed over the last couple of years to non-invasively record execution traces of R programs and extract unit tests from those traces. We have carried out an evaluation on 1,545 packages comprising 1.7M lines of R code. The tests extracted by Genthat improved code coverage from the original rather low value of 267,496 lines to 700,918 lines. The running time of the generated tests is 1.9 times faster than the code they came from","PeriodicalId":20542,"journal":{"name":"Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74402195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
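
The extraction recipe (record concrete arguments and return values at the boundary of the target function, then replay them as assertions) can be shown in miniature. The following is a hypothetical Python sketch of that idea, not Genthat's R implementation; the decorator, the traces list, and the emitted mymodule references are all illustrative.

    # Hypothetical Python analogue of trace-based test extraction (not Genthat itself).
    import functools

    traces = []  # records of (function name, positional args, keyword args, return value)

    def record(fn):
        """Non-invasively wrap fn and log every concrete call made by client code."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            traces.append((fn.__name__, args, kwargs, result))
            return result
        return wrapper

    def emit_tests(module_name):
        """Turn the recorded calls into a unittest module, one test per recorded call."""
        lines = ["import unittest", f"import {module_name}", "",
                 "class ExtractedTests(unittest.TestCase):"]
        for i, (name, args, kwargs, result) in enumerate(traces):
            call = ", ".join([repr(a) for a in args] +
                             [f"{k}={v!r}" for k, v in kwargs.items()])
            lines.append(f"    def test_{name}_{i}(self):")
            lines.append(f"        self.assertEqual({module_name}.{name}({call}), {result!r})")
        return "\n".join(lines)

    @record
    def add(a, b):                          # stands in for a function of the package under test
        return a + b

    add(1, 2)                               # client code exercising the target
    print(emit_tests("mymodule"))           # prints a runnable test file for 'mymodule.add'
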
Safe and sound program analysis with Flix
Magnus Madsen, O. Lhoták
{"title":"Safe and sound program analysis with Flix","authors":"Magnus Madsen, O. Lhoták","doi":"10.1145/3213846.3213847","DOIUrl":"https://doi.org/10.1145/3213846.3213847","url":null,"abstract":"Program development tools such as bug finders, build automation tools, compilers, debuggers, integrated development environments, and refactoring tools increasingly rely on static analysis techniques to reason about program behavior. Implementing such static analysis tools is a complex and difficult task with concerns about safety and soundness. Safety guarantees that the fixed point computation -- inherent in most static analyses -- converges and ultimately terminates with a deterministic result. Soundness guarantees that the computed result over-approximates the concrete behavior of the program under analysis. But how do we know if we can trust the result of the static analysis itself? Who will guard the guards? In this paper, we propose the use of automatic program verification techniques based on symbolic execution and SMT solvers to verify the correctness of the abstract domains used in static analysis tools. We implement a verification toolchain for Flix, a functional and logic programming language tailored for the implementation of static analyses. We apply this toolchain to several abstract domains. The experimental results show that we are able to prove 99.5% and 96.3% of the required safety and soundness properties, respectively.","PeriodicalId":20542,"journal":{"name":"Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87921548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
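
The properties being checked are ordinary lattice and transfer-function laws, such as the requirement that an abstract operator over-approximate its concrete counterpart. The sketch below illustrates one such soundness check for a toy sign domain by brute-force enumeration over a small integer range; it is an assumption-laden stand-in, since the Flix toolchain establishes these properties with symbolic execution and an SMT solver rather than enumeration.

    # Brute-force soundness check for a toy sign domain (the paper uses symbolic execution + SMT).
    NEG, ZERO, POS, TOP = "Neg", "Zero", "Pos", "Top"

    def alpha(n):
        """Abstraction: map a concrete integer to its sign."""
        return NEG if n < 0 else ZERO if n == 0 else POS

    def gamma(a, universe):
        """Concretization: the concrete integers described by an abstract value."""
        return {n for n in universe if a == TOP or alpha(n) == a}

    def abs_add(a, b):
        """Abstract addition on signs."""
        if ZERO in (a, b):
            return b if a == ZERO else a
        if a == b:
            return a
        return TOP  # mixed or unknown signs: the result may be anything

    def check_soundness(universe):
        """abs_add must cover every concrete sum of the values it claims to describe."""
        sums = {x + y for x in universe for y in universe}
        for a in (NEG, ZERO, POS, TOP):
            for b in (NEG, ZERO, POS, TOP):
                covered = gamma(abs_add(a, b), sums)
                for x in gamma(a, universe):
                    for y in gamma(b, universe):
                        assert x + y in covered, (a, b, x, y)
        return True

    print(check_soundness(range(-5, 6)))    # True on this range
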
Practical detection of concurrency issues at coding time
Luc Bläser
{"title":"Practical detection of concurrency issues at coding time","authors":"Luc Bläser","doi":"10.1145/3213846.3213853","DOIUrl":"https://doi.org/10.1145/3213846.3213853","url":null,"abstract":"We have developed a practical static checker that is designed to interactively mark data races and deadlocks in program source code at development time. As this use case requires a checker to be both fast and precise, we engaged a simple technique of randomized bounded concrete concurrent interpretation that is experimentally effective for this purpose. Implemented as a tool for C# in Visual Studio, the checker covers the broad spectrum of concurrent language concepts, including task and data parallelism, asynchronous programming, UI dispatching, the various synchronization primitives, monitor, atomic and volatile accesses, and finalizers. Its application to popular open-source C# projects revealed several real issues with only a few false positives.","PeriodicalId":20542,"journal":{"name":"Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87537871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
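
To give a flavour of the underlying idea, the sketch below runs a bounded number of randomized interleavings of two hard-coded event traces and reports conflicting accesses that share no lock. It is a loose, hypothetical Python analogue; the actual checker interprets real C# code and covers far more synchronization constructs.

    # Randomized, bounded exploration of interleavings with a lockset-style race check
    # (hypothetical Python analogue; the actual checker works on C# programs).
    import random

    # Each event: (thread id, "read" or "write", shared variable, frozenset of locks held)
    THREAD_A = [("A", "write", "balance", frozenset()),
                ("A", "read",  "balance", frozenset())]
    THREAD_B = [("B", "write", "balance", frozenset({"m"}))]

    def random_schedule(threads):
        """Pick one concrete interleaving of the per-thread event lists."""
        pending = {tid: list(events) for tid, events in threads.items()}
        schedule = []
        while any(pending.values()):
            tid = random.choice([t for t, events in pending.items() if events])
            schedule.append(pending[tid].pop(0))
        return schedule

    def find_races(schedule):
        """Conflicting accesses: same variable, different threads, at least one write, no common lock."""
        races = set()
        for i, (t1, op1, var1, locks1) in enumerate(schedule):
            for t2, op2, var2, locks2 in schedule[i + 1:]:
                if t1 != t2 and var1 == var2 and "write" in (op1, op2) and not (locks1 & locks2):
                    races.add((var1, tuple(sorted((t1, t2)))))
        return races

    found = set()
    for _ in range(100):   # bounded number of randomized schedules
        found |= find_races(random_schedule({"A": THREAD_A, "B": THREAD_B}))
    print(found)           # {('balance', ('A', 'B'))}
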
piCoq: parallel regression proving for large-scale verification projects
Karl Palmskog, Ahmet Çelik, Miloš Gligorić
{"title":"piCoq: parallel regression proving for large-scale verification projects","authors":"Karl Palmskog, Ahmet Çelik, Miloš Gligorić","doi":"10.1145/3213846.3213877","DOIUrl":"https://doi.org/10.1145/3213846.3213877","url":null,"abstract":"Large-scale verification projects using proof assistants typically contain many proofs that must be checked at each new project revision. While proof checking can sometimes be parallelized at the coarse-grained file level to save time, recent changes in some proof assistant in the LCF family, such as Coq, enable fine-grained parallelism at the level of proofs. However, these parallel techniques are not currently integrated with regression proof selection, a technique that checks only the subset of proofs affected by a change. We present techniques that blend the power of parallel proof checking and selection to speed up regression proving in verification projects, suitable for use both on users' own machines and in workflows involving continuous integration services. We implemented the techniques in a tool, piCoq, which supports Coq projects. piCoq can track dependencies between files, definitions, and lemmas and perform parallel checking of only those files or proofs affected by changes between two project revisions. We applied piCoq to perform regression proving over many revisions of several large open source projects and measured the proof checking time. While gains from using proof-level parallelism and file selection can be considerable, our results indicate that proof-level parallelism and proof selection is consistently much faster than both sequential checking from scratch and sequential checking with proof selection. In particular, 4-way parallelization is up to 28.6 times faster than the former, and up to 2.8 times faster than the latter.","PeriodicalId":20542,"journal":{"name":"Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80560799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
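
The selection half of the approach is easy to picture: compute the reverse transitive closure of the dependency graph from the changed files, then check the selected files in parallel. A minimal Python sketch under assumed file-level dependencies follows; the DEPS table and the check stub are hypothetical, and piCoq additionally tracks dependencies at the level of definitions and lemmas.

    # Dependency-based selection of affected proof files, checked in parallel
    # (illustrative; the DEPS table and the check stub are hypothetical).
    from concurrent.futures import ProcessPoolExecutor

    # file -> files it depends on
    DEPS = {"Lists.v": [],
            "Sorting.v": ["Lists.v"],
            "Perm.v": ["Lists.v"],
            "Main.v": ["Sorting.v", "Perm.v"]}

    def affected(changed, deps):
        """Reverse transitive closure: every file that transitively depends on a changed file."""
        reverse = {f: {g for g, ds in deps.items() if f in ds} for f in deps}
        todo, selected = list(changed), set(changed)
        while todo:
            f = todo.pop()
            for g in reverse[f] - selected:
                selected.add(g)
                todo.append(g)
        return selected

    def check(proof_file):
        """Stand-in for invoking the proof checker on one file."""
        return proof_file, True

    if __name__ == "__main__":
        selection = affected({"Lists.v"}, DEPS)           # all four files are affected here
        with ProcessPoolExecutor(max_workers=4) as pool:  # check the selected files in parallel
            results = dict(pool.map(check, sorted(selection)))
        print(results)
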
PerfFuzz: automatically generating pathological inputs
Caroline Lemieux, Rohan Padhye, Koushik Sen, D. Song
{"title":"PerfFuzz: automatically generating pathological inputs","authors":"Caroline Lemieux, Rohan Padhye, Koushik Sen, D. Song","doi":"10.1145/3213846.3213874","DOIUrl":"https://doi.org/10.1145/3213846.3213874","url":null,"abstract":"Performance problems in software can arise unexpectedly when programs are provided with inputs that exhibit worst-case behavior. A large body of work has focused on diagnosing such problems via statistical profiling techniques. But how does one find these inputs in the first place? We present PerfFuzz, a method to automatically generate inputs that exercise pathological behavior across program locations, without any domain knowledge. PerfFuzz generates inputs via feedback-directed mutational fuzzing. Unlike previous approaches that attempt to maximize only a scalar characteristic such as the total execution path length, PerfFuzz uses multi-dimensional feedback and independently maximizes execution counts for all program locations. This enables PerfFuzz to (1) find a variety of inputs that exercise distinct hot spots in a program and (2) generate inputs with higher total execution path length than previous approaches by escaping local maxima. PerfFuzz is also effective at generating inputs that demonstrate algorithmic complexity vulnerabilities. We implement PerfFuzz on top of AFL, a popular coverage-guided fuzzing tool, and evaluate PerfFuzz on four real-world C programs typically used in the fuzzing literature. We find that PerfFuzz outperforms prior work by generating inputs that exercise the most-hit program branch 5x to 69x times more, and result in 1.9x to 24.7x longer total execution paths.","PeriodicalId":20542,"journal":{"name":"Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77155697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 146
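
The key difference from single-objective approaches is the feedback map: an input is saved whenever it sets a new maximum execution count for any location, not just when a single global score improves. The toy Python loop below illustrates that policy on a hypothetical instrumented target; PerfFuzz itself builds on AFL and real branch-level instrumentation.

    # Toy multi-dimensional feedback loop: keep an input if it raises the execution
    # count of *any* location (illustrative; PerfFuzz itself is built on AFL).
    import random
    from collections import Counter

    def target(data):
        """Hypothetical instrumented program: returns per-location execution counts."""
        counts = Counter()
        for byte in data:
            counts["loop_head"] += 1
            if byte == 0x41:                  # 'A' drives the expensive branch
                counts["hot_branch"] += 1
        return counts

    def mutate(data):
        data = bytearray(data)
        data[random.randrange(len(data))] = random.randrange(256)
        return bytes(data)

    best = {}                                 # location -> highest count observed so far
    queue = [bytes(16)]                       # seed input
    for _ in range(5000):
        child = mutate(random.choice(queue))
        counts = target(child)
        if any(c > best.get(loc, 0) for loc, c in counts.items()):
            queue.append(child)               # new per-location maximum: keep the input
            for loc, c in counts.items():
                best[loc] = max(best.get(loc, 0), c)
    print(best)                               # e.g. {'loop_head': 16, 'hot_branch': ...}
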
Symbolic path cost analysis for side-channel detection
Tegan Brennan, Seemanta Saha, T. Bultan, C. Pasareanu
{"title":"Symbolic path cost analysis for side-channel detection","authors":"Tegan Brennan, Seemanta Saha, T. Bultan, C. Pasareanu","doi":"10.1145/3213846.3213867","DOIUrl":"https://doi.org/10.1145/3213846.3213867","url":null,"abstract":"Side-channels in software are an increasingly significant threat to the confidentiality of private user information, and the static detection of such vulnerabilities is a key challenge in secure software development. In this paper, we introduce a new technique for scalable detection of side- channels in software. Given a program and a cost model for a side-channel (such as time or memory usage), we decompose the control flow graph of the program into nested branch and loop components, and compositionally assign a symbolic cost expression to each component. Symbolic cost expressions provide an over-approximation of all possible observable cost values that components can generate. Queries to a satisfiability solver on the difference between possible cost values of a component allow us to detect the presence of imbalanced paths (with respect to observable cost) through the control flow graph. When combined with taint analysis that identifies conditional statements that depend on secret information, our technique answers the following question: Does there exist a pair of paths in the program's control flow graph, differing only on branch conditions influenced by the secret, that differ in observable side-channel value by more than some given threshold? Additional optimization queries allow us to identify the minimal number of loop iterations necessary for the above to hold or the maximal cost difference between paths in the graph. We perform symbolic execution based feasibility analyses to eliminate control flow paths that are infeasible. We implemented our techniques in a prototype, and we demonstrate its favourable performance against state-of-the-art tools as well as its effectiveness and scalability on a set of sizable, realistic Java server-client and peer-to-peer applications.","PeriodicalId":20542,"journal":{"name":"Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77645813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
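
The core query can be pictured with interval-valued cost expressions: give each branch and loop component a lower and upper bound on observable cost, compose them, and ask whether two secret-dependent branches can differ by more than a threshold. The sketch below is a simplified stand-in using concrete intervals and an invented password-check example; the paper's analysis keeps costs symbolic and discharges the imbalance query with a satisfiability solver.

    # Interval-style stand-in for symbolic path costs (the paper keeps costs symbolic
    # and uses an SMT solver for the imbalance query).

    def seq(*costs):
        """Sequential composition: sum the lower and upper bounds."""
        return (sum(c[0] for c in costs), sum(c[1] for c in costs))

    def loop(body, min_iters, max_iters):
        """Loop component: body cost scaled by the iteration bounds."""
        return (body[0] * min_iters, body[1] * max_iters)

    # Hypothetical check: one branch does secret-length-dependent hashing, the other returns early.
    COMPARE = (1, 1)
    HASH_LOOP = loop(body=(3, 3), min_iters=1, max_iters=64)
    THEN_BRANCH = seq(COMPARE, HASH_LOOP)     # (4, 193)
    ELSE_BRANCH = seq(COMPARE)                # (1, 1)

    def imbalance(a, b):
        """Largest possible observable cost difference between two branches."""
        return max(a[1] - b[0], b[1] - a[0])

    THRESHOLD = 10
    delta = imbalance(THEN_BRANCH, ELSE_BRANCH)
    print(delta, "potential side channel" if delta > THRESHOLD else "balanced")
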
CiD: automating the detection of API-related compatibility issues in Android apps
Li Li, Tegawendé F. Bissyandé, Haoyu Wang, Jacques Klein
{"title":"CiD: automating the detection of API-related compatibility issues in Android apps","authors":"Li Li, Tegawendé F. Bissyandé, Haoyu Wang, Jacques Klein","doi":"10.1145/3213846.3213857","DOIUrl":"https://doi.org/10.1145/3213846.3213857","url":null,"abstract":"The Android Application Programming Interface provides the necessary building blocks for app developers to harness the functionalities of the Android devices, including for interacting with services and accessing hardware. This API thus evolves rapidly to meet new requirements for security, performance and advanced features, creating a race for developers to update apps. Unfortunately, given the extent of the API and the lack of automated alerts on important changes, Android apps are suffered from API-related compatibility issues. These issues can manifest themselves as runtime crashes creating a poor user experience. We propose in this paper an automated approach named CiD for systematically modelling the lifecycle of the Android APIs and analysing app bytecode to flag usages that can lead to potential compatibility issues. We demonstrate the usefulness of CiD by helping developers repair their apps, and we validate that our tool outperforms the state-of-the-art on benchmark apps that take into account several challenges for automatic detection.","PeriodicalId":20542,"journal":{"name":"Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87948765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 108
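
The detection itself reduces to a lifecycle lookup: for every API an app uses, compare the level at which it was introduced (and possibly removed) against the range of SDK levels the app claims to support. The Python sketch below shows that check over a small, invented lifecycle table; CiD derives the real lifecycle model from the history of the Android framework and analyses app bytecode to collect the used APIs.

    # Lifecycle check: flag API calls that some supported SDK level does not provide
    # (the lifecycle table below is hypothetical; CiD mines it from the framework history).

    # API signature -> (level it was added in, level after which it was removed, or None)
    API_LIFECYCLE = {
        "android.app.Notification.Builder#setChannelId": (26, None),
        "org.apache.http.client.HttpClient#execute":     (1, 22),
    }

    def compatibility_issues(used_apis, min_sdk, max_sdk):
        """Return (api, reason) pairs for APIs unavailable somewhere in min_sdk..max_sdk."""
        issues = []
        for api in used_apis:
            added, removed = API_LIFECYCLE.get(api, (1, None))
            if added > min_sdk:
                issues.append((api, f"missing below API level {added}"))
            if removed is not None and removed < max_sdk:
                issues.append((api, f"removed after API level {removed}"))
        return issues

    used = list(API_LIFECYCLE)
    for api, reason in compatibility_issues(used, min_sdk=21, max_sdk=28):
        print(api, "->", reason)
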
Badger: complexity analysis with fuzzing and symbolic execution
Yannic Noller, Rody Kersten, C. Pasareanu
{"title":"Badger: complexity analysis with fuzzing and symbolic execution","authors":"Yannic Noller, Rody Kersten, C. Pasareanu","doi":"10.1145/3213846.3213868","DOIUrl":"https://doi.org/10.1145/3213846.3213868","url":null,"abstract":"Hybrid testing approaches that involve fuzz testing and symbolic execution have shown promising results in achieving high code coverage, uncovering subtle errors and vulnerabilities in a variety of software applications. In this paper we describe Badger - a new hybrid approach for complexity analysis, with the goal of discovering vulnerabilities which occur when the worst-case time or space complexity of an application is significantly higher than the average case. Badger uses fuzz testing to generate a diverse set of inputs that aim to increase not only coverage but also a resource-related cost associated with each path. Since fuzzing may fail to execute deep program paths due to its limited knowledge about the conditions that influence these paths, we complement the analysis with a symbolic execution, which is also customized to search for paths that increase the resource-related cost. Symbolic execution is particularly good at generating inputs that satisfy various program conditions but by itself suffers from path explosion. Therefore, Badger uses fuzzing and symbolic execution in tandem, to leverage their benefits and overcome their weaknesses. We implemented our approach for the analysis of Java programs, based on Kelinci and Symbolic PathFinder. We evaluated Badger on Java applications, showing that our approach is significantly faster in generating worst-case executions compared to fuzzing or symbolic execution on their own.","PeriodicalId":20542,"journal":{"name":"Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87457816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 55
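
The tandem structure can be caricatured as follows: mutate inputs and keep those that raise the observed cost, and when progress stalls, hand the best input to a directed step that constructs an input satisfying a guard the fuzzer cannot hit. In the sketch below that directed step is a trivial stand-in function, whereas Badger uses Symbolic PathFinder; the target program, magic prefix, and thresholds are all invented for illustration.

    # Fuzzing and a directed step in tandem, driving up a cost measure (illustrative stub;
    # Badger pairs Kelinci-based fuzzing with Symbolic PathFinder on Java bytecode).
    import random

    def target(data):
        """Toy program: a hard-to-hit guard followed by input-dependent work."""
        cost = 0
        if data[:4] == b"DEEP":               # random mutation almost never satisfies this
            for b in data[4:]:
                cost += b                     # worst case grows with the payload bytes
        return cost

    def mutate(data):
        data = bytearray(data)
        data[random.randrange(len(data))] = random.randrange(256)
        return bytes(data)

    def solve_guard(data):
        """Stand-in for the symbolic-execution step: build an input satisfying the guard."""
        return b"DEEP" + data[4:]

    best, best_cost, stall = bytes([7] * 16), 0, 0
    for _ in range(20000):
        if stall > 500:                       # fuzzing plateaued: ask the 'solver' for help
            candidate, stall = solve_guard(best), 0
        else:
            candidate = mutate(best)
        cost = target(candidate)
        if cost > best_cost:
            best, best_cost, stall = candidate, cost, 0
        else:
            stall += 1
    print(best_cost)                          # approaches 12 * 255 given enough iterations
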
Eliminating timing side-channel leaks using program repair
Meng Wu, Shengjian Guo, P. Schaumont, Chao Wang
{"title":"Eliminating timing side-channel leaks using program repair","authors":"Meng Wu, Shengjian Guo, P. Schaumont, Chao Wang","doi":"10.1145/3213846.3213851","DOIUrl":"https://doi.org/10.1145/3213846.3213851","url":null,"abstract":"We propose a method, based on program analysis and transformation, for eliminating timing side channels in software code that implements security-critical applications. Our method takes as input the original program together with a list of secret variables (e.g., cryptographic keys, security tokens, or passwords) and returns the transformed program as output. The transformed program is guaranteed to be functionally equivalent to the original program and free of both instruction- and cache-timing side channels. Specifically, we ensure that the number of CPU cycles taken to execute any path is independent of the secret data, and the cache behavior of memory accesses, in terms of hits and misses, is independent of the secret data. We have implemented our method in LLVM and validated its effectiveness on a large set of applications, which are cryptographic libraries with 19,708 lines of C/C++ code in total. Our experiments show the method is both scalable for real applications and effective in eliminating timing side channels.","PeriodicalId":20542,"journal":{"name":"Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91088977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 87
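
A representative instance of the repair is replacing a secret-dependent early exit with a fixed-work, branchless computation. The Python comparison below only illustrates the pattern; it gives no genuine timing guarantee (the paper's transformation operates on LLVM IR for C/C++ code and also equalizes cache behavior). In Python itself, the standard library's hmac.compare_digest serves this purpose.

    # The branch-equalizing pattern behind the repair, shown on a byte comparison
    # (illustrative only: Python offers no real timing guarantees).

    def leaky_compare(secret: bytes, guess: bytes) -> bool:
        """Early exit: running time depends on how many leading bytes match the secret."""
        if len(secret) != len(guess):
            return False
        for s, g in zip(secret, guess):
            if s != g:
                return False
        return True

    def balanced_compare(secret: bytes, guess: bytes) -> bool:
        """Fixed work: always scans everything and accumulates differences without branching on them."""
        if len(secret) != len(guess):
            return False
        diff = 0
        for s, g in zip(secret, guess):
            diff |= s ^ g        # no secret-dependent branch inside the loop
        return diff == 0

    print(balanced_compare(b"hunter2", b"hunter2"), balanced_compare(b"hunter2", b"hunter!"))
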