{"title":"A bounded statistical approach for model checking of unbounded until properties","authors":"Ru He, Paul Jennings, Samik Basu, Arka P. Ghosh, Huaiqin Wu","doi":"10.1145/1858996.1859043","DOIUrl":"https://doi.org/10.1145/1858996.1859043","url":null,"abstract":"We study the problem of statistical model checking of probabilistic systems for PCTL unbounded until property PJoinp(Æ1UÆ2) (where Join |X| {<, d, >, e}) using the computation of P d 0(Æ1UÆ2). The approach is first proposed by Sen et al. in CAV'05 but their approach suffers from two drawbacks. Firstly, the computation of Pd0Æ1UÆ2) requires for its validity, a user-specified input parameter ´2 which the user is unlikely to correctly provide. Secondly, the validity of computation of Pd0Æ1UÆ2) is limited only to probabilistic models that do not contain loops. We present a new technique which addresses both problems described above. Essentially our technique transforms the hypothesis test for the unbounded until property in the original model into a new equivalent hypothesis test for bounded until property in our modified model. We empirically show the effectiveness of our technique and compare our results with those using the method proposed by Sen et al.","PeriodicalId":341489,"journal":{"name":"Proceedings of the 25th IEEE/ACM International Conference on Automated Software Engineering","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134280798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Search-carrying code","authors":"Ali Taleghani, J. Atlee","doi":"10.1145/1858996.1859079","DOIUrl":"https://doi.org/10.1145/1858996.1859079","url":null,"abstract":"In this paper, we introduce a model-checking-based certification technique called search-carrying code (SCC). SCC is an adaptation of the principles of proof-carrying code, in which program certification is reduced to checking a provided safety proof. In SCC, program certification is an efficient re-examination of a program's state space. A code producer, who offers a program for use, provides a search script that encodes a search of the program's state space. A code consumer, who wants to certify that the program fits her needs, uses the search script to direct how a model checker searches the program's state space. Basic SCC achieves slight reductions in certification time, but it can be optimized in two important ways. (1) When a program comes from a trusted source, SCC certification can forgo authenticating the provided search script and instead optimize for speed of certification. (2) The search script can be partitioned into multiple partial certification tasks of roughly equal size, which can be performed in parallel. Using parallel model checking, we reduce the certification times by a factor of up to n, for n processors. When certifying a program from a trusted source, we reduce the certification times by a factor of up to 5n, for n processors.","PeriodicalId":341489,"journal":{"name":"Proceedings of the 25th IEEE/ACM International Conference on Automated Software Engineering","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114596366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An automated approach for finding variable-constant pairing bugs","authors":"J. Lawall, D. Lo","doi":"10.1145/1858996.1859014","DOIUrl":"https://doi.org/10.1145/1858996.1859014","url":null,"abstract":"Named constants are used heavily in operating systems code, both as internal flags and in interactions with devices. Decision making within an operating system thus critically depends on the correct usage of these values. Nevertheless, compilers for the languages typically used in implementing operating systems provide little support for checking the usage of named constants. This affects correctness, when a constant is used in a context where its value is meaningless, and software maintenance, when a constant has the right value for its usage context but the wrong name. We propose a hybrid program-analysis and data-mining based approach to identify the uses of named constants and to identify anomalies in these uses. We have applied our approach to a recent version of the Linux kernel and have found a number of bugs affecting both correctness and software maintenance. Many of these bugs have been validated by the Linux developers.","PeriodicalId":341489,"journal":{"name":"Proceedings of the 25th IEEE/ACM International Conference on Automated Software Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116367975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SpecDiff: debugging formal specifications","authors":"Zhenchang Xing, Jun Sun, Yang Liu, J. Dong","doi":"10.1145/1858996.1859072","DOIUrl":"https://doi.org/10.1145/1858996.1859072","url":null,"abstract":"This paper presents our SpecDiff tool that exploits the model differencing technique for debugging and understanding evolving behaviors of formal specifications. SpecDiff has been integrated in the Process Analysis Toolkit (PAT), a framework for formal specification, verification and simulation. SpecDiff is able to assist in diagnosing system faults, understanding the impacts of specification optimization techniques, and revealing the system change patterns.","PeriodicalId":341489,"journal":{"name":"Proceedings of the 25th IEEE/ACM International Conference on Automated Software Engineering","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129578729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impendulo: debugging the programmer","authors":"W. Visser, J. Geldenhuys","doi":"10.1145/1858996.1859071","DOIUrl":"https://doi.org/10.1145/1858996.1859071","url":null,"abstract":"We describe the Impendulo tool for fine-grained analyses of programmer behavior. The initial design goal was to create a system to answer the following simple question: \"What kind of mistakes do programmers make and how often do they make these mistakes?\" However it quickly became apparent that the tool can be used to also analyze other fundamental software engineering questions, such as, how good are static analysis tools at finding real errors?, what is the fault finding capability of automated test generation tools?, what is the influence of a bad specification?, etc. We briefly describe the tool and some of the insights gained from using it.","PeriodicalId":341489,"journal":{"name":"Proceedings of the 25th IEEE/ACM International Conference on Automated Software Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130103084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A sentence-matching method for automatic license identification of source code files","authors":"D. Germán, Yuki Manabe, Katsuro Inoue","doi":"10.1145/1858996.1859088","DOIUrl":"https://doi.org/10.1145/1858996.1859088","url":null,"abstract":"The reuse of free and open source software (FOSS) components is becoming more prevalent. One of the major challenges in finding the right component is finding one that has a license that is e for its intended use. The license of a FOSS component is determined by the licenses of its source code files. In this paper, we describe the challenges of identifying the license under which source code is made available, and propose a sentence-based matching algorithm to automatically do it. We demonstrate the feasibility of our approach by implementing a tool named Ninka. We performed an evaluation that shows that Ninka outperforms other methods of license identification in precision and speed. We also performed an empirical study on 0.8 million source code files of Debian that highlight interesting facts about the manner in which licenses are used by FOSS","PeriodicalId":341489,"journal":{"name":"Proceedings of the 25th IEEE/ACM International Conference on Automated Software Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125442324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection of recurring software vulnerabilities","authors":"N. Pham, T. Nguyen, H. Nguyen, T. Nguyen","doi":"10.1145/1858996.1859089","DOIUrl":"https://doi.org/10.1145/1858996.1859089","url":null,"abstract":"Software security vulnerabilities are discovered on an almost daily basis and have caused substantial damage. Aiming at supporting early detection and resolution for them, we have conducted an empirical study on thousands of vulnerabilities and found that many of them are recurring due to software reuse. Based on the knowledge gained from the study, we developed SecureSync, an automatic tool to detect recurring software vulnerabilities on the systems that reuse source code or libraries. The core of SecureSync includes two techniques to represent and compute the similarity of vulnerable code across different systems. The evaluation for 60 vulnerabilities on 176 releases of 119 open-source software systems shows that SecureSync is able to detect recurring vulnerabilities with high accuracy and to identify 90 releases having potentially vulnerable code that are not reported or fixed yet, even in mature systems. A couple of cases were actually confirmed by their developers.","PeriodicalId":341489,"journal":{"name":"Proceedings of the 25th IEEE/ACM International Conference on Automated Software Engineering","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114206504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic detection of nocuous coordination ambiguities in natural language requirements","authors":"Hui Yang, A. Willis, A. Roeck, B. Nuseibeh","doi":"10.1145/1858996.1859007","DOIUrl":"https://doi.org/10.1145/1858996.1859007","url":null,"abstract":"Natural language is prevalent in requirements documents. However, ambiguity is an intrinsic phenomenon of natural language, and is therefore present in all such documents. Ambiguity occurs when a sentence can be interpreted differently by different readers. In this paper, we describe an automated approach for characterizing and detecting so-called nocuous ambiguities, which carry a high risk of misunderstanding among different readers. Given a natural language requirements document, sentences that contain specific types of ambiguity are first extracted automatically from the text. A machine learning algorithm is then used to determine whether an ambiguous sentence is nocuous or innocuous, based on a set of heuristics that draw on human judgments, which we collected as training data. We implemented a prototype tool for Nocuous Ambiguity Identification (NAI), in order to illustrate and evaluate our approach. The tool focuses on coordination ambiguity. We report on the results of a set of experiments to assess the performance and usefulness of the approach.","PeriodicalId":341489,"journal":{"name":"Proceedings of the 25th IEEE/ACM International Conference on Automated Software Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120956386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Keynote address: model engineering for model-driven engineering","authors":"A. van Lamsweerde","doi":"10.1145/1858996.1858999","DOIUrl":"https://doi.org/10.1145/1858996.1858999","url":null,"abstract":"The effectiveness of MDE relies on our ability to build high-quality models. This task is intrinsically difficult. We need to produce sufficiently complete, adequate, consistent, and well-structured models from incomplete, imprecise, and sparse material originating from multiple, often conflicting sources. The system we need to consider in the early stages comprises software and environment components including people and devices. Such models should integrate the intentional, structural, functional, and behavioral facets of the system being developed. Rigorous techniques are needed for model construction, analysis, and evolution. They should support early and incremental reasoning about partial models for a variety of purposes, including satisfaction arguments, property checks, animations, the evaluation of alternative options, the analysis of risks, threats and conflicts, and traceability management. The tension between technical precision and practical applicability calls for a suitable mix of heuristic, deductive, and inductive forms of reasoning on a suitable mix of declarative and operational models. Formal techniques should be deployed only when and where needed, and kept hidden wherever possible. The talk will provide a retrospective account of our research efforts and practical experience along this route, including recent progress in model engineering for safety-critical medical workfows. Problem-oriented abstractions, analyzable models, and constructive techniques are pervasive concerns.","PeriodicalId":341489,"journal":{"name":"Proceedings of the 25th IEEE/ACM International Conference on Automated Software Engineering","volume":"877 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116538167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deriving behavior of multi-user processes from interactive requirements validation","authors":"Gregor Gabrysiak, H. Giese, Andreas Seibel","doi":"10.1145/1858996.1859073","DOIUrl":"https://doi.org/10.1145/1858996.1859073","url":null,"abstract":"In this tool demonstration we present an implementation for interactively validating requirements for multi-user software systems and the processes they support with end users. The tool combines the advantages of requirements animation and scenario synthesis to gather stakeholder feedback and create a common understanding amongst stakeholders. Additionally, the users' behavior during the simulation is captured and used to automatically derive new behavioral specifications.","PeriodicalId":341489,"journal":{"name":"Proceedings of the 25th IEEE/ACM International Conference on Automated Software Engineering","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125083884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}