{"title":"Modeling and Verifying Transaction Scheduling for Software Transactional Memory using CSP","authors":"Chao Xu, Xi Wu, Huibiao Zhu, M. Popovic","doi":"10.1109/TASE.2019.00009","DOIUrl":"https://doi.org/10.1109/TASE.2019.00009","url":null,"abstract":"Transaction Memory (TM) is designed for simplifying parallel programming, while some key problems exist in it, such as starvation and reduced performance with high contention among transactions. In order to improve the performance of TM, researchers have designed several transaction scheduling algorithms and given their experimental results. However, the evaluations on the algorithms given by these researches are rather partial and lack of generality. Since these experimental results ignore the verification of properties which are necessary for transaction scheduling and could be greatly affected by the execution environment, thus it is still challenging for us to judge the quality of the algorithms for TM. In this paper, we provide a formal approach to evaluate transaction scheduling algorithms in a more comprehensive and strict way. We choose three recently proposed algorithms as motivating examples and formalize them using the process algebra CSP. We also use a model checker PAT to verify the properties (e.g., deadlock freeness and starvation freeness) of the models. Besides, it is also easier to compare the performance of the algorithms, from the perspective of makespan, speedup, aborts time and throughput, based on the statistics given by PAT. Consequently, a formal approach can be achieved to evaluate transaction scheduling algorithms, which is also a good guide for the further design of the algorithms for TM.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133648922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding Ineffective Events and Reducing Test Sequences for Android Applications","authors":"Ping Wang, Jiwei Yan, Xi Deng, Jun Yan, Jian Zhang","doi":"10.1109/TASE.2019.00012","DOIUrl":"https://doi.org/10.1109/TASE.2019.00012","url":null,"abstract":"Monkey, which is integrated with the Android system, becomes the most widely used test input generation tool, owing to the simplicity, effectiveness and good compatibility. However, Monkey is based on coordinates of screen and oblivious to the widgets and the GUI states, which results in a great many ineffective events that have no contribution to the test. To address the major drawbacks, this paper parses the events of 200 test sequences generated by Monkey into human-readable scripts and manually investigate the effects of these events. We find three types of patterns on the ineffective events, including no-ops, single and combination of effect-free ones, and summarize them into ten rules for sequence reduction. Then, we implement a tool CHARD to match these patterns in real-world traces and prune the redundant events. The evaluation on 923 traces from various apps covering 16 categories shows that CHARD can process 1,000 events in a few seconds and identifies 41.3% events as ineffective ones. Meanwhile, the reduced sequence keeps the same functionality with the original one that can trigger the same behaviors. Our work can be applied to lessen the diagnose effort for record-and-replay, and as a preprocessing step for other works on analyzing sequences. For instance, CHARD can remove 72.6% ineffective events and saves 67.6% time of delta debugging in our experiments.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130552028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Certified Embedding of B Models in an Integrated Verification Framework","authors":"Alexandra Halchin, Y. A. Ameur, N. Singh, Abderrahmane Feliachi, J. Ordioni","doi":"10.1109/TASE.2019.000-4","DOIUrl":"https://doi.org/10.1109/TASE.2019.000-4","url":null,"abstract":"To check the correctness of heterogeneous models of a complex critical system is challenging to meet the certification standard. Such guarantee can be provided by embedding the heterogeneous models into an integrated modelling framework. This work is proposed in the B-PERFect project of RATP (Parisian Public Transport Operator and Maintainer), it aims to apply formal verification using the PERF approach on the integrated safety-critical software related to railway domain expressed in a single modelling language: HLL. This paper presents a certified translation from B formal language to HLL. The proposed approach uses HOL as a unified logical framework to describe the formal semantics and to formalize the translation relation of both languages. The developed Isabelle/HOL models are proved in order to guarantee the correctness of our translation process. Moreover, we have also used weak-bisimulation relation to check the correctness of translation steps. The overall approach is illustrated through a case study issued from a railway software system: onboard localization function. Furthermore, it discusses the integrated verification at system level.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134402285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient Counter-factual Type Error Debugging","authors":"Sheng Chen, Baijun Wu","doi":"10.1109/TASE.2019.00-13","DOIUrl":"https://doi.org/10.1109/TASE.2019.00-13","url":null,"abstract":"Type inference is an important part of functional programming languages and has been increasingly adopted to imperative programming. However, providing effective error messages in response to type inference failures (due to type errors in programs) continues to be a challenge. Type error messages generated by compilers and existing error debugging approaches often point to bogus error locations or lack sufficient information for removing the type error, making error debugging ineffective. Counter-factual typing (CFT) addressed this problem by generating comprehensive error messages with each message includes a rich set of information. However, CFT has a large response time, making it too slow for interactive use. In particular, our recent study shows that programmers usually have to go through multiple iterations of updating and recompiling programs to remove a type error. Interestingly, our study also reveals that program updates are minor in each iteration during type error debugging. We exploit this fact and develop eCFT, an efficient version of CFT, which doesn't recompute all error fixes from scratch for each updated program but only recomputes error fixes that are changed in response to the update. Our key observation is that minor program changes lead to minor error suggestion changes. eCFT is based on principal typing, a typing scheme more amenable to reuse previous typing results. We have evaluated our approach and found it is about 12.4× faster than CFT in updating error fixes.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131892207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Non-Intrusive MC/DC Measurement Based on Traces","authors":"Faustin Ahishakiye, S. Jaksic, F. D. Lange, Malte Schmitz, V. Stolz, Daniel Thoma","doi":"10.1109/TASE.2019.00-15","DOIUrl":"https://doi.org/10.1109/TASE.2019.00-15","url":null,"abstract":"We present a novel, non-intrusive approach to MC/DC coverage measurement using modern processor-based tracing facilities. Our approach does not require recompilation or instrumentation of the software under test. Instead, we use the Intel Processor Trace (Intel PT) facility present on modern Intel CPUs. Our tooling consists of the following parts: a frontend that detects so-called decisions (Boolean expressions) that are used in conditionals in C source code, a mapping from conditional jumps in the object code back to those decisions, and an analysis that computes satisfaction of the MC/DC coverage relation on those decisions from an execution trace. This analysis takes as input a stream of instruction addresses decoded from Intel PT trace data, which was recorded while running the software under test. We describe our architecture and discuss limitations and future work.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134594719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identify Blackboard Pattern Based on Ontology","authors":"Lihui Tang, Lulu Wang, Bixin Li","doi":"10.1109/TASE.2019.00007","DOIUrl":"https://doi.org/10.1109/TASE.2019.00007","url":null,"abstract":"Blackboard pattern identification is significant for the programmer to maintain the software system. Whether and how the system uses the blackboard pattern could help the programmers unfamiliar with the target system. This paper proposes a blackboard-instance identification approach based on ontology, which not only judges whether the target system uses the blackboard pattern but also provides the blackboard pattern implementation of the target system. The target system is described by ontology and input into the ABox of the knowledge base, the blackboard pattern is described by ontology and input into the TBox of the knowledge base. And the inference engine will reason out the raw pattern instance. Finally, the final pattern instance will be outputted by iterative refinement. To study the accuracy of our approach, sixty-eight projects have been tested and two of them have been analyzed the components' identification accuracy.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132569148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Logical Approach for the Schedulability Analysis of CCSL","authors":"Yuanrui Zhang, F. Mallet, Huibiao Zhu, Yixiang Chen","doi":"10.1109/TASE.2019.00-23","DOIUrl":"https://doi.org/10.1109/TASE.2019.00-23","url":null,"abstract":"The Clock Constraint Specification Language (CCSL) is a clock-based formalism for formal specification and analysis of real-time embedded systems. Previous approaches for the schedulability analysis of CCSL specifications are mainly based on model checking or SMT-checking. In this paper we propose a logical approach mainly based on theorem proving. We build a dynamic logic called 'clock-based dynamic logic' (cDL) to capture the CCSL specifications and build a proof calculus to analyze the schedule problem of the specifications. Comparing with previous approaches, our method benefits from the dynamic logic that provides a natural way of capturing the dynamic behaviour of CCSL and a divide-and-conquer way for 'decomposing' a complex formula into simple ones for an SMT-checking procedure. Based on cDL, we outline a method for the schedulability analysis of CCSL. We illustrate our theory through one example.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115553368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning-based Probabilistic Modeling and Verifying Driver Behavior using MDP","authors":"Xin Bai, Chenghao Xu, Yi Ao, Biao Chen, Dehui Du","doi":"10.1109/TASE.2019.000-6","DOIUrl":"https://doi.org/10.1109/TASE.2019.000-6","url":null,"abstract":"Assisted driving has always been a hot research issue. The existing work mainly focuses on modeling vehicles behavior. However, there still lacks research work of modeling and verifying driver behavior. To solve these problems, we are committed to modeling and analyzing the driver behavior with Markov Decision Process (MDP). The aim is to achieve safe driving by monitoring and predicting the driver's states. In this paper, we propose a novel approach to construct MDP models of driver behavior. It comprises four phases: (1) data preprocessing using Convolutional Neural Network (CNN), wherein we adopt CNN to extract the features of driver behavior with the simulation data; (2) Bayes-based learning, wherein we construct a training set and use the Naive Bayes algorithm to train the State Prediction Model (SPM); (3) MDP generating, wherein we propose an algorithm to generate MDP models for the driver behavior with the help of SPM; and (4) quantitative analysis, wherein we analyze the uncertain behavior of the driver with probabilistic model checking technology. The main novelty of our work is to model and verify the driver behavior by integrating the learning and the model checking technology. To implement our approach, we have developed the MDP generator. Moreover, the quantitative analyses of the driver behavior are conducted with the model checker PRISM. The experiment results show that our approach facilitates generating MDP models, which helps to model and analyze the uncertain behavior of the driver.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128755635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Recurrent Neural Network to Predict Tactics for Proving Component Connector Properties in Coq","authors":"Xiyue Zhang, Yi Li, Weijiang Hong, Mengyong Sun","doi":"10.1109/TASE.2019.00-12","DOIUrl":"https://doi.org/10.1109/TASE.2019.00-12","url":null,"abstract":"Formal modeling and verification of component connectors in complex software systems are getting more interests with recent advancements and evolution in modern software techniques. Various properties of connectors can be specified as high-order logic propositions and verified using theorem proving techniques. However, most high-order logic provers still highly rely on human interactions and thus make the proving process difficult and time-consuming. In this paper, we propose an approach based on recurrent neural networks (RNNs) to predict the correct tactics in the proving process. Recurrent layers consisting of Long-Short-Term-Memory (LSTM) units provide a better correctness rate comparing with simple RNN units. Under this framework, properties of connectors can be naturally formalized and semi-automatically proved in Coq.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122881155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Verifying the Relationship Among Three Descriptions in Problem Frames Using CSP","authors":"Xiaohong Chen, Xi Wu, Mengyao Zhao, Haiying Sun","doi":"10.1109/TASE.2019.00010","DOIUrl":"https://doi.org/10.1109/TASE.2019.00010","url":null,"abstract":"In requirements engineering (RE), there are three essential descriptions, i.e., requirements, specification and domain properties. Their relationship is proposed by Jackson et al, and verified in various requirements approaches. However, at present, there is no formal verification for the relationship in the Problem Frames (PF) which is a well known approach in the RE. Our previous work based on the PF explicitly captures the three descriptions. Based on that work, this paper further formalizes the three descriptions using Communicating Sequential Process (CSP), transforms the relationship into two \"refines\", and verifies them with Process Analysis Tool (PAT). The verification ensures that the machine which behaves as in the specification installed in a specific domain will satisfy the requirements.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117097061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}