{"title":"Working Across Boundaries","authors":"R. Hierons, Tao Xie","doi":"10.1002/stvr.1734","DOIUrl":null,"url":null,"abstract":"This editorial was written during a period of extreme difficulty for many individuals, families, and nations in the ongoing COVID-19 outbreak. We can only hope that measures taken are successful and that the situation has improved considerably. We also do not pretend that we have anything to add regarding health, social, or economic issues. However, the crisis has shown the role that Computer Science can play in informing policy. Society requires evidence and computers are often involved in producing such evidence via, for example, simulation. It is here that we, as a community, can contribute through advances in testing, verification, and reliability in areas such as Scientific Computing and Computer Simulations and maybe also AI/data sciences for helping expedite the process of finding treatment. As a recent example discussed in social media, when commenting on pandemic simulation code used to model control measures against COVID-19, Prof. Guido Salvaneschi said in his tweet: “Ever wondered about the “impact“ of research on programming languages and software engineering? Political decisions affecting hundreds of millions are being taken based on thousands of lines of 13+ years old C code that allegedly nobody understands anymore. #COVID19 #cs” (https://twitter.com/guidosalva/status/1242049884347412482). There is already some truly excellent work for making advances in these areas and we are confident that the community will rise to the challenge. This issue contains two papers. In the first paper, Simons and Lefticaru introduce a new Model-Based Testing approach, which is based on the use of a Stream X-machine (SXM) specification. SXMs provide a state-based formalism and there is a traditional approach to testing from an SXM. This approach typically assumes that the underlying functions/operations have been implemented correctly but these functions may be integrated (into a state machine) in the wrong way. There are a number of automated test generation approaches for SXMs and the authors make two main additional contributions to this area. First, they introduce a number of novel optimisations into test generation. Second, they observe that SXM test generation algorithms return abstract test cases (sequences of functions); the paper shows how corresponding concrete test data can be generated. The approach has been implemented and evaluated on case studies, with the tool also checking that a specification satisfies certain desirable properties. (Recommended by Hyunsook Do). In the second paper, Pouria Derakhshanfar, Xavier Devroey, Gilles Perrouin, Andy Zaidman, and Arie van Deursen introduce behavioural model seeding, a new seeding approach for learning class usages from both the system source code under test and existing test cases. The learned class usages are represented in a state-machine-based behavioural model. The behavioural model is then used to guide search-based crash reproduction, which generates a test case (i.e., objects and sequences of method calls on those objects) to reproduce a crash given its stack trace. This approach is in contrast to the existing seeding strategies, which simply collect and reuse values and object states from the system source code under test and existing test cases without any abstraction. 
The approach has been implemented in an open-source implementation named the BOTSING toolset and evaluated on 122 crashes from six open-source applications. (Recommended by Phil McMinn).","PeriodicalId":49506,"journal":{"name":"Software Testing Verification & Reliability","volume":"77 1","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2020-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Software Testing Verification & Reliability","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1002/stvr.1734","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Abstract
This editorial was written during a period of extreme difficulty for many individuals, families, and nations in the ongoing COVID-19 outbreak. We can only hope that the measures taken prove successful and that, by the time this issue appears, the situation has improved considerably. We do not pretend to have anything to add regarding health, social, or economic issues. However, the crisis has shown the role that Computer Science can play in informing policy. Society requires evidence, and computers are often involved in producing such evidence via, for example, simulation. It is here that we, as a community, can contribute: through advances in testing, verification, and reliability in areas such as Scientific Computing and Computer Simulations, and perhaps also in AI and data science to help expedite the search for treatments. As a recent example discussed on social media, commenting on pandemic simulation code used to model control measures against COVID-19, Prof. Guido Salvaneschi tweeted: “Ever wondered about the ‘impact’ of research on programming languages and software engineering? Political decisions affecting hundreds of millions are being taken based on thousands of lines of 13+ years old C code that allegedly nobody understands anymore. #COVID19 #cs” (https://twitter.com/guidosalva/status/1242049884347412482). There is already some truly excellent work making advances in these areas, and we are confident that the community will rise to the challenge.

This issue contains two papers. In the first paper, Simons and Lefticaru introduce a new model-based testing approach based on the use of a Stream X-machine (SXM) specification. SXMs provide a state-based formalism, and the traditional approach to testing from an SXM assumes that the underlying functions/operations have been implemented correctly but may have been integrated (into a state machine) in the wrong way. A number of automated test generation approaches exist for SXMs, and the authors make two main additional contributions to this area. First, they introduce several novel optimisations into test generation. Second, they observe that SXM test generation algorithms return abstract test cases (sequences of functions); the paper shows how corresponding concrete test data can be generated. The approach has been implemented and evaluated on case studies, with the tool also checking that a specification satisfies certain desirable properties. (Recommended by Hyunsook Do.)

In the second paper, Pouria Derakhshanfar, Xavier Devroey, Gilles Perrouin, Andy Zaidman, and Arie van Deursen introduce behavioural model seeding, a new seeding approach that learns class usages from both the system source code under test and existing test cases. The learned class usages are represented in a state-machine-based behavioural model, which is then used to guide search-based crash reproduction: generating a test case (i.e., objects and sequences of method calls on those objects) that reproduces a crash given its stack trace. This contrasts with existing seeding strategies, which simply collect and reuse values and object states from the system source code under test and existing test cases, without any abstraction. The approach has been implemented in the open-source BOTSING toolset and evaluated on 122 crashes from six open-source applications. (Recommended by Phil McMinn.)
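To illustrate the abstract-versus-concrete distinction at the heart of the first paper, the following is a minimal sketch, not the authors' tool or algorithm: a toy SXM whose transitions are processing functions guarded on memory and input, plus a naive exhaustive search that turns an abstract test case (a sequence of function labels) into concrete input data. The vending-machine functions and candidate values are all invented for illustration.

```python
# Toy SXM: each processing function takes (memory, input) and returns
# (output, new_memory), or None when its guard rejects the input.
def insert_coin(mem, x):
    if x > 0:
        return ("credit", mem + x)
    return None

def vend(mem, x):
    if mem >= 100 and x == 0:
        return ("drink", mem - 100)
    return None

FUNCS = {"insert_coin": insert_coin, "vend": vend}

def concretise(abstract_test, candidates=range(0, 201, 50)):
    """Turn an abstract test (function labels) into concrete inputs by
    exhaustively searching for values that satisfy each guard in turn."""
    def search(mem, remaining, chosen):
        if not remaining:
            return chosen
        func = FUNCS[remaining[0]]
        for x in candidates:
            result = func(mem, x)
            if result is not None:            # guard satisfied: recurse
                found = search(result[1], remaining[1:], chosen + [x])
                if found is not None:
                    return found
        return None                           # dead end: backtrack
    return search(0, abstract_test, [])

# Abstract test case from a generator: pay twice, then vend.
print(concretise(["insert_coin", "insert_coin", "vend"]))   # -> [50, 50, 0]
```

The point of the sketch is only that an abstract sequence is not executable until inputs satisfying each function's guard have been found; the paper's actual generation techniques are considerably more sophisticated.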
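Similarly, for the second paper, here is a hypothetical sketch of the idea behind behavioural model seeding, not the BOTSING implementation: a small state-machine model of how a class is used seeds the search with plausible method-call sequences, which are then perturbed until one reproduces the target exception from the stack trace. The Stack class, the usage model, and the search loop are all invented for illustration.

```python
import random

# Hypothetical usage model learned from code and tests: states abstract the
# object's usage, transitions are observed method calls on it.
MODEL = {
    "empty":    [("push", "nonempty")],
    "nonempty": [("push", "nonempty"), ("pop", "empty")],
}

class Stack:
    def __init__(self):
        self.items = []
    def push(self):
        self.items.append(0)
    def pop(self):
        return self.items.pop()          # raises IndexError when empty

def walk_model(length):
    """Seed: sample a call sequence by a random walk over the model."""
    state, seq = "empty", []
    for _ in range(length):
        method, state = random.choice(MODEL[state])
        seq.append(method)
    return seq

def mutate(seq):
    """Perturb the seed, as the search would, by dropping one random call."""
    i = random.randrange(len(seq))
    return seq[:i] + seq[i + 1:]

def reproduces(seq, target=IndexError):
    """Execute the sequence; report whether the target exception is raised."""
    obj = Stack()
    try:
        for name in seq:
            getattr(obj, name)()
    except target:
        return True
    return False

random.seed(0)
for _ in range(200):                      # naive search loop
    candidate = mutate(walk_model(6))
    if reproduces(candidate):
        print("crash reproduced by:", candidate)
        break
```

The contrast with value-level seeding is that the model abstracts over concrete usages: the search starts from sequences that respect how the class is actually used, rather than from raw values harvested from code and tests.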
Journal information:
The journal is the premier outlet for research results on the subjects of testing, verification, and reliability. Readers will find useful research on issues pertaining to building better software and evaluating it.
The journal is unique in its emphasis on theoretical foundations and applications to real-world software development. The balance of theory, empirical work, and practical applications provides readers with better techniques for testing, verifying, and improving the reliability of software.
The journal targets researchers, practitioners, educators, and students who have a vested interest in results generated by high-quality testing, verification, and reliability modeling and evaluation of software. Topics of special interest include, but are not limited to:
-New criteria for software testing and verification
-Application of existing software testing and verification techniques to new types of software, including web applications, web services, embedded software, aspect-oriented software, and software architectures
-Model-based testing
-Formal verification techniques such as model-checking
-Comparison of testing and verification techniques
-Measurement of and metrics for testing, verification and reliability
-Industrial experience with cutting edge techniques
-Descriptions and evaluations of commercial and open-source software testing tools
-Reliability modeling, measurement and application
-Testing and verification of software security
-Automated test data generation
-Process issues and methods
-Non-functional testing