{"title":"Assessing the testing skills transfer of model-based testing on testing skill acquisition","authors":"","doi":"10.1007/s10270-023-01141-1","DOIUrl":null,"url":null,"abstract":"<h3>Abstract</h3> <p>When creating a software model, it is necessary that it accurately captures the desired behaviour, while at the same time ensuring that any undesired behaviour is excluded. On the one hand, formal verification tools can be used to check the internal consistency of a software system, ensuring that the behaviour of one software component does not contradict another. On the other hand, software testing is essential to check the external validity of the model more comprehensively. Unfortunately, software testing is often overlooked in curricula, resulting in graduates with inadequate software testing skills for industry. Software testing tools such as TesCaV can be used to help teachers teach software testing topics in a non-intrusive and less time-consuming way. Previous research has shown that TesCaV is easy to use and that novice users produce better quality software tests when using TesCaV. However, it has remained unclear whether learners retain the skills they gain from using TesCaV even when the tool is not offered for help. In order to understand the positive effect of TesCaV on learners’ software testing skills, this study conducted an experiment with 45 participants. The experiment used a pretest-treatment-posttest design. The results show that participants feel equally confident about the completeness of their test coverage, even though they identify more test cases. It is concluded that for course design, a capsule such as TesCaV can help students to understand the full complexity of software testing and help them to be more systematic in their approach.</p>","PeriodicalId":49507,"journal":{"name":"Software and Systems Modeling","volume":"54 1","pages":""},"PeriodicalIF":2.0000,"publicationDate":"2024-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Software and Systems Modeling","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s10270-023-01141-1","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0
Abstract
A software model must accurately capture the desired behaviour while excluding any undesired behaviour. On the one hand, formal verification tools can be used to check the internal consistency of a software system, ensuring that the behaviour of one software component does not contradict another. On the other hand, software testing is essential to check the external validity of the model more comprehensively. Unfortunately, software testing is often overlooked in curricula, leaving graduates with software testing skills that are inadequate for industry. Software testing tools such as TesCaV can help teachers cover software testing topics in a non-intrusive and less time-consuming way. Previous research has shown that TesCaV is easy to use and that novice users produce better-quality software tests with it. However, it has remained unclear whether learners retain the skills gained from using TesCaV once the tool is no longer available to help them. In order to understand the positive effect of TesCaV on learners’ software testing skills, this study conducted an experiment with 45 participants, using a pretest-treatment-posttest design. The results show that participants feel equally confident about the completeness of their test coverage even though they identify more test cases. It is concluded that, for course design, a capsule such as TesCaV can help students understand the full complexity of software testing and approach it more systematically.
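To make the kind of model-based testing that TesCaV supports more concrete, the sketch below illustrates, purely as an example, how test cases can be derived from a behavioural model and how coverage can be measured against that model. It is not TesCaV's implementation: the turnstile state machine, the all-transitions coverage criterion, and every identifier in the code are assumptions chosen for brevity.

```python
# Illustrative sketch only: NOT TesCaV's implementation. It shows the general
# idea behind model-based testing -- deriving test cases from a behavioural
# model and measuring coverage against it. The turnstile model and the
# all-transitions criterion are assumptions chosen for this example.

from collections import deque

# Behavioural model: states and labelled transitions of a simple turnstile.
TRANSITIONS = {
    ("locked", "insert_coin"): "unlocked",
    ("locked", "push"): "locked",          # pushing a locked turnstile does nothing
    ("unlocked", "push"): "locked",
    ("unlocked", "insert_coin"): "unlocked",
}
INITIAL_STATE = "locked"


def generate_all_transition_tests(transitions, initial_state):
    """Return one test case (an event sequence) per model transition, each
    reaching the transition's source state via a shortest path from the
    initial state. Assumes every source state is reachable."""
    # Breadth-first search for the shortest event sequence to every state.
    shortest_path = {initial_state: []}
    queue = deque([initial_state])
    while queue:
        state = queue.popleft()
        for (src, event), dst in transitions.items():
            if src == state and dst not in shortest_path:
                shortest_path[dst] = shortest_path[src] + [event]
                queue.append(dst)

    # One test case per transition: prefix to reach the source, then the event.
    return [shortest_path[src] + [event] for (src, event) in transitions]


def transition_coverage(test_cases, transitions, initial_state):
    """Fraction of model transitions exercised by the given test cases."""
    covered = set()
    for events in test_cases:
        state = initial_state
        for event in events:
            covered.add((state, event))
            state = transitions[(state, event)]
    return len(covered) / len(transitions)


if __name__ == "__main__":
    tests = generate_all_transition_tests(TRANSITIONS, INITIAL_STATE)
    for i, t in enumerate(tests, 1):
        print(f"test {i}: {t}")
    print(f"transition coverage: {transition_coverage(tests, TRANSITIONS, INITIAL_STATE):.0%}")
```

Running the sketch prints four event sequences and reports 100% transition coverage; this kind of coverage feedback against the model is, at a much smaller scale, what a coverage-visualization tool such as TesCaV could give learners.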
Journal Introduction
We invite authors to submit papers that discuss and analyze research challenges and experiences pertaining to software and system modeling languages, techniques, tools, practices and other facets. The following are some of the topic areas that are of special interest, but the journal publishes on a wide range of software and systems modeling concerns:
Domain-specific models and modeling standards;
Model-based testing techniques;
Model-based simulation techniques;
Formal syntax and semantics of modeling languages such as the UML;
Rigorous model-based analysis;
Model composition, refinement and transformation;
Software language engineering;
Modeling languages in science and engineering;
Language adaptation and composition;
Metamodeling techniques;
Measuring quality of models and languages;
Ontological approaches to model engineering;
Generating test and code artifacts from models;
Model synthesis;
Methodology;
Model development tool environments;
Modeling cyber-physical systems;
Data-intensive modeling;
Derivation of explicit models from data;
Case studies and experience reports with significant modeling lessons learned;
Comparative analyses of modeling languages and techniques;
Scientific assessment of modeling practices.