{"title":"定义基于场景的模型测试的三种符号的比较:一个控制实验","authors":"Bernhard Hoisl, Stefan Sobernig, Mark Strembeck","doi":"10.1109/QUATIC.2014.19","DOIUrl":null,"url":null,"abstract":"Scenarios are an established means to specify requirements for software systems. Scenario-based tests allow for validating software models against such requirements. In this paper, we consider three alternative notations to define such scenario tests on structural models: a semi structured natural-language notation, a diagrammatic notation, and a fully-structured textual notation. In particular, we performed a study to understand how these three notations compare to each other with respect to accuracy and effort of comprehending scenario-test definitions, as well as with respect to the detection of errors in the models under test. 20 software professionals (software engineers, testers, researchers) participated in a controlled experiment based on six different comprehension and maintenance tasks. For each of these tasks, questions on a scenario-test definition and on a model under test had to be answered. In an ex-post questionnaire, the participants rated each notation on a number of dimensions (e.g., practicality or scalability). Our results show that the choice of a specific scenario-test notation can affect the productivity (in terms of correctness and time-effort) when testing software models for requirements conformance. In particular, the participants of our study spent comparatively less time and completed the tasks more accurately when using the natural-language notation compared to the other two notations. Moreover, the participants of our study explicitly expressed their preference for the natural-language notation.","PeriodicalId":317037,"journal":{"name":"2014 9th International Conference on the Quality of Information and Communications Technology","volume":"69 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"27","resultStr":"{\"title\":\"Comparing Three Notations for Defining Scenario-Based Model Tests: A Controlled Experiment\",\"authors\":\"Bernhard Hoisl, Stefan Sobernig, Mark Strembeck\",\"doi\":\"10.1109/QUATIC.2014.19\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Scenarios are an established means to specify requirements for software systems. Scenario-based tests allow for validating software models against such requirements. In this paper, we consider three alternative notations to define such scenario tests on structural models: a semi structured natural-language notation, a diagrammatic notation, and a fully-structured textual notation. In particular, we performed a study to understand how these three notations compare to each other with respect to accuracy and effort of comprehending scenario-test definitions, as well as with respect to the detection of errors in the models under test. 20 software professionals (software engineers, testers, researchers) participated in a controlled experiment based on six different comprehension and maintenance tasks. For each of these tasks, questions on a scenario-test definition and on a model under test had to be answered. In an ex-post questionnaire, the participants rated each notation on a number of dimensions (e.g., practicality or scalability). 
Our results show that the choice of a specific scenario-test notation can affect the productivity (in terms of correctness and time-effort) when testing software models for requirements conformance. In particular, the participants of our study spent comparatively less time and completed the tasks more accurately when using the natural-language notation compared to the other two notations. Moreover, the participants of our study explicitly expressed their preference for the natural-language notation.\",\"PeriodicalId\":317037,\"journal\":{\"name\":\"2014 9th International Conference on the Quality of Information and Communications Technology\",\"volume\":\"69 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-12-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"27\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2014 9th International Conference on the Quality of Information and Communications Technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/QUATIC.2014.19\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 9th International Conference on the Quality of Information and Communications Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/QUATIC.2014.19","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Comparing Three Notations for Defining Scenario-Based Model Tests: A Controlled Experiment
Scenarios are an established means to specify requirements for software systems. Scenario-based tests allow for validating software models against such requirements. In this paper, we consider three alternative notations for defining such scenario tests on structural models: a semi-structured natural-language notation, a diagrammatic notation, and a fully-structured textual notation. In particular, we performed a study to understand how these three notations compare with respect to the accuracy and effort of comprehending scenario-test definitions, as well as with respect to detecting errors in the models under test. Twenty software professionals (software engineers, testers, researchers) participated in a controlled experiment based on six different comprehension and maintenance tasks. For each task, participants had to answer questions on a scenario-test definition and on a model under test. In an ex-post questionnaire, the participants rated each notation on a number of dimensions (e.g., practicality or scalability). Our results show that the choice of a specific scenario-test notation can affect productivity (in terms of correctness and time effort) when testing software models for requirements conformance. In particular, the participants of our study spent less time and completed the tasks more accurately when using the natural-language notation than when using the other two notations. Moreover, the participants explicitly expressed their preference for the natural-language notation.
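For illustration only, the following is a minimal sketch (not taken from the paper) of how a scenario described in semi-structured natural language ("given/when/then" steps) might be turned into an executable test against a simplified structural model. All names in the sketch (Order, Item, add_item, total) are hypothetical and introduced solely for this example; the paper's actual notations and models are not reproduced here.

```python
# Hypothetical sketch of a scenario-based test: the scenario
# "an order with two items totals the sum of the item prices"
# is checked against a simplified structural model. All identifiers
# are invented for this illustration and do not come from the paper.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Item:
    name: str
    price: float


@dataclass
class Order:
    items: List[Item] = field(default_factory=list)

    def add_item(self, item: Item) -> None:
        self.items.append(item)

    def total(self) -> float:
        return sum(item.price for item in self.items)


def test_order_total_scenario() -> None:
    # Given an empty order
    order = Order()
    # When two items are added
    order.add_item(Item("book", 12.0))
    order.add_item(Item("pen", 3.0))
    # Then the order total equals the sum of the item prices
    assert order.total() == 15.0


if __name__ == "__main__":
    test_order_total_scenario()
    print("scenario test passed")
```

The comments inside the test mirror the natural-language scenario steps, which is one way such a scenario could correspond to a fully-structured, executable test definition.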