{"title":"An experiment design for validating a test case generation strategy from requirements models","authors":"Maria Fernanda Granda","doi":"10.1109/EmpiRE.2014.6890115","DOIUrl":null,"url":null,"abstract":"Currently, in a Model-Driven Engineering environment, it is a difficult and challenging task to fully automate model-driven testing because this demands complete and unambiguous models as input. Although some approaches have been developed to generate test cases from models, they require rigorous assessment of the completeness of the derivation rules. This paper proposes the plan and design of a controlled experiment that analyses a test case generation strategy for the purpose of evaluating its completeness from the viewpoint of those testers who will use a Communication Analysis-based requirements model. We will compare the abstract test cases obtained by applying (i) manual derivation without derivation rules with (ii) manual derivation with transformation rules; and both these strategies against a case of automated generation using transformation rules.","PeriodicalId":259907,"journal":{"name":"2014 IEEE 4th International Workshop on Empirical Requirements Engineering (EmpiRE)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE 4th International Workshop on Empirical Requirements Engineering (EmpiRE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/EmpiRE.2014.6890115","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
Currently, in a Model-Driven Engineering environment, fully automating model-driven testing is a challenging task because it demands complete and unambiguous models as input. Although some approaches have been developed to generate test cases from models, they require a rigorous assessment of the completeness of their derivation rules. This paper proposes the plan and design of a controlled experiment that analyses a test case generation strategy in order to evaluate its completeness from the viewpoint of testers who will use a Communication Analysis-based requirements model. We will compare the abstract test cases obtained by (i) manual derivation without derivation rules and (ii) manual derivation with transformation rules, and will then compare both of these strategies against automated generation using transformation rules.
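To make the idea of rule-based derivation of abstract test cases concrete, the sketch below shows one way a transformation rule might map a simplified model element to an implementation-independent test case. This is not the paper's actual rule set: the `CommunicativeEvent` and `AbstractTestCase` structures, their field names, and the mapping logic are illustrative assumptions standing in for Communication Analysis concepts.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical, simplified stand-in for a Communication Analysis model element.
@dataclass
class CommunicativeEvent:
    name: str                 # e.g. "Client places order"
    preconditions: List[str]  # conditions assumed to hold before the event
    message_fields: List[str] # data carried by the event's message structure

# Hypothetical abstract (implementation-independent) test case.
@dataclass
class AbstractTestCase:
    objective: str
    setup: List[str]
    inputs: List[str]
    expected: str

def derive_test_case(event: CommunicativeEvent) -> AbstractTestCase:
    """One illustrative transformation rule: map a communicative event
    to an abstract test case that exercises it."""
    return AbstractTestCase(
        objective=f"Verify occurrence of '{event.name}'",
        setup=list(event.preconditions),
        inputs=list(event.message_fields),
        expected=f"System state reflects completion of '{event.name}'",
    )

if __name__ == "__main__":
    event = CommunicativeEvent(
        name="Client places order",
        preconditions=["Client is registered"],
        message_fields=["client_id", "order_lines"],
    )
    print(derive_test_case(event))
```

Under this kind of sketch, the experiment's three treatments differ only in who (or what) applies the mapping: testers inventing test cases freely, testers following written rules like `derive_test_case`, or a tool executing the rules automatically.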