{"title":"A generic framework for model-set selection for the unification of testing and learning MDE tasks","authors":"Edouard R. Batot, H. Sahraoui","doi":"10.1145/2976767.2976785","DOIUrl":null,"url":null,"abstract":"We propose a generic framework for model-set selection for learning or testing Model-Driven Engineering tasks. We target specifically tasks that apply to or manipulate models, such as model definition, model well-formedness checking, and model transformation. In our framework, we view the model-set selection as a multi-objective optimization problem. The framework can be tailored to the learning or testing of a specific task by firstly expressing the coverage criterion, which will be encoded as a first optimization objective. The coverage is expressed by tagging the subset of the input metamodel that is relevant to the considered task. Then, one or more minimality criteria are selected as additional optimization objectives. We illustrate the use of our framework with the testing of metamodels. This case study shows that the multi-objective approach gives better results than random and mono-objective selections.","PeriodicalId":179690,"journal":{"name":"Proceedings of the ACM/IEEE 19th International Conference on Model Driven Engineering Languages and Systems","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2016-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"25","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ACM/IEEE 19th International Conference on Model Driven Engineering Languages and Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2976767.2976785","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 25
Abstract
We propose a generic framework for model-set selection for learning or testing Model-Driven Engineering tasks. We specifically target tasks that apply to or manipulate models, such as model definition, model well-formedness checking, and model transformation. In our framework, we view model-set selection as a multi-objective optimization problem. The framework can be tailored to the learning or testing of a specific task by first expressing the coverage criterion, which is encoded as the first optimization objective. Coverage is expressed by tagging the subset of the input metamodel that is relevant to the considered task. Then, one or more minimality criteria are selected as additional optimization objectives. We illustrate the use of our framework with the testing of metamodels. This case study shows that the multi-objective approach yields better results than random and mono-objective selection.
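To make the two-objective framing concrete, here is a minimal, hypothetical sketch (not the paper's implementation): each model is represented as the set of metamodel elements it exercises, coverage is the fraction of task-tagged elements a model set reaches (to maximize), and minimality is the set's size (to minimize). The Pareto front over these two objectives holds the candidate selections; the model names, tagged elements, and exhaustive enumeration are all illustrative assumptions, whereas the paper uses search-based optimization for realistic sizes.

```python
# Illustrative sketch (assumed data, not the authors' implementation):
# model-set selection as a two-objective problem -- maximize coverage
# of the tagged metamodel subset, minimize the number of models kept.
from itertools import combinations

# Hypothetical models: each maps to the metamodel elements it exercises.
MODELS = {
    "m1": {"Class", "Attribute"},
    "m2": {"Class", "Reference"},
    "m3": {"Attribute", "Reference", "Operation"},
}
# Hypothetical task-relevant (tagged) subset of the input metamodel.
TAGGED = {"Class", "Attribute", "Reference", "Operation"}

def objectives(model_set):
    """Return (coverage, size): coverage to maximize, size to minimize."""
    covered = set().union(*(MODELS[m] for m in model_set)) if model_set else set()
    return len(covered & TAGGED) / len(TAGGED), len(model_set)

def dominates(a, b):
    """True if a is no worse than b on both objectives and better on one."""
    (ca, sa), (cb, sb) = a, b
    return ca >= cb and sa <= sb and (ca > cb or sa < sb)

def pareto_front(candidates):
    """Keep the candidates no other candidate dominates."""
    scored = {c: objectives(c) for c in candidates}
    return [c for c in candidates
            if not any(dominates(scored[o], scored[c])
                       for o in candidates if o != c)]

# Enumerate all non-empty subsets as candidates (fine at toy scale).
candidates = [frozenset(s) for r in range(1, len(MODELS) + 1)
              for s in combinations(MODELS, r)]
front = pareto_front(candidates)
```

The front then trades coverage against size: a small set with partial coverage can coexist with a larger, fully covering one, which is the behavior a mono-objective or random selection cannot expose.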