Andrea Bombarda, Silvia Bonfanti, Angelo Gargantini
Journal of Systems and Software, Volume 231, Article 112645. Published 2025-10-08. DOI: 10.1016/j.jss.2025.112645
My feature model has changed... What should I do with my tests?
Software Product Lines (SPLs) evolve over time, driven by changing requirements and advancements in technology. While much research has been dedicated to the evolution of feature models (FMs), less focus has been put on how associated artifacts, such as test cases, should adapt to these changes. Test cases, derived as valid products from an FM, play a critical role in ensuring the correctness of an SPL. However, when an FM evolves, the original test suite may become outdated, requiring either regeneration from scratch or repair of existing test cases to align with the updated FM. In this paper, we address the challenge of evolving test suites upon FM evolution. We introduce novel definitions of test suite dissimilarity and specificity, and we use these metrics to evaluate three test generation strategies: GFS (generating a new suite from scratch), GFE (repairing and reusing an existing suite), and SPECGEN (maximizing specific tests for the FM evolution). Additionally, we introduce a set of mutations to simulate FM evolution and obtain additional FMs. Using these mutants, we conduct our analyses and evaluate the mutation score of each test generation strategy. Our experiments, conducted on a set of FMs taken from the literature and on more than 3,200 FMs artificially generated with mutations, reveal that GFE often produces the smallest test suites with high mutation scores, while SPECGEN excels in specificity, particularly for mutations expanding the set of valid products.
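The core idea behind a repair-and-reuse strategy such as GFE can be illustrated with a minimal sketch. This is not the paper's implementation: the `FeatureModel` representation, the constraint encoding, and the `split_suite` helper are all illustrative assumptions, showing only how an existing suite can be partitioned into tests that remain valid products of the evolved FM and tests that need repair.

```python
# Minimal sketch (assumed representation, not the paper's tool): an FM is a
# set of features plus boolean constraints; a test case is a product, i.e.
# a complete assignment of features.
from typing import Callable, Dict, List, Set

Product = Dict[str, bool]  # feature name -> selected?


class FeatureModel:
    def __init__(self, features: Set[str],
                 constraints: List[Callable[[Product], bool]]):
        self.features = features
        self.constraints = constraints

    def is_valid(self, product: Product) -> bool:
        # A valid product assigns every feature and satisfies all constraints.
        if set(product) != self.features:
            return False
        return all(c(product) for c in self.constraints)


def split_suite(fm: FeatureModel, suite: List[Product]):
    """Partition a suite into tests still valid for the evolved FM
    (reusable, as in a GFE-style strategy) and tests needing repair."""
    still_valid = [t for t in suite if fm.is_valid(t)]
    to_repair = [t for t in suite if not fm.is_valid(t)]
    return still_valid, to_repair


# Original FM: mandatory root A, optional B, with the constraint B => C.
fm_v1 = FeatureModel(
    {"A", "B", "C"},
    [lambda p: p["A"],                 # A is mandatory
     lambda p: not p["B"] or p["C"]],  # B requires C
)
suite = [
    {"A": True, "B": True, "C": True},
    {"A": True, "B": False, "C": False},
]

# Evolved FM: a new constraint makes C mandatory, invalidating one test.
fm_v2 = FeatureModel(
    {"A", "B", "C"},
    fm_v1.constraints + [lambda p: p["C"]],
)
valid, invalid = split_suite(fm_v2, suite)
```

Here the first test survives the evolution unchanged, while the second becomes an invalid product and would be repaired (e.g. by flipping `C` to `True`) or regenerated, depending on the strategy.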
About the journal:
The Journal of Systems and Software publishes papers covering all aspects of software engineering and related hardware-software-systems issues. All articles should include a validation of the idea presented, e.g. through case studies, experiments, or systematic comparisons with other approaches already in practice. Topics of interest include, but are not limited to:
•Methods and tools for, and empirical studies on, software requirements, design, architecture, verification and validation, maintenance and evolution
•Agile, model-driven, service-oriented, open source and global software development
•Approaches for mobile, multiprocessing, real-time, distributed, cloud-based, dependable and virtualized systems
•Human factors and management concerns of software development
•Data management and big data issues of software systems
•Metrics and evaluation, data mining of software development resources
•Business and economic aspects of software development processes
The journal welcomes state-of-the-art surveys and reports of practical experience for all of these topics.