On the relation of variability modeling languages and non-functional properties
Daniel Friesel, Michael Müller, Matheus Ferraz, O. Spinczyk
Proceedings of the 26th ACM International Systems and Software Product Line Conference - Volume B, 2022-09-12
DOI: 10.1145/3503229.3547055
Abstract
Non-functional properties (NFPs) such as code size (RAM, ROM), performance, and energy consumption are at least as important as functional properties in many software development domains. When configuring a software product line, especially in the area of resource-constrained embedded systems, developers must be aware of the NFPs of the configured product instance. Several NFP-aware variability modeling languages have been proposed to address this in the past. However, it is not clear whether a variability modeling language is the best place for handling NFP-related concerns, or whether separate NFP prediction models should be preferred. We shed light on this question by discussing limitations of state-of-the-art NFP-aware variability modeling languages, and find that a separate NFP model is favorable in terms of both the development process and model accuracy. Our quantitative analysis is based on six different software product lines, including the widely used busybox multi-call binary and the x264 video encoder. We use classification and regression trees (CART) and our recently proposed Regression Model Trees [8] as separate NFP models. These tree-based models can cover the effects of arbitrary feature interactions and thus easily outperform variability models with static, feature-wise NFP annotations. For example, when estimating the throughput of an embedded AI product line, static annotations come with a mean generalization error of 114.5%, while the error of CART is only 9.4%.
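To illustrate the contrast the abstract draws between static, feature-wise NFP annotations and a separate tree-based NFP model, the following is a minimal sketch that is not taken from the paper: it uses synthetic, hypothetical configuration data with one feature interaction and compares an additive per-feature model (the analogue of static annotations) against scikit-learn's DecisionTreeRegressor, a CART implementation. All feature names, data, and error values here are illustrative assumptions.

```python
# Minimal sketch (hypothetical data, not from the paper): static feature-wise
# NFP annotations vs. a CART model when a feature interaction is present.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Configurations: each row is a product variant, each column a boolean feature.
X = rng.integers(0, 2, size=(200, 4))

# Hypothetical NFP (e.g. throughput): features 0 and 1 interact, so their
# combined effect is not the sum of their individual contributions.
y = 10.0 + 3.0 * X[:, 0] + 2.0 * X[:, 1] + 8.0 * (X[:, 0] & X[:, 1]) + 1.0 * X[:, 2]

X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

# "Static annotations": one additive NFP contribution per feature,
# i.e. a linear model without interaction terms.
annotations = LinearRegression().fit(X_train, y_train)

# Separate NFP model: CART can split on feature combinations and thus
# capture the interaction between features 0 and 1.
cart = DecisionTreeRegressor(min_samples_leaf=5).fit(X_train, y_train)

def mean_relative_error(model):
    # Mean relative prediction error on held-out configurations, in percent.
    pred = model.predict(X_test)
    return 100.0 * np.mean(np.abs(pred - y_test) / y_test)

print(f"feature-wise annotations: {mean_relative_error(annotations):.1f}% mean error")
print(f"CART:                     {mean_relative_error(cart):.1f}% mean error")
```

Because the additive model has no way to express the joint effect of the two interacting features, its held-out error stays noticeably higher than that of the tree model, mirroring the kind of gap the paper reports between static annotations and tree-based NFP models.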