J. Horcas, Joaquín Ballesteros, M. Pinto, L. Fuentes
{"title":"消除制约因素,并行分析特征模型","authors":"J. Horcas, Joaquín Ballesteros, M. Pinto, L. Fuentes","doi":"10.1145/3579027.3608981","DOIUrl":null,"url":null,"abstract":"Cross-tree constraints give feature models maximal expressive power since any interdependency between features can be captured through arbitrary propositional logic formulas. However, the existence of these constraints increases the complexity of reasoning about feature models, both for using SAT solvers or compiling the model to a binary decision diagram for efficient analyses. Although some works have tried to refactor constraints to eliminate them, they deal only with simple constraints (i.e., requires and excludes) or require the introduction of an additional set of features, increasing the complexity of the resulting feature model. This paper presents an approach that eliminates all the cross-tree constraints present in regular boolean feature models, including arbitrary constraints, in propositional logic formulas. Our approach for removing constraints consists of splitting the semantics of feature models into orthogonal disjoint feature subtrees, which are then analyzed in parallel to alleviate the exponential blow-up in memory of the resulting feature tree.","PeriodicalId":322542,"journal":{"name":"Proceedings of the 27th ACM International Systems and Software Product Line Conference - Volume A","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Elimination of constraints for parallel analysis of feature models\",\"authors\":\"J. Horcas, Joaquín Ballesteros, M. Pinto, L. Fuentes\",\"doi\":\"10.1145/3579027.3608981\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Cross-tree constraints give feature models maximal expressive power since any interdependency between features can be captured through arbitrary propositional logic formulas. However, the existence of these constraints increases the complexity of reasoning about feature models, both for using SAT solvers or compiling the model to a binary decision diagram for efficient analyses. Although some works have tried to refactor constraints to eliminate them, they deal only with simple constraints (i.e., requires and excludes) or require the introduction of an additional set of features, increasing the complexity of the resulting feature model. This paper presents an approach that eliminates all the cross-tree constraints present in regular boolean feature models, including arbitrary constraints, in propositional logic formulas. 
Our approach for removing constraints consists of splitting the semantics of feature models into orthogonal disjoint feature subtrees, which are then analyzed in parallel to alleviate the exponential blow-up in memory of the resulting feature tree.\",\"PeriodicalId\":322542,\"journal\":{\"name\":\"Proceedings of the 27th ACM International Systems and Software Product Line Conference - Volume A\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-08-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 27th ACM International Systems and Software Product Line Conference - Volume A\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3579027.3608981\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 27th ACM International Systems and Software Product Line Conference - Volume A","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3579027.3608981","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Elimination of constraints for parallel analysis of feature models

Abstract
Cross-tree constraints give feature models maximal expressive power, since any interdependency between features can be captured through arbitrary propositional logic formulas. However, these constraints increase the complexity of reasoning about feature models, whether using SAT solvers or compiling the model into a binary decision diagram for efficient analyses. Although some works have tried to refactor constraints to eliminate them, they deal only with simple constraints (i.e., requires and excludes) or require introducing an additional set of features, which increases the complexity of the resulting feature model. This paper presents an approach that eliminates all the cross-tree constraints present in regular Boolean feature models, including arbitrary constraints expressed as propositional logic formulas. Our approach removes constraints by splitting the semantics of the feature model into orthogonal, disjoint feature subtrees, which are then analyzed in parallel to alleviate the exponential blow-up in memory of the resulting feature tree.
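To illustrate why removing cross-tree constraints pays off, the sketch below (illustrative names such as `FeatureNode` and `count_parallel`; not the paper's code or algorithm) counts the valid configurations of an already constraint-free feature tree. With no cross-tree constraints, the root's subtrees are independent, so each one can be analyzed in its own process and the partial counts simply multiplied. Alternative/or groups and the actual constraint-elimination and splitting procedure are omitted.

```python
# Minimal sketch (assumptions noted above): parallel analysis of a feature tree
# that has no cross-tree constraints, so its subtrees are independent.
from concurrent.futures import ProcessPoolExecutor
from dataclasses import dataclass, field
from math import prod
from typing import List


@dataclass
class FeatureNode:
    """A feature with mandatory/optional children (no groups, no constraints)."""
    name: str
    mandatory: bool = True
    children: List["FeatureNode"] = field(default_factory=list)


def count_configurations(node: FeatureNode) -> int:
    """Number of valid configurations of the subtree rooted at `node`,
    assuming `node` itself is selected."""
    total = 1
    for child in node.children:
        c = count_configurations(child)
        # An optional child can also be left out entirely (+1 choice).
        total *= c if child.mandatory else c + 1
    return total


def count_parallel(root: FeatureNode) -> int:
    """Count configurations by analyzing the root's disjoint subtrees in parallel
    and multiplying the partial results."""
    with ProcessPoolExecutor() as pool:
        counts = list(pool.map(count_configurations, root.children))
    return prod(c if child.mandatory else c + 1
                for c, child in zip(counts, root.children))


if __name__ == "__main__":
    # Toy model: a root with two independent subtrees.
    model = FeatureNode("root", children=[
        FeatureNode("gui", children=[FeatureNode("dark_mode", mandatory=False)]),
        FeatureNode("storage", mandatory=False,
                    children=[FeatureNode("cache", mandatory=False)]),
    ])
    assert count_parallel(model) == count_configurations(model)
    print(count_parallel(model))  # 6 = 2 (gui subtree) * 3 (optional storage subtree)
```

The same decomposition argument applies to other analyses: any result that composes over disjoint subtrees (here, a product of configuration counts) can be computed in parallel once the subtrees no longer share constraints, which is what makes eliminating the cross-tree constraints worthwhile despite the growth of the resulting feature tree.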