{"title":"一种包含非线性效应的可解释模型的特征转换与选择方法","authors":"Yu Zheng, Jin Zhu, Junxian Zhu, Xueqin Wang","doi":"10.1007/s10114-025-3329-9","DOIUrl":null,"url":null,"abstract":"<div><p>Finding a highly interpretable nonlinear model has been an important yet challenging problem, and related research is relatively scarce in the current literature. To tackle this issue, we propose a new algorithm called Feat-ABESS based on a framework that utilizes feature transformation and selection for re-interpreting many machine learning algorithms. The core idea behind Feat-ABESS is to parameterize interpretable feature transformation within this framework and construct an objective function based on these parameters. This approach enables us to identify a proper interpretable feature transformation from the optimization perspective. By leveraging a recently advanced optimization technique, Feat-ABESS can obtain a concise and interpretable model. Moreover, Feat-ABESS can perform nonlinear variable selection. Our extensive experiments on 205 benchmark datasets and case studies on two datasets have demonstrated that Feat-ABESS can achieve powerful prediction accuracy while maintaining a high level of interpretability. The comparison with existing nonlinear variable selection methods exhibits Feat-ABESS has a higher true positive rate and a lower false discovery rate.</p></div>","PeriodicalId":50893,"journal":{"name":"Acta Mathematica Sinica-English Series","volume":"41 2","pages":"703 - 732"},"PeriodicalIF":0.8000,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Feature Transformation and Selection Method to Acquire an Interpretable Model Incorporating Nonlinear Effects\",\"authors\":\"Yu Zheng, Jin Zhu, Junxian Zhu, Xueqin Wang\",\"doi\":\"10.1007/s10114-025-3329-9\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Finding a highly interpretable nonlinear model has been an important yet challenging problem, and related research is relatively scarce in the current literature. To tackle this issue, we propose a new algorithm called Feat-ABESS based on a framework that utilizes feature transformation and selection for re-interpreting many machine learning algorithms. The core idea behind Feat-ABESS is to parameterize interpretable feature transformation within this framework and construct an objective function based on these parameters. This approach enables us to identify a proper interpretable feature transformation from the optimization perspective. By leveraging a recently advanced optimization technique, Feat-ABESS can obtain a concise and interpretable model. Moreover, Feat-ABESS can perform nonlinear variable selection. Our extensive experiments on 205 benchmark datasets and case studies on two datasets have demonstrated that Feat-ABESS can achieve powerful prediction accuracy while maintaining a high level of interpretability. 
The comparison with existing nonlinear variable selection methods exhibits Feat-ABESS has a higher true positive rate and a lower false discovery rate.</p></div>\",\"PeriodicalId\":50893,\"journal\":{\"name\":\"Acta Mathematica Sinica-English Series\",\"volume\":\"41 2\",\"pages\":\"703 - 732\"},\"PeriodicalIF\":0.8000,\"publicationDate\":\"2025-02-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Acta Mathematica Sinica-English Series\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s10114-025-3329-9\",\"RegionNum\":3,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATHEMATICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Acta Mathematica Sinica-English Series","FirstCategoryId":"100","ListUrlMain":"https://link.springer.com/article/10.1007/s10114-025-3329-9","RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATHEMATICS","Score":null,"Total":0}
Finding a highly interpretable nonlinear model is an important yet challenging problem, and related research remains relatively scarce in the current literature. To tackle this issue, we propose a new algorithm, Feat-ABESS, built on a framework that uses feature transformation and selection to re-interpret many machine learning algorithms. The core idea behind Feat-ABESS is to parameterize interpretable feature transformations within this framework and to construct an objective function based on these parameters, which allows a suitable interpretable feature transformation to be identified from an optimization perspective. By leveraging a recently developed optimization technique, Feat-ABESS obtains a concise and interpretable model; moreover, it can perform nonlinear variable selection. Extensive experiments on 205 benchmark datasets and case studies on two datasets demonstrate that Feat-ABESS achieves strong prediction accuracy while maintaining a high level of interpretability. A comparison with existing nonlinear variable selection methods shows that Feat-ABESS attains a higher true positive rate and a lower false discovery rate.
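The pipeline described in the abstract — apply an interpretable per-variable transformation, then run a sparse best-subset-style selection over the transformed groups — can be illustrated with a minimal sketch. The Python code below is illustrative only: it assumes a spline basis expansion (scikit-learn's SplineTransformer) as the interpretable transformation and a simple greedy group-wise forward search as a stand-in for the ABESS-style best-subset optimizer; the actual Feat-ABESS parameterization and objective function are not specified in this abstract.

```python
# A minimal sketch of the "transform, then select" idea, under the assumptions above.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import SplineTransformer

rng = np.random.default_rng(0)
n, p, k = 300, 10, 2                       # samples, raw features, groups to keep
X = rng.normal(size=(n, p))
# Only x0 and x1 matter, and both act nonlinearly.
y = np.sin(2 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=n)

# Step 1: interpretable transformation -- expand each raw feature into a spline basis,
# so a selected "group" of columns corresponds to one nonlinear effect f_j(x_j).
spline = SplineTransformer(n_knots=5, degree=3, include_bias=False)
Z = spline.fit_transform(X)
basis_per_feature = Z.shape[1] // p
groups = [list(range(j * basis_per_feature, (j + 1) * basis_per_feature)) for j in range(p)]

# Step 2: sparse selection over groups -- greedy forward search that adds the group
# giving the largest drop in residual sum of squares (a stand-in for ABESS splicing).
selected, active_cols = [], []
for _ in range(k):
    best_j, best_rss = None, np.inf
    for j in range(p):
        if j in selected:
            continue
        cols = active_cols + groups[j]
        resid = y - LinearRegression().fit(Z[:, cols], y).predict(Z[:, cols])
        rss = float(resid @ resid)
        if rss < best_rss:
            best_j, best_rss = j, rss
    selected.append(best_j)
    active_cols += groups[best_j]

print("selected variables:", sorted(selected))   # typically [0, 1] on this toy data
```

On this toy example the search typically recovers the two informative variables, mirroring the nonlinear variable selection behavior claimed in the abstract; the actual method replaces the greedy step with a best-subset solver and a purpose-built objective.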
About the journal:
Acta Mathematica Sinica, established by the Chinese Mathematical Society in 1936, is the first and the best mathematical journal in China. In 1985, Acta Mathematica Sinica was divided into an English Series and a Chinese Series. The English Series is a monthly journal publishing significant research papers from all branches of pure and applied mathematics. It provides authoritative reviews of current developments in mathematical research. Contributions are invited from researchers all over the world.