Regression-based conditional independence test with adaptive kernels
Yixin Ren, Juncai Zhang, Yewei Xia, Ruxin Wang, Feng Xie, Jihong Guan, Hao Zhang, Shuigeng Zhou
Artificial Intelligence, Volume 347, Article 104391. DOI: 10.1016/j.artint.2025.104391. Published 2025-07-01. JCR Q1 (Computer Science, Artificial Intelligence), Impact Factor 5.1.
We propose a novel framework for regression-based conditional independence (CI) testing with adaptive kernels, in which the CI test is reduced to a regression step followed by a statistical independence test. We prove that, provided the regression is consistent, the power of the CI test can be maximized by adaptively learning parameterized kernels for the independence test. To learn these kernels adaptively, we first address a pitfall inherent in the existing signal-to-noise-ratio criterion by modeling how the null distribution changes during learning. We then design a new class of kernels that can adaptively focus on the significant dimensions of the variables when judging independence; this makes the tests more flexible than simple kernels that adapt only in length-scale, and especially well suited to high-dimensional, complex data. Theoretically, we establish the consistency of the proposed tests and show that the non-convex objective function used for learning satisfies the L-smoothness condition, which benefits optimization. Experimental results on both synthetic and real data demonstrate the superiority of our method. The source code and datasets are available at https://github.com/hzsiat/AdaRCIT.
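To make the reduction concrete: a regression-based CI test of X ⟂ Y | Z regresses X and Y on Z, then applies a (kernel) independence test to the two residual vectors. The sketch below is a minimal, generic illustration of that pipeline, not the paper's AdaRCIT method: it uses plain linear least-squares regression and a fixed-bandwidth Gaussian-kernel HSIC statistic with a permutation null, whereas the paper's contribution is to learn the kernel parameters adaptively to maximize test power. All function names here are illustrative.

```python
import numpy as np

def _residuals(target, z):
    """Regress target on z (linear least squares with intercept); return residuals."""
    Z = np.column_stack([z, np.ones(len(z))])
    coef, *_ = np.linalg.lstsq(Z, target, rcond=None)
    return target - Z @ coef

def _gaussian_gram(a, sigma):
    """Gaussian-kernel Gram matrix of a 1-D sample."""
    d2 = (a[:, None] - a[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def _hsic(rx, ry, sigma=1.0):
    """Biased HSIC estimate between two 1-D residual vectors."""
    n = len(rx)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    K = H @ _gaussian_gram(rx, sigma) @ H
    L = H @ _gaussian_gram(ry, sigma) @ H
    return np.trace(K @ L) / n ** 2

def regression_ci_test(x, y, z, n_perm=200, seed=0):
    """Permutation p-value for X independent of Y given Z via residual independence."""
    rng = np.random.default_rng(seed)
    rx, ry = _residuals(x, z), _residuals(y, z)
    stat = _hsic(rx, ry)
    null = [_hsic(rx, rng.permutation(ry)) for _ in range(n_perm)]
    return (1 + sum(s >= stat for s in null)) / (1 + n_perm)
```

The test's validity hinges on the consistency of the regression step (here linear, so only adequate for linear conditional dependence on Z), which is exactly the assumption under which the paper shows kernel learning can then maximize power.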
Journal overview:
Artificial Intelligence (AIJ) welcomes papers covering a broad spectrum of AI topics, including cognition, automated reasoning, computer vision, machine learning, and more. Papers should demonstrate advances in AI and propose innovative approaches to AI problems. The journal also accepts papers describing AI applications, provided they focus on how new methods improve performance rather than reiterating conventional approaches. In addition to regular papers, AIJ accepts Research Notes, Research Field Reviews, Position Papers, Book Reviews, and summary papers on AI challenges and competitions.