{"title":"Variable selection using data splitting and projection for principal fitted component models in high dimension","authors":"Seungchul Baek , Hoyoung Park , Junyong Park","doi":"10.1016/j.csda.2024.107960","DOIUrl":null,"url":null,"abstract":"<div><p>Sufficient dimension reduction (SDR) is such an effective way to detect nonlinear relationship between response variable and covariates by reducing the dimensionality of covariates without information loss. The principal fitted component (PFC) model is a way to implement SDR using some class of basis functions, however the PFC model is not efficient when there are many irrelevant or noisy covariates. There have been a few studies on the selection of variables in the PFC model via penalized regression or sequential likelihood ratio test. A novel variable selection technique in the PFC model has been proposed by incorporating a recent development in multiple testing such as mirror statistics and random data splitting. It is highlighted how we construct a mirror statistic in the PFC model using the idea of projection of coefficients to the other space generated from data splitting. The proposed method is superior to some existing methods in terms of false discovery rate (FDR) control and applicability to high-dimensional cases. In particular, the proposed method outperforms other methods as the number of covariates tends to be getting larger, which would be appealing in high dimensional data analysis. Simulation studies and analyses of real data sets have been conducted to show the finite sample performance and the gain that it yields compared to existing methods.</p></div>","PeriodicalId":55225,"journal":{"name":"Computational Statistics & Data Analysis","volume":"196 ","pages":"Article 107960"},"PeriodicalIF":1.5000,"publicationDate":"2024-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computational Statistics & Data Analysis","FirstCategoryId":"100","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167947324000446","RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Abstract
Sufficient dimension reduction (SDR) is an effective way to detect nonlinear relationships between a response variable and covariates by reducing the dimensionality of the covariates without loss of information. The principal fitted component (PFC) model implements SDR through a class of basis functions; however, the PFC model is not efficient when there are many irrelevant or noisy covariates. A few studies have addressed variable selection in the PFC model via penalized regression or sequential likelihood ratio tests. A novel variable selection technique for the PFC model is proposed by incorporating recent developments in multiple testing, namely mirror statistics and random data splitting. It is highlighted how a mirror statistic is constructed in the PFC model by projecting coefficients onto the space generated from the other half of the split data. The proposed method is superior to some existing methods in terms of false discovery rate (FDR) control and applicability to high-dimensional cases. In particular, it outperforms other methods as the number of covariates grows, which is appealing in high-dimensional data analysis. Simulation studies and analyses of real data sets demonstrate the finite-sample performance and the gains over existing methods.
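To make the data-splitting and mirror-statistic idea concrete, the following is a minimal sketch in a plain linear-regression setting rather than the PFC model studied in the paper. The function names (mirror_statistics, select_fdr), the use of ordinary least squares on each half of the data, and the specific mirror form sign(b1_j b2_j)(|b1_j| + |b2_j|) are illustrative assumptions; the authors' construction instead projects coefficients onto a space generated from the other half of the split and is not reproduced here.

```python
# A minimal sketch of FDR control via random data splitting and mirror statistics,
# illustrated with ordinary least squares instead of the PFC model of the paper.
# All names and modeling choices below are illustrative assumptions.
import numpy as np

def mirror_statistics(X, y, rng):
    """Split the data once, fit OLS on each half, and form mirror statistics."""
    n, p = X.shape
    idx = rng.permutation(n)
    half = n // 2
    i1, i2 = idx[:half], idx[half:]
    b1, *_ = np.linalg.lstsq(X[i1], y[i1], rcond=None)
    b2, *_ = np.linalg.lstsq(X[i2], y[i2], rcond=None)
    # Null coefficients from the two independent halves are symmetric around zero,
    # so their mirrors are symmetric; relevant ones tend to be large and positive.
    return np.sign(b1 * b2) * (np.abs(b1) + np.abs(b2))

def select_fdr(mirror, q=0.1):
    """Pick the smallest threshold tau whose estimated false discovery proportion is below q."""
    for tau in np.sort(np.abs(mirror)):
        fdp_hat = np.sum(mirror <= -tau) / max(np.sum(mirror >= tau), 1)
        if fdp_hat <= q:
            return np.flatnonzero(mirror >= tau)
    return np.array([], dtype=int)

# Toy example: 200 observations, 50 covariates, only the first 5 are relevant.
rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 2.0
y = X @ beta + rng.standard_normal(n)

selected = select_fdr(mirror_statistics(X, y, rng), q=0.1)
print("selected covariates:", selected)
```

The key point the sketch conveys is that, for irrelevant covariates, the two half-sample estimates are independent and symmetric around zero, so the left tail of the mirror statistics can be used to estimate the number of false discoveries in the right tail; the threshold is then chosen to keep this estimated proportion below the target FDR level.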
Journal Introduction:
Computational Statistics and Data Analysis (CSDA), an Official Publication of the network Computational and Methodological Statistics (CMStatistics) and of the International Association for Statistical Computing (IASC), is an international journal dedicated to the dissemination of methodological research and applications in the areas of computational statistics and data analysis. The journal consists of four refereed sections which are divided into the following subject areas:
I) Computational Statistics - Manuscripts dealing with: 1) the explicit impact of computers on statistical methodology (e.g., Bayesian computing, bioinformatics, computer graphics, computer intensive inferential methods, data exploration, data mining, expert systems, heuristics, knowledge based systems, machine learning, neural networks, numerical and optimization methods, parallel computing, statistical databases, statistical systems), and 2) the development, evaluation and validation of statistical software and algorithms. Software and algorithms can be submitted with manuscripts and will be stored together with the online article.
II) Statistical Methodology for Data Analysis - Manuscripts dealing with novel and original data analytical strategies and methodologies applied in biostatistics (design and analytic methods for clinical trials, epidemiological studies, statistical genetics, or genetic/environmental interactions), chemometrics, classification, data exploration, density estimation, design of experiments, environmetrics, education, image analysis, marketing, model free data exploration, pattern recognition, psychometrics, statistical physics, image processing, robust procedures.
[...]
III) Special Applications - [...]
IV) Annals of Statistical Data Science [...]