{"title":"正态均值模型的稀疏置信集","authors":"Y. Ning, Guang Cheng","doi":"10.1093/imaiai/iaad003","DOIUrl":null,"url":null,"abstract":"\n In this paper, we propose a new framework to construct confidence sets for a $d$-dimensional unknown sparse parameter ${\\boldsymbol \\theta }$ under the normal mean model ${\\boldsymbol X}\\sim N({\\boldsymbol \\theta },\\sigma ^{2}\\bf{I})$. A key feature of the proposed confidence set is its capability to account for the sparsity of ${\\boldsymbol \\theta }$, thus named as sparse confidence set. This is in sharp contrast with the classical methods, such as the Bonferroni confidence intervals and other resampling-based procedures, where the sparsity of ${\\boldsymbol \\theta }$ is often ignored. Specifically, we require the desired sparse confidence set to satisfy the following two conditions: (i) uniformly over the parameter space, the coverage probability for ${\\boldsymbol \\theta }$ is above a pre-specified level; (ii) there exists a random subset $S$ of $\\{1,...,d\\}$ such that $S$ guarantees the pre-specified true negative rate for detecting non-zero $\\theta _{j}$’s. To exploit the sparsity of ${\\boldsymbol \\theta }$, we allow the confidence interval for $\\theta _{j}$ to degenerate to a single point 0 for any $j\\notin S$. Under this new framework, we first consider whether there exist sparse confidence sets that satisfy the above two conditions. To address this question, we establish a non-asymptotic minimax lower bound for the non-coverage probability over a suitable class of sparse confidence sets. The lower bound deciphers the role of sparsity and minimum signal-to-noise ratio (SNR) in the construction of sparse confidence sets. Furthermore, under suitable conditions on the SNR, a two-stage procedure is proposed to construct a sparse confidence set. To evaluate the optimality, the proposed sparse confidence set is shown to attain a minimax lower bound of some properly defined risk function up to a constant factor. Finally, we develop an adaptive procedure to the unknown sparsity. Numerical studies are conducted to verify the theoretical results.","PeriodicalId":45437,"journal":{"name":"Information and Inference-A Journal of the Ima","volume":"42 1","pages":""},"PeriodicalIF":1.4000,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Sparse confidence sets for normal mean models\",\"authors\":\"Y. Ning, Guang Cheng\",\"doi\":\"10.1093/imaiai/iaad003\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n In this paper, we propose a new framework to construct confidence sets for a $d$-dimensional unknown sparse parameter ${\\\\boldsymbol \\\\theta }$ under the normal mean model ${\\\\boldsymbol X}\\\\sim N({\\\\boldsymbol \\\\theta },\\\\sigma ^{2}\\\\bf{I})$. A key feature of the proposed confidence set is its capability to account for the sparsity of ${\\\\boldsymbol \\\\theta }$, thus named as sparse confidence set. This is in sharp contrast with the classical methods, such as the Bonferroni confidence intervals and other resampling-based procedures, where the sparsity of ${\\\\boldsymbol \\\\theta }$ is often ignored. 
Specifically, we require the desired sparse confidence set to satisfy the following two conditions: (i) uniformly over the parameter space, the coverage probability for ${\\\\boldsymbol \\\\theta }$ is above a pre-specified level; (ii) there exists a random subset $S$ of $\\\\{1,...,d\\\\}$ such that $S$ guarantees the pre-specified true negative rate for detecting non-zero $\\\\theta _{j}$’s. To exploit the sparsity of ${\\\\boldsymbol \\\\theta }$, we allow the confidence interval for $\\\\theta _{j}$ to degenerate to a single point 0 for any $j\\\\notin S$. Under this new framework, we first consider whether there exist sparse confidence sets that satisfy the above two conditions. To address this question, we establish a non-asymptotic minimax lower bound for the non-coverage probability over a suitable class of sparse confidence sets. The lower bound deciphers the role of sparsity and minimum signal-to-noise ratio (SNR) in the construction of sparse confidence sets. Furthermore, under suitable conditions on the SNR, a two-stage procedure is proposed to construct a sparse confidence set. To evaluate the optimality, the proposed sparse confidence set is shown to attain a minimax lower bound of some properly defined risk function up to a constant factor. Finally, we develop an adaptive procedure to the unknown sparsity. Numerical studies are conducted to verify the theoretical results.\",\"PeriodicalId\":45437,\"journal\":{\"name\":\"Information and Inference-A Journal of the Ima\",\"volume\":\"42 1\",\"pages\":\"\"},\"PeriodicalIF\":1.4000,\"publicationDate\":\"2020-08-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information and Inference-A Journal of the Ima\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1093/imaiai/iaad003\",\"RegionNum\":4,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information and Inference-A Journal of the Ima","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1093/imaiai/iaad003","RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
In this paper, we propose a new framework for constructing confidence sets for a $d$-dimensional unknown sparse parameter ${\boldsymbol \theta }$ under the normal mean model ${\boldsymbol X}\sim N({\boldsymbol \theta },\sigma ^{2}\mathbf{I})$. A key feature of the proposed confidence set is its ability to account for the sparsity of ${\boldsymbol \theta }$; hence we call it a sparse confidence set. This is in sharp contrast with classical methods, such as Bonferroni confidence intervals and resampling-based procedures, in which the sparsity of ${\boldsymbol \theta }$ is often ignored. Specifically, we require the desired sparse confidence set to satisfy two conditions: (i) uniformly over the parameter space, the coverage probability for ${\boldsymbol \theta }$ is above a pre-specified level; (ii) there exists a random subset $S$ of $\{1,\ldots ,d\}$ such that $S$ guarantees a pre-specified true negative rate for detecting the non-zero $\theta _{j}$'s. To exploit the sparsity of ${\boldsymbol \theta }$, we allow the confidence interval for $\theta _{j}$ to degenerate to the single point $0$ for any $j\notin S$. Under this new framework, we first consider whether sparse confidence sets satisfying the above two conditions exist at all. To address this question, we establish a non-asymptotic minimax lower bound for the non-coverage probability over a suitable class of sparse confidence sets. The lower bound clarifies the roles of sparsity and the minimum signal-to-noise ratio (SNR) in the construction of sparse confidence sets. Furthermore, under suitable conditions on the SNR, we propose a two-stage procedure to construct a sparse confidence set. To evaluate its optimality, the proposed sparse confidence set is shown to attain the minimax lower bound of a properly defined risk function up to a constant factor. Finally, we develop a procedure that adapts to the unknown sparsity. Numerical studies are conducted to verify the theoretical results.
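For concreteness, here is a minimal sketch (not taken from the paper) of the classical Bonferroni baseline that the abstract contrasts against: simultaneous intervals $x_{j} \pm z_{1-\alpha /(2d)}\,\sigma $ under ${\boldsymbol X}\sim N({\boldsymbol \theta },\sigma ^{2}\mathbf{I})$, which guarantee uniform coverage but ignore any sparsity of ${\boldsymbol \theta }$. The function name and the use of NumPy/SciPy are illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def bonferroni_intervals(x, sigma=1.0, alpha=0.05):
    """Simultaneous (1 - alpha) Bonferroni intervals x_j +/- z * sigma,
    where z = Phi^{-1}(1 - alpha / (2 d)).  Ignores any sparsity of theta."""
    d = x.shape[0]
    z = norm.ppf(1.0 - alpha / (2.0 * d))
    half_width = z * sigma
    return np.column_stack((x - half_width, x + half_width))
```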
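The abstract does not spell out the authors' two-stage procedure, so the following is only a hedged illustration of the general shape it describes: a first stage that selects a random subset $S$ by thresholding, and a second stage that reports intervals on $S$ while letting intervals off $S$ degenerate to the single point $0$. The threshold $\sigma \sqrt{2\log d}$ and the Bonferroni width over $S$ below are ad hoc choices for illustration; this naive sketch does not reproduce the paper's calibration of coverage and true negative rate.

```python
import numpy as np
from scipy.stats import norm

def sparse_confidence_set(x, sigma=1.0, alpha=0.05):
    """Illustrative two-stage sketch (not the authors' procedure):
    stage 1 selects S by thresholding, stage 2 reports intervals on S only,
    with intervals off S collapsing to the single point 0."""
    d = x.shape[0]
    threshold = sigma * np.sqrt(2.0 * np.log(d))   # ad-hoc screening threshold
    selected = np.abs(x) > threshold               # stage 1: the random subset S
    lower = np.zeros(d)                            # off S: degenerate interval {0}
    upper = np.zeros(d)
    n_sel = max(int(selected.sum()), 1)
    z = norm.ppf(1.0 - alpha / (2.0 * n_sel))      # stage 2: Bonferroni width over S only
    lower[selected] = x[selected] - z * sigma
    upper[selected] = x[selected] + z * sigma
    return selected, np.column_stack((lower, upper))
```

The point of the sketch is the output format rather than its guarantees: a random set $S$ together with intervals that shrink to $0$ outside $S$, which is exactly the object whose coverage and true-negative-rate properties the paper analyzes.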