{"title":"高维数据中的改进型高斯均值矩阵估计器","authors":"Arash A. Foroushani, Sévérien Nkurunziza","doi":"10.1016/j.jmva.2025.105424","DOIUrl":null,"url":null,"abstract":"<div><div>In this paper, we introduce a class of improved estimators for the mean parameter matrix of a multivariate normal distribution with an unknown variance–covariance matrix. In particular, the main results of Chételat and Wells (2012) are established in their full generalities and we provide the corrected version of their Theorem 2. Specifically, we generalize the existing results in three ways. First, we consider a parametric estimation problem which encloses as a special case the one about the vector parameter. Second, we propose a class of James–Stein matrix estimators and, we establish a necessary and a sufficient condition for any member of the proposed class to have a finite risk function. Third, we present the conditions for the proposed class of estimators to dominate the maximum likelihood estimator. On the top of these interesting contributions, the additional novelty consists in the fact that, we extend the methods suitable for the vector parameter case and the derived results hold in the classical case as well as in the context of high and ultra-high dimensional data.</div></div>","PeriodicalId":16431,"journal":{"name":"Journal of Multivariate Analysis","volume":"208 ","pages":"Article 105424"},"PeriodicalIF":1.4000,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Improved Gaussian mean matrix estimators in high-dimensional data\",\"authors\":\"Arash A. Foroushani, Sévérien Nkurunziza\",\"doi\":\"10.1016/j.jmva.2025.105424\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In this paper, we introduce a class of improved estimators for the mean parameter matrix of a multivariate normal distribution with an unknown variance–covariance matrix. 
In particular, the main results of Chételat and Wells (2012) are established in their full generalities and we provide the corrected version of their Theorem 2. Specifically, we generalize the existing results in three ways. First, we consider a parametric estimation problem which encloses as a special case the one about the vector parameter. Second, we propose a class of James–Stein matrix estimators and, we establish a necessary and a sufficient condition for any member of the proposed class to have a finite risk function. Third, we present the conditions for the proposed class of estimators to dominate the maximum likelihood estimator. On the top of these interesting contributions, the additional novelty consists in the fact that, we extend the methods suitable for the vector parameter case and the derived results hold in the classical case as well as in the context of high and ultra-high dimensional data.</div></div>\",\"PeriodicalId\":16431,\"journal\":{\"name\":\"Journal of Multivariate Analysis\",\"volume\":\"208 \",\"pages\":\"Article 105424\"},\"PeriodicalIF\":1.4000,\"publicationDate\":\"2025-03-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Multivariate Analysis\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0047259X25000193\",\"RegionNum\":3,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"STATISTICS & PROBABILITY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Multivariate 
Analysis","FirstCategoryId":"100","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0047259X25000193","RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"STATISTICS & PROBABILITY","Score":null,"Total":0}
Improved Gaussian mean matrix estimators in high-dimensional data
In this paper, we introduce a class of improved estimators for the mean parameter matrix of a multivariate normal distribution with an unknown variance–covariance matrix. In particular, the main results of Chételat and Wells (2012) are established in full generality, and we provide a corrected version of their Theorem 2. Specifically, we generalize the existing results in three ways. First, we consider a parametric estimation problem that includes the vector-parameter problem as a special case. Second, we propose a class of James–Stein matrix estimators and establish a necessary and sufficient condition for any member of the proposed class to have a finite risk function. Third, we present conditions under which the proposed class of estimators dominates the maximum likelihood estimator. Beyond these contributions, an additional novelty is that we extend methods developed for the vector-parameter case, and the derived results hold in the classical setting as well as for high- and ultra-high-dimensional data.
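For readers unfamiliar with the baseline being generalized: the classical vector-parameter James–Stein estimator shrinks the observation toward the origin and dominates the maximum likelihood estimator in squared-error risk whenever the dimension is at least 3. The sketch below is the textbook known-variance, identity-covariance version, not the authors' matrix estimator with unknown variance–covariance matrix; a Monte Carlo comparison of risks illustrates the dominance at the least favorable point for the MLE gap, theta = 0.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 10                 # dimension; James-Stein requires p >= 3
theta = np.zeros(p)    # true mean (shrinkage gain is largest here)
sigma2 = 1.0           # known variance, covariance = sigma2 * I

def james_stein(x, sigma2=1.0):
    """Classical James-Stein shrinkage of one observation x ~ N(theta, sigma2 * I)."""
    p = x.size
    return (1.0 - (p - 2) * sigma2 / np.dot(x, x)) * x

# Monte Carlo estimate of squared-error risk: MLE (x itself) vs James-Stein.
n_rep = 20000
x = rng.normal(theta, np.sqrt(sigma2), size=(n_rep, p))
risk_mle = np.mean(np.sum((x - theta) ** 2, axis=1))      # theoretical value: p
shrink = 1.0 - (p - 2) * sigma2 / np.sum(x * x, axis=1)
js = shrink[:, None] * x
risk_js = np.mean(np.sum((js - theta) ** 2, axis=1))      # theoretical value at 0: 2
print(risk_mle, risk_js)
```

At theta = 0 the exact risks are p for the MLE and 2 for James–Stein, so the simulated values should land near 10 and 2. The paper's contribution replaces this known-variance vector setting with a matrix mean parameter and an unknown variance–covariance matrix, where establishing finiteness of the risk and dominance requires the conditions described in the abstract.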
About the journal:
Founded in 1971, the Journal of Multivariate Analysis (JMVA) is the central venue for the publication of new, relevant methodology and particularly innovative applications pertaining to the analysis and interpretation of multidimensional data.
The journal welcomes contributions to all aspects of multivariate data analysis and modeling, including cluster analysis, discriminant analysis, factor analysis, and multidimensional continuous or discrete distribution theory. Topics of current interest include, but are not limited to, inferential aspects of
Copula modeling
Functional data analysis
Graphical modeling
High-dimensional data analysis
Image analysis
Multivariate extreme-value theory
Sparse modeling
Spatial statistics.