{"title":"STEIN UNBIASED RISK ESTIMATE AS A MODEL SELECTION ESTIMATOR","authors":"Nihad Nouri, Algeria Applied Economy, Fatiha Mezoued","doi":"10.46529/socioint.2020256","DOIUrl":null,"url":null,"abstract":"To restore a low-rank structure from a noisy matrix, many recent authors has used and studied truncated singular value decomposition. So thus, according to these studies, the image can be better estimated by shrinking the singular values as well. This paper is concerned with additive models of the form Y = M +E, where Y is an observed n×m matrix with m < n, M is an unknown n×m matrix of interest with low rank, and E is a random noise. For a family of estimators of which is obtained from shrinkage functions ϕ λ (σ i ) based on the singular values decomposition of the matrix Y, we are interested in the performance of the model proposed by Candès et al (2012) for other thresholding function (Minimax Concave Penalty (MCP)), and under the assumption that the distribution of data matrix Y is multivariate t-Student distribution that belongs to an elliptically distribution family which extends the Gaussian case. Under this distributional context, we propose to apply stein unbiased risk estimate (SURE) improved by S. Canu and D. Fourdrinier (2017), in order to select the best thresholding function between Minimax Concave Penalty (MCP) and Soft-thersholding, and also to find the optimal shrinking parameter λ from the data Y. Numerical results reveal that the risk estimate SURE is good, the minima are reached for the same lambda λ (λ ∗ = = 5218.4) and the difference between the estimated (SURE) and the usual (Mean Square Error (MSE)) risks is low, and that the risk of MCP is lower than SOFT. 
estimate, mean square error, elliptical distribution, singular value decomposition, minimax concave penalty, soft-thresholding.","PeriodicalId":189259,"journal":{"name":"Proceedings of SOCIOINT 2020- 7th International Conference on Education and Education of Social Sciences","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of SOCIOINT 2020- 7th International Conference on Education and Education of Social Sciences","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.46529/socioint.2020256","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
To restore a low-rank structure from a noisy matrix, many recent authors have used and studied the truncated singular value decomposition. According to these studies, the image can be better estimated by shrinking the singular values as well. This paper is concerned with additive models of the form Y = M + E, where Y is an observed n×m matrix with m < n, M is an unknown n×m matrix of interest with low rank, and E is a random noise matrix. For a family of estimators obtained from shrinkage functions φ_λ(σ_i) based on the singular value decomposition of the matrix Y, we are interested in the performance of the model proposed by Candès et al (2012) for another thresholding function, the Minimax Concave Penalty (MCP), under the assumption that the data matrix Y follows a multivariate Student-t distribution, which belongs to the elliptical distribution family extending the Gaussian case. In this distributional context, we apply the Stein unbiased risk estimate (SURE), as improved by S. Canu and D. Fourdrinier (2017), to select the better of the two thresholding functions, the Minimax Concave Penalty (MCP) and soft-thresholding, and to find the optimal shrinkage parameter λ from the data Y. Numerical results reveal that the SURE risk estimate performs well: the minima are reached at the same λ (λ* = 5218.4), the difference between the estimated (SURE) and the usual (Mean Square Error (MSE)) risks is small, and the risk of MCP is lower than that of soft-thresholding.

Keywords: Stein unbiased risk estimate, mean square error, elliptical distribution, singular value decomposition, minimax concave penalty, soft-thresholding.
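As an illustration of the two shrinkage families compared in the abstract, the following sketch (in Python/NumPy, not the authors' code) denoises a synthetic low-rank matrix by applying soft-thresholding and MCP thresholding to the singular values of Y. The MCP parameter γ, the toy dimensions, and the Gaussian noise are illustrative assumptions (the paper's setting uses Student-t noise), and λ is fixed by hand here rather than selected by minimizing SURE.

```python
import numpy as np

def soft_threshold(s, lam):
    # Soft-thresholding: every singular value is shrunk toward zero by lam.
    return np.maximum(s - lam, 0.0)

def mcp_threshold(s, lam, gamma=3.0):
    # MCP thresholding (gamma > 1 is an assumed illustrative choice):
    # values below lam are zeroed, values above gamma*lam are left untouched,
    # and values in between are shrunk less than under soft-thresholding.
    return np.where(s <= lam, 0.0,
                    np.where(s <= gamma * lam, (s - lam) / (1.0 - 1.0 / gamma), s))

def shrink_svd(Y, phi):
    # Apply a shrinkage function phi to the singular values of Y.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * phi(s)) @ Vt

# Toy model Y = M + E with a rank-1 signal and additive noise.
rng = np.random.default_rng(0)
u = rng.standard_normal(20)
v = rng.standard_normal(10)
M = 20.0 * np.outer(u / np.linalg.norm(u), v / np.linalg.norm(v))
Y = M + 0.1 * rng.standard_normal(M.shape)

lam = 1.0  # fixed for illustration; the paper selects lambda from the data via SURE
M_soft = shrink_svd(Y, lambda s: soft_threshold(s, lam))
M_mcp = shrink_svd(Y, lambda s: mcp_threshold(s, lam))

err = lambda A: np.linalg.norm(A - M)
print(err(Y), err(M_soft), err(M_mcp))  # both shrunk estimates beat the raw Y
```

Since the large signal singular value exceeds γλ, MCP leaves it untouched while still zeroing the small noise directions, whereas soft-thresholding also biases the signal component downward by λ; this is the mechanism behind the abstract's finding that the MCP risk is lower than that of soft-thresholding.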