Shrinkage, pretest, and penalty estimators in generalized linear models

Shakhawat Hossain, S. Ejaz Ahmed, Kjell A. Doksum

Journal: Statistical Methodology
DOI: 10.1016/j.stamet.2014.11.003
Published: 2015-05-01
Citations: 25

Abstract

We consider estimation in generalized linear models when there are many potential predictors and some of them may not have influence on the response of interest. In the context of two competing models where one model includes all predictors and the other restricts variable coefficients to a candidate linear subspace based on subject matter or prior knowledge, we investigate the relative performances of Stein type shrinkage, pretest, and penalty estimators (L1GLM, adaptive L1GLM, and SCAD) with respect to the unrestricted maximum likelihood estimator (MLE). The asymptotic properties of the pretest and shrinkage estimators including the derivation of asymptotic distributional biases and risks are established. In particular, we give conditions under which the shrinkage estimators are asymptotically more efficient than the unrestricted MLE. A Monte Carlo simulation study shows that the mean squared error (MSE) of an adaptive shrinkage estimator is comparable to the MSE of the penalty estimators in many situations and in particular performs better than the penalty estimators when the dimension of the restricted parameter space is large. The Steinian shrinkage and penalty estimators all improve substantially on the unrestricted MLE. A real data set analysis is also presented to compare the suggested methods.
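The Stein-type shrinkage idea described in the abstract — pulling the unrestricted MLE toward a restricted MLE by a data-driven amount based on a test statistic — can be sketched for a logistic GLM as follows. This is an illustrative numpy implementation under assumed notation (p2 coefficients restricted to zero, Wald statistic T, positive-part shrinkage factor), not the authors' code.

```python
import numpy as np

def fit_logistic_mle(X, y, n_iter=50):
    """Unrestricted MLE for logistic regression via Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)
        hess = X.T @ (X * (p * (1 - p))[:, None])   # observed information
        beta = beta + np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(0)
n, p1, p2 = 500, 3, 5            # p2 coefficients restricted to zero under H0
p = p1 + p2
X = rng.standard_normal((n, p))
beta_true = np.r_[np.ones(p1), np.zeros(p2)]      # the restriction holds here
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

beta_u = fit_logistic_mle(X, y)                              # unrestricted MLE
beta_r = np.r_[fit_logistic_mle(X[:, :p1], y), np.zeros(p2)] # restricted MLE

# Wald statistic for H0: the last p2 coefficients are zero
p_hat = 1.0 / (1.0 + np.exp(-X @ beta_u))
cov = np.linalg.inv(X.T @ (X * (p_hat * (1 - p_hat))[:, None]))
b2 = beta_u[p1:]
T = b2 @ np.linalg.solve(cov[p1:, p1:], b2)

# Positive-part Stein-type shrinkage: shrink beta_u toward beta_r by (p2-2)/T,
# truncating the factor at zero to avoid over-shooting past beta_r
beta_s = beta_r + max(0.0, 1.0 - (p2 - 2) / T) * (beta_u - beta_r)
```

When the restriction is (nearly) correct, T tends to be small and the estimator borrows heavily from the restricted fit; when the restriction is badly violated, T is large and beta_s stays close to the unrestricted MLE — which is the efficiency trade-off the abstract's risk analysis formalizes.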

Source journal: Statistical Methodology (Statistics & Probability; CiteScore 0.59, self-citation rate 0.00%)
Journal description: Statistical Methodology aims to publish articles of high quality reflecting the varied facets of contemporary statistical theory as well as of significant applications. In addition to helping to stimulate research, the journal intends to bring about interactions among statisticians and scientists in other disciplines broadly interested in statistical methodology. The journal focuses on traditional areas such as statistical inference, multivariate analysis, design of experiments, sampling theory, regression analysis, re-sampling methods, time series, nonparametric statistics, etc., and also gives special emphasis to established as well as emerging applied areas.