{"title":"On Bayesian predictive density estimation for skew-normal distributions","authors":"Othmane Kortbi","doi":"10.1007/s00184-024-00946-4","DOIUrl":null,"url":null,"abstract":"<p>This paper is concerned with prediction for skew-normal models, and more specifically the Bayes estimation of a predictive density for <span>\\(Y \\left. \\right| \\mu \\sim {\\mathcal {S}} {\\mathcal {N}}_p (\\mu , v_y I_p, \\lambda )\\)</span> under Kullback–Leibler loss, based on <span>\\(X \\left. \\right| \\mu \\sim {\\mathcal {S}} {\\mathcal {N}}_p (\\mu , v_x I_p, \\lambda )\\)</span> with known dependence and skewness parameters. We obtain representations for Bayes predictive densities, including the minimum risk equivariant predictive density <span>\\(\\hat{p}_{\\pi _{o}}\\)</span> which is a Bayes predictive density with respect to the noninformative prior <span>\\(\\pi _0\\equiv 1\\)</span>. George et al. (Ann Stat 34:78–91, 2006) used the parallel between the problem of point estimation and the problem of estimation of predictive densities to establish a connection between the difference of risks of the two problems. The development of similar connection, allows us to determine sufficient conditions of dominance over <span>\\(\\hat{p}_{\\pi _{o}}\\)</span> and of minimaxity. First, we show that <span>\\(\\hat{p}_{\\pi _{o}}\\)</span> is a minimax predictive density under KL risk for the skew-normal model. After this, for dimensions <span>\\(p\\ge 3\\)</span>, we obtain classes of Bayesian minimax densities that improve <span>\\(\\hat{p}_{\\pi _{o}}\\)</span> under KL loss, for the subclass of skew-normal distributions with small value of skewness parameter. Moreover, for dimensions <span>\\(p\\ge 4\\)</span>, we obtain classes of Bayesian minimax densities that improve <span>\\(\\hat{p}_{\\pi _{o}}\\)</span> under KL loss, for the whole class of skew-normal distributions. 
Examples of proper priors, including generalized student priors, generating Bayesian minimax densities that improve <span>\\(\\hat{p}_{\\pi _{o}}\\)</span> under KL loss, were constructed when <span>\\(p\\ge 5\\)</span>. This findings represent an extension of Liang and Barron (IEEE Trans Inf Theory 50(11):2708–2726, 2004), George et al. (Ann Stat 34:78–91, 2006) and Komaki (Biometrika 88(3):859–864, 2001) results to a subclass of asymmetrical distributions.\n</p>","PeriodicalId":49821,"journal":{"name":"Metrika","volume":"6 1","pages":""},"PeriodicalIF":0.9000,"publicationDate":"2024-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Metrika","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1007/s00184-024-00946-4","RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"STATISTICS & PROBABILITY","Score":null,"Total":0}
Citations: 0
Abstract
This paper is concerned with prediction for skew-normal models, and more specifically with the Bayes estimation of a predictive density for \(Y \left. \right| \mu \sim {\mathcal {S}} {\mathcal {N}}_p (\mu , v_y I_p, \lambda )\) under Kullback–Leibler loss, based on \(X \left. \right| \mu \sim {\mathcal {S}} {\mathcal {N}}_p (\mu , v_x I_p, \lambda )\) with known dependence and skewness parameters. We obtain representations for Bayes predictive densities, including the minimum risk equivariant predictive density \(\hat{p}_{\pi _{0}}\), which is the Bayes predictive density with respect to the noninformative prior \(\pi _0\equiv 1\). George et al. (Ann Stat 34:78–91, 2006) used the parallel between the problem of point estimation and the problem of estimation of predictive densities to establish a connection between the risk differences of the two problems. Developing a similar connection allows us to determine sufficient conditions for dominance over \(\hat{p}_{\pi _{0}}\) and for minimaxity. First, we show that \(\hat{p}_{\pi _{0}}\) is a minimax predictive density under KL risk for the skew-normal model. Then, for dimensions \(p\ge 3\), we obtain classes of Bayesian minimax densities that improve on \(\hat{p}_{\pi _{0}}\) under KL loss for the subclass of skew-normal distributions with small values of the skewness parameter. Moreover, for dimensions \(p\ge 4\), we obtain classes of Bayesian minimax densities that improve on \(\hat{p}_{\pi _{0}}\) under KL loss for the whole class of skew-normal distributions. Examples of proper priors, including generalized Student priors, generating Bayesian minimax densities that improve on \(\hat{p}_{\pi _{0}}\) under KL loss are constructed for \(p\ge 5\). These findings extend the results of Liang and Barron (IEEE Trans Inf Theory 50(11):2708–2726, 2004), George et al. (Ann Stat 34:78–91, 2006) and Komaki (Biometrika 88(3):859–864, 2001) to a subclass of asymmetric distributions.
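The skew-normal observation model above can be made concrete with a small simulation sketch. This assumes an Azzalini-type parametrization in which \({\mathcal {S}} {\mathcal {N}}_p (\mu , v I_p, \lambda )\) has density \(2\, v^{-p/2}\, \phi_p\big((x-\mu)/\sqrt{v}\big)\, \Phi\big(\lambda^\top (x-\mu)/\sqrt{v}\big)\); the paper's exact definition is not reproduced in the abstract, so the parametrization and the function name below are illustrative, not the author's code. The sampler uses the standard latent-selection representation of the skew-normal:

```python
import numpy as np

def rvs_skew_normal(mu, v, lam, size, rng):
    """Draw `size` samples from SN_p(mu, v*I_p, lam), assuming the
    Azzalini-type density 2 * N_p(mu, v I_p) * Phi(lam'(x - mu)/sqrt(v))."""
    mu = np.asarray(mu, float)
    lam = np.asarray(lam, float)
    p = mu.size
    # delta_i is the correlation between the latent sign variable u0
    # and coordinate i of the standardized vector U
    delta = lam / np.sqrt(1.0 + lam @ lam)
    U = rng.standard_normal((size, p))                   # U ~ N_p(0, I_p)
    w0 = rng.standard_normal(size)
    u0 = U @ delta + np.sqrt(1.0 - delta @ delta) * w0   # corr(u0, U_i) = delta_i
    # selection representation: Z has the law of (U | u0 > 0);
    # by joint symmetry this is realized by a sign flip instead of rejection
    Z = np.sign(u0)[:, None] * U
    return mu + np.sqrt(v) * Z
```

A useful sanity check under this parametrization is the known first moment \(\mathbb{E}[X] = \mu + \sqrt{v}\,\sqrt{2/\pi}\,\delta\) with \(\delta = \lambda/\sqrt{1+\lambda^\top\lambda}\), which the empirical mean of a large sample should match.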
About the journal
Metrika is an international journal for theoretical and applied statistics. Metrika publishes original research papers in the field of mathematical statistics and statistical methods. Great importance is attached to new developments in theoretical statistics and statistical modeling, and to the innovative applicability of the proposed statistical methods and results. Topics of interest include, without being limited to, multivariate analysis, high-dimensional statistics and nonparametric statistics; categorical data analysis and latent variable models; and reliability, lifetime data analysis and statistics in the engineering sciences.