{"title":"On Bayesian predictive density estimation for skew-normal distributions","authors":"Othmane Kortbi","doi":"10.1007/s00184-024-00946-4","DOIUrl":null,"url":null,"abstract":"<p>This paper is concerned with prediction for skew-normal models, and more specifically the Bayes estimation of a predictive density for <span>\\(Y \\left. \\right| \\mu \\sim {\\mathcal {S}} {\\mathcal {N}}_p (\\mu , v_y I_p, \\lambda )\\)</span> under Kullback–Leibler loss, based on <span>\\(X \\left. \\right| \\mu \\sim {\\mathcal {S}} {\\mathcal {N}}_p (\\mu , v_x I_p, \\lambda )\\)</span> with known dependence and skewness parameters. We obtain representations for Bayes predictive densities, including the minimum risk equivariant predictive density <span>\\(\\hat{p}_{\\pi _{o}}\\)</span> which is a Bayes predictive density with respect to the noninformative prior <span>\\(\\pi _0\\equiv 1\\)</span>. George et al. (Ann Stat 34:78–91, 2006) used the parallel between the problem of point estimation and the problem of estimation of predictive densities to establish a connection between the difference of risks of the two problems. The development of similar connection, allows us to determine sufficient conditions of dominance over <span>\\(\\hat{p}_{\\pi _{o}}\\)</span> and of minimaxity. First, we show that <span>\\(\\hat{p}_{\\pi _{o}}\\)</span> is a minimax predictive density under KL risk for the skew-normal model. After this, for dimensions <span>\\(p\\ge 3\\)</span>, we obtain classes of Bayesian minimax densities that improve <span>\\(\\hat{p}_{\\pi _{o}}\\)</span> under KL loss, for the subclass of skew-normal distributions with small value of skewness parameter. Moreover, for dimensions <span>\\(p\\ge 4\\)</span>, we obtain classes of Bayesian minimax densities that improve <span>\\(\\hat{p}_{\\pi _{o}}\\)</span> under KL loss, for the whole class of skew-normal distributions. 
Examples of proper priors, including generalized student priors, generating Bayesian minimax densities that improve <span>\\(\\hat{p}_{\\pi _{o}}\\)</span> under KL loss, were constructed when <span>\\(p\\ge 5\\)</span>. This findings represent an extension of Liang and Barron (IEEE Trans Inf Theory 50(11):2708–2726, 2004), George et al. (Ann Stat 34:78–91, 2006) and Komaki (Biometrika 88(3):859–864, 2001) results to a subclass of asymmetrical distributions.\n</p>","PeriodicalId":0,"journal":{"name":"","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1007/s00184-024-00946-4","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
This paper is concerned with prediction for skew-normal models, and more specifically with Bayes estimation of a predictive density for \(Y \left. \right| \mu \sim {\mathcal {S}} {\mathcal {N}}_p (\mu , v_y I_p, \lambda )\) under Kullback–Leibler (KL) loss, based on \(X \left. \right| \mu \sim {\mathcal {S}} {\mathcal {N}}_p (\mu , v_x I_p, \lambda )\) with known dependence and skewness parameters. We obtain representations for Bayes predictive densities, including the minimum risk equivariant predictive density \(\hat{p}_{\pi _{0}}\), which is the Bayes predictive density with respect to the noninformative prior \(\pi _0 \equiv 1\). George et al. (Ann Stat 34:78–91, 2006) used the parallel between point estimation and predictive density estimation to establish a connection between the risk differences of the two problems. Developing a similar connection allows us to determine sufficient conditions for dominance over \(\hat{p}_{\pi _{0}}\) and for minimaxity. First, we show that \(\hat{p}_{\pi _{0}}\) is a minimax predictive density under KL risk for the skew-normal model. Then, for dimensions \(p \ge 3\), we obtain classes of Bayesian minimax densities that improve on \(\hat{p}_{\pi _{0}}\) under KL loss for the subclass of skew-normal distributions with a small skewness parameter. Moreover, for dimensions \(p \ge 4\), we obtain classes of Bayesian minimax densities that improve on \(\hat{p}_{\pi _{0}}\) under KL loss for the whole class of skew-normal distributions. Examples of proper priors, including generalized Student priors, that generate Bayesian minimax densities improving on \(\hat{p}_{\pi _{0}}\) under KL loss are constructed for \(p \ge 5\). These findings extend the results of Liang and Barron (IEEE Trans Inf Theory 50(11):2708–2726, 2004), George et al. (Ann Stat 34:78–91, 2006) and Komaki (Biometrika 88(3):859–864, 2001) to a subclass of asymmetric distributions.
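To make the model in the abstract concrete, here is a minimal sketch of the density of \({\mathcal {S}}{\mathcal {N}}_p(\mu , v I_p, \lambda )\) in Azzalini's parametrization, \(f(y) = 2\,\phi _p(y; \mu , v I_p)\,\Phi \big (\lambda ^\top (y-\mu )/\sqrt{v}\big )\). This is an illustrative assumption about the parametrization the paper uses (the abstract does not spell it out), and the function name is hypothetical:

```python
import numpy as np
from scipy.stats import norm

def skew_normal_pdf(y, mu, v, lam):
    """Density of the p-variate skew-normal SN_p(mu, v*I_p, lam),
    assuming Azzalini's parametrization (an illustrative choice):
        f(y) = 2 * phi_p(y; mu, v*I_p) * Phi(lam^T (y - mu) / sqrt(v)),
    where phi_p is the N_p(mu, v*I_p) density and Phi the standard
    normal cdf. lam = 0 recovers the symmetric normal density."""
    y, mu, lam = map(np.asarray, (y, mu, lam))
    p = y.size
    diff = y - mu
    # log of the N_p(mu, v*I_p) density: -(p/2) log(2*pi*v) - ||diff||^2/(2v)
    log_phi = -0.5 * (p * np.log(2 * np.pi * v) + diff @ diff / v)
    return 2.0 * np.exp(log_phi) * norm.cdf(lam @ diff / np.sqrt(v))
```

For \(p = 1\) this agrees with `scipy.stats.skewnorm.pdf(y, a=lam, loc=mu, scale=np.sqrt(v))`, and the skewing identity \(f_{\lambda }(y) + f_{-\lambda }(y) = 2\,\phi _p(y;\mu , vI_p)\) provides a quick sanity check that it integrates to one.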