Journal of Statistical Planning and Inference: Latest Articles

A unified Fourier slice method to derive ridgelet transform for a variety of depth-2 neural networks
IF 0.9 | CAS Quartile 4 | Mathematics
Journal of Statistical Planning and Inference Pub Date: 2024-04-15 DOI: 10.1016/j.jspi.2024.106184 (Volume 233, Article 106184)
Sho Sonoda, Isao Ishikawa, Masahiro Ikeda
Abstract: To investigate neural network parameters, it is easier to study the distribution of parameters than the parameters in each neuron. The ridgelet transform is a pseudo-inverse operator that maps a given function f to a parameter distribution γ so that the network NN[γ] reproduces f, i.e. NN[γ] = f. For depth-2 fully-connected networks on a Euclidean space, the ridgelet transform is known in closed form, so we can describe how the parameters are distributed. However, for a variety of modern neural network architectures, no closed-form expression has been known. In this paper, we explain a systematic method based on Fourier expressions for deriving ridgelet transforms for a variety of modern networks, such as networks on finite fields F_p, group convolutional networks on an abstract Hilbert space H, fully-connected networks on noncompact symmetric spaces G/K, and pooling layers (the d-plane ridgelet transform).
Citations: 0
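For orientation, the classical Euclidean case that this abstract builds on can be summarized as follows; this is a standard formulation (normalization constants vary across references), not the paper's new derivation:

```latex
% Depth-2 fully-connected network on R^m, written as an integral over parameters (a,b),
% and the ridgelet transform of f with respect to a window function \psi:
\[
  \mathrm{NN}[\gamma](x) = \int_{\mathbb{R}^m \times \mathbb{R}} \gamma(a,b)\,\sigma(\langle a,x\rangle - b)\,\mathrm{d}a\,\mathrm{d}b,
  \qquad
  R[f;\psi](a,b) = \int_{\mathbb{R}^m} f(x)\,\overline{\psi(\langle a,x\rangle - b)}\,\mathrm{d}x .
\]
% If the pair (\sigma,\psi) is admissible, then NN[R[f;\psi]] = c\,f for a finite nonzero
% constant c depending only on (\sigma,\psi), which is why R acts as a pseudo-inverse
% (a right inverse up to scaling) of the network operator NN.
```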
Robust nonparametric regression based on deep ReLU neural networks
IF 0.9 | CAS Quartile 4 | Mathematics
Journal of Statistical Planning and Inference Pub Date: 2024-04-15 DOI: 10.1016/j.jspi.2024.106182 (Volume 233, Article 106182)
Juntong Chen
Abstract: In this paper, we consider robust nonparametric regression using deep neural networks with the ReLU activation function. While several existing, theoretically justified methods are geared towards robustness against identical heavy-tailed noise distributions, the rise of adversarial attacks has emphasized the importance of safeguarding estimation procedures against systematic contamination. We approach this statistical issue by shifting our focus towards estimating conditional distributions. To address it robustly, we introduce a novel estimation procedure based on ℓ-estimation. Under a mild model assumption, we establish general non-asymptotic risk bounds for the resulting estimators, showcasing their robustness against contamination, outliers, and model misspecification. We then delve into the application of our approach using deep ReLU neural networks. When the model is well-specified and the regression function belongs to an α-Hölder class, employing ℓ-type estimation on suitable networks enables the resulting estimators to achieve the minimax optimal rate of convergence. Additionally, we demonstrate that deep ℓ-type estimators can circumvent the curse of dimensionality by assuming the regression function closely resembles the composition of several Hölder functions. To attain this, new deep fully-connected ReLU neural networks have been designed to approximate this composition class. This approximation result can be of independent interest.
Citations: 0
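The ℓ-estimation procedure of the paper is more involved than anything shown here. As a loose illustration of the broader idea of robust regression with a ReLU network, the sketch below trains a small two-layer ReLU network in plain numpy under a Huber loss, a classical robust loss used here as a stand-in for ℓ-estimation; the data, architecture, and constants are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-d data: heavy-tailed (Student-t) noise plus a few gross outliers.
n = 500
x = rng.uniform(-2, 2, size=(n, 1))
y = np.sin(3 * x[:, 0]) + 0.3 * rng.standard_t(df=2, size=n)
y[rng.choice(n, size=10, replace=False)] += 20.0   # systematic contamination

# Two-layer ReLU network: f(x) = W2 @ relu(W1 x + b1) + b2.
m = 64                                   # hidden width
W1 = rng.normal(0, 1, size=(m, 1)); b1 = rng.normal(0, 1, size=m)
W2 = np.zeros(m); b2 = 0.0

def huber_grad(r, delta=1.0):
    """Derivative of the Huber loss w.r.t. the residual r = f(x) - y."""
    return np.clip(r, -delta, delta)

lr = 1e-2
for step in range(3000):
    h = np.maximum(x @ W1.T + b1, 0.0)   # (n, m) hidden activations
    pred = h @ W2 + b2
    g = huber_grad(pred - y) / n         # clipped residual per sample
    # Backpropagation by hand.
    gW2 = h.T @ g; gb2 = g.sum()
    gh = np.outer(g, W2) * (h > 0)       # gradient through the ReLU
    gW1 = gh.T @ x; gb1 = gh.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Clipping the residuals bounds each outlier's influence on every update,
# unlike squared loss, where the 10 contaminated points would dominate.
```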
Convergence guarantees for forward gradient descent in the linear regression model
IF 0.9 | CAS Quartile 4 | Mathematics
Journal of Statistical Planning and Inference Pub Date: 2024-04-06 DOI: 10.1016/j.jspi.2024.106174 (Volume 233, Article 106174)
Thijs Bos, Johannes Schmidt-Hieber
Abstract: Renewed interest in the relationship between artificial and biological neural networks motivates the study of gradient-free methods. Considering the linear regression model with random design, we theoretically analyze the biologically motivated (weight-perturbed) forward gradient scheme, which is based on a random linear combination of the gradient. If d denotes the number of parameters and k the number of samples, we prove that the mean squared error of this method converges for k ≳ d² log(d) with rate d² log(d)/k. Compared to the dimension dependence d for stochastic gradient descent, an additional factor d log(d) occurs.
Citations: 0
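The weight-perturbed forward gradient replaces the true gradient ∇L(θ) with (∇L(θ)ᵀv)v for a random direction v ~ N(0, I_d), which is unbiased (E[vvᵀ] = I_d) and computable with a single forward-mode pass. A minimal numpy sketch for the linear regression model follows; the step-size schedule and sample sizes are illustrative, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(1)

d, k = 10, 10_000                  # parameters, samples (k well above d^2 log d)
theta_star = rng.normal(size=d)    # true regression vector

theta = np.zeros(d)
for i in range(k):
    # One observation from the linear model with random design.
    x = rng.normal(size=d)
    y = x @ theta_star + rng.normal()
    # Per-sample squared-error gradient: grad = (x . theta - y) x.
    grad = (x @ theta - y) * x
    # Forward gradient: project grad on a random direction v ~ N(0, I_d).
    # The scalar (grad . v) needs only forward-mode differentiation, and
    # E[(grad . v) v] = grad, so the update direction is unbiased.
    v = rng.normal(size=d)
    theta -= (1.0 / (d * (i + 1))) * (grad @ v) * v   # decaying step size

print("squared error:", np.sum((theta - theta_star) ** 2))
```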
Toward improved inference for Krippendorff's Alpha agreement coefficient
IF 0.9 | CAS Quartile 4 | Mathematics
Journal of Statistical Planning and Inference Pub Date: 2024-04-05 DOI: 10.1016/j.jspi.2024.106170 (Volume 233, Article 106170)
John Hughes
Abstract: In this article I recommend a better point estimator for Krippendorff's Alpha agreement coefficient, and develop a jackknife variance estimator that leads to much better interval estimation than does the customary bootstrap procedure or an alternative bootstrap procedure. Having developed the new methodology, I analyze nominal data previously analyzed by Krippendorff, and two experimentally observed datasets: (1) ordinal data from an imaging study of congenital diaphragmatic hernia, and (2) United States Environmental Protection Agency air pollution data for the Philadelphia, Pennsylvania area. The latter two applications are novel. The proposed methodology is now supported in version 2.0 of my open source R package, krippendorffsalpha, which supports common and user-defined distance functions, and can accommodate any number of units, any number of coders, and missingness. Interval computation can be parallelized.
Citations: 0
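The recommended estimator and jackknife are implemented in the krippendorffsalpha R package; the numpy sketch below only illustrates the generic pattern of pairing an alpha of the form 1 − D_o/D_e for nominal data with leave-one-unit-out jackknife interval estimation. The function names and the simplified alpha computation are illustrative assumptions, not the package's implementation.

```python
import numpy as np

def alpha_nominal(data):
    """Alpha of the form 1 - D_o / D_e for complete nominal data.
    data: (n_units, n_coders) integer array of category codes."""
    n, r = data.shape
    N = n * r
    # Observed disagreement: ordered within-unit pairs of ratings.
    d_o = sum(np.sum(unit[:, None] != unit[None, :]) for unit in data)
    d_o = d_o / (n * r * (r - 1))
    # Expected disagreement: ordered pairs among all pooled ratings.
    _, counts = np.unique(data, return_counts=True)
    d_e = 1.0 - np.sum(counts * (counts - 1)) / (N * (N - 1))
    return 1.0 - d_o / d_e

def jackknife_ci(data, z=1.96):
    """Leave-one-unit-out jackknife 95% normal interval for alpha."""
    n = data.shape[0]
    full = alpha_nominal(data)
    loo = np.array([alpha_nominal(np.delete(data, i, axis=0))
                    for i in range(n)])
    pseudo = n * full - (n - 1) * loo      # jackknife pseudo-values
    se = pseudo.std(ddof=1) / np.sqrt(n)
    return full, (full - z * se, full + z * se)

ratings = np.array([[1, 1, 1], [2, 2, 2], [1, 2, 1], [3, 3, 3],
                    [2, 2, 3], [1, 1, 2], [3, 3, 3], [2, 2, 2]])
print(jackknife_ci(ratings))
```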
Informed censoring: The parametric combination of data and expert information
IF 0.9 | CAS Quartile 4 | Mathematics
Journal of Statistical Planning and Inference Pub Date: 2024-04-05 DOI: 10.1016/j.jspi.2024.106171 (Volume 233, Article 106171)
Hansjörg Albrecher, Martin Bladt
Abstract: The statistical censoring setup is extended to the situation where random measures can be assigned to the realization of datapoints, leading to a new way of incorporating expert information into the usual parametric estimation procedures. The asymptotic theory is provided for the resulting estimators, and some special cases of practical relevance are studied in more detail. Although the proposed framework mathematically generalizes censoring and coarsening at random, and borrows techniques from M-estimation theory, it provides a novel and transparent methodology which enjoys significant practical applicability in situations where expert information is present. The potential of the approach is illustrated by a concrete actuarial application: tail parameter estimation for a heavy-tailed MTPL dataset with limited available expert information.
Citations: 0
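For orientation, the classical parametric censoring likelihood that this framework generalizes has the familiar form below; this is standard background, not the paper's random-measure extension:

```latex
% Right-censored i.i.d. data: observe (z_i, \delta_i) with z_i = \min(x_i, c_i) and
% \delta_i = 1 when x_i \le c_i (uncensored). For a model with density f_\theta and cdf F_\theta:
\[
  L(\theta) = \prod_{i=1}^{n} f_\theta(z_i)^{\delta_i}\,\bigl(1 - F_\theta(z_i)\bigr)^{1-\delta_i}.
\]
% Roughly speaking, the paper replaces the hard censored/uncensored dichotomy with a
% random measure attached to each datapoint, through which expert information enters
% the estimation procedure.
```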
Non-asymptotic model selection for models of network data with parameter vectors of increasing dimension
IF 0.9 | CAS Quartile 4 | Mathematics
Journal of Statistical Planning and Inference Pub Date: 2024-04-05 DOI: 10.1016/j.jspi.2024.106173 (Volume 233, Article 106173)
Sean Eli, Michael Schweinberger
Abstract: Model selection for network data is an open area of research. Using the β-model as a convenient starting point, we propose a simple and non-asymptotic approach to model selection of β-models with and without constraints. Simulations indicate that the proposed model selection approach selects the data-generating model with high probability, in contrast to classical and extended Bayesian Information Criteria. We conclude with an application to the Enron email network, which has 181,831 connections among 36,692 employees.
Citations: 0
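The β-model sets P(edge between i and j) = exp(β_i + β_j) / (1 + exp(β_i + β_j)), so the degree sequence is sufficient for β. The sketch below fits the MLE from an adjacency matrix with the standard degree-based fixed-point iteration; it illustrates the model itself, not the paper's selection criterion, and the simulation settings are arbitrary.

```python
import numpy as np

def fit_beta_model(A, n_iter=200, tol=1e-10):
    """MLE of the beta-model from a symmetric 0/1 adjacency matrix (no self-loops),
    via the fixed point beta_i = log d_i - log sum_{j!=i} e^{beta_j}/(1+e^{beta_i+beta_j})."""
    n = A.shape[0]
    deg = A.sum(axis=1).astype(float)
    if np.any(deg == 0) or np.any(deg == n - 1):
        raise ValueError("degrees 0 or n-1 make the MLE diverge")
    beta = np.log(deg / (n - deg))            # crude starting point
    for _ in range(n_iter):
        E = np.exp(beta)
        M = E[None, :] / (1.0 + np.outer(E, E))   # M[i,j] = e^{b_j}/(1+e^{b_i+b_j})
        np.fill_diagonal(M, 0.0)
        new_beta = np.log(deg) - np.log(M.sum(axis=1))
        if np.max(np.abs(new_beta - beta)) < tol:
            return new_beta
        beta = new_beta
    return beta

# Quick check on a graph simulated from a known beta.
rng = np.random.default_rng(2)
n = 60
beta_true = rng.normal(0, 0.5, size=n)
P = 1 / (1 + np.exp(-(beta_true[:, None] + beta_true[None, :])))
A = np.triu((rng.uniform(size=(n, n)) < P).astype(int), 1)
A = A + A.T
print(np.round(np.corrcoef(beta_true, fit_beta_model(A))[0, 1], 3))
```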
Hermite regression estimation in noisy convolution model
IF 0.9 | CAS Quartile 4 | Mathematics
Journal of Statistical Planning and Inference Pub Date: 2024-03-26 DOI: 10.1016/j.jspi.2024.106168 (Volume 233, Article 106168)
Ousmane Sacko
Abstract: In this paper, we consider the regression model y(kT/n) = (f ⋆ g)(kT/n) + ε_k, k = −n, …, n − 1, with T fixed, where g is known and f is the unknown function to be estimated. The errors (ε_k)_{−n ≤ k ≤ n−1} are independent and identically distributed, centered, with finite known variance. Two adaptive estimation methods for f are considered by exploiting the properties of the Hermite basis. We study the quadratic risk of each estimator. If f belongs to Sobolev regularity spaces, we derive rates of convergence. Adaptive procedures to select the relevant parameter, inspired by the Goldenshluger and Lepski method, are proposed, and we prove that the resulting estimators satisfy oracle inequalities for sub-Gaussian ε's. Finally, we illustrate these approaches numerically.
Citations: 0
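Schematically, an estimator built on the Hermite basis takes the projection form below. This is a generic sketch of the basis-projection idea under the stated model, not the paper's exact construction:

```latex
% Hermite functions h_j(x) = H_j(x) e^{-x^2/2} / (2^j j! \sqrt{\pi})^{1/2} form an
% orthonormal basis of L^2(R). A projection estimator of f with cutoff m:
\[
  \hat f_m = \sum_{j=0}^{m-1} \hat a_j\, h_j ,
\]
% where each coefficient \hat a_j is an empirical quantity built from the observations
% y(kT/n) and the known convolution kernel g. The cutoff m is then selected adaptively
% in the spirit of Goldenshluger--Lepski, balancing an estimated bias term against a
% penalized variance term; the oracle inequalities in the paper certify this selection.
```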
How many neurons do we need? A refined analysis for shallow networks trained with gradient descent
IF 0.9 | CAS Quartile 4 | Mathematics
Journal of Statistical Planning and Inference Pub Date: 2024-03-26 DOI: 10.1016/j.jspi.2024.106169 (Volume 233, Article 106169)
Mike Nguyen, Nicole Mücke
Abstract: We analyze the generalization properties of two-layer neural networks in the neural tangent kernel (NTK) regime, trained with gradient descent (GD). For early-stopped GD we derive fast rates of convergence that are known to be minimax optimal in the framework of non-parametric regression in reproducing kernel Hilbert spaces. Along the way, we precisely keep track of the number of hidden neurons required for generalization and improve over existing results. We further show that the weights during training remain in a vicinity of the initialization, the radius depending on structural assumptions such as the degree of smoothness of the regression function and the eigenvalue decay of the integral operator associated with the NTK.
Citations: 0
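For reference, the NTK of a two-layer network is the fixed kernel obtained in the infinite-width limit; the display below is the standard form for Gaussian initialization with both layers trained (scaling conventions vary across papers):

```latex
% Two-layer network f(x) = m^{-1/2} \sum_{j=1}^m a_j \sigma(\langle w_j, x \rangle).
% As the width m -> infinity, GD training behaves like kernel regression with
\[
  K(x,x') = \mathbb{E}_{w \sim \mathcal N(0,I)}\!\bigl[\sigma'(\langle w,x\rangle)\,\sigma'(\langle w,x'\rangle)\bigr]\langle x,x'\rangle
          + \mathbb{E}_{w \sim \mathcal N(0,I)}\!\bigl[\sigma(\langle w,x\rangle)\,\sigma(\langle w,x'\rangle)\bigr].
\]
% Early stopping of GD then plays the role of regularization in the associated RKHS,
% which is how minimax rates from kernel regression transfer to the network setting.
```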
A class of mixed-level uniform designs generated by code mapping
IF 0.9 | CAS Quartile 4 | Mathematics
Journal of Statistical Planning and Inference Pub Date: 2024-03-24 DOI: 10.1016/j.jspi.2024.106166 (Volume 233, Article 106166)
Liuping Hu, Zujun Ou, Hong Qin
Abstract: The literature reveals a very close connection between experimental design and coding theory. Based on a code-mapping transformation, this paper provides a new method to construct a class of mixed designs with two- and four-level factors. A general construction method is described and some theoretical results on the obtained designs are given. Analytic connections are established between the generated and the initial designs in terms of aberration criteria and discrepancies. Sharp lower bounds on the wrap-around L₂- and Lee discrepancies are obtained and used as benchmarks to measure the uniformity of the generated designs. Examples are provided to illustrate the effectiveness of the construction and to lend our results further support.
Citations: 0
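A common code-mapping device in this literature turns a pair of two-level columns into one four-level column, e.g. via (0,0)→0, (0,1)→1, (1,0)→2, (1,1)→3. The sketch below applies this map to a small two-level design to produce a mixed two- and four-level design; the particular mapping and initial design are illustrative and may differ from the paper's construction.

```python
import numpy as np
from itertools import product

def two_to_four(col_a, col_b):
    """Map a pair of two-level (0/1) columns to one four-level column:
    (0,0)->0, (0,1)->1, (1,0)->2, (1,1)->3 (one common code mapping)."""
    return 2 * col_a + col_b

# Initial design: full 2^4 factorial (16 runs, four two-level factors).
D2 = np.array(list(product([0, 1], repeat=4)))

# Mixed-level design: map the first column pair to a four-level factor and
# keep the remaining columns at two levels -> one 4-level + two 2-level factors.
D_mixed = np.column_stack([two_to_four(D2[:, 0], D2[:, 1]), D2[:, 2], D2[:, 3]])
print(D_mixed[:4])

# Balance check: the four-level factor hits each level 0..3 equally often,
# so the 16-run design stays balanced after the mapping.
print(np.bincount(D_mixed[:, 0], minlength=4))
```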
Robust estimation of a regression function in exponential families
IF 0.9 | CAS Quartile 4 | Mathematics
Journal of Statistical Planning and Inference Pub Date: 2024-03-24 DOI: 10.1016/j.jspi.2024.106167 (Volume 233, Article 106167)
Yannick Baraud, Juntong Chen
Abstract: We observe n pairs of independent (but not necessarily i.i.d.) random variables X_1 = (W_1, Y_1), …, X_n = (W_n, Y_n) and tackle the problem of estimating the conditional distributions Q_i^⋆(w_i) of Y_i given W_i = w_i for all i ∈ {1, …, n}. Even though these might not be true, we base our estimator on the assumptions that the data are i.i.d. and that the conditional distributions of Y_i given W_i = w_i belong to a one-parameter exponential family Q̄ with parameter space given by an interval I. More precisely, we pretend that these conditional distributions take the form Q_{θ(w_i)} ∈ Q̄ for some θ that belongs to a VC-class Θ̄ of functions with values in I. For each i ∈ {1, …, n}, we estimate Q_i^⋆(w_i) by a distribution of the same form, i.e. Q_{θ̂(w_i)} …
Citations: 0
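For concreteness, a one-parameter exponential family on the real line, the model class posited in this abstract, can be written in the standard natural-parameter form:

```latex
% One-parameter exponential family dominated by a measure \nu, natural parameter \theta \in I:
\[
  Q_\theta(\mathrm{d}y) = e^{\theta T(y) - A(\theta)}\,\nu(\mathrm{d}y),
  \qquad
  A(\theta) = \log \int e^{\theta T(y)}\,\nu(\mathrm{d}y) < \infty \quad \text{for } \theta \in I .
\]
% Gaussian with known variance, Poisson, binomial, and gamma with known shape all fit
% this form; the function w \mapsto \theta(w) then indexes the conditional law of Y given W = w.
```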