{"title":"On Aggregation of Uncensored and Censored Observations","authors":"Sam Efromovich","doi":"10.3103/s1066530724700078","DOIUrl":"https://doi.org/10.3103/s1066530724700078","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>In survival analysis a random right-censoring partitions data into uncensored and censored observations of the lifetime of interest. The dominance of uncensored observations is a familiar methodology in nonparametric estimation motivated by the classical Kaplan–Meier product-limit and Cox partial likelihood estimators. Nonetheless, for high rate censoring it is of interest to understand what, if anything, can be done by aggregating uncensored and censored observations for the staple nonparametric problems of density and regression estimation. The oracle, who knows distribution of the censoring lifetime, can use each subsample for consistent estimation and hence may shed light on the aggregation. The oracle’s asymptotic theory reveals that density estimation, based on censored observations, is an ill-posed problem with slower rates of risk convergence, the ill-posedness occurs in frequency-domain, its severity increases with frequency, and accordingly a special aggregation on low frequencies may be beneficial. On the other hand, censored observations are not ill-posed for nonparametric regression and the aggregation is feasible. Based on these theoretical results, methodology of aggregation in frequency domain is developed and proposed estimators are tested on simulated and real examples.</p>","PeriodicalId":46039,"journal":{"name":"Mathematical Methods of Statistics","volume":null,"pages":null},"PeriodicalIF":0.5,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141717518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimation of Parameters of Misclassified Size Biased Uniform Poisson Distribution and Its Application","authors":"B. S. Trivedi, D. R. Barot, M. N. Patel","doi":"10.3103/s106653072470008x","DOIUrl":"https://doi.org/10.3103/s106653072470008x","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>Statistical data analysis is of great interest in every field of\u0000management, business, engineering, medicine, etc. At the time of\u0000classification and analysis, errors may arise, like a\u0000classification of observation in the other class instead of the\u0000actual class. All fields of science and economics have substantial\u0000problems due to misclassification errors in the observed data. Due\u0000to a misclassification error in the data, the sampling process may\u0000not suggest an appropriate probability distribution, and in that\u0000case, inference is impaired. When these types of errors are\u0000identified in variables, it is expected to consider the problem’s\u0000solution regarding classification errors. This paper presents the\u0000situation where specific counts are reported erroneously as\u0000belonging to other counts in the context of size biased Uniform\u0000Poisson distribution, the so-called misclassified size biased\u0000Uniform Poisson distribution. Further, we have estimated the\u0000parameters of misclassified size biased Uniform Poisson\u0000distribution by applying the method of moments, maximum likelihood\u0000method, and approximate Bayes estimation method. A simulation\u0000study is carried out to assess the performance of estimation\u0000methods. A real dataset is discussed to demonstrate the\u0000suitability and applicability of the proposed distribution in the\u0000modeling count dataset. A Monte Carlo simulation study is\u0000presented to compare the estimators. The simulation results show\u0000that the ML estimates perform better than their corresponding\u0000moment estimates and approximate Bayes estimates.</p>","PeriodicalId":46039,"journal":{"name":"Mathematical Methods of Statistics","volume":null,"pages":null},"PeriodicalIF":0.5,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141717516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rates of the Strong Uniform Consistency with Rates for Conditional U-Statistics Estimators with General Kernels on Manifolds","authors":"Salim Bouzebda, Nourelhouda Taachouche","doi":"10.3103/s1066530724700066","DOIUrl":"https://doi.org/10.3103/s1066530724700066","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>\u0000<span>(U)</span>-statistics represent a fundamental class of statistics from modeling quantities of interest defined by multi-subject responses. <span>(U)</span>-statistics generalize the empirical mean of a random variable <span>(X)</span> to sums over every <span>(m)</span>-tuple of distinct observations of <span>(X)</span>. Stute [103] introduced a class of so-called conditional <span>(U)</span>-statistics, which may be viewed as a generalization of the Nadaraya-Watson estimates of a regression function. Stute proved their strong pointwise consistency to:</p><span>$$r^{(k)}(varphi,tilde{mathbf{t}}):=mathbb{E}[varphi(Y_{1},ldots,Y_{k})|(X_{1},ldots,X_{k})=tilde{mathbf{t}}]quadtextrm{for}quadtilde{mathbf{t}}=left(mathbf{t}_{1},ldots,mathbf{t}_{k}right)inmathbb{R}^{dk}.$$</span><p>In the analysis of modern machine learning algorithms, sometimes we need to manipulate kernel estimation within the nonconventional setting with intricate kernels that might even be irregular and asymmetric. In this general setting, we obtain the strong uniform consistency result for the general kernel on Riemannian manifolds with Riemann integrable kernels for the conditional <span>(U)</span>-processes. We treat both cases when the class of functions is bounded or unbounded, satisfying some moment conditions. These results are proved under some standard structural conditions on the classes of functions and some mild conditions on the model. Our findings are applied to the regression function, the set indexed conditional <span>(U)</span>-statistics, the generalized <span>(U)</span>-statistics, and the discrimination problem. The theoretical results established in this paper are (or will be) key tools for many further developments in manifold data analysis.</p>","PeriodicalId":46039,"journal":{"name":"Mathematical Methods of Statistics","volume":null,"pages":null},"PeriodicalIF":0.5,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141717517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stochastic Comparisons of the Smallest Claim Amounts from Two Heterogeneous Portfolios Following Exponentiated Weibull Distribution","authors":"Suheir Kareem Ramani, Habib Jafari, Ghobad Saadat Kia","doi":"10.3103/s1066530724700108","DOIUrl":"https://doi.org/10.3103/s1066530724700108","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>In actuarial science, it is often of interest to compare stochastically smallest claim amounts from heterogeneous portfolios. In this paper, we obtain the usual stochastic order between the smallest claim amounts when the matrix of parameters <span>((boldsymbol{alpha})</span>, <span>(boldsymbol{lambda}))</span> changes to another matrix in terms of chain majorization order. By using the Archimedean copula and weak majorization conceptions, we also obtain some conditions for comparison of smallest claim amounts in terms of usual stochastic order.</p>","PeriodicalId":46039,"journal":{"name":"Mathematical Methods of Statistics","volume":null,"pages":null},"PeriodicalIF":0.5,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141717515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Asymptotic Properties of Extrema of Moving Sums of Independent Non-identically Distributed Variables","authors":"Narayanaswamy Balakrishnan, Alexei Stepanov","doi":"10.3103/s1066530724700091","DOIUrl":"https://doi.org/10.3103/s1066530724700091","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>In this work, we discuss the asymptotic behavior of minima and maxima of moving sums of independent and non-identically distributed random variables. We first establish some theoretical results associated with the asymptotic behavior of minima and maxima. Then, we apply these results to exponential and normal models. We also derive strong limit results for the minima and maxima of moving sums taken from these two models.</p>","PeriodicalId":46039,"journal":{"name":"Mathematical Methods of Statistics","volume":null,"pages":null},"PeriodicalIF":0.5,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141717513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Functional Uniform-in-Bandwidth Moderate Deviation Principle for the Local Empirical Processes Involving Functional Data","authors":"Nour-Eddine Berrahou, Salim Bouzebda, Lahcen Douge","doi":"10.3103/s1066530724700030","DOIUrl":"https://doi.org/10.3103/s1066530724700030","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>Our research employs general empirical process methods to investigate and establish moderate deviation principles for kernel-type function estimators that rely on an infinite-dimensional covariate, subject to mild regularity conditions. In doing so, we introduce a valuable moderate deviation principle for a function-indexed process, utilizing intricate exponential contiguity arguments. The primary objective of this paper is to contribute to the existing literature on functional data analysis by establishing functional moderate deviation principles for both Nadaraya–Watson and conditional distribution processes. These principles serve as fundamental tools for analyzing and understanding the behavior of these processes in the context of functional data analysis. By extending the scope of moderate deviation principles to the realm of functional data analysis, we enhance our understanding of the statistical properties and limitations of kernel-type function estimators when dealing with infinite-dimensional covariates. Our findings provide valuable insights and contribute to the advancement of statistical methodology in functional data analysis.</p>","PeriodicalId":46039,"journal":{"name":"Mathematical Methods of Statistics","volume":null,"pages":null},"PeriodicalIF":0.5,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140803620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Controlling Separation in Generating Samples for Logistic Regression Models","authors":"Huong T. T. Pham, Hoa Pham","doi":"10.3103/s1066530724700017","DOIUrl":"https://doi.org/10.3103/s1066530724700017","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>Separation has a significant impact on parameter estimates for logistic regression models in frequentist approach and in Bayesian approach. When separation presents in a sample, the maximum likelihood estimation (MLE) does not exist through standard estimation methods. The existence of posterior means is affected by the presence of separation and also depended on the forms of prior distributions. Therefore, controlling the appearance of separation in generating samples from the logistic regression models has an important role for parameter estimation techniques. In this paper, we propose necessary and sufficient conditions for separation occurring in the logistic regression samples with two dimensional models and multiple dimensional models of independent variables. By using the technique of rotating Castesian coordinates of p dimensions, the characteristic of separation occurring in general cases is presented. Using these results, we propose algorithms to control the probability of separation appearance in generated samples for given sample sizes and multiple dimensional models of independent variables. The simulation studies show that the proposed algorithms can effectively generate the designed random samples with controlling the probability of separation appearance.</p>","PeriodicalId":46039,"journal":{"name":"Mathematical Methods of Statistics","volume":null,"pages":null},"PeriodicalIF":0.5,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140803675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing Monotonicity: An Approach Based on Transformed Order Statistics","authors":"Aleksandr Chen, Nadezhda Gribkova, Ričardas Zitikis","doi":"10.3103/s1066530724700054","DOIUrl":"https://doi.org/10.3103/s1066530724700054","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>In a number of research areas, such as non-convex optimization and machine learning, determining and assessing regions of monotonicity of functions is pivotal. Numerically, it can be done using the proportion of positive (or negative) increments of transformed ordered inputs. When the number of inputs grows, the proportion tends to an index of increase (or decrease) of the underlying function. In this paper, we introduce a most general index of monotonicity and provide its interpretation in all practically relevant scenarios, including those that arise when the distribution of inputs has jumps and flat regions, and when the function is only piecewise differentiable. This enables us to assess monotonicity of very general functions under particularly mild conditions on the inputs.</p>","PeriodicalId":46039,"journal":{"name":"Mathematical Methods of Statistics","volume":null,"pages":null},"PeriodicalIF":0.5,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140803672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Truncated Estimators for a Precision Matrix","authors":"Anis M. Haddouche, Dominique Fourdrinier","doi":"10.3103/s1066530724700029","DOIUrl":"https://doi.org/10.3103/s1066530724700029","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>In this paper, we estimate the precision matrix <span>({Sigma}^{-1})</span> of a Gaussian multivariate linear regression model through its canonical form <span>(({Z}^{T},{U}^{T})^{T})</span> where <span>(Z)</span> and <span>(U)</span> are respectively an <span>(mtimes p)</span> and an <span>(ntimes p)</span> matrices. This problem is addressed under the data-based loss function <span>(textrm{tr} [({hat{Sigma}}^{-1}-{Sigma}^{-1})S]^{2})</span>, where <span>({hat{Sigma}}^{-1})</span> estimates <span>({Sigma}^{-1})</span>, for any ordering of <span>(m,n)</span> and <span>(p)</span>, in a unified approach. We derive estimators which, besides the information contained in the sample covariance matrix <span>(S={U}^{T}U)</span>, use the information contained in the sample mean <span>(Z)</span>. We provide conditions for which these estimators improve over the usual estimators <span>(a{S}^{+})</span> where <span>(a)</span> is a positive constant and <span>({S}^{+})</span> is the Moore-Penrose inverse of <span>(S)</span>. Thanks to the role of <span>(Z)</span>, such estimators are also improved by their truncated version.</p>","PeriodicalId":46039,"journal":{"name":"Mathematical Methods of Statistics","volume":null,"pages":null},"PeriodicalIF":0.5,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140803686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Characterizing Existence and Location of the ML Estimate in the Conway–Maxwell–Poisson Model","authors":"Stefan Bedbur, Anton Imm, Udo Kamps","doi":"10.3103/s1066530724700042","DOIUrl":"https://doi.org/10.3103/s1066530724700042","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>As a flexible extension of the common Poisson model, the Conway–Maxwell–Poisson distribution allows for describing under- and overdispersion in count data via an additional parameter. Estimation methods for two Conway–Maxwell–Poisson parameters are then required to specify the model. In this work, two characterization results are provided related to maximum likelihood estimation of the Conway–Maxwell–Poisson parameters. The first states that maximum likelihood estimation fails if and only if the range of the observations is less than two. Assuming that the maximum likelihood estimate exists, the second result then comprises a simple necessary and sufficient condition for the maximum likelihood estimate to be a solution of the likelihood equation; otherwise it lies on the boundary of the parameter set. A simulation study is carried out to investigate the accuracy of the maximum likelihood estimate in dependence of the range of the underlying observations.</p>","PeriodicalId":46039,"journal":{"name":"Mathematical Methods of Statistics","volume":null,"pages":null},"PeriodicalIF":0.5,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140803669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}