{"title":"Modeling and testing for endpoint-inflated count time series with bounded support","authors":"Yao Kang , Xiaojing Fan , Jie Zhang , Ying Tang","doi":"10.1016/j.jspi.2024.106248","DOIUrl":"10.1016/j.jspi.2024.106248","url":null,"abstract":"<div><div>Count time series with bounded support frequently exhibit binomial overdispersion, zero inflation and right-endpoint inflation in practical scenarios. Numerous models have been proposed for the analysis of bounded count time series with binomial overdispersion and zero inflation, yet right-endpoint inflation has received comparatively less attention. To better capture these features, this article introduces three versions of extended first-order binomial autoregressive (BAR(1)) models with endpoint inflation. Corresponding stochastic properties of the new models are investigated and model parameters are estimated by the conditional maximum likelihood and quasi-maximum likelihood methods. A binomial right-endpoint inflation index is also constructed and further used to test whether the data set has endpoint-inflated characteristic with respect to a BAR(1) process. Finally, the proposed models are applied to two real data examples. Firstly, we illustrate the usefulness of the proposed models through an application to the voting data on supporting interest rate changes during consecutive monthly meetings of the Monetary Policy Council at the National Bank of Poland. Then, we apply the proposed models to the number of police stations that received at least one drunk driving report per month. The results of the two real data examples indicate that the new models have significant advantages in terms of fitting performance for the bounded count time series with endpoint inflation.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"237 ","pages":"Article 106248"},"PeriodicalIF":0.8,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142759599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semi-parametric empirical likelihood inference on quantile difference between two samples with length-biased and right-censored data","authors":"Li Xun , Xin Guan , Yong Zhou","doi":"10.1016/j.jspi.2024.106249","DOIUrl":"10.1016/j.jspi.2024.106249","url":null,"abstract":"<div><div>Exploring quantile differences between two populations at various probability levels offers valuable insights into their distinctions, which are essential for practical applications such as assessing treatment effects. However, estimating these differences can be challenging due to the complex data often encountered in clinical trials. This paper assumes that right-censored data and length-biased right-censored data originate from two populations of interest. We propose an adjusted smoothed empirical likelihood (EL) method for inferring quantile differences and establish the asymptotic properties of the proposed estimators. Under mild conditions, we demonstrate that the adjusted log-EL ratio statistics asymptotically follow the standard chi-squared distribution. We construct confidence intervals for the quantile differences using both normal and chi-squared approximations and develop a likelihood ratio test for these differences. The performance of our proposed methods is illustrated through simulation studies. Finally, we present a case study utilizing Oscar award nomination data to demonstrate the application of our method.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"237 ","pages":"Article 106249"},"PeriodicalIF":0.8,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142705362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sieve estimation of the accelerated mean model based on panel count data","authors":"Xiaoyang Li , Zhi-Sheng Ye , Xingqiu Zhao","doi":"10.1016/j.jspi.2024.106247","DOIUrl":"10.1016/j.jspi.2024.106247","url":null,"abstract":"<div><div>Panel count data are gathered when subjects are examined at discrete times during a study, and only the number of recurrent events occurring before each examination time is recorded. We consider a semiparametric accelerated mean model for panel count data in which the effect of the covariates is to transform the time scale of the baseline mean function. Semiparametric inference for the model is inherently challenging because the finite-dimensional regression parameters appear in the argument of the (infinite-dimensional) functional parameter, i.e., the baseline mean function, leading to the phenomenon of bundled parameters. We propose sieve pseudolikelihood and likelihood methods to construct the random criterion function for estimating the model parameters. An inexact block coordinate ascent algorithm is used to obtain these estimators. We establish the consistency and rate of convergence of the proposed estimators, as well as the asymptotic normality of the estimators of the regression parameters. Novel consistent estimators of the asymptotic covariances of the estimated regression parameters are derived by leveraging the counting process associated with the examination times. Comprehensive simulation studies demonstrate that the optimization algorithm is much less sensitive to the initial values than the Newton–Raphson method. The proposed estimators perform well for practical sample sizes, and are more efficient than existing methods. An example based on real data shows that due to this efficiency gain, the proposed method is better able to detect the significance of practically meaningful covariates than an existing method.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"237 ","pages":"Article 106247"},"PeriodicalIF":0.8,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142660066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The proximal bootstrap for constrained estimators","authors":"Jessie Li","doi":"10.1016/j.jspi.2024.106245","DOIUrl":"10.1016/j.jspi.2024.106245","url":null,"abstract":"<div><div>We demonstrate how to conduct uniformly asymptotically valid inference for <span><math><msqrt><mrow><mi>n</mi></mrow></msqrt></math></span>-consistent estimators defined as the solution to a constrained optimization problem with a possibly nonsmooth or nonconvex sample objective function and a possibly nonconvex constraint set. We allow for the solution to the problem to be on the boundary of the constraint set or to drift towards the boundary of the constraint set as the sample size goes to infinity. We construct a confidence set by benchmarking a test statistic against critical values that can be obtained from a simple unconstrained quadratic programming problem. Monte Carlo simulations illustrate the uniformly correct coverage of our method in a boundary constrained maximum likelihood model, a boundary constrained nonsmooth GMM model, and a conditional logit model with capacity constraints.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"236 ","pages":"Article 106245"},"PeriodicalIF":0.8,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142571397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Testing the equality of distributions using integrated maximum mean discrepancy","authors":"Tianxuan Ding , Zhimei Li , Yaowu Zhang","doi":"10.1016/j.jspi.2024.106246","DOIUrl":"10.1016/j.jspi.2024.106246","url":null,"abstract":"<div><div>Comparing and testing for the homogeneity of two independent random samples is a fundamental statistical problem with many applications across various fields. However, existing methods may not be effective when the data is complex or high-dimensional. We propose a new method that integrates the maximum mean discrepancy (MMD) with a Gaussian kernel over all one-dimensional projections of the data. We derive the closed-form expression of the integrated MMD and prove its validity as a distributional similarity metric. We estimate the integrated MMD with the <span><math><mi>U</mi></math></span>-statistic theory and study its asymptotic behaviors under the null and two kinds of alternative hypotheses. We demonstrate that our method has the benefits of the MMD, and outperforms existing methods on both synthetic and real datasets, especially when the data is complex and high-dimensional.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"236 ","pages":"Article 106246"},"PeriodicalIF":0.8,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142553626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semiparametric estimation of a principal functional coefficient panel data model with cross-sectional dependence and its application to cigarette demand","authors":"Yan-Yong Zhao , Ling-Ling Ge , Kong-Sheng Zhang","doi":"10.1016/j.jspi.2024.106244","DOIUrl":"10.1016/j.jspi.2024.106244","url":null,"abstract":"<div><div>In this paper, we consider the estimation of functional coefficient panel data models with cross-sectional dependence. Borrowing the principal component structure, the functional coefficient panel data models can be transformed into a semiparametric panel data model. Combining the local linear dummy variable technique and profile least squares method, we develop a semiparametric profile method to estimate the coefficient functions. A gradient-descent iterative algorithm is employed to enhance computation speed and estimation accuracy. The main results show that the resulting parameter estimator enjoys asymptotic normality with a <span><math><msqrt><mrow><mi>N</mi><mi>T</mi></mrow></msqrt></math></span> convergence rate and the nonparametric estimator is asymptotically normal with a nonparametric convergence rate <span><math><msqrt><mrow><mi>N</mi><mi>T</mi><mi>h</mi></mrow></msqrt></math></span> when both the number of cross-sectional units <span><math><mi>N</mi></math></span> and the length of time series <span><math><mi>T</mi></math></span> go to infinity, under some regularity conditions. Monte Carlo simulations are carried out to evaluate the proposed methods, and an application to cigarette demand is investigated for illustration.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"236 ","pages":"Article 106244"},"PeriodicalIF":0.8,"publicationDate":"2024-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142416590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A family of discrete maximum-entropy distributions","authors":"David J. Hessen","doi":"10.1016/j.jspi.2024.106243","DOIUrl":"10.1016/j.jspi.2024.106243","url":null,"abstract":"<div><div>In this paper, a family of maximum-entropy distributions with general discrete support is derived. Members of the family are distinguished by the number of specified non-central moments. In addition, a subfamily of discrete symmetric distributions is defined. Attention is paid to maximum likelihood estimation of the parameters of any member of the general family. It is shown that the parameters of any special case with infinite support can be estimated using a conditional distribution given a finite subset of the total support. In an empirical data example, the procedures proposed are demonstrated.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"236 ","pages":"Article 106243"},"PeriodicalIF":0.8,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142416588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Risk minimization using robust experimental or sampling designs and mixture of designs","authors":"Ejub Talovic, Yves Tillé","doi":"10.1016/j.jspi.2024.106241","DOIUrl":"10.1016/j.jspi.2024.106241","url":null,"abstract":"<div><div>For both experimental and sampling designs, the efficiency or balance of designs has been extensively studied. There are many ways to incorporate auxiliary information into designs. However, when we use balanced designs to decrease the variance due to an auxiliary variable, the variance may increase due to an effect which we define as lack of robustness. This robustness can be written as the largest eigenvalue of the variance operator of a sampling or experimental design. If this eigenvalue is large, then it might induce a large variance in the Horvitz–Thompson estimator of the total. We calculate or estimate the largest eigenvalue of the most common designs. We determine lower, upper bounds and approximations of this eigenvalue for different designs. Then, we compare these results with simulations that show the trade-off between efficiency and robustness. Those results can be used to determine the proper choice of designs for experiments such as clinical trials or surveys. We also propose a new and simple method for mixing two sampling designs, which allows to use a tuning parameter between two sampling designs. This method is then compared to the Gram–Schmidt walk design, which also governs the trade-off between robustness and efficiency. A set of simulation studies shows that our method of mixture gives similar results to the Gram–Schmidt walk design while having an interpretable variance matrix.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"236 ","pages":"Article 106241"},"PeriodicalIF":0.8,"publicationDate":"2024-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142416589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimal s-level fractional factorial designs under baseline parameterization","authors":"Zhaohui Yan, Shengli Zhao","doi":"10.1016/j.jspi.2024.106242","DOIUrl":"10.1016/j.jspi.2024.106242","url":null,"abstract":"<div><div>In this paper, we explore the minimum aberration criterion for <span><math><mi>s</mi></math></span>-level designs under baseline parameterization, called BP-MA. We give a complete search method and an incomplete search method to obtain the BP-MA (or nearly BP-MA) designs. The methodology has no restriction on <span><math><mi>s</mi></math></span>, the levels of the factors. The catalogues of (nearly) BP-MA designs with <span><math><mrow><mi>s</mi><mo>=</mo><mn>2</mn><mo>,</mo><mn>3</mn><mo>,</mo><mn>4</mn><mo>,</mo><mn>5</mn></mrow></math></span> levels are provided.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"236 ","pages":"Article 106242"},"PeriodicalIF":0.8,"publicationDate":"2024-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142357419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shifted BH methods for controlling false discovery rate in multiple testing of the means of correlated normals against two-sided alternatives","authors":"Sanat K. Sarkar, Shiyu Zhang","doi":"10.1016/j.jspi.2024.106238","DOIUrl":"10.1016/j.jspi.2024.106238","url":null,"abstract":"<div><div>For simultaneous testing of multivariate normal means with known correlation matrix against two-sided alternatives, this paper introduces new methods with proven finite-sample control of false discovery rate. The methods are obtained by shifting each <span><math><mi>p</mi></math></span>-value to the left and considering a Benjamini–Hochberg-type linear step-up procedure based on these shifted <span><math><mi>p</mi></math></span>-values. The amount of shift for each <span><math><mi>p</mi></math></span>-value is appropriately determined from the correlation matrix to achieve the desired false discovery rate control. Simulation studies and real-data application show favorable performances of the proposed methods when compared with relevant competitors.</div></div>","PeriodicalId":50039,"journal":{"name":"Journal of Statistical Planning and Inference","volume":"236 ","pages":"Article 106238"},"PeriodicalIF":0.8,"publicationDate":"2024-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142323239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}