{"title":"Improvements on scalable stochastic Bayesian inference methods for multivariate Hawkes process","authors":"Alex Ziyu Jiang, Abel Rodriguez","doi":"10.1007/s11222-024-10392-x","DOIUrl":"https://doi.org/10.1007/s11222-024-10392-x","url":null,"abstract":"<p>Multivariate Hawkes Processes (MHPs) are a class of point processes that can account for complex temporal dynamics among event sequences. In this work, we study the accuracy and computational efficiency of three classes of algorithms which, while widely used in the context of Bayesian inference, have rarely been applied in the context of MHPs: stochastic gradient expectation-maximization, stochastic gradient variational inference and stochastic gradient Langevin Monte Carlo. An important contribution of this paper is a novel approximation to the likelihood function that allows us to retain the computational advantages associated with conjugate settings while reducing approximation errors associated with the boundary effects. The comparisons are based on various simulated scenarios as well as an application to the study of risk dynamics in the Standard & Poor’s 500 intraday index prices among its 11 sectors.\u0000</p>","PeriodicalId":22058,"journal":{"name":"Statistics and Computing","volume":"2018 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140005135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Maximum likelihood estimation of log-concave densities on tree space","authors":"Yuki Takazawa, Tomonari Sei","doi":"10.1007/s11222-024-10400-0","DOIUrl":"https://doi.org/10.1007/s11222-024-10400-0","url":null,"abstract":"<p>Phylogenetic trees are key data objects in biology, and the method of phylogenetic reconstruction has been highly developed. The space of phylogenetic trees is a nonpositively curved metric space. Recently, statistical methods to analyze samples of trees on this space are being developed utilizing this property. Meanwhile, in Euclidean space, the log-concave maximum likelihood method has emerged as a new nonparametric method for probability density estimation. In this paper, we derive a sufficient condition for the existence and uniqueness of the log-concave maximum likelihood estimator on tree space. We also propose an estimation algorithm for one and two dimensions. Since various factors affect the inferred trees, it is difficult to specify the distribution of a sample of trees. The class of log-concave densities is nonparametric, and yet the estimation can be conducted by the maximum likelihood method without selecting hyperparameters. We compare the estimation performance with a previously developed kernel density estimator numerically. In our examples where the true density is log-concave, we demonstrate that our estimator has a smaller integrated squared error when the sample size is large. We also conduct numerical experiments of clustering using the Expectation-Maximization algorithm and compare the results with k-means++ clustering using Fréchet mean.</p>","PeriodicalId":22058,"journal":{"name":"Statistics and Computing","volume":"10 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2024-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139947601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Do applied statisticians prefer more randomness or less? Bootstrap or Jackknife?","authors":"Yannis G. Yatracos","doi":"10.1007/s11222-024-10388-7","DOIUrl":"https://doi.org/10.1007/s11222-024-10388-7","url":null,"abstract":"<p>Bootstrap and Jackknife estimates, <span>(T_{n,B}^*)</span> and <span>(T_{n,J},)</span> respectively, of a population parameter <span>(theta )</span> are both used in statistical computations; <i>n</i> is the sample size, <i>B</i> is the number of Bootstrap samples. For any <span>(n_0)</span> and <span>(B_0,)</span> Bootstrap samples do not add new information about <span>(theta )</span> being observations from the original sample and when <span>(B_0<infty ,)</span> <span>(T_{n_0,B_0}^*)</span> includes also resampling variability, an additional source of uncertainty not affecting <span>(T_{n_0, J}.)</span> These are neglected in theoretical papers with results for the utopian <span>(T_{n, infty }^*, )</span> that do not hold for <span>(B<infty .)</span> The consequence is that <span>(T^*_{n_0, B_0})</span> is expected to have larger mean squared error (MSE) than <span>(T_{n_0,J},)</span> namely <span>(T_{n_0,B_0}^*)</span> is inadmissible. The amount of inadmissibility may be very large when populations’ parameters, e.g. the variance, are unbounded and/or with big data. A palliating remedy is increasing <i>B</i>, the larger the better, but the MSEs ordering remains unchanged for <span>(B<infty .)</span> This is confirmed theoretically when <span>(theta )</span> is the mean of a population, and is observed in the estimated total MSE for linear regression coefficients. In the latter, the chance the estimated total MSE with <span>(T_{n,B}^*)</span> improves that with <span>(T_{n,J})</span> decreases to 0 as <i>B</i> increases.\u0000</p>","PeriodicalId":22058,"journal":{"name":"Statistics and Computing","volume":"54 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139947598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Forward stability and model path selection","authors":"Nicholas Kissel, Lucas Mentch","doi":"10.1007/s11222-024-10395-8","DOIUrl":"https://doi.org/10.1007/s11222-024-10395-8","url":null,"abstract":"<p>Most scientific publications follow the familiar recipe of (i) obtain data, (ii) fit a model, and (iii) comment on the scientific relevance of the effects of particular covariates in that model. This approach, however, ignores the fact that there may exist a multitude of similarly-accurate models in which the implied effects of individual covariates may be vastly different. This problem of finding an entire collection of plausible models has also received relatively little attention in the statistics community, with nearly all of the proposed methodologies being narrowly tailored to a particular model class and/or requiring an exhaustive search over all possible models, making them largely infeasible in the current big data era. This work develops the idea of forward stability and proposes a novel, computationally-efficient approach to finding collections of accurate models we refer to as model path selection (MPS). MPS builds up a plausible model collection via a forward selection approach and is entirely agnostic to the model class and loss function employed. The resulting model collection can be displayed in a simple and intuitive graphical fashion, easily allowing practitioners to visualize whether some covariates can be swapped for others with minimal loss.</p>","PeriodicalId":22058,"journal":{"name":"Statistics and Computing","volume":"41 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139927157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The minimum covariance determinant estimator for interval-valued data","authors":"Wan Tian, Zhongfeng Qin","doi":"10.1007/s11222-024-10386-9","DOIUrl":"https://doi.org/10.1007/s11222-024-10386-9","url":null,"abstract":"<p>Effective estimation of covariance matrices is crucial for statistical analyses and applications. In this paper, we focus on the robust estimation of covariance matrix for interval-valued data in low and moderately high dimensions. In the low-dimensional scenario, we extend the Minimum Covariance Determinant (MCD) estimator to interval-valued data. We derive an iterative algorithm for computing this estimator, demonstrate its convergence, and theoretically establish that it retains the high breakdown-point property of the MCD estimator. Further, we propose a projection-based estimator and a regularization-based estimator to extend the MCD estimator to moderately high-dimensional settings, respectively. We propose efficient iterative algorithms for solving these two estimators and demonstrate their convergence properties. We conduct extensive simulation studies and real data analysis to validate the finite sample properties of these proposed estimators.\u0000</p>","PeriodicalId":22058,"journal":{"name":"Statistics and Computing","volume":"11 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2024-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139902728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Clustering longitudinal ordinal data via finite mixture of matrix-variate distributions","authors":"Francesco Amato, Julien Jacques, Isabelle Prim-Allaz","doi":"10.1007/s11222-024-10390-z","DOIUrl":"https://doi.org/10.1007/s11222-024-10390-z","url":null,"abstract":"<p>In social sciences, studies are often based on questionnaires asking participants to express ordered responses several times over a study period. We present a model-based clustering algorithm for such longitudinal ordinal data. Assuming that an ordinal variable is the discretization of an underlying latent continuous variable, the model relies on a mixture of matrix-variate normal distributions, accounting simultaneously for within- and between-time dependence structures. The model is thus able to concurrently model the heterogeneity, the association among the responses and the temporal dependence structure. An EM algorithm is developed and presented for parameters estimation, and approaches to deal with some arising computational challenges are outlined. An evaluation of the model through synthetic data shows its estimation abilities and its advantages when compared to competitors. A real-world application concerning changes in eating behaviors during the Covid-19 pandemic period in France will be presented.\u0000</p>","PeriodicalId":22058,"journal":{"name":"Statistics and Computing","volume":"39 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2024-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139902724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enmsp: an elastic-net multi-step screening procedure for high-dimensional regression","authors":"Yushan Xue, Jie Ren, Bin Yang","doi":"10.1007/s11222-024-10394-9","DOIUrl":"https://doi.org/10.1007/s11222-024-10394-9","url":null,"abstract":"<p>To improve the estimation efficiency of high-dimensional regression problems, penalized regularization is routinely used. However, accurately estimating the model remains challenging, particularly in the presence of correlated effects, wherein irrelevant covariates exhibit strong correlation with relevant ones. This situation, referred to as correlated data, poses additional complexities for model estimation. In this paper, we propose the elastic-net multi-step screening procedure (EnMSP), an iterative algorithm designed to recover sparse linear models in the context of correlated data. EnMSP uses a small repeated penalty strategy to identify truly relevant covariates in a few iterations. Specifically, in each iteration, EnMSP enhances the adaptive lasso method by adding a weighted <span>(l_2)</span> penalty, which improves the selection of relevant covariates. The method is shown to select the true model and achieve the <span>(l_2)</span>-norm error bound under certain conditions. The effectiveness of EnMSP is demonstrated through numerical comparisons and applications in financial data.</p>","PeriodicalId":22058,"journal":{"name":"Statistics and Computing","volume":"26 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2024-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139753860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bayesian parameter inference for partially observed stochastic volterra equations","authors":"Ajay Jasra, Hamza Ruzayqat, Amin Wu","doi":"10.1007/s11222-024-10389-6","DOIUrl":"https://doi.org/10.1007/s11222-024-10389-6","url":null,"abstract":"<p>In this article we consider Bayesian parameter inference for a type of partially observed stochastic Volterra equation (SVE). SVEs are found in many areas such as physics and mathematical finance. In the latter field they can be used to represent long memory in unobserved volatility processes. In many cases of practical interest, SVEs must be time-discretized and then parameter inference is based upon the posterior associated to this time-discretized process. Based upon recent studies on time-discretization of SVEs (e.g. Richard et al. in Stoch Proc Appl 141:109–138, 2021) we use Euler–Maruyama methods for the afore-mentioned discretization. We then show how multilevel Markov chain Monte Carlo (MCMC) methods (Jasra et al. in SIAM J Sci Comp 40:A887–A902, 2018) can be applied in this context. In the examples we study, we give a proof that shows that the cost to achieve a mean square error (MSE) of <span>(mathcal {O}(epsilon ^2))</span>, <span>(epsilon >0)</span>, is <span>(mathcal {O}(epsilon ^{-tfrac{4}{2H+1}}))</span>, where <i>H</i> is the Hurst parameter. If one uses a single level MCMC method then the cost is <span>(mathcal {O}(epsilon ^{-tfrac{2(2H+3)}{2H+1}}))</span> to achieve the same MSE. We illustrate these results in the context of state-space and stochastic volatility models, with the latter applied to real data.</p>","PeriodicalId":22058,"journal":{"name":"Statistics and Computing","volume":"19 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2024-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139754209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Subsampling approach for least squares fitting of semi-parametric accelerated failure time models to massive survival data","authors":"Zehan Yang, HaiYing Wang, Jun Yan","doi":"10.1007/s11222-024-10391-y","DOIUrl":"https://doi.org/10.1007/s11222-024-10391-y","url":null,"abstract":"<p>Massive survival data are increasingly common in many research fields, and subsampling is a practical strategy for analyzing such data. Although optimal subsampling strategies have been developed for Cox models, little has been done for semiparametric accelerated failure time (AFT) models due to the challenges posed by non-smooth estimating functions for the regression coefficients. We develop optimal subsampling algorithms for fitting semi-parametric AFT models using the least-squares approach. By efficiently estimating the slope matrix of the non-smooth estimating functions using a resampling approach, we construct optimal subsampling probabilities for the observations. For feasible point and interval estimation of the unknown coefficients, we propose a two-step method, drawing multiple subsamples in the second stage to correct for overestimation of the variance in higher censoring scenarios. We validate the performance of our estimators through a simulation study that compares single and multiple subsampling methods and apply the methods to analyze the survival time of lymphoma patients in the Surveillance, Epidemiology, and End Results program.</p>","PeriodicalId":22058,"journal":{"name":"Statistics and Computing","volume":"168-169 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2024-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139753762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"COMBSS: best subset selection via continuous optimization","authors":"","doi":"10.1007/s11222-024-10387-8","DOIUrl":"https://doi.org/10.1007/s11222-024-10387-8","url":null,"abstract":"<h3>Abstract</h3> <p>The problem of best subset selection in linear regression is considered with the aim to find a fixed size subset of features that best fits the response. This is particularly challenging when the total available number of features is very large compared to the number of data samples. Existing optimal methods for solving this problem tend to be slow while fast methods tend to have low accuracy. Ideally, new methods perform best subset selection faster than existing optimal methods but with comparable accuracy, or, being more accurate than methods of comparable computational speed. Here, we propose a novel continuous optimization method that identifies a subset solution path, a small set of models of varying size, that consists of candidates for the single best subset of features, that is optimal in a specific sense in linear regression. Our method turns out to be fast, making the best subset selection possible when the number of features is well in excess of thousands. Because of the outstanding overall performance, framing the best subset selection challenge as a continuous optimization problem opens new research directions for feature extraction for a large variety of regression models. </p>","PeriodicalId":22058,"journal":{"name":"Statistics and Computing","volume":"16 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2024-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139753751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}