{"title":"Edgeworth expansions for multivariate random sums","authors":"Farrukh Javed , Nicola Loperfido , Stepan Mazur","doi":"10.1016/j.ecosta.2021.04.005","DOIUrl":"10.1016/j.ecosta.2021.04.005","url":null,"abstract":"<div><p>The sum of a random number of independent and identically distributed random vectors has a distribution which is not analytically tractable, in the general case. The problem has been addressed by means of asymptotic approximations embedding the number of summands in a stochastically increasing sequence. Another approach relies on fitting flexible and tractable parametric, multivariate distributions, as for example finite mixtures. Both approaches are investigated within the framework of Edgeworth expansions. A general formula for the fourth-order cumulants of the random sum of independent and identically distributed random vectors is derived and it is shown that the above mentioned asymptotic approach does not necessarily lead to valid asymptotic normal approximations. The problem is addressed by means of Edgeworth expansions. Both theoretical and empirical results suggest that mixtures of two multivariate normal distributions with proportional covariance matrices satisfactorily fit data generated from random sums where the counting random variable and the random summands are Poisson and multivariate skew-normal, respectively.</p></div>","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"31 ","pages":"Pages 66-80"},"PeriodicalIF":2.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86545405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatial-Temporal Analysis of Multi-Subject Functional Magnetic Resonance Imaging Data","authors":"Tingting Zhang , Minh Pham , Guofen Yan , Yaotian Wang , Sara Medina-DeVilliers , James A. Coan","doi":"10.1016/j.ecosta.2021.02.006","DOIUrl":"10.1016/j.ecosta.2021.02.006","url":null,"abstract":"<div><p>Functional magnetic resonance imaging (fMRI) is one of the most popular neuroimaging technologies used in human brain studies. However, fMRI data analysis faces several challenges, including intensive computation due to the massive data size and large estimation errors due to a low signal-to-noise ratio of the data. A new statistical model and a computational algorithm are proposed to address these challenges. Specifically, a new multi-subject general linear model is built for stimulus-evoked fMRI data. The new model assumes that brain responses to stimuli at different brain regions of various subjects fall into a low-rank structure and can be represented by a few principal functions. Therefore, the new model enables combining data information across subjects and regions to evaluate subject-specific and region-specific brain activity. Two optimization functions and a new fast-to-compute algorithm are developed to analyze multi-subject stimulus-evoked fMRI data and address two research questions of a broad interest in psychology: evaluating every subject’s brain responses to different stimuli and identifying brain regions responsive to the stimuli. Both simulation and real data analysis are conducted to show that the new method can outperform existing methods by providing more efficient estimates of brain activity.</p></div>","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"31 ","pages":"Pages 117-129"},"PeriodicalIF":2.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75841092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimating the Output Gap with High-Dimensional Time Series","authors":"A. Giovannelli, T. Proietti","doi":"10.1016/j.ecosta.2024.06.004","DOIUrl":"https://doi.org/10.1016/j.ecosta.2024.06.004","url":null,"abstract":"The output gap measures the deviation of observed output from its potential, thereby defining imbalances in the real economy that affect utilization of resources and price inflation. A novel estimator of the output gap is proposed. It is based on a dynamic factor model that extracts from a high-dimensional set of time series the common component of a stationary transformation of the individual series. The latter results from the application of a nonlinear gap filter, such that for each of the individual time series the gap filter removes from the current value the historical local maximum, which in turn defines the potential. The smooth generalized principal components are extracted and the resulting common components are aggregated into a global output gap measure. An application is presented dealing with the U.S. industrial sector, where the proposed measure is constructed using the disaggregated market and industry groups time series. An evaluation of its external validity is conducted in comparison to alternative measures.","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"27 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimal Covariance Estimation for Condition Number Loss in the Spiked model","authors":"David Donoho, Behrooz Ghorbani","doi":"10.1016/j.ecosta.2024.04.004","DOIUrl":"https://doi.org/10.1016/j.ecosta.2024.04.004","url":null,"abstract":"Consider estimation of the covariance matrix under relative condition number loss , where is the condition number of matrix , and and are the estimated and theoretical covariance matrices. Recent advances in understanding the so-called for , are used here to derive a nonlinear shrinker which is asymptotically optimal among orthogonally-covariant procedures. These advances apply in an asymptotic setting, where the number of variables is comparable to the number of observations . The form of the optimal nonlinearity depends on the aspect ratio of the data matrix and on the top eigenvalue of . For , even dependence on the top eigenvalue can be avoided. The optimal shrinker has three notable properties. First, when is moderately large, it shrinks even very large eigenvalues substantially, by a factor . Second, even for moderate , certain highly statistically significant eigencomponents will be completely suppressed.Third, when is very large, the optimal covariance estimator can be purely diagonal, despite the top theoretical eigenvalue being large and the empirical eigenvalues being highly statistically significant. This aligns with practitioner experience. Alternatively, certain non-optimal intuitively reasonable procedures can have small worst-case relative regret - the simplest being generalized soft thresholding having threshold at the bulk edge and slope above the bulk. For this has at most a few percent relative regret.","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"10 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141063561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Factor models and high-dimensional time series: A tribute to Marco Lippi on the occasion of his 80th birthday","authors":"Matteo Barigozzi, Manfred Deistler, Marc Hallin","doi":"10.1016/j.ecosta.2024.04.005","DOIUrl":"https://doi.org/10.1016/j.ecosta.2024.04.005","url":null,"abstract":"","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"22 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140885090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Loss-based prior for the degrees of freedom of the Wishart distribution","authors":"Luca Rossini, Cristiano Villa, Sotiris Prevenas, Rachel McCrea","doi":"10.1016/j.ecosta.2024.04.001","DOIUrl":"https://doi.org/10.1016/j.ecosta.2024.04.001","url":null,"abstract":"Motivated by the proliferation of extensive macroeconomic and health datasets necessitating accurate forecasts, a novel approach is introduced to address Vector Autoregressive (VAR) models. This approach employs the global-local shrinkage-Wishart prior. Unlike conventional VAR models, where degrees of freedom are predetermined to be equivalent to the size of the variable plus one or equal to zero, the proposed method integrates a hyperprior for the degrees of freedom to account for the uncertainty in the parameter values. Specifically, a loss-based prior is derived to leverage information regarding the data-inherent degrees of freedom. The efficacy of the proposed prior is demonstrated in a multivariate setting both for forecasting macroeconomic data, and Dengue infection data.","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"2 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140800026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predictability hidden by Anomalous Observations in Financial Data","authors":"Lorenzo Camponovo, Olivier Scaillet, Fabio Trojani","doi":"10.1016/j.ecosta.2024.03.004","DOIUrl":"https://doi.org/10.1016/j.ecosta.2024.03.004","url":null,"abstract":"Testing procedures for predictive regressions involving lagged autoregressive variables produce a suboptimal inference in presence of minor violations of ideal assumptions. A novel testing framework based on resampling methods that exhibits resistance to such violations and is reliable also in models with nearly integrated regressors is introduced. To achieve this objective, the robustness of resampling procedures for time series are defined by deriving new formulas quantifying their quantile breakdown point. For both the block bootstrap and subsampling, these formulas show a very low quantile breakdown point. To overcome this problem, a robust and fast resampling scheme applicable to a broad class of time series settings is proposed. This framework is also suitable for multi-predictor settings, particularly when the data only approximately conform to a predictive regression model. Monte Carlo simulations provide substantial evidence for the significant improvements offered by this robust approach. Using the proposed resampling methods, empirical coverages and rejection frequencies are very close to the nominal levels, both in the presence and absence of small deviations from the ideal model assumptions. Empirical analysis reveals robust evidence of market return predictability, previously obscured by anomalous observations, both in- and out-of-sample.","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"47 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140583124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GMM with Nearly-Weak Identification","authors":"Bertille Antoine , Eric Renault","doi":"10.1016/j.ecosta.2021.10.010","DOIUrl":"10.1016/j.ecosta.2021.10.010","url":null,"abstract":"<div><p><span>A unified framework for the asymptotic distributional theory of GMM with nearly-weak instruments is provided. It generalizes a previously proposed framework in two main directions: first, by allowing instruments’ weakness to be less severe in the sense that some GMM estimators remain consistent, while featuring low precision; and second, by relaxing the so-called ”separability assumption” and considering generalized versions of local-to-zero asymptotics without partitioning </span><em>a priori</em><span> the vector of parameters in two subvectors converging at different rates. It is shown how to define directions in the parameter space whose estimators come with different rates of convergence characterized by the Moore-Penrose inverse of the Jacobian matrix of the moments. Furthermore, regularity conditions are provided to ensure standard asymptotic inference for these estimated directions.</span></p></div>","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"30 ","pages":"Pages 36-59"},"PeriodicalIF":1.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79558705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fuzzy k-Means: history and applications","authors":"Maria Brigida Ferraro","doi":"10.1016/j.ecosta.2021.11.008","DOIUrl":"10.1016/j.ecosta.2021.11.008","url":null,"abstract":"<div><p><span>The fuzzy approach to clustering arises to cope with situations where objects have not a clear assignment. Unlike the hard/standard approach where each object can only belong to exactly one cluster, in a fuzzy setting, the assignment is soft; that is, each object is assigned to all clusters with certain membership degrees<span> varying in the unit interval. The best known fuzzy clustering algorithm is the fuzzy </span></span><span><math><mi>k</mi></math></span>-means (F<span><math><mi>k</mi></math></span>M), or fuzzy <span><math><mi>c</mi></math></span>-means. It is a generalization of the classical <span><math><mi>k</mi></math></span>-means method. Starting from the F<span><math><mi>k</mi></math></span><span>M algorithm, and in more than 40 years, several variants have been proposed. The peculiarity of such different proposals depends on the type of data to deal with, and on the cluster shape. The aim is to show fuzzy clustering alternatives to manage different kinds of data, ranging from numeric, categorical or mixed data to more complex data structures, such as interval-valued, fuzzy-valued or functional data, together with some robust methods. Furthermore, the case of two-mode clustering is illustrated in a fuzzy setting.</span></p></div>","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"30 ","pages":"Pages 110-123"},"PeriodicalIF":1.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82979424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrated nested Laplace approximations for threshold stochastic volatility models","authors":"P. de Zea Bermudez , J. Miguel Marín , Håvard Rue , Helena Veiga","doi":"10.1016/j.ecosta.2021.08.006","DOIUrl":"10.1016/j.ecosta.2021.08.006","url":null,"abstract":"<div><p><span>The aim is to implement the integrated nested Laplace approximations<span> (INLA), known to be very fast and efficient, for estimating the parameters of the threshold stochastic volatility (TSV) model. INLA replaces Markov chain Monte Carlo (MCMC) simulations with accurate deterministic approximations. Weakly informative proper priors are used, as well as Penalizing Complexity (PC) priors. The simulation results favor the use of PC priors, specially when the sample size varies from small to moderate. For these sample sizes, PC priors provide more accurate estimates of the model parameters. However, as sample size increases, both types of priors lead to similar estimates of the parameters. The estimation method is applied to six series of returns, including stock market, commodity and cryptocurrency returns, and its performance is assessed, by means of in-sample and out-of-sample approaches; the forecasting of one-day-ahead volatilities is also carried out. The empirical results support that the TSV is the model that generally fits the best to the series of returns and most of the times ranks the first in terms of forecasting one-day-ahead volatility, when compared to the symmetric </span></span>stochastic volatility model.</p></div>","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"30 ","pages":"Pages 15-35"},"PeriodicalIF":1.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78259920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}