{"title":"Optimal Covariance Estimation for Condition Number Loss in the Spiked model","authors":"David Donoho, Behrooz Ghorbani","doi":"10.1016/j.ecosta.2024.04.004","DOIUrl":"https://doi.org/10.1016/j.ecosta.2024.04.004","url":null,"abstract":"Consider estimation of the covariance matrix under relative condition number loss , where is the condition number of matrix , and and are the estimated and theoretical covariance matrices. Recent advances in understanding the so-called for , are used here to derive a nonlinear shrinker which is asymptotically optimal among orthogonally-covariant procedures. These advances apply in an asymptotic setting, where the number of variables is comparable to the number of observations . The form of the optimal nonlinearity depends on the aspect ratio of the data matrix and on the top eigenvalue of . For , even dependence on the top eigenvalue can be avoided. The optimal shrinker has three notable properties. First, when is moderately large, it shrinks even very large eigenvalues substantially, by a factor . Second, even for moderate , certain highly statistically significant eigencomponents will be completely suppressed.Third, when is very large, the optimal covariance estimator can be purely diagonal, despite the top theoretical eigenvalue being large and the empirical eigenvalues being highly statistically significant. This aligns with practitioner experience. Alternatively, certain non-optimal intuitively reasonable procedures can have small worst-case relative regret - the simplest being generalized soft thresholding having threshold at the bulk edge and slope above the bulk. 
For this has at most a few percent relative regret.","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"10 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141063561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
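The generalized soft-thresholding family mentioned at the end of this abstract can be sketched as follows. This is an illustrative implementation under simplifying assumptions (zero-mean data with unit noise level, Marchenko-Pastur bulk edge (1 + √γ)²), not the paper's optimal nonlinearity; the function name and the free `slope` parameter are hypothetical.

```python
import numpy as np

def soft_threshold_shrinker(X, slope=1.0):
    """Generalized soft thresholding of sample covariance eigenvalues.

    Empirical eigenvalues inside the Marchenko-Pastur bulk are flattened
    to the assumed unit noise level; eigenvalues above the bulk edge
    (1 + sqrt(gamma))^2 are shrunk linearly with the given slope.
    """
    n, p = X.shape
    gamma = p / n                          # aspect ratio of the data matrix
    bulk_edge = (1 + np.sqrt(gamma)) ** 2  # MP upper edge for unit noise
    S = X.T @ X / n                        # sample covariance (zero-mean data)
    evals, evecs = np.linalg.eigh(S)
    shrunk = np.where(evals > bulk_edge, 1 + slope * (evals - bulk_edge), 1.0)
    return (evecs * shrunk) @ evecs.T      # V diag(shrunk) V^T

# Pure-noise data: nearly all eigenvalues fall inside the bulk and are
# flattened, so the estimate is close to the identity.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 100))
Sigma_hat = soft_threshold_shrinker(X)
```

Note that every empirical eigenvalue at or below the bulk edge is mapped to the same value, which is what suppresses "statistically significant" but unexploitable eigencomponents.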
{"title":"Factor models and high-dimensional time series: A tribute to Marco Lippi on the occasion of his 80th birthday","authors":"Matteo Barigozzi, Manfred Deistler, Marc Hallin","doi":"10.1016/j.ecosta.2024.04.005","DOIUrl":"https://doi.org/10.1016/j.ecosta.2024.04.005","url":null,"abstract":"","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"22 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140885090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Loss-based prior for the degrees of freedom of the Wishart distribution","authors":"Luca Rossini, Cristiano Villa, Sotiris Prevenas, Rachel McCrea","doi":"10.1016/j.ecosta.2024.04.001","DOIUrl":"https://doi.org/10.1016/j.ecosta.2024.04.001","url":null,"abstract":"Motivated by the proliferation of extensive macroeconomic and health datasets necessitating accurate forecasts, a novel approach is introduced to address Vector Autoregressive (VAR) models. This approach employs the global-local shrinkage-Wishart prior. Unlike conventional VAR models, where degrees of freedom are predetermined to be equivalent to the size of the variable plus one or equal to zero, the proposed method integrates a hyperprior for the degrees of freedom to account for the uncertainty in the parameter values. Specifically, a loss-based prior is derived to leverage information regarding the data-inherent degrees of freedom. The efficacy of the proposed prior is demonstrated in a multivariate setting both for forecasting macroeconomic data, and Dengue infection data.","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"2 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140800026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predictability hidden by Anomalous Observations in Financial Data","authors":"Lorenzo Camponovo, Olivier Scaillet, Fabio Trojani","doi":"10.1016/j.ecosta.2024.03.004","DOIUrl":"https://doi.org/10.1016/j.ecosta.2024.03.004","url":null,"abstract":"Testing procedures for predictive regressions involving lagged autoregressive variables produce a suboptimal inference in presence of minor violations of ideal assumptions. A novel testing framework based on resampling methods that exhibits resistance to such violations and is reliable also in models with nearly integrated regressors is introduced. To achieve this objective, the robustness of resampling procedures for time series are defined by deriving new formulas quantifying their quantile breakdown point. For both the block bootstrap and subsampling, these formulas show a very low quantile breakdown point. To overcome this problem, a robust and fast resampling scheme applicable to a broad class of time series settings is proposed. This framework is also suitable for multi-predictor settings, particularly when the data only approximately conform to a predictive regression model. Monte Carlo simulations provide substantial evidence for the significant improvements offered by this robust approach. Using the proposed resampling methods, empirical coverages and rejection frequencies are very close to the nominal levels, both in the presence and absence of small deviations from the ideal model assumptions. 
Empirical analysis reveals robust evidence of market return predictability, previously obscured by anomalous observations, both in- and out-of-sample.","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"47 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140583124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
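For context, a minimal circular block bootstrap, one of the standard time series resampling schemes whose quantile breakdown point the paper analyzes, can be sketched as follows. The paper's robust, fast scheme itself is not reproduced here; all names and parameter choices below are illustrative.

```python
import numpy as np

def circular_block_bootstrap(x, block_len, n_boot, stat=np.mean, seed=0):
    """Circular block bootstrap: resample whole blocks (wrapping around
    the series end) to preserve short-range dependence, then recompute
    the statistic on each pseudo-series."""
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = -(-n // block_len)          # ceil(n / block_len)
    draws = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n, size=n_blocks)
        idx = (starts[:, None] + np.arange(block_len)).ravel()[:n] % n
        draws[b] = stat(x[idx])
    return draws

# Bootstrap distribution of the mean of an AR(1)-type series.
rng = np.random.default_rng(1)
e = rng.standard_normal(500)
x = np.empty(500)
x[0] = e[0]
for t in range(1, 500):
    x[t] = 0.5 * x[t - 1] + e[t]
boot = circular_block_bootstrap(x, block_len=25, n_boot=500)
ci = np.quantile(boot, [0.025, 0.975])   # percentile confidence interval
```

Because whole blocks are resampled, a single extreme block can dominate many pseudo-series, which is the intuition behind the low quantile breakdown point derived in the paper.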
{"title":"GMM with Nearly-Weak Identification","authors":"Bertille Antoine , Eric Renault","doi":"10.1016/j.ecosta.2021.10.010","DOIUrl":"10.1016/j.ecosta.2021.10.010","url":null,"abstract":"<div><p><span>A unified framework for the asymptotic distributional theory of GMM with nearly-weak instruments is provided. It generalizes a previously proposed framework in two main directions: first, by allowing instruments’ weakness to be less severe in the sense that some GMM estimators remain consistent, while featuring low precision; and second, by relaxing the so-called ”separability assumption” and considering generalized versions of local-to-zero asymptotics without partitioning </span><em>a priori</em><span> the vector of parameters in two subvectors converging at different rates. It is shown how to define directions in the parameter space whose estimators come with different rates of convergence characterized by the Moore-Penrose inverse of the Jacobian matrix of the moments. Furthermore, regularity conditions are provided to ensure standard asymptotic inference for these estimated directions.</span></p></div>","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"30 ","pages":"Pages 36-59"},"PeriodicalIF":1.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79558705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fuzzy k-Means: history and applications","authors":"Maria Brigida Ferraro","doi":"10.1016/j.ecosta.2021.11.008","DOIUrl":"10.1016/j.ecosta.2021.11.008","url":null,"abstract":"<div><p><span>The fuzzy approach to clustering arises to cope with situations where objects have not a clear assignment. Unlike the hard/standard approach where each object can only belong to exactly one cluster, in a fuzzy setting, the assignment is soft; that is, each object is assigned to all clusters with certain membership degrees<span> varying in the unit interval. The best known fuzzy clustering algorithm is the fuzzy </span></span><span><math><mi>k</mi></math></span>-means (F<span><math><mi>k</mi></math></span>M), or fuzzy <span><math><mi>c</mi></math></span>-means. It is a generalization of the classical <span><math><mi>k</mi></math></span>-means method. Starting from the F<span><math><mi>k</mi></math></span><span>M algorithm, and in more than 40 years, several variants have been proposed. The peculiarity of such different proposals depends on the type of data to deal with, and on the cluster shape. The aim is to show fuzzy clustering alternatives to manage different kinds of data, ranging from numeric, categorical or mixed data to more complex data structures, such as interval-valued, fuzzy-valued or functional data, together with some robust methods. 
Furthermore, the case of two-mode clustering is illustrated in a fuzzy setting.</span></p></div>","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"30 ","pages":"Pages 110-123"},"PeriodicalIF":1.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82979424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
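The classical FkM iteration referenced above can be sketched as follows, with fuzzifier m controlling the softness of the memberships; the initialization and stopping details are illustrative choices, not prescribed by the survey.

```python
import numpy as np

def fuzzy_k_means(X, k, m=2.0, n_iter=100, tol=1e-6, seed=0, init_centers=None):
    """Fuzzy k-means: each point gets membership degrees in [0, 1] that
    sum to one across the k clusters; m > 1 is the fuzzifier (m -> 1
    approaches hard k-means assignments)."""
    rng = np.random.default_rng(seed)
    if init_centers is None:
        init_centers = X[rng.choice(len(X), size=k, replace=False)]
    centers = np.asarray(init_centers, dtype=float)
    U = None
    for _ in range(n_iter):
        # Membership update: u_ij proportional to d_ij^(-2/(m-1)).
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
        U_new = 1.0 / d2 ** (1.0 / (m - 1.0))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if U is not None and np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
        # Center update: membership-weighted means.
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return centers, U
```

Points near a cluster prototype get memberships close to one for that cluster, while points between prototypes receive genuinely fuzzy memberships.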
{"title":"Integrated nested Laplace approximations for threshold stochastic volatility models","authors":"P. de Zea Bermudez , J. Miguel Marín , Håvard Rue , Helena Veiga","doi":"10.1016/j.ecosta.2021.08.006","DOIUrl":"10.1016/j.ecosta.2021.08.006","url":null,"abstract":"<div><p><span>The aim is to implement the integrated nested Laplace approximations<span> (INLA), known to be very fast and efficient, for estimating the parameters of the threshold stochastic volatility (TSV) model. INLA replaces Markov chain Monte Carlo (MCMC) simulations with accurate deterministic approximations. Weakly informative proper priors are used, as well as Penalizing Complexity (PC) priors. The simulation results favor the use of PC priors, specially when the sample size varies from small to moderate. For these sample sizes, PC priors provide more accurate estimates of the model parameters. However, as sample size increases, both types of priors lead to similar estimates of the parameters. The estimation method is applied to six series of returns, including stock market, commodity and cryptocurrency returns, and its performance is assessed, by means of in-sample and out-of-sample approaches; the forecasting of one-day-ahead volatilities is also carried out. 
The empirical results support that the TSV is the model that generally fits the best to the series of returns and most of the times ranks the first in terms of forecasting one-day-ahead volatility, when compared to the symmetric </span></span>stochastic volatility model.</p></div>","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"30 ","pages":"Pages 15-35"},"PeriodicalIF":1.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78259920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
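To fix ideas, a simulation sketch of a two-regime threshold stochastic volatility model, where the log-volatility AR(1) intercept and persistence switch on the sign of the previous return; this parameterization and the parameter values are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def simulate_tsv(T, alpha=(-0.2, 0.2), phi=(0.97, 0.95), sigma_eta=0.15, seed=0):
    """Simulate returns y_t = exp(h_t / 2) * eps_t, where log-volatility
    h_t follows an AR(1) whose intercept alpha and persistence phi switch
    between two regimes according to the sign of the previous return."""
    rng = np.random.default_rng(seed)
    h = np.zeros(T)   # log-volatility path
    y = np.zeros(T)   # returns
    for t in range(1, T):
        regime = 1 if y[t - 1] < 0 else 0   # negative return -> regime 1
        h[t] = (alpha[regime] + phi[regime] * h[t - 1]
                + sigma_eta * rng.standard_normal())
        y[t] = np.exp(h[t] / 2) * rng.standard_normal()
    return y, h
```

The threshold structure lets negative returns feed back into volatility differently from positive ones, the leverage-style asymmetry that distinguishes TSV from the symmetric stochastic volatility model.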
{"title":"Modeling Turning Points in the Global Equity Market","authors":"Daniel Felix Ahelegbey , Monica Billio , Roberto Casarin","doi":"10.1016/j.ecosta.2021.10.004","DOIUrl":"10.1016/j.ecosta.2021.10.004","url":null,"abstract":"<div><p>Turning points in financial markets are often characterized by changes in the direction and/or magnitude of market movements with short-to-long term impacts on investors’ decisions. A Bayesian technique is developed for turning point detection in financial equity markets. The interconnectedness among stock market returns from a piece-wise network vector autoregressive model is derived. The turning points in the global equity market over the past two decades are examined in the empirical application. The level of interconnectedness during the Covid-19 pandemic and the 2008 global financial crisis are compared. Similarities and most central markets responsible for spillover propagation emerged from the analysis.</p></div>","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"30 ","pages":"Pages 60-75"},"PeriodicalIF":1.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2452306221001192/pdfft?md5=d5813acc3b1da0160286a1921ccc7e7d&pid=1-s2.0-S2452306221001192-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85936966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data segmentation algorithms: Univariate mean change and beyond","authors":"Haeran Cho , Claudia Kirch","doi":"10.1016/j.ecosta.2021.10.008","DOIUrl":"10.1016/j.ecosta.2021.10.008","url":null,"abstract":"<div><p><span>Data segmentation a.k.a. multiple change point analysis has received considerable attention due to its importance in time series analysis<span> and signal processing, with applications in a variety of fields including natural and social sciences, medicine, engineering and finance. The first part reviews the existing literature on the </span></span><em>canonical data segmentation problem</em><span> which aims at detecting and localising multiple change points in the mean of univariate time series. An overview of popular methodologies is provided on their computational complexity and theoretical properties. In particular, the theoretical discussion focuses on the </span><em>separation rate</em> relating to which change points are detectable by a given procedure, and the <em>localisation rate</em><span> quantifying the precision of corresponding change point estimators, and a distinction is made whether a </span><em>homogeneous</em> or <em>multiscale</em><span> viewpoint has been adopted in their derivation. It is further highlighted that the latter viewpoint provides the most general setting for investigating the optimality of data segmentation algorithms.</span></p><p>Arguably, the canonical segmentation problem has been the most popular framework to propose new data segmentation algorithms and study their efficiency in the last decades. The second part of this survey motivates the importance of attaining an in-depth understanding of strengths and weaknesses of methodologies for the change point problem in a simpler, univariate setting, as a stepping stone for the development of methodologies for more complex problems. 
This point is illustrated with a range of examples showcasing the connections between complex distributional changes and those in the mean. Extensions towards high-dimensional change point problems are also discussed, where it is demonstrated that the challenges arising from high dimensionality are orthogonal to those of dealing with multiple change points.</p></div>","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"30 ","pages":"Pages 76-95"},"PeriodicalIF":1.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76584108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
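The canonical univariate mean-change problem surveyed above can be illustrated with CUSUM-based binary segmentation, one of the popular methodologies in this literature; the threshold choice below is an illustrative assumption, not the survey's recommendation.

```python
import numpy as np

def cusum(x):
    """Max absolute normalized CUSUM statistic for one mean change,
    together with the estimated split point."""
    n = len(x)
    k = np.arange(1, n)
    s = np.cumsum(x)
    stat = np.abs(s[:-1] - k / n * s[-1]) * np.sqrt(n / (k * (n - k)))
    j = int(stat.argmax())
    return stat[j], j + 1          # statistic value, split location

def binary_segmentation(x, threshold, lo=0, hi=None, found=None):
    """Recursively split the series wherever the CUSUM statistic
    exceeds the threshold; returns sorted change point estimates."""
    if found is None:
        found = []
    if hi is None:
        hi = len(x)
    if hi - lo < 3:
        return sorted(found)
    val, j = cusum(x[lo:hi])
    if val > threshold:
        cp = lo + j
        found.append(cp)
        binary_segmentation(x, threshold, lo, cp, found)
        binary_segmentation(x, threshold, cp, hi, found)
    return sorted(found)

# One mean shift at t = 100 in Gaussian noise.
rng = np.random.default_rng(3)
x = np.concatenate([np.zeros(100), 5.0 * np.ones(100)]) + 0.3 * rng.standard_normal(200)
cps = binary_segmentation(x, threshold=3.0)
```

The separation and localisation rates discussed in the abstract describe, respectively, how small a jump such a procedure can detect and how close estimates like `cps` are to the true change points.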
{"title":"Partially one-sided semiparametric inference for trending persistent and antipersistent processes","authors":"Karim M. Abadir , Walter Distaso , Liudas Giraitis","doi":"10.1016/j.ecosta.2021.12.007","DOIUrl":"10.1016/j.ecosta.2021.12.007","url":null,"abstract":"<div><p>Hypothesis testing in models allowing for trending processes that are possibly nonstationary and non-Gaussian is considered. Using semiparametric estimators, joint hypothesis testing for these processes is developed, taking into account the one-sided nature of typical hypotheses on the persistence parameter in order to gain power. The results are applicable for a wide class of processes and are easy to implement. They are illustrated with an application to the dynamics of GDP.</p></div>","PeriodicalId":54125,"journal":{"name":"Econometrics and Statistics","volume":"30 ","pages":"Pages 1-14"},"PeriodicalIF":1.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78683738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}