Sokbae Lee, Yuan Liao, Myung Hwan Seo, Youngki Shin
Journal of Econometrics, Volume 249, Article 105673. DOI: 10.1016/j.jeconom.2024.105673. Published 2025-05-01. Available at: https://www.sciencedirect.com/science/article/pii/S0304407624000198
Fast inference for quantile regression with tens of millions of observations
Big data analytics has opened new avenues in economic research, but the challenge of analyzing datasets with tens of millions of observations is substantial. Conventional econometric methods based on extreme estimators require large amounts of computing resources and memory, which are often not readily available. In this paper, we focus on linear quantile regression applied to “ultra-large” datasets, such as U.S. decennial censuses. A fast inference framework is presented, utilizing stochastic subgradient descent (S-subGD) updates. The inference procedure handles cross-sectional data sequentially: (i) updating the parameter estimate with each incoming “new observation”, (ii) aggregating it as a Polyak–Ruppert average, and (iii) computing a pivotal statistic for inference using only a solution path. The methodology draws from time-series regression to create an asymptotically pivotal statistic through random scaling. Our proposed test statistic is calculated in a fully online fashion and critical values are calculated without resampling. We conduct extensive numerical studies to showcase the computational merits of our proposed inference. For inference problems as large as (n, d) ∼ (10^7, 10^3), where n is the sample size and d is the number of regressors, our method generates new insights, surpassing current inference methods in computation. Our method specifically reveals trends in the gender gap in the U.S. college wage premium using millions of observations, while controlling for over 10^3 covariates to mitigate confounding effects.
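The three-step procedure described in the abstract can be sketched in a single pass over the data. The following is an illustrative sketch, not the authors' implementation: the step-size constants `gamma0` and `a` are assumptions, and the random-scaling variance uses the form V̂ = n⁻² Σₜ t²(β̄ₜ − β̄ₙ)(β̄ₜ − β̄ₙ)′, accumulated online via running sums.

```python
import numpy as np

def online_quantile_inference(X, y, tau=0.5, gamma0=0.5, a=0.51):
    """Sketch of S-subGD quantile regression with Polyak-Ruppert averaging
    and a random-scaling variance estimate, all in one pass over the data.
    gamma0 and a are illustrative tuning choices, not the paper's defaults."""
    n, d = X.shape
    beta = np.zeros(d)      # current S-subGD iterate
    bbar = np.zeros(d)      # Polyak-Ruppert average of iterates
    A = np.zeros((d, d))    # running sum of t^2 * bbar_t bbar_t'
    b = np.zeros(d)         # running sum of t^2 * bbar_t
    c = 0.0                 # running sum of t^2
    for t in range(1, n + 1):
        x, yt = X[t - 1], y[t - 1]
        # (i) subgradient update on the check loss rho_tau
        step = gamma0 * t ** (-a)
        beta = beta - step * (float(yt <= x @ beta) - tau) * x
        # (ii) aggregate as a Polyak-Ruppert average, updated online
        bbar = bbar + (beta - bbar) / t
        # (iii) accumulate terms on the solution path for random scaling
        A += t ** 2 * np.outer(bbar, bbar)
        b += t ** 2 * bbar
        c += t ** 2
    # V_hat = n^-2 * sum_t t^2 (bbar_t - bbar_n)(bbar_t - bbar_n)'
    V = (A - np.outer(b, bbar) - np.outer(bbar, b)
         + c * np.outer(bbar, bbar)) / n ** 2
    return bbar, V
```

A t-statistic of the form √n(β̄ⱼ − b₀)/√V̂ⱼⱼ is then compared against nonstandard critical values tabulated in the random-scaling literature rather than normal quantiles, so no resampling is needed.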
Journal overview:
The Journal of Econometrics serves as an outlet for important, high-quality new research in both theoretical and applied econometrics. The scope of the Journal includes papers dealing with identification, estimation, testing, decision, and prediction issues encountered in economic research. Classical and Bayesian statistics, as well as machine learning methods, are decidedly within the range of the Journal's interests. The Annals of Econometrics is a supplement to the Journal of Econometrics.