Is Newey–West optimal among first-order kernels?
Thomas Kolokotrones, James H. Stock, Christopher D. Walker
Journal of Econometrics, Vol. 240, Issue 2, Article 105399 (March 2024). doi:10.1016/j.jeconom.2022.12.013

Abstract: Newey–West (1987) standard errors are the dominant standard errors used for heteroskedasticity and autocorrelation robust (HAR) inference in time series regression. The Newey–West estimator uses the Bartlett kernel, which is a first-order kernel, meaning that its characteristic exponent, $q$, is equal to 1, where $q$ is defined as the largest value of $r$ for which the quantity $k^{[r]}(0) = \lim_{t \to 0} |t|^{-r}\,(k(0) - k(t))$ is defined and finite. This raises the apparently uninvestigated question of whether the Bartlett kernel is optimal among first-order kernels. We demonstrate that, for $q < 2$, there is no optimal $q$th-order kernel for HAR testing in the Gaussian location model or for minimizing the MSE in spectral density estimation. In fact, for any $q < 2$, the space of $q$th-order positive-semidefinite kernels is not closed and, moreover, all continuous $q$th-order kernels can be decomposed into a weighted sum of $q$th- and second-order kernels, which suggests that there is no meaningful notion of 'pure' $q$th-order kernels for $q < 2$. Nevertheless, it is possible to rank any given collection of $q$th-order kernels using the functional $I_q[k] = \bigl(k^{[q]}(0)\bigr)^{1/q} \int k^2(t)\,dt$, with smaller values corresponding to better asymptotic performance. We examine the value of $I_q[k]$ for a wide variety of first-order …
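As a quick numerical illustration of the ranking functional defined in this abstract (a reader's sketch, not code from the paper), the snippet below evaluates $I_1[k] = k^{[1]}(0)\int k^2(t)\,dt$ for two standard first-order kernels: the Bartlett kernel used by Newey–West and the exponential kernel. The kernel choices, grid, and step sizes are our own assumptions.

```python
# A minimal numerical check (not from the paper): evaluate the ranking
# functional I_q[k] = (k^[q](0))^(1/q) * integral of k(t)^2 dt at q = 1
# for two standard first-order kernels. Smaller values indicate better
# asymptotic performance, so the Bartlett kernel (2/3) ranks ahead of
# the exponential kernel (1) by this criterion.
import numpy as np

def bartlett(t):
    return np.maximum(0.0, 1.0 - np.abs(t))

def exponential(t):
    return np.exp(-np.abs(t))

def I_q(kernel, q=1, eps=1e-6, half_width=10.0, n=400_001):
    # k^[q](0), approximated by |t|^{-q} (k(0) - k(t)) at a small t
    k_q_0 = (kernel(0.0) - kernel(eps)) / eps**q
    # integral of k(t)^2 dt, approximated by a Riemann sum on a wide grid
    t = np.linspace(-half_width, half_width, n)
    integral = np.sum(kernel(t) ** 2) * (t[1] - t[0])
    return k_q_0 ** (1.0 / q) * integral

print("I_1[Bartlett]    ~", round(I_q(bartlett), 4))     # exact value: 2/3
print("I_1[exponential] ~", round(I_q(exponential), 4))  # exact value: 1
```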
Testing unconditional and conditional independence via mutual information
Chunrong Ai, Li-Hsien Sun, Zheng Zhang, Liping Zhu
Journal of Econometrics, Vol. 240, Issue 2, Article 105335 (March 2024). doi:10.1016/j.jeconom.2022.07.011

Abstract: Testing independence has garnered increasing attention in the econometric and statistical literature. Many tests have been proposed, most of which are inconsistent against all departures from independence; the few tests that are consistent suffer a significant loss of local power. This study proposes a mutual information test for independence. The proposed test is simple to implement and, with a slight loss of local power, is consistent against all departures from independence. The key driving factor is that we estimate the density ratio directly, a quantity that is constant under independence; this contrasts with related studies that estimate the joint and marginal density functions to form the density ratio. A small-scale simulation study indicates that the proposed test outperforms existing alternatives across various dependence structures.
{"title":"Financially adaptive clinical trials via option pricing analysis","authors":"Shomesh E. Chaudhuri , Andrew W. Lo","doi":"10.1016/j.jeconom.2020.08.012","DOIUrl":"10.1016/j.jeconom.2020.08.012","url":null,"abstract":"<div><p>The regulatory approval process for new therapies involves costly clinical trials that can span multiple years. When valuing a candidate therapy from a financial perspective, industry<span> sponsors may terminate a program early if clinical evidence suggests market prospects are not as favorable as originally forecasted. Intuition suggests that clinical trials that can be modified as new data are observed, i.e., adaptive trials, are more valuable than trials without this flexibility. To quantify this value, we propose modeling the accrual of information in a clinical trial as a sequence of real options<span>, allowing us to systematically design early-stopping decision boundaries that maximize the economic value to the sponsor. In an empirical analysis of selected disease areas, we find that when a therapy is ineffective, our adaptive financing method can decrease the expected cost incurred by the sponsor in terms of total expenditures, number of patients, and trial length by up to 46%. Moreover, by amortizing the large fixed costs associated with a clinical trial over time, financing these projects becomes less risky, resulting in lower costs of capital and larger valuations when the therapy is effective.</span></span></p></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"240 2","pages":"Article 105026"},"PeriodicalIF":6.3,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47832611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assumption-lean falsification tests of rate double-robustness of double-machine-learning estimators","authors":"Lin Liu , Rajarshi Mukherjee , James M. Robins","doi":"10.1016/j.jeconom.2023.105500","DOIUrl":"10.1016/j.jeconom.2023.105500","url":null,"abstract":"<div><p><span><span>The class of doubly robust (DR) functionals studied by Rotnitzky et al. (2021) is of central importance in economics and biostatistics. It strictly includes both (i) the class of mean-square </span>continuous functionals<span> that can be written as an expectation of an affine functional of a conditional expectation studied by Chernozhukov et al. (2022b) and the class of functionals studied by Robins et al. (2008). The present state-of-the-art estimators for DR functionals </span></span><span><math><mi>ψ</mi></math></span> are double-machine-learning (DML) estimators (Chernozhukov et al., 2018a). A DML estimator <span><math><msub><mrow><mover><mrow><mi>ψ</mi></mrow><mrow><mo>̂</mo></mrow></mover></mrow><mrow><mn>1</mn></mrow></msub></math></span> of <span><math><mi>ψ</mi></math></span> depends on estimates <span><math><mrow><mover><mrow><mi>p</mi></mrow><mrow><mo>̂</mo></mrow></mover><mrow><mo>(</mo><mi>x</mi><mo>)</mo></mrow></mrow></math></span> and <span><math><mrow><mover><mrow><mi>b</mi></mrow><mrow><mo>̂</mo></mrow></mover><mrow><mo>(</mo><mi>x</mi><mo>)</mo></mrow></mrow></math></span> of a pair of nuisance functions <span><math><mrow><mi>p</mi><mrow><mo>(</mo><mi>x</mi><mo>)</mo></mrow></mrow></math></span> and <span><math><mrow><mi>b</mi><mrow><mo>(</mo><mi>x</mi><mo>)</mo></mrow></mrow></math></span>, and is said to satisfy “rate double-robustness” if the Cauchy–Schwarz upper bound of its bias is <span><math><mrow><mi>o</mi><mrow><mo>(</mo><msup><mrow><mi>n</mi></mrow><mrow><mo>−</mo><mn>1</mn><mo>/</mo><mn>2</mn></mrow></msup><mo>)</mo></mrow></mrow></math></span>. Rate double-robustness implies that the bias is <span><math><mrow><mi>o</mi><mrow><mo>(</mo><msup><mrow><mi>n</mi></mrow><mrow><mo>−</mo><mn>1</mn><mo>/</mo><mn>2</mn></mrow></msup><mo>)</mo></mrow></mrow></math></span>, but the converse is false. Were it achievable, our scientific goal would have been to construct valid, assumption-lean (i.e. no complexity-reducing assumptions on <span><math><mi>b</mi></math></span> or <span><math><mi>p</mi></math></span>) tests of the validity of a nominal <span><math><mrow><mo>(</mo><mn>1</mn><mo>−</mo><mi>α</mi><mo>)</mo></mrow></math></span> Wald confidence interval (CI) centered at <span><math><msub><mrow><mover><mrow><mi>ψ</mi></mrow><mrow><mo>̂</mo></mrow></mover></mrow><mrow><mn>1</mn></mrow></msub></math></span>. But this would require a test of the bias to be <span><math><mrow><mi>o</mi><mrow><mo>(</mo><msup><mrow><mi>n</mi></mrow><mrow><mo>−</mo><mn>1</mn><mo>/</mo><mn>2</mn></mrow></msup><mo>)</mo></mrow></mrow></math></span>, which can be shown not to exist. We therefore adopt the less ambitious goal of falsifying, when possible, an analyst’s justification for her claim that the reported <span><math><mrow><mo>(</mo><mn>1</mn><mo>−</mo><mi>α</mi><mo>)</mo></mrow></math></span> Wald CI is valid. 
In many instances, an analyst justifies her claim by imposing complexity-reducing assumptions on <span><math><mi>b</","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"240 2","pages":"Article 105500"},"PeriodicalIF":6.3,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48572684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
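For concreteness, the sketch below is a generic cross-fitted DML (AIPW) estimator of an average treatment effect, a leading example of the DR functionals discussed above, together with its nominal Wald CI. It illustrates the object $\hat{\psi}_1$ being scrutinized, not the falsification test developed in the paper; the learners and fold choices are arbitrary.

```python
# A generic cross-fitted DML (AIPW) estimate of an average treatment effect,
# a leading example of a doubly robust functional. This sketches the estimator
# psi_hat_1 and its nominal Wald CI, not the paper's falsification test.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def dml_aipw_ate(X, D, Y, n_folds=5, seed=0):
    psi = np.zeros(len(Y))
    for train, test in KFold(n_folds, shuffle=True, random_state=seed).split(X):
        # nuisance estimates p_hat(x) = P(D=1|X) and b_hat(d, x) = E[Y|D=d, X]
        p_hat = RandomForestClassifier(random_state=seed).fit(
            X[train], D[train]).predict_proba(X[test])[:, 1]
        p_hat = np.clip(p_hat, 0.01, 0.99)
        b1 = RandomForestRegressor(random_state=seed).fit(
            X[train][D[train] == 1], Y[train][D[train] == 1]).predict(X[test])
        b0 = RandomForestRegressor(random_state=seed).fit(
            X[train][D[train] == 0], Y[train][D[train] == 0]).predict(X[test])
        d, y = D[test], Y[test]
        psi[test] = (b1 - b0
                     + d * (y - b1) / p_hat
                     - (1 - d) * (y - b0) / (1 - p_hat))
    est = psi.mean()
    se = psi.std(ddof=1) / np.sqrt(len(Y))
    return est, (est - 1.96 * se, est + 1.96 * se)   # nominal 95% Wald CI
```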
{"title":"Maximum likelihood estimation of latent Markov models using closed-form approximations","authors":"Yacine Aït-Sahalia , Chenxu Li , Chen Xu Li","doi":"10.1016/j.jeconom.2020.09.001","DOIUrl":"10.1016/j.jeconom.2020.09.001","url":null,"abstract":"<div><p><span>This paper proposes and implements an efficient and flexible method to compute maximum likelihood estimators of continuous-time models when part of the state vector is latent. </span>Stochastic volatility<span> and term structure models are typical examples. Existing methods integrate out the latent variables using either simulations as in MCMC<span>, or replace the latent variables by observable proxies. By contrast, our approach relies on closed-form approximations to estimate parameters and simultaneously infer the distribution of filters, i.e., that of the latent states conditioning on observations. Without any particular assumption on the filtered distribution, we approximate in closed form a coupled iteration system for updating the likelihood function and filters based on the transition density of the state vector. Our procedure has a linear computational cost with respect to the number of observations, as opposed to the exponential cost implied by the high dimensional integral nature of the likelihood function. We establish the theoretical convergence of our method as the frequency of observation increases and conduct Monte Carlo simulations to demonstrate its performance.</span></span></p></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"240 2","pages":"Article 105008"},"PeriodicalIF":6.3,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41531077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nonseparable sample selection models with censored selection rules
Ivan Fernández-Val, Aico van Vuuren, Francis Vella
Journal of Econometrics, Vol. 240, Issue 2, Article 105088 (March 2024). doi:10.1016/j.jeconom.2021.01.009

Abstract: We consider identification and estimation of nonseparable sample selection models with censored selection rules. We employ a control function approach and discuss different objects of interest based on (1) local effects conditional on the control function, and (2) global effects obtained from integration over ranges of values of the control function. We derive conditions for identification of these different objects and suggest strategies for estimation. Moreover, we provide the associated asymptotic theory. These strategies are illustrated in an empirical investigation of the determinants of female wages in the United Kingdom.
{"title":"Beyond RCP8.5: Marginal mitigation using quasi-representative concentration pathways","authors":"J. Isaac Miller , William A. Brock","doi":"10.1016/j.jeconom.2021.06.007","DOIUrl":"10.1016/j.jeconom.2021.06.007","url":null,"abstract":"<div><p><span>Assessments of decreases in economic damages from climate change mitigation typically rely on climate output from computationally expensive pre-computed runs of general circulation models under a handful of scenarios with discretely varying targets, such as the four representative concentration pathways for CO</span><sub>2</sub><span><span> and other anthropogenically emitted gases. Although such analyses are valuable in informing scientists and policymakers about massive multilateral mitigation goals, we add to the literature by considering potential outcomes from more modest policy changes that may not be represented by any well-known concentration pathway. Specifically, we construct computationally efficient Quasi-representative Concentration Pathways (QCPs) to leverage concentration pathways of existing peer-reviewed scenarios. Computational efficiency allows for bootstrapping to assess uncertainty. We illustrate our methodology by considering the impact on the relative risk of mortality from heat stress in London from the United Kingdom’s </span>net zero emissions goal. More than half of our interval estimate for the business-as-usual scenario covers an annual risk at least that of a COVID-19-like mortality event by 2100. Success of the UK’s policy alone would do little to mitigate the risk.</span></p></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"239 1","pages":"Article 105152"},"PeriodicalIF":6.3,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42410745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spherical autoregressive models, with application to distributional and compositional time series","authors":"Changbo Zhu , Hans-Georg Müller","doi":"10.1016/j.jeconom.2022.12.008","DOIUrl":"10.1016/j.jeconom.2022.12.008","url":null,"abstract":"<div><p>We introduce a new class of autoregressive models for spherical time series. The dimension of the spheres on which the observations of the time series are situated may be finite-dimensional or infinite-dimensional, where in the latter case we consider the Hilbert sphere. Spherical time series arise in various settings. We focus here on distributional and compositional time series. Applying a square root transformation to the densities of the observations of a distributional time series maps the distributional observations to the Hilbert sphere, equipped with the Fisher–Rao metric. Likewise, applying a square root transformation to the components of the observations of a compositional time series maps the compositional observations to a finite-dimensional sphere, equipped with the geodesic metric on spheres. The challenge in modeling such time series lies in the intrinsic non-linearity of spheres and Hilbert spheres, where conventional arithmetic operations such as addition or scalar multiplication are no longer available. To address this difficulty, we consider rotation operators to map observations on the sphere. Specifically, we introduce a class of skew-symmetric operators such that the associated exponential operators are rotation operators that for each given pair of points on the sphere map the first point of the pair to the second point of the pair. We exploit the fact that the space of skew-symmetric operators is Hilbertian to develop autoregressive modeling of geometric differences that correspond to rotations of spherical and distributional time series. Differences expressed in terms of rotations can be taken between the Fréchet mean and the observations or between consecutive observations of the time series. We derive theoretical properties of the ensuing autoregressive models and showcase these approaches with several motivating data. These include a time series of yearly observations of bivariate distributions of the minimum/maximum temperatures for a period of 120 days during each summer for the years 1990-2018 at Los Angeles (LAX) and John F. Kennedy (JFK) international airports. A second data application concerns a compositional time series with annual observations of compositions of energy sources for power generation in the U.S..</p></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"239 2","pages":"Article 105389"},"PeriodicalIF":6.3,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0304407623000209/pdfft?md5=07fa58db0268abdd5d62cfbeb6ffa463&pid=1-s2.0-S0304407623000209-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46972881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The nonparametric Box–Cox model for high-dimensional regression analysis","authors":"He Zhou, Hui Zou","doi":"10.1016/j.jeconom.2023.01.025","DOIUrl":"10.1016/j.jeconom.2023.01.025","url":null,"abstract":"<div><p>The mainstream theory for high-dimensional regression assumes that the underlying true model is a low-dimensional linear regression model. On the other hand, a standard technique in regression analysis<span>, even in the traditional low-dimensional setting, is to employ the Box–Cox transformation for reducing anomalies such as non-additivity and heteroscedasticity in linear regression. In this paper, we propose a new high-dimensional regression method based on a nonparametric Box–Cox model with an unspecified monotone transformation function. Model fitting and computation become much more challenging than the usual penalized regression method, and a two-step method is proposed for the estimation of this model in high-dimensional settings. First, we propose a novel technique called composite probit regression<span> (CPR) and use the folded concave penalized CPR for estimating the regression parameters. The strong oracle property of the estimator is established without knowing the nonparametric transformation function. Next, the nonparametric function is estimated by conducting univariate monotone regression. The computation is done efficiently by using a coordinate-majorization-descent algorithm. Extensive simulation studies show that the proposed method performs well in various settings. Our analysis of the supermarket data demonstrates the superior performance of the proposed method over the standard high-dimensional regression method.</span></span></p></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"239 2","pages":"Article 105419"},"PeriodicalIF":6.3,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46289206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A generalized knockoff procedure for FDR control in structural change detection","authors":"Jingyuan Liu , Ao Sun , Yuan Ke","doi":"10.1016/j.jeconom.2022.07.008","DOIUrl":"10.1016/j.jeconom.2022.07.008","url":null,"abstract":"<div><p><span><span>Controlling false discovery rate (FDR) is crucial for variable selection, multiple testing, among other signal detection problems. In literature, there is certainly no shortage of FDR control strategies when selecting individual features, but the relevant works for structural change detection, such as profile analysis for piecewise constant coefficients and integration analysis with multiple data sources, are limited. In this paper, we propose a generalized knockoff procedure (GKnockoff) for FDR control under such problem settings. We prove that the GKnockoff possesses pairwise exchangeability<span>, and is capable of controlling the exact FDR under finite sample sizes. We further explore GKnockoff under high dimensionality, by first introducing a new screening method to filter the high-dimensional potential structural changes. We adopt a data splitting technique to first reduce the dimensionality via screening and then conduct GKnockoff on the refined selection set. Furthermore, the powers of proposed methods are systematically studied. Numerical comparisons with other methods show the superior performance of GKnockoff, in terms of both FDR control and power. We also implement the proposed methods to analyze a macroeconomic dataset for detecting changes of driven effects </span></span>of economic development on the secondary </span>industry.</p></div>","PeriodicalId":15629,"journal":{"name":"Journal of Econometrics","volume":"239 2","pages":"Article 105331"},"PeriodicalIF":6.3,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48315295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}