{"title":"Bayesian Spatial Split-Population Models for the Social Sciences","authors":"Brandon Bolte, N. Huynh, Bumba Mukherjee, Sergio Béjar, N. Schmidt","doi":"10.2139/ssrn.3765112","DOIUrl":"https://doi.org/10.2139/ssrn.3765112","url":null,"abstract":"Survival data often include an “immune” or cured fraction of units that will never experience an event and conversely, an “at risk” fraction that can fail or die. It is also plausible that spatial clustering (i.e., spatial autocorrelation) in latent or unmeasured risk factors among adjacent units can affect their odds of being immune and survival time of interest. To address these methodological challenges, this article introduces a class of parametric Spatial split-population survival models—also denoted as Spatial cure models—that explicitly accounts for the influence of spatial autocorrelation among the underlying risk propensities of units on not only their probability of being immune but also their risk of experiencing the event of interest. Our approach is Bayesian in that we account for spatial autocorrelation in unmeasured risk factors across adjacent units in the cure model’s split-stage (cure rate portion) and survival stage via the conditional autoregressive prior (CAR) prior. The article also presents a set of parametric split-population survival models with non-spatial i.i.d frailties and without frailties, and time-varying covariates can be included in all the models mentioned above. Bayesian inference of the non-spatial and spatial cure models is conducted via a hybrid Markov Chain Monte Carlo (MCMC) algorithm. The relevant full conditional distributions required for MCMC sampling are also derived and presented in the paper. We fit all the models to survival data on post-civil war peace to demonstrate their main applicability and main features.","PeriodicalId":11465,"journal":{"name":"Econometrics: Econometric & Statistical Methods - General eJournal","volume":"56 12","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91488826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Статистическая теория производственно-технических систем (Statistical Theory of Production-Technical Systems)","authors":"O. Pihnastyi","doi":"10.2139/ssrn.3892498","DOIUrl":"https://doi.org/10.2139/ssrn.3892498","url":null,"abstract":"Russian Abstract: Моделирование производственно-технических систем является эффективным методом их исследования . Распространенный класс образуют производственно-технические системы, где детерминированный характер технологических процессов сочетается с их стохастической природой. Закономерности функционирования производственно-технических систем во многом подобны тем, которые имеются в термодинамических системах. Они столь глубоки и полезны, что провозглашены в качестве общих принципов: Ле Шателье-Самуэльсона, Карно-Хикса. Использование данных принципов рассмотрено для описания технологического процесса производственно-технических систем с серийным или массовым выпуском продукции. English Abstract: Modeling production-technical systems is an effective method for their research. A widespread class is formed by production-technical systems, where the deterministic nature of technological processes is combined with their stochastic nature. The patterns of functioning of production-technical systems are in many respects similar to those that exist in thermodynamic systems. They are so profound and useful so they are proclaimed as general principles: Le Chatelier-Samuelson, Carnot-Hicks. These principles are considered to describe the technological process of production-technical systems with serial or mass production of products.","PeriodicalId":11465,"journal":{"name":"Econometrics: Econometric & Statistical Methods - General eJournal","volume":"25 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85107228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Inclusive Synthetic Control Method","authors":"R. Di Stefano, Giovanni Mellace","doi":"10.2139/ssrn.3737491","DOIUrl":"https://doi.org/10.2139/ssrn.3737491","url":null,"abstract":"The Synthetic Control Method (SCM) estimates the causal effect of a policy intervention in a panel data setting with only a few treated units and control units. The treated outcome in the absence of the intervention is recovered by a weighted average of the control units. The latter cannot be affected by the intervention, neither directly nor indirectly. We introduce the inclusive synthetic control method (iSCM), a novel and intuitive synthetic control modification that allows including units potentially affected directly or indirectly by an intervention in the donor pool. Our method is well suited for applications with multiple treated units where including treated units in the donor pool substantially improves the pre-intervention fit and/or for applications where some of the units in the donor pool might be affected by spillover effects. Our iSCM is very easy to implement, and any synthetic control type estimation and inference procedure can be used. Finally, as an illustrative empirical example, we re-estimate the causal effect of German reunification on GDP per capita allowing for spillover effects from West Germany to Austria.","PeriodicalId":11465,"journal":{"name":"Econometrics: Econometric & Statistical Methods - General eJournal","volume":"12 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83628749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"All at Once! A Comprehensive and Tractable Semi-Parametric Method to Elicit Prospect Theory Components","authors":"Y. T. Kpegli, Brice Corgnet, Adam Zylbersztejn","doi":"10.2139/ssrn.3734348","DOIUrl":"https://doi.org/10.2139/ssrn.3734348","url":null,"abstract":"Eliciting all the components of prospect theory –curvature of the utility function, weighting function and loss aversion– remains an open empirical challenge. We develop a semi-parametric method that keeps the tractability of parametric methods while providing more precise estimates. Using the data of Tversky and Kahneman (1992), we revisit their main parametric results. We reject the convexity of the utility function in the loss domain, find lower probability weighting, and confirm loss aversion. We also report that the probability weighting function does not exhibit duality and equality across domains, in line with cumulative prospect theory and in contrast with original prospect and rank dependent utility theories.","PeriodicalId":11465,"journal":{"name":"Econometrics: Econometric & Statistical Methods - General eJournal","volume":"19 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76933193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Liquidity Guided Machine Learning: The Case of the Volatility Risk Premium","authors":"Eric Ghysels, Ruslan Goyenko, Chengyu Zhang","doi":"10.2139/ssrn.3726743","DOIUrl":"https://doi.org/10.2139/ssrn.3726743","url":null,"abstract":"The financial industry has eagerly adopted machine learning algorithms to improve on traditional predictive models. In this paper we caution against blindly applying such techniques. We compare forecasting ability of machine learning methods in evaluating future payoffs on synthetic variance swaps. Standard machine learning methods tend to identify contracts which are illiquid, and hard to trade. The most successful strategies turn out to be those where we pair machine learning with institutional and market/traders inputs and insights. We show that liquidity guided pre-selection of inputs to machine learning results in trading strategies with improved pay-offs to the writers of variance swap contract replicating portfolio.","PeriodicalId":11465,"journal":{"name":"Econometrics: Econometric & Statistical Methods - General eJournal","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90063086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fractal Surfaces of Synthetical DEM Generated by GRASS GIS Module r.surf.fractal From ETOPO1 Raster Grid","authors":"Polina Lemenkova","doi":"10.9733/JGG.2020R0006.E","DOIUrl":"https://doi.org/10.9733/JGG.2020R0006.E","url":null,"abstract":"The research problem is about to generate artificial fractal landscape surfaces from the Digital Elevation Model (DEM) using a stochastic algorithm by Geographic Resources Analysis Support System Geographic Information System (GRASS GIS) software. Fractal surfaces resemble appearance of natural topographic terrain and its structure using random surface modelling. Study area covers Kuril- Kamchatka region, Sea of Okhotsk, North Pacific Ocean. Techniques were included into GRASS GIS modules (r.relief, d.rast, r.slope.aspect, r.mapcalc) for raster calculation, processing and visualization. Module 'r.surf.fractal' was applied for generating synthetic fractal surface from ETOPO1 DEM GeoTIFF using algorithm of fractal analysis. Three tested dimensions of the fractal surfaces were automatically mapped and visualized. Algorithm of the automated fractal DEM modelling visualized variations in steepness and aspect of the artificially generated slopes in the mountains. Controllable topographic variation of the fractal surfaces was applied for three dimensions: dim=2.0001, 2.0050, 2.0100. Auxiliary modules were used for the visualization of DEMs (d.rast, r.colors, d.vect, r.contour, d.redraw, d.mon). Modules 'r.surf.gauss' and 'r.surf.random' were applied for artificial modelling as Gauss and random based mathematical surfaces, respectively. Univariate statistics for fractal surfaces were computed for comparative analysis of maps representing continuous fields by module 'r.univar': number of cells, min/max, range, mean, variance, standard deviation, variation coefficient and sum. The paper includes 9 maps and GRASS GIS codes used for visualization.","PeriodicalId":11465,"journal":{"name":"Econometrics: Econometric & Statistical Methods - General eJournal","volume":"71 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73779167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimal Cross-Sectional Regression","authors":"Z. Liao, Yan Liu","doi":"10.2139/ssrn.3719299","DOIUrl":"https://doi.org/10.2139/ssrn.3719299","url":null,"abstract":"In the context of linear-beta pricing models, we develop a new class of two-pass estimators that \u0000are available in closed form and dominate existing two-pass estimators in terms of estimation \u0000efficiency. Importantly, we map our model into the generalized method of moments (GMM) \u0000framework and show our two-pass estimator is as efficient as the optimal GMM estimator, \u0000which is known to be semiparametrically efficient in the literature. Hence, contrary to popular \u0000belief, information loss does not need to occur when we go from the more methodical GMM \u0000approach to the simple-to-implement two-pass regressors. Intuitively, our estimator improves \u0000efficiency by disentangling the impacts of idiosyncratic and systematic return innovations on \u0000pricing errors in the second-stage cross-sectional regression. As an empirical application of the \u0000new two-pass estimators, we apply our approach to current factor models and shed new light \u0000on the Fama and French (2015) versus Hou, Xue, and Zhang (2015) debate.","PeriodicalId":11465,"journal":{"name":"Econometrics: Econometric & Statistical Methods - General eJournal","volume":"28 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74917702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Why Randomize? Minimax Optimality under Permutation Invariance","authors":"Yuehao Bai","doi":"10.2139/ssrn.3475147","DOIUrl":"https://doi.org/10.2139/ssrn.3475147","url":null,"abstract":"This paper studies finite sample minimax optimal randomization schemes and estimation schemes in estimating parameters including the average treat- ment effect, when treatment effects are heterogeneous. A randomization scheme is a distribution over a group of permutations of a given treatment assignment vector. An estimation scheme is a joint distribution over assignment vectors, linear estimators, and permutations of assignment vectors. The key element in the minimax problem is that the worst case is over a class of distributions of the data which is invariant to a group of permutations. First, I show that given any assignment vector and any estimator, the uniform distribution over the same group of permutations, namely the complete randomization scheme, is minimax optimal. Second, under further assumptions on the class of distributions and the objective function, I show the minimax optimal estimation scheme involves completely randomizing an assignment vector, while the optimal estimator is the difference-in-means under complete invariance and a weighted average of within-block differences under a block structure, and the numbers of treated and untreated units are determined by Neyman allocations.","PeriodicalId":11465,"journal":{"name":"Econometrics: Econometric & Statistical Methods - General eJournal","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77889843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Note on Bayesian Stability and Bayesian Efficiency","authors":"Yi-Chun Chen, Gaoji Hu","doi":"10.2139/ssrn.3709205","DOIUrl":"https://doi.org/10.2139/ssrn.3709205","url":null,"abstract":"In this paper, we extend the stability notion and Bayesian efficiency notion of Liu (2020) to local ones, as well as his result—that under certain intuitive conditions, stable matchings are Bayesian efficient—to an analogous one for local notions. Furthermore, the extended stability notion, and thus Liu’s notion, admits a decentralized foundation via an adaptive matching process.","PeriodicalId":11465,"journal":{"name":"Econometrics: Econometric & Statistical Methods - General eJournal","volume":"702 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87850816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}