{"title":"Predicting Returns Out of Sample: A Naïve Model Averaging Approach","authors":"Huafeng (Jason) Chen, Liang Jiang, Weiwei Liu","doi":"10.2139/ssrn.3455866","DOIUrl":"https://doi.org/10.2139/ssrn.3455866","url":null,"abstract":"We propose a naïve model averaging (NMA) method that averages the OLS out-of-sample forecasts and the historical means and produces mostly positive out-of-sample R2s for the variables significant in sample in forecasting market returns. Surprisingly, more sophisticated weighting schemes that combine the predictive variable and historical mean do not consistently perform better. With unstable economic relations and a limited sample size, sophisticated methods may lead to overfitting or be subject to more estimation errors. In such situations, our simple methods may work better. Model misspecification, rather than declining return predictability, likely explains the predictive performance of the NMA method.","PeriodicalId":170198,"journal":{"name":"ERN: Forecasting Techniques (Topic)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134174612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Forecasting with Deep Temporal Hierarchies","authors":"Filotas Theodosiou, N. Kourentzes","doi":"10.2139/ssrn.3918315","DOIUrl":"https://doi.org/10.2139/ssrn.3918315","url":null,"abstract":"In time series analysis and forecasting, the identification of an appropriate model remains a challenging task. Model misspecification can lead to erroneous forecasts and insights. The use of multiple views of the same time series by constructing temporally aggregated levels has been proposed as a way to overcome model specification and selection uncertainty, with ample empirical evidence of forecast accuracy gains. Temporal Hierarchies, which itself builds on research in hierarchical forecasting, is the most popular approach to achieve this. Although there has been substantial progress in this literature, the vast majority of methods rely on a restricted linear combination of different model outputs across the hierarchy. We investigate the use of deep learning to augment temporal hierarchies, relaxing the classical restrictions. Specifically, we look at deep learning for the generation of all the base forecasts, the hierarchical reconciliation, and an end-to-end method that embeds all steps in a single neural network. We inspect the performance of the proposed methods when applied to individual time series, or with global training across complete sets of series. We further investigate the requirements in terms of series set size, illustrating the conditions where deep learning temporal hierarchies outperform conventional temporal hierarchies.","PeriodicalId":170198,"journal":{"name":"ERN: Forecasting Techniques (Topic)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116245713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Liquidity and Mispricing","authors":"D. Huber","doi":"10.2139/ssrn.3718411","DOIUrl":"https://doi.org/10.2139/ssrn.3718411","url":null,"abstract":"The expected return of a strategy that consists of buying underpriced stocks and shorting overpriced ones is substantially larger for illiquid stocks than for liquid ones. This premium can be attributed to the short leg among illiquid stocks, driven by arbitrage asymmetry. The latter effect is also reflected in a univariate sort based on liquidity: a negative premium occurs, contrary to popular beliefs, driven by overpricing among illiquid stocks and underpricing among liquid ones. Furthermore, negative liquidity shocks increase overpricing, whereas positive shocks increase underpricing. These results emphasize the important role of liquidity in explaining the cross-section of expected stock returns.","PeriodicalId":170198,"journal":{"name":"ERN: Forecasting Techniques (Topic)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127996661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Charting By Machines","authors":"Scott Murray, Houping Xiao, Yusen Xia","doi":"10.2139/ssrn.3853436","DOIUrl":"https://doi.org/10.2139/ssrn.3853436","url":null,"abstract":"We test the efficient market hypothesis by using machine learning to forecast future stock returns from historical performance. These forecasts strongly predict the cross section of future stock returns. The predictive power holds in most subperiods, is strong among the largest 500 stocks, and is distinct from momentum and reversal. The forecasting function has important nonlinearities and interactions and is remarkably stable through time. Our research design ensures that our findings are not a result of data mining. These findings question the efficient market hypothesis and indicate that investment strategies based on technical analysis and charting may have merit.","PeriodicalId":170198,"journal":{"name":"ERN: Forecasting Techniques (Topic)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130279104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lawrence R. Klein’s Principles in Modeling and Contributions in Nowcasting, Real-Time Forecasting, and Machine Learning","authors":"R. Mariano, Suleyman Ozmucur","doi":"10.2139/ssrn.3702412","DOIUrl":"https://doi.org/10.2139/ssrn.3702412","url":null,"abstract":"Lawrence R. Klein (September 14, 1920 – October 20, 2013), Nobel Laureate in Economic Sciences in 1980, was one of the leading figures in macro-econometric modeling. Although his contributions to forecasting using simultaneous equations macro models were very well known, his contributions to nowcasting and real-time forecasting, which he worked on during the last 30 years of his life, were generally overlooked by many researchers. The reasons for this oversight are related to ambiguity in terminology, specifically the terms nowcast or nowcasting, and to the empirical, though very significant, nature of his contributions. This paper reviews L. R. Klein’s guiding principles on modeling and his contributions to nowcasting and real-time forecasting, and discusses the connection of these contributions to the present state of fast-evolving disciplines such as economics, econometrics, statistics, data science, and machine learning. In so doing, we argue that L. R. Klein indeed expertly developed pioneering ideas and methodology for nowcasting and real-time forecasting, and that the principles and contributions he put forward are even more relevant now than ever.","PeriodicalId":170198,"journal":{"name":"ERN: Forecasting Techniques (Topic)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114561712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Role of a Latent Value-Relevant Measure in Tracking and Predicting Stock Returns: A FAVAR Approach","authors":"Faisal M. Awwal, Xiaoquan Jiang","doi":"10.2139/ssrn.3635073","DOIUrl":"https://doi.org/10.2139/ssrn.3635073","url":null,"abstract":"This paper attempts to estimate and study the role of 'other information', as posited in the residual income valuation model of Ohlson (1995), for tracking and predicting future returns of the S&P 500. 'Other information' is an unobserved variable and defined as a summary of value-relevant information about events and their effect on future profitability, which is captured in a company's current stock price and returns, but not yet reflected in a company's current financial statements. This suggests a potential to predict subsequent returns. Previous literature has found that traditional valuation metrics (e.g. B/P, E/P, and D/P ratios) have poor predictive power. In this study, we apply a factor augmented vector autoregression (FAVAR) to estimate this value-relevant latent variable and assess its predictive performance. The FAVAR is a suitable model because it enables us to analyze and quantify the linkages of stock market value, profitability, and unobserved factors that are broadly captured by big data. We use a two-step principal components estimation approach to extract the unobserved factors of 78 informational variables from financial market, accounting, investor and consumer sentiment, and macroeconomic data. Our analysis shows that, in comparison to competing measures, the estimated latent value-relevant variable can track contemporaneous stock returns and has statistically reliable power to predict both future real stock returns and excess returns over a Treasury Bill rate, both in- and out-of-sample.","PeriodicalId":170198,"journal":{"name":"ERN: Forecasting Techniques (Topic)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133263989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Implied Cost of Capital: A Deep Learning Approach","authors":"Xinyu Wang","doi":"10.2139/ssrn.3612472","DOIUrl":"https://doi.org/10.2139/ssrn.3612472","url":null,"abstract":"I exploit deep learning techniques trained on a set of common accounting items and constructed to mimic features of the human brain to predict future earnings. I show that this model offers incremental explanatory power in predicting future earnings and in estimating the associated implied cost of capital. My forecasting model exhibits less bias than human analyst forecasts and fits the data substantially better than linear regression models. In addition, the derived implied cost-of-capital estimates substantially outperform linear models in their ability to predict future returns. This study illustrates the power of machine learning techniques to improve the accuracy of accounting forecasting.","PeriodicalId":170198,"journal":{"name":"ERN: Forecasting Techniques (Topic)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126452670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Structural Model of Analyst Forecasts: Applications to Forecast Informativeness and Dispersion","authors":"Jonathan Clarke, Soohun Kim, Kyuseok Lee, Kyoungwon Seo","doi":"10.2139/ssrn.3604705","DOIUrl":"https://doi.org/10.2139/ssrn.3604705","url":null,"abstract":"We modify Morris and Shin (2002) to develop a structural model of analyst earnings forecasts. The model allows for analysts to herd due to informational effects and non-informational incentives. The benefits of our model are twofold: (1) we can decompose earnings forecasts into informational and bias components, and measure the stock price response to each component, and (2) we can estimate the impact of bias on the dispersion in analyst forecasts. In a pair of empirical exercises, we find a strong relation between the informational component of analyst forecasts and announcement period stock returns. We also find that analyst biases do not have an impact on forecast dispersion.","PeriodicalId":170198,"journal":{"name":"ERN: Forecasting Techniques (Topic)","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132392411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can Machine Learning Based Portfolios Outperform Traditional Risk-Based Portfolios? The Need to Account for Covariance Misspecification","authors":"Prayut Jain, Shashi Jain","doi":"10.3390/RISKS7030074","DOIUrl":"https://doi.org/10.3390/RISKS7030074","url":null,"abstract":"The hierarchical risk parity (HRP) approach to portfolio allocation, introduced by Lopez de Prado (2016), applies graph theory and machine learning to build a diversified portfolio. Like the traditional risk-based allocation methods, HRP is also a function of the estimate of the covariance matrix; however, it does not require its invertibility. In this paper, we first study the impact of covariance misspecification on the performance of the different allocation methods. Next, we study, under an appropriate covariance forecast model, whether the machine learning based HRP outperforms the traditional risk-based portfolios. For our analysis, we use the test for superior predictive ability on out-of-sample portfolio performance to determine whether the observed excess performance is significant or occurred by chance. We find that when the covariance estimates are crude, inverse volatility weighted portfolios are more robust, followed by the machine learning-based portfolios. Minimum variance and maximum diversification are most sensitive to covariance misspecification. HRP occupies the middle ground: it is less sensitive to covariance misspecification than the minimum variance or maximum diversification portfolios, while it is not as robust as the inverse volatility weighted portfolio. We also study the impact of different rebalancing horizons and how the portfolios compare against a market-capitalization weighted portfolio.","PeriodicalId":170198,"journal":{"name":"ERN: Forecasting Techniques (Topic)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130644519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ensemble Models in Forecasting Financial Markets","authors":"Andreas S. Karathanasopoulos, M. Sovan, Chia Chun Lo, Adam Zaremba, Mohammed Osman","doi":"10.21314/JCF.2019.374","DOIUrl":"https://doi.org/10.21314/JCF.2019.374","url":null,"abstract":"In this paper, we study an evolutionary framework for the optimization of various types of neural network structures and parameters. Three different evolutionary algorithms – the genetic algorithm (GA), differential evolution (DE) and the particle swarm optimizer (PSO) – are developed to optimize the structure and the parameters of three different types of neural network: multilayer perceptrons (MLPs), recurrent neural networks (RNNs) and radial basis function (RBF) neural networks. The motivation of this project is to present novel methodologies for the task of forecasting and trading financial indexes. More specifically, the trading and statistical performance of all models is investigated in a forecast simulation of the SPY and the QQQ exchange-traded funds (ETFs) time series over the period January 2006 to December 2015, using the last three years as out-of-sample testing. As it turns out, the RBF-PSO, RBF-DE and RBF-GA ensemble methodologies do remarkably well and outperform all of the other models.","PeriodicalId":170198,"journal":{"name":"ERN: Forecasting Techniques (Topic)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124377505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}