{"title":"Overfitting in portfolio optimization","authors":"Matteo Maggiolo, Oleg Szehr","doi":"10.21314/jrmv.2023.005","DOIUrl":null,"url":null,"abstract":"In this paper we measure the out-of-sample performance of sample-based rolling-window neural network (NN) portfolio optimization strategies. We show that if NN strategies are evaluated using the holdout (train–test split) technique, then high out-of-sample performance scores can commonly be achieved. Although this phenomenon is often employed to validate NN portfolio models, we demonstrate that it constitutes a “fata morgana” that arises due to a particular vulnerability of portfolio optimization to overfitting. To assess whether overfitting is present, we set up a dedicated methodology based on combinatorially symmetric cross-validation that involves performance measurement across different holdout periods and varying portfolio compositions (the random-asset-stabilized combinatorially symmetric cross-validation methodology). We compare a variety of NN strategies with classical extensions of the mean–variance model and the 1 / N strategy. We find that it is by no means trivial to outperform the classical models. While certain NN strategies outperform the 1 / N benchmark, of the almost 30 models that we evaluate explicitly, none is consistently better than the short-sale constrained minimum-variance rule in terms of the Sharpe ratio or the certainty equivalent of returns.","PeriodicalId":43447,"journal":{"name":"Journal of Risk Model Validation","volume":"48 1","pages":"0"},"PeriodicalIF":0.4000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Risk Model Validation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.21314/jrmv.2023.005","RegionNum":4,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"BUSINESS, FINANCE","Score":null,"Total":0}
Citations: 0
Abstract
In this paper we measure the out-of-sample performance of sample-based rolling-window neural network (NN) portfolio optimization strategies. We show that if NN strategies are evaluated using the holdout (train–test split) technique, then high out-of-sample performance scores can commonly be achieved. Although this phenomenon is often employed to validate NN portfolio models, we demonstrate that it constitutes a “fata morgana” that arises due to a particular vulnerability of portfolio optimization to overfitting. To assess whether overfitting is present, we set up a dedicated methodology based on combinatorially symmetric cross-validation that involves performance measurement across different holdout periods and varying portfolio compositions (the random-asset-stabilized combinatorially symmetric cross-validation methodology). We compare a variety of NN strategies with classical extensions of the mean–variance model and the 1/N strategy. We find that it is by no means trivial to outperform the classical models. While certain NN strategies outperform the 1/N benchmark, of the almost 30 models that we evaluate explicitly, none is consistently better than the short-sale-constrained minimum-variance rule in terms of the Sharpe ratio or the certainty equivalent of returns.
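The abstract only sketches the evaluation protocol. For intuition, here is a minimal Python sketch of plain combinatorially symmetric cross-validation (CSCV) in the spirit of Bailey et al.'s probability-of-backtest-overfitting (PBO) test; the paper's random-asset-stabilized variant additionally repeats the procedure over varied portfolio compositions, which is not shown here. The function name cscv_pbo, the block count and the Sharpe-style score are illustrative assumptions, not the authors' implementation.

```python
import itertools
import numpy as np

def cscv_pbo(returns, n_blocks=8):
    """Plain CSCV estimate of the probability of backtest overfitting.

    returns  : (T, M) array of per-period returns for M candidate strategies.
    n_blocks : even number of contiguous blocks the sample is cut into.
    """
    blocks = np.array_split(returns, n_blocks)  # contiguous sub-samples
    half = n_blocks // 2
    n_strats = returns.shape[1]

    def sharpe(r):
        # Sharpe-style score per strategy (column) over a sub-sample.
        return r.mean(axis=0) / (r.std(axis=0) + 1e-12)

    logits = []
    # Every way of picking half the blocks as "in-sample"; the
    # complementary half serves as the matched out-of-sample holdout.
    for in_idx in itertools.combinations(range(n_blocks), half):
        out_idx = [i for i in range(n_blocks) if i not in in_idx]
        r_in = np.vstack([blocks[i] for i in in_idx])
        r_out = np.vstack([blocks[i] for i in out_idx])
        best = np.argmax(sharpe(r_in))  # in-sample winner
        # Relative out-of-sample rank of the in-sample winner, in (0, 1).
        ranks = np.argsort(np.argsort(sharpe(r_out)))
        w = (ranks[best] + 1) / (n_strats + 1)
        logits.append(np.log(w / (1 - w)))
    # PBO: share of splits where the in-sample winner ends up below
    # the out-of-sample median (logit < 0).
    return np.mean(np.array(logits) < 0)
```

On pure-noise input such as cscv_pbo(np.random.randn(1000, 30) * 0.01), the estimate hovers near 0.5, meaning in-sample winners carry no out-of-sample edge: the "fata morgana" the paper warns about. Values well below 0.5 suggest that strategy selection is robust to overfitting.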
About the journal
As financial institutions rely heavily on economic and financial models for a wide array of applications, model validation has become increasingly important in the field of risk management. The Journal of Risk Model Validation focuses on the implementation and validation of risk models, and aims to provide a greater understanding of key issues, including the empirical evaluation of existing models, pitfalls in model validation and the development of new methods. We also publish papers on backtesting. Our main field of application is credit risk modelling, but we are happy to consider any issue of risk model validation for any financial asset class. The Journal of Risk Model Validation considers submissions in the form of research papers on topics including, but not limited to:
- Empirical model evaluation studies
- Backtesting studies
- Stress-testing studies
- New methods of model validation/backtesting/stress-testing
- Best practices in model development, deployment, production and maintenance
- Pitfalls in model validation techniques (all types of risk, forecasting, pricing and rating)