Overfitting in portfolio optimization

Impact factor 0.4 · CAS Tier 4 (Economics) · JCR Q4, Business, Finance
Matteo Maggiolo, Oleg Szehr
Journal of Risk Model Validation, Vol. 48, No. 1 (2023)
DOI: 10.21314/jrmv.2023.005
Published: 2023-01-01
Citations: 0

Abstract

In this paper we measure the out-of-sample performance of sample-based rolling-window neural network (NN) portfolio optimization strategies. We show that if NN strategies are evaluated using the holdout (train–test split) technique, then high out-of-sample performance scores can commonly be achieved. Although this phenomenon is often employed to validate NN portfolio models, we demonstrate that it constitutes a “fata morgana” that arises due to a particular vulnerability of portfolio optimization to overfitting. To assess whether overfitting is present, we set up a dedicated methodology based on combinatorially symmetric cross-validation that involves performance measurement across different holdout periods and varying portfolio compositions (the random-asset-stabilized combinatorially symmetric cross-validation methodology). We compare a variety of NN strategies with classical extensions of the mean–variance model and the 1 / N strategy. We find that it is by no means trivial to outperform the classical models. While certain NN strategies outperform the 1 / N benchmark, of the almost 30 models that we evaluate explicitly, none is consistently better than the short-sale constrained minimum-variance rule in terms of the Sharpe ratio or the certainty equivalent of returns.
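The combinatorially symmetric cross-validation idea in the abstract can be sketched as follows: instead of scoring a strategy on a single holdout split, the sample is cut into blocks and the strategy is scored on every symmetric train/test partition of those blocks, so a result that only holds on one lucky split is exposed. This is a minimal illustration with synthetic data and an equal-weight (1/N) portfolio; the block count, data, and function names are illustrative assumptions, not the paper's implementation.

```python
"""Hedged sketch of combinatorially symmetric cross-validation (CSCV).
Data and strategy are synthetic/illustrative, not from the paper."""
from itertools import combinations
import numpy as np

def cscv_splits(n_blocks):
    """Yield all symmetric (train, test) partitions: half the blocks
    train, the complementary half test."""
    blocks = range(n_blocks)
    for train in combinations(blocks, n_blocks // 2):
        test = tuple(b for b in blocks if b not in train)
        yield train, test

def sharpe(returns):
    """Annualized Sharpe ratio of a daily return series (risk-free rate 0)."""
    return np.sqrt(252) * returns.mean() / returns.std(ddof=1)

rng = np.random.default_rng(0)
daily = rng.normal(0.0003, 0.01, size=(1000, 5))  # 1000 days, 5 assets
blocks = np.array_split(np.arange(1000), 8)       # 8 contiguous blocks

test_sharpes = []
for train_idx, test_idx in cscv_splits(8):
    test_days = np.concatenate([blocks[b] for b in test_idx])
    # Equal-weight (1/N) portfolio scored on the held-out blocks; a fitted
    # NN strategy would be trained on train_idx and scored the same way.
    port = daily[test_days].mean(axis=1)
    test_sharpes.append(sharpe(port))

print(len(test_sharpes))  # C(8, 4) = 70 symmetric splits
```

A strategy whose out-of-sample Sharpe ratio looks good on one holdout split but degrades across the distribution of all 70 symmetric splits is the kind of "fata morgana" the paper flags as overfitting.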
Source journal metrics
CiteScore: 1.20
Self-citation rate: 28.60%
Annual publications: 8
Journal description: As monetary institutions rely greatly on economic and financial models for a wide array of applications, model validation has become progressively inventive within the field of risk. The Journal of Risk Model Validation focuses on the implementation and validation of risk models, and aims to provide a greater understanding of key issues including the empirical evaluation of existing models, pitfalls in model validation and the development of new methods. We also publish papers on back-testing. Our main field of application is in credit risk modelling, but we are happy to consider any issues of risk model validation for any financial asset class. The Journal of Risk Model Validation considers submissions in the form of research papers on topics including, but not limited to:
- Empirical model evaluation studies
- Backtesting studies
- Stress-testing studies
- New methods of model validation/backtesting/stress-testing
- Best practices in model development, deployment, production and maintenance
- Pitfalls in model validation techniques (all types of risk, forecasting, pricing and rating)