Improved accuracy of an analytical approximation for option pricing under stochastic volatility models using deep learning techniques

IF 2.9 · CAS Tier 2 (Mathematics) · JCR Q1 (Mathematics, Applied)
Donghyun Kim, Jeonggyu Huh, Ji-Hun Yoon
Computers & Mathematics with Applications, Volume 187 (2025), Pages 150-165
Published: 2025-04-02 · DOI: 10.1016/j.camwa.2025.03.029
Full text: https://www.sciencedirect.com/science/article/pii/S0898122125001245
Citations: 0

Abstract

This paper addresses the challenge of pricing options under stochastic volatility (SV) models, where explicit formulae are often unavailable and parameter estimation requires extensive numerical simulations. Traditional approaches typically either rely on large volumes of historical (option) data (data-driven methods) or generate synthetic prices across wide parameter grids (model-driven methods). In both cases, the scale of data demands can be prohibitively high. We propose an alternative strategy that trains a neural network on the residuals between a fast but approximate pricing formula and numerically generated option prices, rather than learning the full pricing function directly. Focusing on these smaller, smoother residuals notably reduces the complexity of the learning task and lowers data requirements. We further demonstrate theoretically that the Rademacher complexity of the residual function class is significantly smaller, thereby improving generalization with fewer samples. Numerical experiments on fast mean-reverting SV models show that our residual-learning framework achieves accuracy comparable to baseline networks but uses only about one-tenth the training data. These findings highlight the potential of residual-based neural approaches to deliver efficient, accurate pricing and facilitate practical calibration of advanced SV models.
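The residual-learning idea described above can be sketched in a few lines. The sketch below is illustrative and is not the authors' implementation: the Black-Scholes formula stands in for the fast approximate pricing formula, a volatility-perturbed Black-Scholes price stands in for the numerically generated SV price, and ridge regression on polynomial features stands in for the neural network. All function names and parameter values here are hypothetical.

```python
import math
import numpy as np

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price (stand-in for the fast approximate formula)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def true_price(S, sigma):
    # Hypothetical "ground truth": a volatility-perturbed price standing in
    # for a numerically generated SV model price.
    return bs_call(S, 1.0, 1.0, 0.02, sigma * (1.0 + 0.1 * math.sin(3 * S)))

def approx_price(S, sigma):
    return bs_call(S, 1.0, 1.0, 0.02, sigma)

rng = np.random.default_rng(0)
n = 200                                   # deliberately small training set
S = rng.uniform(0.8, 1.2, n)              # spot / strike (moneyness)
sig = rng.uniform(0.1, 0.4, n)            # base volatility
y_true = np.array([true_price(s, v) for s, v in zip(S, sig)])
y_approx = np.array([approx_price(s, v) for s, v in zip(S, sig)])
resid = y_true - y_approx                 # the small, smooth learning target

# Ridge regression on polynomial features as a stand-in for the network.
X = np.column_stack([np.ones(n), S, sig, S * sig,
                     S**2, sig**2, S**2 * sig, S * sig**2])
w = np.linalg.solve(X.T @ X + 1e-8 * np.eye(X.shape[1]), X.T @ resid)

pred = y_approx + X @ w                   # approximation + learned residual
direct_err = np.sqrt(np.mean((y_true - y_approx) ** 2))  # approximation alone
resid_err = np.sqrt(np.mean((y_true - pred) ** 2))       # with residual model
print(direct_err, resid_err)  # residual learning should tighten the fit
```

Because the residual is small and smooth relative to the full pricing surface, even this low-capacity regressor corrects most of the approximation error; the paper's point is that a neural network trained on the same residual target needs far less data than one trained on the full price.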
Source journal
Computers & Mathematics with Applications (Engineering & Technology – Computer Science, Interdisciplinary Applications)
CiteScore: 5.10
Self-citation rate: 10.30%
Articles per year: 396
Review time: 9.9 weeks
Journal description: Computers & Mathematics with Applications provides a medium of exchange for those engaged in fields contributing to building successful simulations for science and engineering using Partial Differential Equations (PDEs).