A backward differential deep learning-based algorithm for solving high-dimensional nonlinear backward stochastic differential equations
Lorenc Kapllani, Long Teng
arXiv:2404.08456, arXiv - QuantFin - Computational Finance, 2024-04-12
In this work, we propose a novel backward differential deep learning-based
algorithm for solving high-dimensional nonlinear backward stochastic
differential equations (BSDEs), where the deep neural network (DNN) models are
trained not only on the inputs and labels but also on the differentials of the
corresponding labels. This is motivated by the fact that differential deep
learning can provide an efficient approximation of the labels and their
derivatives with respect to inputs. The BSDEs are reformulated as differential
deep learning problems by using Malliavin calculus. The Malliavin derivatives
of the solution to a BSDE themselves satisfy another BSDE, thus giving rise to a
system of BSDEs. This formulation requires the estimation of the solution, its
gradient, and the Hessian matrix, represented by the triple of processes
$\left(Y, Z, \Gamma\right).$ All the integrals within this system are
discretized using the Euler-Maruyama method. Subsequently, DNNs are employed
to approximate the triple of these unknown processes. The DNN parameters are
optimized backward in time at each step by minimizing a differential
learning-type loss function, defined as a weighted sum of the dynamics of the
discretized BSDE system, with the first term capturing the dynamics of the
process $Y$ and the second those of the process $Z$. An error analysis is carried out to
show the convergence of the proposed algorithm. Various numerical experiments
up to $50$ dimensions are provided to demonstrate its high efficiency. Both
theoretically and numerically, it is shown that our proposed scheme is more
efficient than other contemporary deep learning-based methodologies,
especially in the computation of the process $\Gamma$.
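The loss described above can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the DNNs for $(Y, Z, \Gamma)$ are replaced by simple linear parametrizations, the driver `f`, the weight `omega`, and the "labels" at the next time step are hypothetical placeholders, and the drift term of the $Z$-dynamics is omitted for brevity. It only shows the shape of one backward step: Euler-Maruyama residuals of the discretized dynamics of $Y$ and $Z$, combined into a weighted sum.

```python
import numpy as np

rng = np.random.default_rng(0)
d, batch, dt = 5, 256, 0.01          # dimension, sample paths, step size

def f(t, x, y, z):
    """Hypothetical driver of the BSDE; a simple linear example."""
    return -0.5 * y + 0.1 * z.sum(axis=1, keepdims=True)

# Stand-ins for the trained networks at time t_n: linear maps in x.
wy = rng.normal(size=(d, 1))         # "network" for Y
wz = rng.normal(size=(d, d))         # "network" for Z
gamma = 0.1 * np.eye(d)              # stand-in for the Hessian process Gamma

x  = rng.normal(size=(batch, d))     # samples of X_{t_n}
dW = np.sqrt(dt) * rng.normal(size=(batch, d))   # Brownian increments

y, z = x @ wy, x @ wz                # approximations of Y_{t_n}, Z_{t_n}

# "Labels" at t_{n+1}; in the backward scheme these would come from the
# network already trained at the next time step (placeholders here).
y_next = 0.99 * (x @ wy)
z_next = 0.99 * (x @ wz)

# Residual of the Y-dynamics:  Y_{n+1} ~ Y_n - f dt + Z_n dW
res_y = y_next - (y - f(0.0, x, y, z) * dt
                  + (z * dW).sum(axis=1, keepdims=True))
# Residual of the Z-dynamics (the Malliavin BSDE), where Gamma plays the
# role for Z that Z plays for Y; drift term omitted in this sketch.
res_z = z_next - (z + dW @ gamma)

omega = 0.5                          # hypothetical weight between the terms
loss = omega * np.mean(res_y**2) + (1 - omega) * np.mean(res_z**2)
```

In the full algorithm this scalar `loss` would be minimized over the network parameters at each time step, proceeding backward from the terminal condition.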