{"title":"优化停止问题的深度原始双 BSDE 方法","authors":"Jiefei Yang, Guanglian Li","doi":"arxiv-2409.06937","DOIUrl":null,"url":null,"abstract":"We present a new deep primal-dual backward stochastic differential equation\nframework based on stopping time iteration to solve optimal stopping problems.\nA novel loss function is proposed to learn the conditional expectation, which\nconsists of subnetwork parameterization of a continuation value and spatial\ngradients from present up to the stopping time. Notable features of the method\ninclude: (i) The martingale part in the loss function reduces the variance of\nstochastic gradients, which facilitates the training of the neural networks as\nwell as alleviates the error propagation of value function approximation; (ii)\nthis martingale approximates the martingale in the Doob-Meyer decomposition,\nand thus leads to a true upper bound for the optimal value in a non-nested\nMonte Carlo way. We test the proposed method in American option pricing\nproblems, where the spatial gradient network yields the hedging ratio directly.","PeriodicalId":501294,"journal":{"name":"arXiv - QuantFin - Computational Finance","volume":"12 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A deep primal-dual BSDE method for optimal stopping problems\",\"authors\":\"Jiefei Yang, Guanglian Li\",\"doi\":\"arxiv-2409.06937\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We present a new deep primal-dual backward stochastic differential equation\\nframework based on stopping time iteration to solve optimal stopping problems.\\nA novel loss function is proposed to learn the conditional expectation, which\\nconsists of subnetwork parameterization of a continuation value and spatial\\ngradients from present up to the stopping time. Notable features of the method\\ninclude: (i) The martingale part in the loss function reduces the variance of\\nstochastic gradients, which facilitates the training of the neural networks as\\nwell as alleviates the error propagation of value function approximation; (ii)\\nthis martingale approximates the martingale in the Doob-Meyer decomposition,\\nand thus leads to a true upper bound for the optimal value in a non-nested\\nMonte Carlo way. 
We test the proposed method in American option pricing\\nproblems, where the spatial gradient network yields the hedging ratio directly.\",\"PeriodicalId\":501294,\"journal\":{\"name\":\"arXiv - QuantFin - Computational Finance\",\"volume\":\"12 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - QuantFin - Computational Finance\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.06937\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuantFin - Computational Finance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.06937","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A deep primal-dual BSDE method for optimal stopping problems
We present a new deep primal-dual backward stochastic differential equation (BSDE) framework, based on stopping-time iteration, for solving optimal stopping problems. A novel loss function is proposed to learn the conditional expectation; it consists of a subnetwork parameterization of the continuation value together with the spatial gradients accumulated from the present time up to the stopping time.
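The abstract does not spell out the loss. A schematic version consistent with this description, in which all notation (the continuation-value network c_theta, the gradient network z_theta, the stopping time tau, the payoff g) is assumed rather than taken from the paper, might read:

% Schematic per-step loss (notation assumed, not from the paper):
% $c_\theta$ parameterizes the continuation value, $z_\theta$ the spatial
% gradients, $\tau$ the current stopping time, $g$ the payoff.
\[
  \mathcal{L}_n(\theta)
  = \mathbb{E}\!\left[\left(
      g\bigl(X_{\tau}\bigr)
      - c_\theta\bigl(X_{t_n}\bigr)
      - \sum_{k=n}^{\tau-1} z_\theta\bigl(t_k, X_{t_k}\bigr)^{\top} \Delta W_k
    \right)^{2}\right],
\]

where the stochastic sum is the martingale part referred to in feature (i) below.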
Notable features of the method include: (i) the martingale part of the loss function reduces the variance of the stochastic gradients, which facilitates training of the neural networks and alleviates error propagation in the value-function approximation; (ii) this martingale approximates the martingale in the Doob-Meyer decomposition, and therefore yields a true upper bound on the optimal value via non-nested Monte Carlo. We test the proposed method on American option pricing problems, where the spatial-gradient network directly yields the hedging ratio.
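The non-nested upper bound in (ii) is, in spirit, a dual bound of the Rogers / Haugh-Kogan type: once an approximate Doob-Meyer martingale is available on the simulated paths, the bound is a plain pathwise maximum and requires no inner simulation. A minimal sketch, assuming discounted payoffs and learned martingale increments have already been evaluated on the same paths (the function name and array layout are illustrative assumptions, not the paper's API):

import numpy as np

def dual_upper_bound(payoff, martingale_increments):
    # Sketch of a Rogers-type dual bound; names and layout are assumptions.
    # payoff: (paths, steps + 1) discounted exercise values g(X_{t_k}).
    # martingale_increments: (paths, steps) approximate Doob-Meyer
    # increments, e.g. z_theta(t_k, X_{t_k}) . dW_k from the trained network.
    n_paths, n_steps = martingale_increments.shape
    # Running martingale M with M_0 = 0 along each simulated path.
    M = np.concatenate(
        [np.zeros((n_paths, 1)), np.cumsum(martingale_increments, axis=1)],
        axis=1,
    )
    # Pathwise maximum of payoff minus martingale, averaged over paths.
    return np.max(payoff - M, axis=1).mean()

Because the martingale is evaluated on the same paths used for the outer expectation, no inner resimulation of conditional expectations is needed, which is what makes the bound non-nested; by the dual formulation, the estimator is biased high for any martingale, so it remains a true upper bound even when the network is imperfect.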