A Functional Model Method for Nonconvex Nonsmooth Conditional Stochastic Optimization

Impact Factor: 2.6 · CAS Zone 1 (Mathematics) · JCR Q1 (Mathematics, Applied)
Andrzej Ruszczyński, Shangzhe Yang
{"title":"A Functional Model Method for Nonconvex Nonsmooth Conditional Stochastic Optimization","authors":"Andrzej Ruszczyński, Shangzhe Yang","doi":"10.1137/23m1617965","DOIUrl":null,"url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 3, Page 3064-3087, September 2024. <br/> Abstract. We consider stochastic optimization problems involving an expected value of a nonlinear function of a base random vector and a conditional expectation of another function depending on the base random vector, a dependent random vector, and the decision variables. We call such problems conditional stochastic optimization problems. They arise in many applications, such as uplift modeling, reinforcement learning, and contextual optimization. We propose a specialized single time-scale stochastic method for nonconvex constrained conditional stochastic optimization problems with a Lipschitz smooth outer function and a generalized differentiable inner function. In the method, we approximate the inner conditional expectation with a rich parametric model whose mean squared error satisfies a stochastic version of a Łojasiewicz condition. The model is used by an inner learning algorithm. The main feature of our approach is that unbiased stochastic estimates of the directions used by the method can be generated with one observation from the joint distribution per iteration, which makes it applicable to real-time learning. The directions, however, are not gradients or subgradients of any overall objective function. We prove the convergence of the method with probability one, using the method of differential inclusions and a specially designed Lyapunov function, involving a stochastic generalization of the Bregman distance. Finally, a numerical illustration demonstrates the viability of our approach.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":null,"pages":null},"PeriodicalIF":2.6000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIAM Journal on Optimization","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1137/23m1617965","RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 0

Abstract

SIAM Journal on Optimization, Volume 34, Issue 3, Page 3064-3087, September 2024.
Abstract. We consider stochastic optimization problems involving an expected value of a nonlinear function of a base random vector and a conditional expectation of another function depending on the base random vector, a dependent random vector, and the decision variables. We call such problems conditional stochastic optimization problems. They arise in many applications, such as uplift modeling, reinforcement learning, and contextual optimization. We propose a specialized single time-scale stochastic method for nonconvex constrained conditional stochastic optimization problems with a Lipschitz smooth outer function and a generalized differentiable inner function. In the method, we approximate the inner conditional expectation with a rich parametric model whose mean squared error satisfies a stochastic version of a Łojasiewicz condition. The model is used by an inner learning algorithm. The main feature of our approach is that unbiased stochastic estimates of the directions used by the method can be generated with one observation from the joint distribution per iteration, which makes it applicable to real-time learning. The directions, however, are not gradients or subgradients of any overall objective function. We prove the convergence of the method with probability one, using the method of differential inclusions and a specially designed Lyapunov function, involving a stochastic generalization of the Bregman distance. Finally, a numerical illustration demonstrates the viability of our approach.
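
In symbols, the problem class described in the abstract can be read as follows. This is a sketch for orientation only: the notation (base random vector ξ, dependent random vector η, decision vector x, feasible set X) is assumed here and need not match the paper's.

    \min_{x \in X} \; \mathbb{E}\Big[ f\big( \xi,\; \mathbb{E}[\, g(x, \xi, \eta) \mid \xi \,] \big) \Big]

Here f is the Lipschitz smooth outer function and g the generalized differentiable inner function; the inner conditional expectation E[g(x, ξ, η) | ξ] is the quantity the method approximates with a parametric model.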
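The one-observation-per-iteration, single time-scale structure can be illustrated with a toy Python sketch. Everything below is an assumption for illustration: the toy problem, the linear-in-parameters model h(θ, ξ), and the chain-rule-style direction are ours, not the paper's functional model method or its actual directions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy problem (illustrative, not from the paper):
    # base vector xi in R^2, dependent eta = <a, xi> + noise,
    # inner function g(x, xi, eta) = eta - <x, xi> (scalar),
    # outer function f(xi, u) = u^2, so the objective is
    # E[ (E[g(x, xi, eta) | xi])^2 ] = ||a - x||^2, minimized at x = a.
    a = np.array([1.0, -2.0])

    def sample():
        xi = rng.normal(size=2)
        eta = a @ xi + rng.normal()
        return xi, eta

    x = np.zeros(2)       # decision variables
    theta = np.zeros(2)   # parameters of the model h(theta, xi) = <theta, xi>

    alpha, beta = 1e-2, 1e-1  # constant stepsizes on a single time scale

    for _ in range(20000):
        xi, eta = sample()                # one observation of the joint distribution
        g = eta - x @ xi                  # inner-function value at this observation
        h = theta @ xi                    # model's prediction of E[g | xi]
        theta += beta * (g - h) * xi      # SGD step on the model's mean squared error
        # direction built from the model output: E[d] = -2*theta, which
        # matches the objective's gradient -2*(a - x) once theta tracks a - x
        d = 2.0 * h * (-xi)
        x -= alpha * d

    print("x     ->", x)      # approaches a = [1, -2]
    print("theta ->", theta)  # tracks a - x, so it approaches 0

Note how both the model parameters theta and the decision x are updated from the same single observation in every iteration, with no inner loop: this is the single time-scale, real-time-learning aspect the abstract highlights.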
Source Journal

SIAM Journal on Optimization (Mathematics - Applied Mathematics)
CiteScore: 5.30
Self-citation rate: 9.70%
Articles published: 101
Review time: 6-12 weeks
Journal Description

The SIAM Journal on Optimization contains research articles on the theory and practice of optimization. The areas addressed include linear and quadratic programming, convex programming, nonlinear programming, complementarity problems, stochastic optimization, combinatorial optimization, integer programming, and convex, nonsmooth and variational analysis. Contributions may emphasize optimization theory, algorithms, software, computational practice, applications, or the links between these subjects.