Subquadratic submodular function minimization

Deeparnab Chakrabarty, Y. Lee, Aaron Sidford, Sam Chiu-wai Wong
{"title":"Subquadratic submodular function minimization","authors":"Deeparnab Chakrabarty, Y. Lee, Aaron Sidford, Sam Chiu-wai Wong","doi":"10.1145/3055399.3055419","DOIUrl":null,"url":null,"abstract":"Submodular function minimization (SFM) is a fundamental discrete optimization problem which generalizes many well known problems, has applications in various fields, and can be solved in polynomial time. Owing to applications in computer vision and machine learning, fast SFM algorithms are highly desirable. The current fastest algorithms [Lee, Sidford, Wong, 2015] run in O(n2lognM· EO + n3logO(1)nM) time and O(n3log2n· EO +n4logO(1)n)time respectively, where M is the largest absolute value of the function (assuming the range is integers) and is the time taken to evaluate the function on any set. Although the best known lower bound on the query complexity is only Ω(n) [Harvey, 2008], the current shortest non-deterministic proof [Cunningham, 1985] certifying the optimum value of a function requires Ω(n2) function evaluations. The main contribution of this paper are subquadratic SFM algorithms. For integer-valued submodular functions, we give an SFM algorithm which runs in O(nM3logn· EO) time giving the first nearly linear time algorithm in any known regime. For real-valued submodular functions with range in [-1,1], we give an algorithm which in Õ(n5/3· EO/ε2) time returns an ε-additive approximate solution. At the heart of it, our algorithms are projected stochastic subgradient descent methods on the Lovasz extension of submodular functions where we crucially exploit submodularity and data structures to obtain fast, i.e. sublinear time, subgradient updates. The latter is crucial for beating the n2 bound - we show that algorithms which access only subgradients of the Lovasz extension, and these include the empirically fast Fujishige-Wolfe heuristic [Fujishige, 1980; Wolfe, 1976]","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2016-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"39","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3055399.3055419","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 39

Abstract

Submodular function minimization (SFM) is a fundamental discrete optimization problem which generalizes many well-known problems, has applications in various fields, and can be solved in polynomial time. Owing to applications in computer vision and machine learning, fast SFM algorithms are highly desirable. The current fastest algorithms [Lee, Sidford, Wong, 2015] run in O(n^2 log(nM) · EO + n^3 log^{O(1)}(nM)) time and O(n^3 log^2 n · EO + n^4 log^{O(1)} n) time respectively, where M is the largest absolute value of the function (assuming the range is integers) and EO is the time taken to evaluate the function on any set. Although the best known lower bound on the query complexity is only Ω(n) [Harvey, 2008], the current shortest non-deterministic proof [Cunningham, 1985] certifying the optimum value of a function requires Ω(n^2) function evaluations. The main contributions of this paper are subquadratic SFM algorithms. For integer-valued submodular functions, we give an SFM algorithm which runs in O(nM^3 log n · EO) time, giving the first nearly linear time algorithm in any known regime. For real-valued submodular functions with range in [-1,1], we give an algorithm which in Õ(n^{5/3} · EO / ε^2) time returns an ε-additive approximate solution. At their heart, our algorithms are projected stochastic subgradient descent methods on the Lovász extension of submodular functions, where we crucially exploit submodularity and data structures to obtain fast, i.e. sublinear-time, subgradient updates. The latter is crucial for beating the n^2 bound: we show that algorithms which access only subgradients of the Lovász extension, and these include the empirically fast Fujishige-Wolfe heuristic [Fujishige, 1980; Wolfe, 1976], …
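
The core technique named in the abstract, projected subgradient descent on the Lovász extension, can be illustrated with a minimal sketch. The Python code below is not the paper's algorithm: it computes full subgradients in O(n · EO) time via the standard greedy ordering, uses a fixed step size, and runs on a hypothetical toy objective f; the paper's contribution is replacing these full subgradients with stochastic, sublinear-time updates maintained by data structures. The sketch only shows the underlying method: descend on the Lovász extension over the box [0,1]^n, then round a fractional point to a set by thresholding.

import numpy as np

def lovasz_subgradient(f, x):
    # Subgradient of the Lovasz extension of f at x in [0,1]^n.
    # Sort coordinates in decreasing order of x; component i of the subgradient
    # is the marginal gain f(S + i) - f(S) along that order (n + 1 oracle calls).
    n = len(x)
    order = np.argsort(-x)
    g = np.zeros(n)
    S = set()
    prev = f(S)
    for i in order:
        S.add(int(i))
        cur = f(S)
        g[i] = cur - prev
        prev = cur
    return g

def sfm_by_subgradient_descent(f, n, iters=2000, step=0.05):
    # Projected subgradient descent on the Lovasz extension over [0,1]^n,
    # followed by threshold rounding of the averaged iterate (level sets of a
    # minimizer of the Lovasz extension are minimizers of f itself).
    x = np.full(n, 0.5)
    avg = np.zeros(n)
    for _ in range(iters):
        g = lovasz_subgradient(f, x)
        x = np.clip(x - step * g, 0.0, 1.0)   # Euclidean projection onto the box
        avg += x / iters
    best_set, best_val = set(), f(set())       # always consider the empty set
    for t in sorted(set(avg.tolist())):
        S = {i for i in range(n) if avg[i] >= t}
        if f(S) < best_val:
            best_set, best_val = S, f(S)
    return best_set, best_val

# Hypothetical toy instance: a concave-of-cardinality term (submodular) plus a
# modular bonus for element 3; the unique minimizer is S = {3} with value -0.5.
def f(S):
    return len(S) ** 0.5 - (1.5 if 3 in S else 0.0)

print(sfm_by_subgradient_descent(f, 4))   # expected: ({3}, -0.5)

Each full subgradient above costs n + 1 evaluation-oracle calls, which is exactly the cost the paper avoids; everything else (projection onto the box, averaging, threshold rounding) is standard for this convex relaxation of SFM.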