A New Minimax Theorem for Randomized Algorithms

IF 2.3 · CAS Region 2 (Computer Science) · Q2 Computer Science, Hardware & Architecture
Journal of the ACM · Pub Date: 2023-10-18 · DOI: 10.1145/3626514
Shalev Ben-David, Eric Blais
Citations: 1

Abstract

The celebrated minimax principle of Yao (1977) says that for any Boolean-valued function f with finite domain, there is a distribution μ over the domain of f such that computing f to error ϵ against inputs from μ is just as hard as computing f to error ϵ on worst-case inputs. Notably, however, the distribution μ depends on the target error level ϵ: the hard distribution which is tight for bounded error might be trivial to solve to small bias, and the hard distribution which is tight for a small bias level might be far from tight for bounded error levels. In this work, we introduce a new type of minimax theorem which can provide a hard distribution μ that works for all bias levels at once. We show that this works for randomized query complexity, randomized communication complexity, some randomized circuit models, quantum query and communication complexities, approximate polynomial degree, and approximate logrank. We also prove an improved version of Impagliazzo’s hardcore lemma. Our proofs rely on two innovations over the classical approach of using Von Neumann’s minimax theorem or linear programming duality. First, we use Sion’s minimax theorem to prove a minimax theorem for ratios of bilinear functions representing the cost and score of algorithms. Second, we introduce a new way to analyze low-bias randomized algorithms by viewing them as “forecasting algorithms” evaluated by a certain proper scoring rule. The expected score of the forecasting version of a randomized algorithm appears to be a more fine-grained way of analyzing the bias of the algorithm. We show that such expected scores have many elegant mathematical properties: for example, they can be amplified linearly instead of quadratically. We anticipate forecasting algorithms will find use in future work in which a fine-grained analysis of small-bias algorithms is required.
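The abstract's second innovation views a randomized algorithm as a "forecasting algorithm" that outputs a probability estimate for f(x) and is evaluated by a proper scoring rule. The sketch below illustrates the defining property of a proper scoring rule: reporting the true probability maximizes the expected score. The negated Brier score here is a standard textbook choice picked only for illustration; it is an assumption and not necessarily the specific scoring rule used in the paper.

```python
# Illustration: properness of a scoring rule for probability forecasts.
# The (negated) Brier score is used as a stand-in proper scoring rule.

def brier_score(p, outcome):
    """Negated Brier score: higher is better; outcome is 0 or 1."""
    return -(p - outcome) ** 2

def expected_score(p, q):
    """Expected score of forecasting p when the true P(outcome = 1) is q."""
    return q * brier_score(p, 1) + (1 - q) * brier_score(p, 0)

def best_forecast(q, grid=1001):
    """Maximize the expected score over a grid of forecasts p in [0, 1]."""
    return max((i / (grid - 1) for i in range(grid)),
               key=lambda p: expected_score(p, q))

# Properness: the truthful forecast p = q is optimal on the grid.
for q in (0.1, 0.5, 0.85):
    assert abs(best_forecast(q) - q) < 1e-3
```

Under such a rule, the expected score of a forecaster is a finer-grained measure of its quality than raw bias, which is the property the paper exploits for its linear (rather than quadratic) amplification.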