2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS): Latest Publications

Polynomial Calculus Space and Resolution Width
2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS) Pub Date : 2019-11-01 DOI: 10.1109/FOCS.2019.00081
Nicola Galesi, L. Kolodziejczyk, Neil Thapen
Abstract: We show that if a k-CNF requires width w to refute in resolution, then it requires space at least √w to refute in polynomial calculus, where the space of a polynomial calculus refutation is the number of monomials that must be kept in memory when working through the proof. This is the first analogue, in polynomial calculus, of Atserias and Dalmau's result lower-bounding clause space in resolution by resolution width. As a by-product of our new approach to space lower bounds, we give a simple proof of Bonacina's recent result that total space in resolution (the total number of variable occurrences that must be kept in memory) is lower-bounded by the width squared. As corollaries of the main result, we obtain some new lower bounds on the PCR space needed to refute specific formulas, as well as partial answers to some open problems about relations between space, size, and degree for polynomial calculus.
Citations: 7
Non-deterministic Quasi-Polynomial Time is Average-Case Hard for ACC Circuits
2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS) Pub Date : 2019-11-01 DOI: 10.1109/FOCS.2019.00079
Lijie Chen
Abstract: Following the seminal work of [Williams, J. ACM 2014], in a recent breakthrough, [Murray and Williams, STOC 2018] proved that NQP (non-deterministic quasi-polynomial time) does not have polynomial-size ACC^0 circuits. We strengthen the above lower bound to an average-case one, by proving that for all constants c, there is a language in NQP which is not (1/2 + 1/log^c n)-approximable by polynomial-size ACC^0 circuits. In fact, our lower bound holds for a larger circuit class: 2^(log^a n)-size ACC^0 circuits with a layer of threshold gates at the bottom, for all constants a. Our work also improves the average-case lower bound for NEXP against polynomial-size ACC^0 circuits by [Chen, Oliveira, and Santhanam, LATIN 2018]. Our new lower bound builds on several interesting components, including:
• Barrington's theorem and the existence of an NC^1-complete language which is random self-reducible.
• The sub-exponential witness-size lower bound for NE against ACC^0 and the conditional non-deterministic PRG construction in [Williams, SICOMP 2016].
• An "almost" almost-everywhere MA average-case lower bound (which strengthens the corresponding worst-case lower bound in [Murray and Williams, STOC 2018]).
• A PSPACE-complete language which is same-length checkable, error-correctable and also has some other nice reducibility properties, which builds on [Trevisan and Vadhan, Computational Complexity 2007]. Moreover, all its reducibility properties have corresponding low-depth non-adaptive oracle circuits.
Like other lower bounds proved via the "algorithmic approach", the only property of ACC^0 ∘ THR exploited by us is the existence of a non-trivial SAT algorithm for ACC^0 ∘ THR [Williams, STOC 2014]. Therefore, for any typical circuit class C, our results apply to it as well once the corresponding non-trivial SAT (in fact, GAP-UNSAT) algorithms are discovered.
Citations: 27
Noise Sensitivity on the p-Biased Hypercube
2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS) Pub Date : 2019-11-01 DOI: 10.1109/FOCS.2019.00075
Noam Lifshitz, Dor Minzer
Abstract: The noise sensitivity of a Boolean function f measures how susceptible the value of f on a typical input x is to a slight perturbation of the bits of x: it is the probability that f(x) and f(y) differ when x is a uniformly chosen n-bit Boolean string and y is formed by flipping each bit of x with small probability ε. Noise sensitivity is a key concept with applications to combinatorics, complexity theory, learning theory, percolation theory and more. In this paper, we investigate noise sensitivity on the p-biased hypercube, extending the theory to polynomially small p. Specifically, we give sufficient conditions for monotone functions with large groups of symmetries to be noise sensitive (which in some cases are also necessary). As an application, we show that the 2-SAT function is noise sensitive around its critical probability. En route, we study biased versions of the invariance principle for monotone functions and give p-biased versions of Bourgain's tail theorem and the Majority is Stablest theorem, showing that in this case the correct analog of "small low-degree influences" is lack of correlation with constant-width DNF formulas.
Citations: 6
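The abstract above defines noise sensitivity operationally, as the probability that f disagrees on a random input and its noisy copy. As a quick illustration (not from the paper; the paper's p-biased setting would sample biased rather than uniform bits), here is a minimal Monte Carlo estimator of NS_ε(f) on the uniform hypercube:

```python
import random

def noise_sensitivity(f, n, eps, trials=200_000, seed=0):
    """Monte Carlo estimate of NS_eps(f): the probability that
    f(x) != f(y), where x is a uniform n-bit string and y flips
    each bit of x independently with probability eps."""
    rng = random.Random(seed)
    disagree = 0
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n)]
        y = [b ^ (rng.random() < eps) for b in x]  # bool xors as 0/1
        if f(x) != f(y):
            disagree += 1
    return disagree / trials

# A dictator function f(x) = x_0 has NS_eps(f) = eps exactly,
# so the estimate should concentrate near eps.
dictator = lambda bits: bits[0]
est = noise_sensitivity(dictator, 5, 0.1)
```

With 200,000 trials the estimate for the dictator at ε = 0.1 lands within about ±0.01 of the exact value 0.1.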
SETH-Hardness of Coding Problems
2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS) Pub Date : 2019-11-01 DOI: 10.1109/FOCS.2019.00027
Noah Stephens-Davidowitz, V. Vaikuntanathan
Abstract: We show that, assuming the strong exponential-time hypothesis (SETH), there are no non-trivial algorithms for the nearest codeword problem (NCP), the minimum distance problem (MDP), or the nearest codeword problem with preprocessing (NCPP) on linear codes over any finite field. More precisely, we show that there are no NCP, MDP, or NCPP algorithms running in time q^((1-ε)n) for any constant ε > 0 for codes with q^n codewords. (In the case of NCPP, we assume non-uniform SETH.) We also show that there are no sub-exponential time algorithms for γ-approximate versions of these problems for some constant γ > 1, under different versions of the exponential-time hypothesis.
Citations: 7
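To make the problem concrete: the "trivial" algorithm that the hardness result says cannot be substantially beaten is exhaustive search over all codewords. A small illustrative sketch (the helper name and generator-matrix representation are our own, not from the paper) for NCP over F_q:

```python
import itertools

def nearest_codeword(G, t, q=2):
    """Exhaustive-search NCP: enumerate all q^k messages m, encode each
    as m*G over F_q, and return a codeword minimizing Hamming distance
    to the target word t.  This is the trivial exponential-time baseline
    that, under SETH, admits no q^((1-eps)n)-time improvement in general."""
    k, n = len(G), len(t)
    best, best_d = None, n + 1
    for m in itertools.product(range(q), repeat=k):
        # Encode the message: c = m * G, coordinate-wise mod q.
        c = [sum(m[i] * G[i][j] for i in range(k)) % q for j in range(n)]
        d = sum(ci != ti for ci, ti in zip(c, t))  # Hamming distance
        if d < best_d:
            best, best_d = c, d
    return best, best_d
```

For example, with G = [[1,0,1],[0,1,1]] over F_2 the codewords are 000, 101, 011, 110, so the target 111 has a nearest codeword at distance 1.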
Settling the Communication Complexity of Combinatorial Auctions with Two Subadditive Buyers
2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS) Pub Date : 2019-11-01 DOI: 10.1109/FOCS.2019.00025
Tomer Ezra, M. Feldman, Eric Neyman, Inbal Talgam-Cohen, Matt Weinberg
Abstract: We study the communication complexity of welfare maximization in combinatorial auctions with m items and two players with subadditive valuations. We show that outperforming the trivial 1/2-approximation requires exponential communication, settling an open problem of Dobzinski, Nisan and Schapira [STOC '05, MOR '10] and Feige [STOC '06, SICOMP '09]. To derive our results, we introduce a new class of subadditive functions that are "far from" fractionally subadditive (XOS) functions, and establish randomized communication lower bounds for a new "near-EQUALITY" problem, both of which may be of independent interest.
Citations: 10
FOCS 2019 External Reviewers
2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS) Pub Date : 2019-11-01 DOI: 10.1109/focs.2019.00008
Citations: 0
Approximation Algorithms for LCS and LIS with Truly Improved Running Times
2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS) Pub Date : 2019-11-01 DOI: 10.1109/FOCS.2019.00071
A. Rubinstein, Saeed Seddighin, Zhao Song, Xiaorui Sun
Abstract: Longest common subsequence (LCS) is a classic and central problem in combinatorial optimization. While LCS admits a quadratic-time solution, recent evidence suggests that solving the problem may be impossible in truly subquadratic time. A special case of LCS wherein each character appears at most once in every string is equivalent to the longest increasing subsequence problem (LIS), which can be solved in quasilinear time. In this work, we present novel algorithms for approximating LCS in truly subquadratic time and LIS in truly sublinear time. Our approximation factors depend on the ratio of the optimal solution size over the input size. We denote this ratio by λ and obtain the following results for LCS and LIS without any prior knowledge of λ.
• A truly subquadratic time algorithm for LCS with approximation factor O(λ^3).
• A truly sublinear time algorithm for LIS with approximation factor O(λ^3).
Triangle inequality was recently used by Boroujeni et al. [1] and Chakraborty et al. [2] to present new approximation algorithms for edit distance. Our techniques for LCS extend the notion of triangle inequality to non-metric settings.
Citations: 36
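The quasilinear-time exact LIS algorithm mentioned in the abstract, the baseline against which the paper's sublinear-time approximation trades accuracy, is the classical patience-sorting method; a sketch (not the paper's algorithm):

```python
from bisect import bisect_left

def lis_length(seq):
    """Length of the longest strictly increasing subsequence in
    O(n log n): tails[i] holds the smallest possible tail value of
    an increasing subsequence of length i + 1."""
    tails = []
    for x in seq:
        i = bisect_left(tails, x)   # first tail >= x
        if i == len(tails):
            tails.append(x)         # extend the longest subsequence
        else:
            tails[i] = x            # found a smaller tail for length i+1
    return len(tails)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # 4, e.g. 1, 4, 5, 9
```

The paper's contribution is to approximate this quantity in truly sublinear time, i.e. without even reading the whole input.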
FOCS 2019 Organizing Committee and Sponsors
2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS) Pub Date : 2019-11-01 DOI: 10.1109/focs.2019.00006
Citations: 0
Learning from Outcomes: Evidence-Based Rankings
2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS) Pub Date : 2019-11-01 DOI: 10.1109/FOCS.2019.00016
C. Dwork, Michael P. Kim, Omer Reingold, G. Rothblum, G. Yona
Abstract: Many selection procedures involve ordering candidates according to their qualifications. For example, a university might order applicants according to a perceived probability of graduation within four years, and then select the top 1000 applicants. In this work, we address the problem of ranking members of a population according to their "probability" of success, based on a training set of historical binary outcome data (e.g., graduated in four years or not). We show how to obtain rankings that satisfy a number of desirable accuracy and fairness criteria, despite the coarseness of the training data. As the task of ranking is global (the rank of every individual depends not only on their own qualifications, but also on every other individual's qualifications), ranking is more subtle and vulnerable to manipulation than standard prediction tasks. Towards mitigating unfair discrimination caused by inaccuracies in rankings, we develop two parallel definitions of evidence-based rankings. The first definition relies on a semantic notion of domination-compatibility: if the training data suggest that members of a set S are more qualified (on average) than the members of T, then a ranking that favors T over S (i.e., where T dominates S) is blatantly inconsistent with the evidence, and likely to be discriminatory. The definition asks for domination-compatibility not just for a pair of sets, but rather for every pair of sets from a rich collection C of subpopulations. The second definition aims at precluding even more general forms of discrimination; this notion of evidence-consistency requires that the ranking be justified on the basis of consistency with the expectations for every set in the collection C. Somewhat surprisingly, while evidence-consistency is a strictly stronger notion than domination-compatibility when the collection C is predefined, the two notions are equivalent when the collection C may depend on the ranking in question.
Citations: 19
Why are Proof Complexity Lower Bounds Hard?
2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS) Pub Date : 2019-11-01 DOI: 10.1109/FOCS.2019.00080
J. Pich, R. Santhanam
Abstract: We formalize and study the question of whether there are inherent difficulties to showing lower bounds on propositional proof complexity. We establish the following unconditional result: propositional proof systems cannot efficiently show that truth tables of random Boolean functions lack polynomial-size non-uniform proofs of hardness. Assuming a conjecture of Rudich, propositional proof systems also cannot efficiently show that random k-CNFs of linear density lack polynomial-size non-uniform proofs of unsatisfiability. Since the statements in question assert the average-case hardness of standard NP problems (MCSP and 3-SAT, respectively) against co-nondeterministic circuits for natural distributions, one interpretation of our result is that propositional proof systems are inherently incapable of efficiently proving strong complexity lower bounds in our formalization. Another interpretation is that an analogue of the Razborov-Rudich "natural proofs" barrier holds in proof complexity: under reasonable hardness assumptions, there are natural distributions on hard tautologies for which it is infeasible to show proof complexity lower bounds for strong enough proof systems. For the specific case of the Extended Frege (EF) propositional proof system, we show that at least one of the following cases holds: (1) EF has no efficient proofs of superpolynomial circuit lower bound tautologies for any Boolean function, or (2) there is an explicit family of tautologies of each length such that, under reasonable hardness assumptions, most tautologies are hard but no propositional proof system can efficiently establish hardness for most tautologies in the family. Thus, under reasonable hardness assumptions, either the Circuit Lower Bounds program toward complexity separations cannot be implemented in EF, or there are inherent obstacles to implementing the Cook-Reckhow program for EF.
Citations: 12