Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing: Latest Publications

Average-case fine-grained hardness
Pub Date: 2017-06-19 · DOI: 10.1145/3055399.3055466
Marshall Ball, Alon Rosen, Manuel Sabin, Prashant Nalini Vasudevan
{"title":"Average-case fine-grained hardness","authors":"Marshall Ball, Alon Rosen, Manuel Sabin, Prashant Nalini Vasudevan","doi":"10.1145/3055399.3055466","DOIUrl":"https://doi.org/10.1145/3055399.3055466","url":null,"abstract":"We present functions that can be computed in some fixed polynomial time but are hard on average for any algorithm that runs in slightly smaller time, assuming widely-conjectured worst-case hardness for problems from the study of fine-grained complexity. Unconditional constructions of such functions are known from before (Goldmann et al., IPL '94), but these have been canonical functions that have not found further use, while our functions are closely related to well-studied problems and have considerable algebraic structure. Based on the average-case hardness and structural properties of our functions, we outline the construction of a Proof of Work scheme and discuss possible approaches to constructing fine-grained One-Way Functions. We also show how our reductions make conjectures regarding the worst-case hardness of the problems we reduce from (and consequently the Strong Exponential Time Hypothesis) heuristically falsifiable in a sense similar to that of (Naor, CRYPTO '03). We prove our hardness results in each case by showing fine-grained reductions from solving one of three problems - namely, Orthogonal Vectors (OV), 3SUM, and All-Pairs Shortest Paths (APSP) - in the worst case to computing our function correctly on a uniformly random input. The conjectured hardness of OV and 3SUM then gives us functions that require n2-o(1) time to compute on average, and that of APSP gives us a function that requires n3-o(1) time. Using the same techniques we also obtain a conditional average-case time hierarchy of functions.","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85137519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 52
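The reductions above start from the Orthogonal Vectors (OV) problem. As a toy illustration of that starting point only (not of the paper's reductions or its average-case functions), here is a minimal brute-force OV checker in Python; the function and variable names are mine, and the O(n^2 * d) running time is exactly the bound the OV conjecture says cannot be beaten by polynomial factors when d = ω(log n).

from itertools import product
from typing import List

def has_orthogonal_pair(A: List[List[int]], B: List[List[int]]) -> bool:
    """Brute-force Orthogonal Vectors: do some a in A, b in B satisfy <a, b> = 0?

    A and B are lists of n Boolean vectors of dimension d; this runs in
    O(n^2 * d) time, the baseline that the OV conjecture says cannot be
    improved by polynomial factors when d = omega(log n).
    """
    for a, b in product(A, B):
        if all(x * y == 0 for x, y in zip(a, b)):
            return True
    return False

# Tiny usage example.
A = [[1, 0, 1], [0, 1, 1]]
B = [[0, 1, 0], [1, 1, 0]]
print(has_orthogonal_pair(A, B))  # True: (1,0,1) is orthogonal to (0,1,0)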
The next 700 network programming languages (invited talk)
Pub Date: 2017-06-19 · DOI: 10.1145/3055399.3081042
Nate Foster
{"title":"The next 700 network programming languages (invited talk)","authors":"Nate Foster","doi":"10.1145/3055399.3081042","DOIUrl":"https://doi.org/10.1145/3055399.3081042","url":null,"abstract":"Specification and verification of computer networks has become a reality in recent years, with the emergence of domain-specific programming languages and automated reasoning tools. But the design of these frameworks has been largely ad hoc, driven more by the needs of applications and the capabilities of hardware than by any foundational principles. This talk will present NetKAT, a language for programming networks based on a well-studied mathematical foundation: regular languages and finite automata. The talk will describe the design of the language, discuss its semantic underpinnings, and present highlights from ongoing work extending the language with stateful and probabilistic features.","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82330487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
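The abstract describes NetKAT only at a high level. As a rough, unofficial sketch of the denotational style it alludes to, the Python fragment below models policies as functions from packets to sets of packets, with tests, field modifications, union, and sequential composition; it omits dup, packet histories, Kleene star, and probabilistic features, and the combinator names are mine rather than NetKAT's concrete syntax.

from typing import Callable, Dict, FrozenSet

Packet = Dict[str, int]                        # a packet is a record of header fields
Policy = Callable[[Packet], FrozenSet[tuple]]  # a policy maps a packet to a set of packets

def _freeze(pkt: Packet) -> tuple:
    return tuple(sorted(pkt.items()))

def test(field: str, value: int) -> Policy:
    """Filter: pass the packet through iff pkt[field] == value."""
    return lambda pkt: frozenset([_freeze(pkt)]) if pkt.get(field) == value else frozenset()

def assign(field: str, value: int) -> Policy:
    """Modification: set pkt[field] := value."""
    return lambda pkt: frozenset([_freeze({**pkt, field: value})])

def union(p: Policy, q: Policy) -> Policy:
    """Nondeterministic choice: run both policies and take all outputs."""
    return lambda pkt: p(pkt) | q(pkt)

def seq(p: Policy, q: Policy) -> Policy:
    """Sequential composition: feed every output of p into q."""
    return lambda pkt: frozenset(out for mid in p(pkt) for out in q(dict(mid)))

# A 2-port repeater: forward port-1 traffic to port 2 and port-2 traffic to port 1.
repeater = union(seq(test("port", 1), assign("port", 2)),
                 seq(test("port", 2), assign("port", 1)))
print(repeater({"port": 1, "dst": 10}))  # one output packet with port = 2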
Towards optimal two-source extractors and Ramsey graphs
Pub Date: 2017-06-19 · DOI: 10.1145/3055399.3055429
Gil Cohen
{"title":"Towards optimal two-source extractors and Ramsey graphs","authors":"Gil Cohen","doi":"10.1145/3055399.3055429","DOIUrl":"https://doi.org/10.1145/3055399.3055429","url":null,"abstract":"The main contribution of this work is a construction of a two-source extractor for quasi-logarithmic min-entropy. That is, an extractor for two independent n-bit sources with min-entropy Ο(logn), which is optimal up to the poly(loglogn) factor. A strong motivation for constructing two-source extractors for low entropy is for Ramsey graphs constructions. Our two-source extractor readily yields a (logn)(logloglogn)Ο(1)-Ramsey graph on n vertices. Although there has been exciting progress towards constructing O(logn)-Ramsey graphs in recent years, a line of work that this paper contributes to, it is not clear if current techniques can be pushed so as to match this bound. Interestingly, however, as an artifact of current techniques, one obtains strongly explicit Ramsey graphs, namely, graphs on n vertices where the existence of an edge connecting any pair of vertices can be determined in time poly(logn). On top of our strongly explicit construction, in this work, we consider algorithms that output the entire graph in poly(n)-time, and make progress towards matching the desired Ο(logn) bound in this setting. In our opinion, this is a natural setting in which Ramsey graphs constructions should be studied. The main technical novelty of this work lies in an improved construction of an independence-preserving merger (IPM), a variant of the well-studied notion of a merger, which was recently introduced by Cohen and Schulman. Our construction is based on a new connection to correlation breakers with advice. In fact, our IPM satisfies a stronger and more natural property than that required by the original definition, and we believe it may find further applications.","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89109186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 19
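The construction in this paper is far more involved, but as a minimal illustration of the two-source-extractor interface, the sketch below uses the classical Chor-Goldreich inner-product extractor (which needs min-entropy far above the quasi-logarithmic regime targeted here, and is not the paper's construction) and shows how such a function can be read as the adjacency rule of a graph, which is the extractor-to-Ramsey-graph connection the abstract mentions. Names are mine.

def inner_product_extractor(x: int, y: int, n: int) -> int:
    """Classical two-source extractor Ext(x, y) = <x, y> mod 2 on n-bit inputs.

    The output bit is close to uniform whenever the two independent sources
    have enough combined min-entropy (Chor-Goldreich); this is a toy baseline,
    not the construction from the paper above.
    """
    return bin(x & y & ((1 << n) - 1)).count("1") % 2

# Reading Ext as an adjacency rule (edge between x and y iff Ext(x, y) = 1)
# illustrates how two-source extractors give rise to Ramsey-type graphs.
n = 4
print([[inner_product_extractor(x, y, n) for y in range(4)] for x in range(4)])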
On independent sets, 2-to-2 games, and Grassmann graphs
Pub Date: 2017-06-19 · DOI: 10.1145/3055399.3055432
Subhash Khot, Dor Minzer, S. Safra
{"title":"On independent sets, 2-to-2 games, and Grassmann graphs","authors":"Subhash Khot, Dor Minzer, S. Safra","doi":"10.1145/3055399.3055432","DOIUrl":"https://doi.org/10.1145/3055399.3055432","url":null,"abstract":"We present a candidate reduction from the 3-Lin problem to the 2-to-2 Games problem and present a combinatorial hypothesis about Grassmann graphs which, if correct, is sufficient to show the soundness of the reduction in a certain non-standard sense. A reduction that is sound in this non-standard sense implies that it is NP-hard to distinguish whether an n-vertex graph has an independent set of size ( 1- 1/√2 ) n - o(n) or whether every independent set has size o(n), and consequently, that it is NP-hard to approximate the Vertex Cover problem within a factor √2-o(1).","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77328279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 67
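The hypothesis itself is out of scope for a snippet, but the Grassmann graph it is stated over is a concrete object: vertices are the k-dimensional subspaces of a vector space over a finite field, and two subspaces are adjacent when their intersection has dimension k−1. The sketch below (names mine, and nothing here touches the hypothesis or the reduction) builds the small example of 2-dimensional subspaces of F_2^4 and checks its basic parameters.

from itertools import combinations, product

def vec_add(u, v):
    """Addition in F_2^4 (coordinatewise XOR)."""
    return tuple(a ^ b for a, b in zip(u, v))

zero = (0, 0, 0, 0)
nonzero = [v for v in product((0, 1), repeat=4) if v != zero]

# All 2-dimensional subspaces of F_2^4: the span {0, u, v, u+v} of any two
# distinct nonzero vectors (over F_2, distinct nonzero vectors are independent).
subspaces = {frozenset({zero, u, v, vec_add(u, v)}) for u, v in combinations(nonzero, 2)}

# Grassmann graph: two subspaces are adjacent iff their intersection has
# dimension 1, i.e. it contains exactly two vectors (zero plus one nonzero vector).
adjacent = lambda U, V: len(U & V) == 2

verts = list(subspaces)
degrees = [sum(adjacent(U, V) for V in verts if V != U) for U in verts]
print(len(verts), set(degrees))   # 35 vertices, every vertex of degree 18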
Equivocating Yao: constant-round adaptively secure multiparty computation in the plain model
Pub Date: 2017-06-19 · DOI: 10.1145/3055399.3055495
R. Canetti, Oxana Poburinnaya, Muthuramakrishnan Venkitasubramaniam
{"title":"Equivocating Yao: constant-round adaptively secure multiparty computation in the plain model","authors":"R. Canetti, Oxana Poburinnaya, Muthuramakrishnan Venkitasubramaniam","doi":"10.1145/3055399.3055495","DOIUrl":"https://doi.org/10.1145/3055399.3055495","url":null,"abstract":"Yao's circuit garbling scheme is one of the basic building blocks of cryptographic protocol design. Originally designed to enable two-message, two-party secure computation, the scheme has been extended in many ways and has innumerable applications. Still, a basic question has remained open throughout the years: Can the scheme be extended to guarantee security in the face of an adversary that corrupts both parties, adaptively, as the computation proceeds? We provide a positive answer to this question. We define a new type of encryption, called functionally equivocal encryption (FEE), and show that when Yao's scheme is implemented with an FEE as the underlying encryption mechanism, it becomes secure against such adaptive adversaries. We then show how to implement FEE from any one way function. Combining our scheme with non-committing encryption, we obtain the first two-message, two-party computation protocol, and the first constant-rounds multiparty computation protocol, in the plain model, that are secure against semi-honest adversaries who can adaptively corrupt all parties. A number of extensions and applications are described within.","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89505340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 14
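For background only, the sketch below garbles a single AND gate in the classic, statically secure Yao style, using SHA-256 as an ad-hoc key-derivation function and trial decryption with an all-zero tag. It is not the paper's functionally equivocal construction, and every name and parameter is illustrative.

import hashlib
import random
import secrets

LABEL_LEN = 16            # wire-label length in bytes
TAG = bytes(LABEL_LEN)    # all-zero tag used to recognize a correct decryption

def kdf(k_a: bytes, k_b: bytes) -> bytes:
    """Derive a 32-byte one-time pad from the two input-wire labels."""
    return hashlib.sha256(k_a + k_b).digest()

def garble_and_gate():
    """Return (garbled table, wire labels) for a single AND gate."""
    labels = {w: (secrets.token_bytes(LABEL_LEN), secrets.token_bytes(LABEL_LEN))
              for w in ("a", "b", "out")}
    table = []
    for x in (0, 1):
        for y in (0, 1):
            pad = kdf(labels["a"][x], labels["b"][y])
            plaintext = labels["out"][x & y] + TAG
            table.append(bytes(p ^ q for p, q in zip(pad, plaintext)))
    random.shuffle(table)  # hide which row corresponds to which input pair
    return table, labels

def evaluate(table, k_a: bytes, k_b: bytes) -> bytes:
    """Given one label per input wire, recover the corresponding output label."""
    pad = kdf(k_a, k_b)
    for row in table:
        plaintext = bytes(p ^ q for p, q in zip(pad, row))
        if plaintext[LABEL_LEN:] == TAG:
            return plaintext[:LABEL_LEN]
    raise ValueError("no row decrypted cleanly")

table, labels = garble_and_gate()
out = evaluate(table, labels["a"][1], labels["b"][0])   # inputs a=1, b=0
print(out == labels["out"][0])                          # AND(1, 0) = 0 -> True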
A strongly polynomial algorithm for bimodular integer linear programming
Pub Date: 2017-06-19 · DOI: 10.1145/3055399.3055473
S. Artmann, R. Weismantel, R. Zenklusen
{"title":"A strongly polynomial algorithm for bimodular integer linear programming","authors":"S. Artmann, R. Weismantel, R. Zenklusen","doi":"10.1145/3055399.3055473","DOIUrl":"https://doi.org/10.1145/3055399.3055473","url":null,"abstract":"We present a strongly polynomial algorithm to solve integer programs of the form max{cT x: Ax≤ b, xεℤn }, for AεℤmXn with rank(A)=n, bε≤m, cε≤n, and where all determinants of (nXn)-sub-matrices of A are bounded by 2 in absolute value. In particular, this implies that integer programs max{cT x : Q x≤ b, xεℤ≥0n}, where Qε ℤmXn has the property that all subdeterminants are bounded by 2 in absolute value, can be solved in strongly polynomial time. We thus obtain an extension of the well-known result that integer programs with constraint matrices that are totally unimodular are solvable in strongly polynomial time.","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89985143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 60
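The algorithm itself is beyond a short example, but the bimodularity condition in the abstract (rank(A) = n and every n×n subdeterminant of A bounded by 2 in absolute value) can be checked directly on small instances. The helper below (names mine) does so by exhaustive enumeration with exact arithmetic, which is exponential in general but fine for toy matrices.

from fractions import Fraction
from itertools import combinations
from typing import List

def det(M: List[List[int]]) -> int:
    """Exact determinant of a small square integer matrix via Gaussian elimination over the rationals."""
    n = len(M)
    A = [[Fraction(v) for v in row] for row in M]
    sign, d = 1, Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if A[r][i] != 0), None)
        if pivot is None:
            return 0
        if pivot != i:
            A[i], A[pivot] = A[pivot], A[i]
            sign = -sign
        d *= A[i][i]
        for r in range(i + 1, n):
            factor = A[r][i] / A[i][i]
            A[r] = [a - factor * b for a, b in zip(A[r], A[i])]
    return int(sign * d)

def is_bimodular(A: List[List[int]]) -> bool:
    """Check rank(A) = n and |det(S)| <= 2 for every n x n row-submatrix S of A."""
    m, n = len(A), len(A[0])
    dets = [det([A[i] for i in rows]) for rows in combinations(range(m), n)]
    return any(d != 0 for d in dets) and all(abs(d) <= 2 for d in dets)

# A totally unimodular matrix (all subdeterminants in {-1, 0, 1}) is in particular bimodular.
print(is_bimodular([[1, 0], [0, 1], [1, 1]]))   # True
print(is_bimodular([[3, 0], [0, 1]]))           # False: one subdeterminant equals 3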
Hardness amplification for entangled games via anchoring
Pub Date: 2017-06-19 · DOI: 10.1145/3055399.3055433
Mohammad Bavarian, Thomas Vidick, H. Yuen
{"title":"Hardness amplification for entangled games via anchoring","authors":"Mohammad Bavarian, Thomas Vidick, H. Yuen","doi":"10.1145/3055399.3055433","DOIUrl":"https://doi.org/10.1145/3055399.3055433","url":null,"abstract":"We study the parallel repetition of one-round games involving players that can use quantum entanglement. A major open question in this area is whether parallel repetition reduces the entangled value of a game at an exponential rate - in other words, does an analogue of Raz's parallel repetition theorem hold for games with players sharing quantum entanglement? Previous results only apply to special classes of games. We introduce a class of games we call anchored. We then introduce a simple transformation on games called anchoring, inspired in part by the Feige-Kilian transformation, that turns any (multiplayer) game into an anchored game. Unlike the Feige-Kilian transformation, our anchoring transformation is completeness preserving. We prove an exponential-decay parallel repetition theorem for anchored games that involve any number of entangled players. We also prove a threshold version of our parallel repetition theorem for anchored games. Together, our parallel repetition theorems and anchoring transformation provide the first hardness amplification techniques for general entangled games. We give an application to the games version of the Quantum PCP Conjecture.","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76686469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 17
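The theorem concerns the entangled value, which cannot be computed by simple enumeration; as a grounding illustration of parallel repetition itself, the sketch below brute-forces the classical (unentangled) value of the CHSH game and of its 2-fold repetition, showing that repeating a game does not simply square its value. The helper names are mine, and the second call takes a few seconds.

import itertools
from typing import Callable, List

def classical_value(questions_a: List, questions_b: List,
                    answers_a: List, answers_b: List, pred: Callable) -> float:
    """Brute-force classical value of a one-round two-player game with uniformly
    distributed question pairs: maximize winning probability over deterministic strategies."""
    best = 0.0
    for sa in itertools.product(answers_a, repeat=len(questions_a)):
        for sb in itertools.product(answers_b, repeat=len(questions_b)):
            wins = sum(pred(x, y, sa[i], sb[j])
                       for i, x in enumerate(questions_a)
                       for j, y in enumerate(questions_b))
            best = max(best, wins / (len(questions_a) * len(questions_b)))
    return best

def repeated_pred(pred: Callable, n: int) -> Callable:
    """Winning predicate of the n-fold parallel repetition: all n coordinates must win."""
    return lambda xs, ys, aa, bb: all(pred(xs[i], ys[i], aa[i], bb[i]) for i in range(n))

# CHSH: accept iff a XOR b equals x AND y; its classical value is 3/4.
chsh = lambda x, y, a, b: (a ^ b) == (x & y)
print(classical_value([0, 1], [0, 1], [0, 1], [0, 1], chsh))                # 0.75

# Two CHSH games in parallel: the classical value is 10/16, more than (3/4)^2.
pairs = list(itertools.product([0, 1], repeat=2))
print(classical_value(pairs, pairs, pairs, pairs, repeated_pred(chsh, 2)))  # 0.625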
Bernoulli factories and black-box reductions in mechanism design
Pub Date: 2017-06-19 · DOI: 10.1145/3055399.3055492
S. Dughmi, Jason D. Hartline, Robert D. Kleinberg, Rad Niazadeh
{"title":"Bernoulli factories and black-box reductions in mechanism design","authors":"S. Dughmi, Jason D. Hartline, Robert D. Kleinberg, Rad Niazadeh","doi":"10.1145/3055399.3055492","DOIUrl":"https://doi.org/10.1145/3055399.3055492","url":null,"abstract":"We provide a polynomial-time reduction from Bayesian incentive-compatible mechanism design to Bayesian algorithm design for welfare maximization problems. Unlike prior results, our reduction achieves exact incentive compatibility for problems with multi-dimensional and continuous type spaces. The key technical barrier preventing exact incentive compatibility in prior black-box reductions is that repairing violations of incentive constraints requires understanding the distribution of the mechanism's output, which is typically #P-hard to compute. Reductions that instead estimate the output distribution by sampling inevitably suffer from sampling error, which typically precludes exact incentive compatibility. We overcome this barrier by employing and generalizing the computational model in the literature on \"Bernoulli Factories\". In a Bernoulli factory problem, one is given a function mapping the bias of an 'input coin' to that of an 'output coin', and the challenge is to efficiently simulate the output coin given only sample access to the input coin. Consider a generalization which we call the \"expectations from samples\" computational model, in which a problem instance is specified by a function mapping the expected values of a set of input distributions to a distribution over outcomes. The challenge is to give a polynomial time algorithm that exactly samples from the distribution over outcomes given only sample access to the input distributions. In this model we give a polynomial time algorithm for the function given by \"exponential weights\": expected values of the input distributions correspond to the weights of alternatives and we wish to select an alternative with probability proportional to its weight. This algorithm is the key ingredient in designing an incentive compatible mechanism for bipartite matching, which can be used to make the approximately incentive compatible reduction of Hartline-Malekian-Kleinberg [2015] exactly incentive compatible.","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78259235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 14
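The paper's exponential-weights factory is beyond a snippet, but the underlying interface is easy to demonstrate: given only sample access to a coin of unknown bias p, produce exact samples of a coin whose bias is a prescribed function of p. The sketch below shows two classical factories (f(p) = p^2 and von Neumann's f(p) = 1/2); the names are mine and this is not the construction from the paper.

import random
from typing import Callable

def make_coin(p: float) -> Callable[[], int]:
    """Sample-access-only input coin with (hidden) bias p."""
    return lambda: 1 if random.random() < p else 0

def squared_coin(coin: Callable[[], int]) -> int:
    """Bernoulli factory for f(p) = p^2: AND of two independent flips."""
    return coin() & coin()

def fair_coin(coin: Callable[[], int]) -> int:
    """von Neumann's factory for f(p) = 1/2: flip twice until the flips differ."""
    while True:
        a, b = coin(), coin()
        if a != b:
            return a          # P(a=1, b=0) = P(a=0, b=1) = p(1-p), so this is unbiased

coin = make_coin(0.3)
n = 200_000
print(sum(squared_coin(coin) for _ in range(n)) / n)  # close to 0.09
print(sum(fair_coin(coin) for _ in range(n)) / n)     # close to 0.5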
Low rank approximation with entrywise ℓ1-norm error
Pub Date: 2017-06-19 · DOI: 10.1145/3055399.3055431
Zhao Song, David P. Woodruff, Peilin Zhong
{"title":"Low rank approximation with entrywise l1-norm error","authors":"Zhao Song, David P. Woodruff, Peilin Zhong","doi":"10.1145/3055399.3055431","DOIUrl":"https://doi.org/10.1145/3055399.3055431","url":null,"abstract":"We study the ℓ1-low rank approximation problem, where for a given n x d matrix A and approximation factor α ≤ 1, the goal is to output a rank-k matrix  for which ‖A-Â‖1 ≤ α · min rank-k matrices A′ ‖A-A′‖1, where for an n x d matrix C, we let ‖C‖1 = ∑i=1n ∑j=1d |Ci,j|. This error measure is known to be more robust than the Frobenius norm in the presence of outliers and is indicated in models where Gaussian assumptions on the noise may not apply. The problem was shown to be NP-hard by Gillis and Vavasis and a number of heuristics have been proposed. It was asked in multiple places if there are any approximation algorithms. We give the first provable approximation algorithms for ℓ1-low rank approximation, showing that it is possible to achieve approximation factor α = (logd) #183; poly(k) in nnz(A) + (n+d) poly(k) time, where nnz(A) denotes the number of non-zero entries of A. If k is constant, we further improve the approximation ratio to O(1) with a poly(nd)-time algorithm. Under the Exponential Time Hypothesis, we show there is no poly(nd)-time algorithm achieving a (1+1/log1+γ(nd))-approximation, for γ > 0 an arbitrarily small constant, even when k = 1. We give a number of additional results for ℓ1-low rank approximation: nearly tight upper and lower bounds for column subset selection, CUR decompositions, extensions to low rank approximation with respect to ℓp-norms for 1 ≤ p < 2 and earthmover distance, low-communication distributed protocols and low-memory streaming algorithms, algorithms with limited randomness, and bicriteria algorithms. We also give a preliminary empirical evaluation.","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74962182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 96
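As a small numerical illustration of the error measure discussed above (not of the paper's algorithm), the snippet below compares the entrywise ℓ1 error of the rank-1 truncated SVD, which is Frobenius-optimal but not ℓ1-optimal, against a competing rank-1 candidate on a toy matrix with diagonal outliers; the matrix and names are mine.

import numpy as np

def l1_error(A: np.ndarray, B: np.ndarray) -> float:
    """Entrywise l1 error: sum over i, j of |A_ij - B_ij|."""
    return float(np.abs(A - B).sum())

def truncated_svd(A: np.ndarray, k: int) -> np.ndarray:
    """Rank-k truncated SVD: the Frobenius-optimal rank-k approximation."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Rank-1 background of ones plus ten diagonal "outliers" of magnitude 10.
A = np.ones((10, 10)) + 9.0 * np.eye(10)

svd1 = truncated_svd(A, 1)          # Frobenius-optimal rank-1 approximation
ones1 = np.ones((10, 10))           # a competing rank-1 candidate

print(l1_error(A, svd1))            # ~162: Frobenius-optimal, but poor in entrywise l1
print(l1_error(A, ones1))           # 90: much better entrywise l1 error
print(np.linalg.norm(A - svd1))     # ~27.0: still the smaller Frobenius error
print(np.linalg.norm(A - ones1))    # ~28.5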
Algorithms for stable and perturbation-resilient problems
Pub Date: 2017-06-19 · DOI: 10.1145/3055399.3055487
Haris Angelidakis, K. Makarychev, Yury Makarychev
{"title":"Algorithms for stable and perturbation-resilient problems","authors":"Haris Angelidakis, K. Makarychev, Yury Makarychev","doi":"10.1145/3055399.3055487","DOIUrl":"https://doi.org/10.1145/3055399.3055487","url":null,"abstract":"We study the notion of stability and perturbation resilience introduced by Bilu and Linial (2010) and Awasthi, Blum, and Sheffet (2012). A combinatorial optimization problem is α-stable or α-perturbation-resilient if the optimal solution does not change when we perturb all parameters of the problem by a factor of at most α. In this paper, we give improved algorithms for stable instances of various clustering and combinatorial optimization problems. We also prove several hardness results. We first give an exact algorithm for 2-perturbation resilient instances of clustering problems with natural center-based objectives. The class of clustering problems with natural center-based objectives includes such problems as k-means, k-median, and k-center. Our result improves upon the result of Balcan and Liang (2016), who gave an algorithm for clustering 1+√≈2.41 perturbation-resilient instances. Our result is tight in the sense that no polynomial-time algorithm can solve (2ε)-perturbation resilient instances of k-center unless NP = RP, as was shown by Balcan, Haghtalab, and White (2016). We then give an exact algorithm for (2-2/k)-stable instances of Minimum Multiway Cut with k terminals, improving the previous result of Makarychev, Makarychev, and Vijayaraghavan (2014), who gave an algorithm for 4-stable instances. We also give an algorithm for (2-2/k+ς)-weakly stable instances of Minimum Multiway Cut. Finally, we show that there are no robust polynomial-time algorithms for n1-ε-stable instances of Set Cover, Minimum Vertex Cover, and Min 2-Horn Deletion (unless P = NP).","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85004156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 44
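Certifying α-perturbation resilience is hard in itself, but the definition quoted in the abstract is easy to probe experimentally under one common reading: the optimal clustering, viewed as a partition, must not change when every distance is scaled by a factor in [1, α]. The sketch below runs a randomized refutation test on a tiny k-center instance; it can show an instance is not resilient but never certify that it is, and all names and the toy instance are mine.

import itertools
import random
from typing import Dict, List, Tuple

Point = int
Dist = Dict[Tuple[Point, Point], float]

def kcenter_partition(points: List[Point], d: Dist, k: int) -> frozenset:
    """Brute-force k-center: return the partition induced by an optimal set of centers."""
    best_cost, best_centers = float("inf"), None
    for centers in itertools.combinations(points, k):
        cost = max(min(d[min(p, c), max(p, c)] if p != c else 0.0 for c in centers)
                   for p in points)
        if cost < best_cost:
            best_cost, best_centers = cost, centers
    clusters = {}
    for p in points:
        home = min(best_centers, key=lambda c: 0.0 if p == c else d[min(p, c), max(p, c)])
        clusters.setdefault(home, set()).add(p)
    return frozenset(frozenset(s) for s in clusters.values())

def looks_resilient(points, d, k, alpha, trials=200, seed=0) -> bool:
    """Randomized refutation test for alpha-perturbation resilience (no certification)."""
    rng = random.Random(seed)
    base = kcenter_partition(points, d, k)
    for _ in range(trials):
        perturbed = {e: w * rng.uniform(1.0, alpha) for e, w in d.items()}
        if kcenter_partition(points, perturbed, k) != base:
            return False       # found a sampled perturbation that changes the optimum
    return True                # no violation found among the sampled perturbations

# Two well-separated pairs {0,1} and {2,3}: intra-pair distance 1, inter-pair distance 10.
pts = [0, 1, 2, 3]
d = {(a, b): (1.0 if {a, b} in ({0, 1}, {2, 3}) else 10.0)
     for a, b in itertools.combinations(pts, 2)}
print(looks_resilient(pts, d, k=2, alpha=2.0))   # True for this toy instance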