2011 IEEE 52nd Annual Symposium on Foundations of Computer Science: Latest Publications

Fully Homomorphic Encryption without Squashing Using Depth-3 Arithmetic Circuits
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science. Pub Date: 2011-10-22. DOI: 10.1109/FOCS.2011.94
Craig Gentry, S. Halevi
{"title":"Fully Homomorphic Encryption without Squashing Using Depth-3 Arithmetic Circuits","authors":"Craig Gentry, S. Halevi","doi":"10.1109/FOCS.2011.94","DOIUrl":"https://doi.org/10.1109/FOCS.2011.94","url":null,"abstract":"All previously known fully homomorphic encryption (FHE) schemes use Gentry's blueprint:* SWHE: Construct a somewhat homomorphic encryption (SWHE) scheme -- roughly, an encryption scheme that can homomorphically evaluate polynomials up to some degree.* Squash: ``Squash\" the decryption function of the SWHE scheme, so that the scheme can evaluate functions twice as complex (in terms of polynomial degree) than its own decryption function. Do this by adding a ``hint \" to the SHWE public key -- namely, a large set of vectors that has a secret sparse subset that sums to the original secret key.* Bootstrap: Given a SWHE scheme that can evaluate functions twice as complex as its decryption function, apply Gentry's transformation to get a ``leveled\" FHE scheme. To get ``pure\" (non-leveled) FHE, one assumes circular security. Here, we describe a new blueprint for FHE. We show how to eliminate the squashing step, and thereby eliminate the need to assume that the sparse subset sum problem (SSSP) is hard, as all previous leveled FHE schemes have done. Using our new blueprint, we obtain the following results:* A ``simple\" leveled FHE scheme where we replace SSSP with Decision Diffie-Hellman!* The first leveled FHE scheme based entirely on worst-case hardness}. Specifically, we give a leveled FHE scheme with security based on the shortest independent vector problem over ideal lattices (ideal-SIVP).* Some efficiency improvements for FHE.} While the new blueprint does not yet improve computational efficiency, it reduces cipher text length. As in the previous blueprint, we obtain pure FHE by assuming circular security. Our main technique is to express the decryption function of SWHE schemes as a depth-3 ($sum prod sum$) arithmetic circuit. When we evaluate this decryption function homomorphically, we temporarily switch to a multiplicatively homomorphic encryption (MHE) scheme, such as Elgamal, to handle the $prod$ part, after which we translate the result from the MHE scheme back to the SWHE scheme by evaluating the MHE scheme's decryption function within the SWHE scheme. The SWHE scheme only needs to be able to evaluate the MHE scheme's decryption function (plus minor operations), and does not need to have the self-referential property of being able to evaluate its {em own} decryption function, a property that necessitated squashing in the original blueprint.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129250151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 219
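
To make the key primitive concrete, here is a minimal Python sketch (not the paper's construction) of ElGamal's multiplicative homomorphism, the property the new blueprint uses to handle the middle $\prod$ layer. The group parameters are toy-sized and insecure, chosen only so the arithmetic is visible.

```python
# Toy ElGamal: componentwise product of ciphertexts encrypts the product
# of plaintexts. Parameters are deliberately tiny and insecure.
import random

p = 467            # small prime (toy); real use needs a cryptographic group
g = 2              # assumed generator for illustration

sk = random.randrange(2, p - 1)     # secret key x
pk = pow(g, sk, p)                  # public key h = g^x

def encrypt(m):
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), m * pow(pk, r, p) % p)   # (g^r, m * h^r)

def decrypt(c):
    a, b = c
    return b * pow(a, p - 1 - sk, p) % p           # b / a^x via Fermat

def mult(c1, c2):
    # (g^{r1+r2}, m1*m2 * h^{r1+r2}): the multiplicative homomorphism
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

c = mult(encrypt(7), encrypt(11))
assert decrypt(c) == 77
```

A long product of ciphertexts thus stays a single ciphertext pair, which is what makes the $\prod$ gate of the depth-3 circuit cheap to evaluate under encryption.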
Welfare and Profit Maximization with Production Costs
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science. Pub Date: 2011-10-22. DOI: 10.1109/FOCS.2011.68
Avrim Blum, Anupam Gupta, Y. Mansour, Ankit Sharma
{"title":"Welfare and Profit Maximization with Production Costs","authors":"Avrim Blum, Anupam Gupta, Y. Mansour, Ankit Sharma","doi":"10.1109/FOCS.2011.68","DOIUrl":"https://doi.org/10.1109/FOCS.2011.68","url":null,"abstract":"Combinatorial Auctions are a central problem in Algorithmic Mechanism Design: pricing and allocating goods to buyers with complex preferences in order to maximize some desired objective (e.g., social welfare, revenue, or profit). The problem has been well-studied in the case of limited supply (one copy of each item), and in the case of digital goods (the seller can produce additional copies at no cost). Yet in the case of resources -- oil, labor, computing cycles, etc. -- neither of these abstractions is just right: additional supplies of these resources can be found, but at increasing difficulty (marginal cost) as resources are depleted. In this work, we initiate the study of the algorithmic mechanism design problem of combinatorial pricing under increasing marginal cost. The goal is to sell these goods to buyers with unknown and arbitrary combinatorial valuation functions to maximize either the social welfare, or the seller's profit, specifically we focus on the setting of posted item prices with buyers arriving online. We give algorithms that achieve constant factor approximations for a class of natural cost functions -- linear, low-degree polynomial, logarithmic -- and that give logarithmic approximations for more general increasing marginal cost functions (along with a necessary additive loss). We show that these bounds are essentially best possible for these settings.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114510967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 52
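
For intuition about the setting, here is a minimal sketch of posted item pricing with online buyers under increasing marginal cost. The linear marginal-cost curve and the fixed-markup pricing rule below are hypothetical choices made purely for illustration; the paper designs and analyzes its own pricing rules.

```python
# Posted prices for one good with increasing marginal cost (illustrative).
def marginal_cost(k):
    # assumed cost of producing the (k+1)-st copy: linear, as one example
    return 0.5 * k

def run_posted_prices(buyer_values, markup=1.0):
    copies_sold, welfare, cost, revenue = 0, 0.0, 0.0, 0.0
    for v in buyer_values:                # buyers arrive online, one by one
        price = marginal_cost(copies_sold) + markup   # posted price
        if v >= price:                    # buyer accepts iff value >= price
            copies_sold += 1
            revenue += price
            cost += marginal_cost(copies_sold - 1)
            welfare += v
    return copies_sold, welfare - cost, revenue - cost   # (sold, welfare, profit)

print(run_posted_prices([3.0, 0.4, 2.5, 1.8, 5.0]))
```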
The Power of Linear Estimators
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science. Pub Date: 2011-10-22. DOI: 10.1109/FOCS.2011.81
G. Valiant, Paul Valiant
{"title":"The Power of Linear Estimators","authors":"G. Valiant, Paul Valiant","doi":"10.1109/FOCS.2011.81","DOIUrl":"https://doi.org/10.1109/FOCS.2011.81","url":null,"abstract":"For a broad class of practically relevant distribution properties, which includes entropy and support size, nearly all of the proposed estimators have an especially simple form. Given a set of independent samples from a discrete distribution, these estimators tally the vector of summary statistics -- the number of domain elements seen once, twice, etc. in the sample -- and output the dot product between these summary statistics, and a fixed vector of coefficients. We term such estimators emph{linear}. This historical proclivity towards linear estimators is slightly perplexing, since, despite many efforts over nearly 60 years, all proposed such estimators have significantly sub optimal convergence, compared to the bounds shown in [VV11]. Our main result, in some sense vindicating this insistence on linear estimators, is that for any property in this broad class, there exists a near-optimal linear estimator. Additionally, we give a practical and polynomial-time algorithm for constructing such estimators for any given parameters. While this result does not yield explicit bounds on the sample complexities of these estimation tasks, we leverage the insights provided by this result to give explicit constructions of near-optimal linear estimators for three properties: entropy, $L_1$ distance to uniformity, and for pairs of distributions, $L_1$ distance. Our entropy estimator, when given $O(frac{n}{eps log n})$ independent samples from a distribution of support at most $n,$ will estimate the entropy of the distribution to within additive accuracy $epsilon$, with probability of failure $o(1/poly(n)).$ From the recent lower bounds given in [VV11], this estimator is optimal, to constant factor, both in its dependence on $n$, and its dependence on $epsilon.$ In particular, the inverse-linear convergence rate of this estimator resolves the main open question of [VV11], which left open the possibility that the error decreased only with the square root of the number of samples. Our distance to uniformity estimator, when given $O(frac{m}{eps^2log m})$ independent samples from any distribution, returns an $eps$-accurate estimate of the $L_1$ distance to the uniform distribution of support $m$. This is constant-factor optimal, for constant $epsilon$. Finally, our framework extends naturally to properties of pairs of distributions, including estimating the $L_1$ distance and KL-divergence between pairs of distributions. We give an explicit linear estimator for estimating $L_1$ distance to additive accuracy $epsilon$ using $O(frac{n}{eps^2log n})$ samples from each distribution, which is constant-factor optimal, for constant $epsilon$. This is the first sub linear-sample estimator for this fundamental property.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116787947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 148
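
To see what "linear" means here: the classical plug-in entropy estimator is already of this form, a dot product between the fingerprint (how many domain elements appear exactly once, twice, ...) and the fixed coefficients $c_k = -(k/n)\log(k/n)$. A short sketch; the paper constructs better coefficient vectors, and this only shows the shape of the class.

```python
# Plug-in entropy written as a linear estimator over the fingerprint.
import math
from collections import Counter

def fingerprint(samples):
    counts = Counter(samples)            # element -> multiplicity
    return Counter(counts.values())      # multiplicity k -> F_k

def linear_entropy_estimate(samples):
    n = len(samples)
    F = fingerprint(samples)
    # coefficients c_k = -(k/n) log(k/n); a near-optimal linear estimator
    # would substitute the coefficients from the paper's construction
    return sum(Fk * (-(k / n) * math.log(k / n)) for k, Fk in F.items())

print(linear_entropy_estimate("abracadabra"))   # empirical entropy in nats
```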
Mutual Exclusion with O(log^2 log n) Amortized Work
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science. Pub Date: 2011-10-22. DOI: 10.1109/FOCS.2011.84
M. A. Bender, Seth Gilbert
{"title":"Mutual Exclusion with O(log^2 Log n) Amortized Work","authors":"M. A. Bender, Seth Gilbert","doi":"10.1109/FOCS.2011.84","DOIUrl":"https://doi.org/10.1109/FOCS.2011.84","url":null,"abstract":"This paper presents a new algorithm for mutual exclusion in which each passage through the critical section costs amortized O(log^2 log n) RMRs with high probability. The algorithm operates in a standard asynchronous, local spinning, shared memory model with an oblivious adversary. It guarantees that every process enters the critical section with high probability. The algorithm achieves its efficient performance by exploiting a connection between mutual exclusion and approximate counting.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115690088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 30
A Two Prover One Round Game with Strong Soundness
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science. Pub Date: 2011-10-22. DOI: 10.4086/toc.2013.v009a028
Subhash Khot, S. Safra
{"title":"A Two Prover One Round Game with Strong Soundness","authors":"Subhash Khot, S. Safra","doi":"10.4086/toc.2013.v009a028","DOIUrl":"https://doi.org/10.4086/toc.2013.v009a028","url":null,"abstract":"We show that for any fixed prime $q geq 5$ and constant $zeta &gt, 0$, it is NP-hard to distinguish whether a two prove one round game with $q^6$ answers has value at least $1-zeta$ or at most $frac{4}{q}$. The result is obtained by combining two techniques: (i) An Inner PCP based on the {it point versus subspace} test for linear functions. The testis analyzed Fourier analytically. (ii) The Outer/Inner PCP composition that relies on a certain {it sub-code covering} property for Hadamard codes. This is a new and essentially black-box method to translate a {it codeword test}for Hadamard codes to a {it consistency test}, leading to a full PCP construction. As an application, we show that unless NP has quasi-polynomial time deterministic algorithms, the Quadratic Programming Problem is in approximable within factor $(log n)^{1/6 - o(1)}$.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130157537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 25
Green Computing Algorithmics
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science. Pub Date: 2011-10-22. DOI: 10.1109/FOCS.2011.44
K. Pruhs
{"title":"Green Computing Algorithmics","authors":"K. Pruhs","doi":"10.1109/FOCS.2011.44","DOIUrl":"https://doi.org/10.1109/FOCS.2011.44","url":null,"abstract":"The converging trends of society's desire/need for more sustainable technologies, exponentially increasing power densities within computing devices, and exponentially more computing devices, have inevitably pushed power and energy management into the forefront of computing design and management for purely economic reasons. Thus we are in the midst of a green computing revolution involving the redesign of information technology hardware and software at all levels of the information technology stack. This revolution has spawned a multitude of technological challenges, many of which are algorithmic in nature. We provide pointers into the literature on the green computing algorithmics.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"15 23-24","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120931546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 17
Tight Lower Bounds for 2-query LCCs over Finite Fields
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science. Pub Date: 2011-10-22. DOI: 10.1109/FOCS.2011.28
Arnab Bhattacharyya, Zeev Dvir, Amir Shpilka, Shubhangi Saraf
{"title":"Tight Lower Bounds for 2-query LCCs over Finite Fields","authors":"Arnab Bhattacharyya, Zeev Dvir, Amir Shpilka, Shubhangi Saraf","doi":"10.1109/FOCS.2011.28","DOIUrl":"https://doi.org/10.1109/FOCS.2011.28","url":null,"abstract":"A Locally Correctable Code (LCC) is an error correcting code that has a probabilistic self-correcting algorithm that, with high probability, can correct any coordinate of the codeword by looking at only a few other coordinates, even if a fraction δ of the coordinates are corrupted. LCCs are a stronger form of LDCs (Locally Decodable Codes) which have received a lot of attention recently due to their many applications and surprising constructions. In this work we show a separation between 2-query LDCs and LCCs over finite fields of prime order. Specifically, we prove a lower bound of the form p^{Ω(δd)} on the length of linear 2-query LCCs over $F_p$, that encode messages of length d. Our bound improves over the known bound of $2^{Ω(δd)} cite{GKST06, KdW04, DS07} which is tight for LDCs. Our proof makes use of tools from additive combinatorics which have played an important role in several recent results in theoretical computer science. Corollaries of our main theorem are new incidence geometry results over finite fields. The first is an improvement to the Sylvester-Gallai theorem over finite fields cite{SS10} and the second is a new analog of Beck's theorem over finite fields.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115720538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 19
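
For background on the object being lower-bounded: the canonical 2-query LCC is the Hadamard code over $F_2$, where position $a$ holds $\langle x, a \rangle$ and any position can be recovered from two random queries whose randomness cancels. A toy sketch of that local corrector (illustrative background, not the paper's proof machinery):

```python
# 2-query local correction for the Hadamard code over F_2 (toy scale).
import random

d = 4
x = [1, 0, 1, 1]                                     # message

def ip(a, b):                                        # inner product mod 2
    return sum(ai & bi for ai, bi in zip(a, b)) % 2

points = [[(i >> j) & 1 for j in range(d)] for i in range(2 ** d)]
code = {tuple(a): ip(x, a) for a in points}          # Hadamard codeword

for a in random.sample(points, 2):                   # corrupt a few positions
    code[tuple(a)] ^= 1

def locally_correct(a):
    # queries C(a+r) and C(r) for random r; XOR cancels r, leaving <x,a>
    r = random.choice(points)
    a_plus_r = tuple((ai + ri) % 2 for ai, ri in zip(a, r))
    return code[a_plus_r] ^ code[tuple(r)]

a = points[5]
print(locally_correct(a), "should usually equal", ip(x, a))
```

With a $\delta$ fraction of corruptions, each query hits a corrupted position with probability $\delta$, so the corrector succeeds with probability at least $1 - 2\delta$.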
The 1D Area Law and the Complexity of Quantum States: A Combinatorial Approach
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science. Pub Date: 2011-10-22. DOI: 10.1109/FOCS.2011.91
D. Aharonov, I. Arad, Zeph Landau, U. Vazirani
{"title":"The 1D Area Law and the Complexity of Quantum States: A Combinatorial Approach","authors":"D. Aharonov, I. Arad, Zeph Landau, U. Vazirani","doi":"10.1109/FOCS.2011.91","DOIUrl":"https://doi.org/10.1109/FOCS.2011.91","url":null,"abstract":"The classical description of quantum states is in general exponential in the number of qubits. Can we get polynomial descriptions for more restricted sets of states such as ground states of interesting subclasses of local Hamiltonians? This is the basic problem in the study of the complexity of ground states, and requires an understanding of multi-particle entanglement and quantum correlations in such states. Area laws provide a fundamental ingredient in the study of the complexity of ground states, since they offer a way to bound in a quantitative way the entanglement in such states. Although they have long been conjectured for many body systems in arbitrary dimensions, a general rigorous was only recently proved in Hastings' seminal paper cite{ref:Has07} for 1D systems. In this paper, we give a combinatorial proof of the 1D area law for the special case of frustration free systems, improving by an exponential factor the scaling in terms of the inverse spectral gap and the dimensionality of the particles. The scaling in terms of the dimension of the particles is a potentially important issue in the context of resolving the 2D case and higher dimensions, which is one of the most important open questions in Hamiltonian complexity. Our proof is based on a reformulation of the detectability lemma, introduced by us in the context of quantum gap amplificationcite{ref:Aha09b}. We give an alternative proof of the detectability lemma, which is not only simpler and more intuitive than the original proof, but also removes a key restriction in the original statement, making it more suitable for this new context. We also give a one page proof of Hastings' proof that the correlations in the ground states of gapped Hamiltonians decay exponentially with the distance, demonstrating the simplicity of the combinatorial approach for those problems.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"212 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115661421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 16
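
For a concrete sense of the quantity an area law bounds: the entanglement entropy of a pure state across a 1D cut can be computed from the Schmidt coefficients of the reshaped state vector. A short numpy sketch (illustrative, not from the paper); for a random state the entropy is near maximal, whereas an area law asserts it stays O(1) for gapped ground states regardless of subsystem size.

```python
# Entanglement entropy of a pure state across a bipartite cut, via SVD.
import numpy as np

n = 8                                   # qubits; cut after the first n//2
psi = np.random.randn(2 ** n) + 1j * np.random.randn(2 ** n)
psi /= np.linalg.norm(psi)              # random pure state

M = psi.reshape(2 ** (n // 2), 2 ** (n // 2))   # left block x right block
s = np.linalg.svd(M, compute_uv=False)          # Schmidt coefficients
p = s ** 2                                       # squared coefficients sum to 1
p = p[p > 1e-12]
entropy = -np.sum(p * np.log2(p))                # entanglement entropy (bits)
print(f"S = {entropy:.3f} bits (max possible: {n // 2})")
```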
Which Networks are Least Susceptible to Cascading Failures?
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science. Pub Date: 2011-10-22. DOI: 10.1109/FOCS.2011.38
L. Blume, D. Easley, J. Kleinberg, Robert D. Kleinberg, É. Tardos
{"title":"Which Networks are Least Susceptible to Cascading Failures?","authors":"L. Blume, D. Easley, J. Kleinberg, Robert D. Kleinberg, É. Tardos","doi":"10.1109/FOCS.2011.38","DOIUrl":"https://doi.org/10.1109/FOCS.2011.38","url":null,"abstract":"The spread of a cascading failure through a network is an issue that comes up in many domains: in the contagious failures that spread among financial institutions during a financial crisis, through nodes of a power grid or communication network during a widespread outage, or through a human population during the outbreak of an epidemic disease. Here we study a natural model of threshold contagion: each node is assigned a numerical threshold drawn independently from an underlying distribution, and it will fail as soon as its number of failed neighbors reaches this threshold. Despite the simplicity of the formulation, it has been very challenging to analyze the failure processes that arise from arbitrary threshold distributions, even qualitative questions concerning which graphs are the most resilient to cascading failures in these models have been difficult to resolve. Here we develop a set of new techniques for analyzing the failure probabilities of nodes in arbitrary graphs under this model, and we compare different graphs according to the maximum failure probability of any node in the graph when thresholds are drawn from a given distribution. We find that the space of threshold distributions has a surprisingly rich structure when we consider the risk that these thresholds induce on different graphs: small shifts in the distribution of the thresholds can favor graphs with a maximally clustered structure (i.e., cliques), those with a maximally branching structure (trees), or even intermediate hybrids.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115077727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 101
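
The threshold-contagion model itself is easy to state in code. A minimal simulation sketch (the paper's contribution is the analysis, not the simulation), seeding one failure and comparing a clique against a path under the same draw of thresholds:

```python
# Threshold contagion: a node fails once its failed-neighbor count
# reaches its threshold; failures propagate by BFS from a seed.
import random
from collections import deque

def cascade(adj, thresholds, seed):
    failed = {seed}
    queue = deque([seed])
    failed_neighbors = {v: 0 for v in adj}
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in failed:
                continue
            failed_neighbors[v] += 1
            if failed_neighbors[v] >= thresholds[v]:
                failed.add(v)
                queue.append(v)
    return failed

clique = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
th = {v: random.choice([1, 2]) for v in range(4)}   # thresholds drawn iid
print("clique failures:", cascade(clique, th, 0))
print("path failures:  ", cascade(path, th, 0))
```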
Lasserre Hierarchy, Higher Eigenvalues, and Approximation Schemes for Graph Partitioning and Quadratic Integer Programming with PSD Objectives
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science. Pub Date: 2011-10-22. DOI: 10.1109/FOCS.2011.36
V. Guruswami, A. Sinop
{"title":"Lasserre Hierarchy, Higher Eigenvalues, and Approximation Schemes for Graph Partitioning and Quadratic Integer Programming with PSD Objectives","authors":"V. Guruswami, A. Sinop","doi":"10.1109/FOCS.2011.36","DOIUrl":"https://doi.org/10.1109/FOCS.2011.36","url":null,"abstract":"We present an approximation scheme for optimizing certain Quadratic Integer Programming problems with positive semi definite objective functions and global linear constraints. This framework includes well known graph problems such as Minimum graph bisection, Edge expansion, Uniform sparsest cut, and Small Set expansion, as well as the Unique Games problem. These problems are notorious for the existence of huge gaps between the known algorithmic results and NP-hardness results. Our algorithm is based on rounding semi definite programs from the Lasserre hierarchy, and the analysis uses bounds for low-rank approximations of a matrix in Frobenius norm using columns of the matrix. For all the above graph problems, we give an algorithm running in time $n^{O(r/eps^2)}$ with approximation ratio $frac{1+eps}{min{1,lambda_r}}$, where $lambda_r$ is the $r$'th smallest eigenvalue of the normalized graph Laplacian $Lnorm$. In the case of graph bisection and small set expansion, the number of vertices in the cut is within lower-order terms of the stipulated bound. Our results imply $(1+O(eps))$ factor approximation in time $n^{O(r^ast/eps^2)}$ where $r^ast$ is the number of eigenvalues of $Lnorm$ smaller than $1-eps$. This perhaps gives some indication as to why even showing mere APX-hardness for these problems has been elusive, since the reduction must produce graphs with a slowly growing spectrum (and classes like planar graphs which are known to have such a spectral property often admit good algorithms owing to their nice structure). For Unique Games, we give a factor $(1+frac{2+eps}{lambda_r})$ approximation for minimizing the number of unsatisfied constraints in $n^{O(r/eps)}$ time. This improves an earlier bound for solving Unique Games on expanders, and also shows that Lasserre SDPs are powerful enough to solve well-known integrality gap instances for the basic SDP. We also give an algorithm for independent sets in graphs that performs well when the Laplacian does not have too many eigenvalues bigger than $1+o(1)$.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129412652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 107
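
The spectral quantity driving both the running time and the ratio is easy to compute directly. A small numpy sketch (illustrative only) of $\lambda_r$ for a 6-cycle and the resulting ratio $\frac{1+\epsilon}{\min\{1,\lambda_r\}}$:

```python
# r-th smallest eigenvalue of the normalized Laplacian
# L_norm = I - D^{-1/2} A D^{-1/2}, and the paper's approximation ratio.
import numpy as np

n = 6
A = np.zeros((n, n))                     # adjacency matrix of a 6-cycle
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
L_norm = np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
eigs = np.sort(np.linalg.eigvalsh(L_norm))

r, eps = 3, 0.1
lam_r = eigs[r - 1]                      # r-th smallest eigenvalue
ratio = (1 + eps) / min(1.0, lam_r)      # approximation ratio from the paper
print(f"lambda_{r} = {lam_r:.3f}, ratio = {ratio:.3f}")
```

For the 6-cycle, $\lambda_3 = 0.5$, so the guaranteed ratio at $r = 3$, $\epsilon = 0.1$ is $2.2$; a larger $r$ (at higher running time) buys a smaller ratio.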