2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS): Latest Publications

Exponentially-Hard Gap-CSP and Local PRG via Local Hardcore Functions
2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS) Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.82
B. Applebaum
{"title":"Exponentially-Hard Gap-CSP and Local PRG via Local Hardcore Functions","authors":"B. Applebaum","doi":"10.1109/FOCS.2017.82","DOIUrl":"https://doi.org/10.1109/FOCS.2017.82","url":null,"abstract":"The gap-ETH assumption (Dinur 2016; Manurangsi and Raghavendra 2016) asserts that it is exponentially-hard to distinguish between a satisfiable 3-CNF formula and a 3-CNF formula which is at most 0.99-satisfiable. We show that this assumption follows from the exponential hardness of finding a satisfying assignment for smooth 3-CNFs. Here smoothness means that the number of satisfying assignments is not much smaller than the number of almost-satisfying assignments. We further show that the latter (smooth-ETH) assumption follows from the exponential hardness of solving constraint satisfaction problems over well-studied distributions, and, more generally, from the existence of any exponentially-hard locally-computable one-way function. This confirms a conjecture of Dinur (ECCC 2016).We also prove an analogous result in the cryptographic setting. Namely, we show that the existence of exponentially-hard locally-computable pseudorandom generator with linear stretch (el-PRG) follows from the existence of an exponentially-hard locally-computable almost regular one-way functions.None of the above assumptions (gap-ETH and el-PRG) was previously known to follow from the hardness of a search problem. Our results are based on a new construction of general (GL-type) hardcore functions that, for any exponentially-hard one-way function, output linearly many hardcore bits, can be locally computed, and consume only a linear amount of random bits. We also show that such hardcore functions have several other useful applications in cryptography and complexity theory.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"138 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127443287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
Faster (and Still Pretty Simple) Unbiased Estimators for Network (Un)reliability
2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS) Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.75
David R Karger
{"title":"Faster (and Still Pretty Simple) Unbiased Estimators for Network (Un)reliability","authors":"David R Karger","doi":"10.1109/FOCS.2017.75","DOIUrl":"https://doi.org/10.1109/FOCS.2017.75","url":null,"abstract":"Consider the problem of estimating the (un)reliability of an n-vertex graph when edges fail with probability p. We show that the Recursive Contraction Algorithms for minimum cuts, essentially unchanged and running in n2+o(1) time, yields an unbiased estimator of constant relative variance (and thus an FPRAS with the same time bound) whenever pc","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123044275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Two-Round and Non-Interactive Concurrent Non-Malleable Commitments from Time-Lock Puzzles
2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS) Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.59
Huijia Lin, R. Pass, Pratik Soni
{"title":"Two-Round and Non-Interactive Concurrent Non-Malleable Commitments from Time-Lock Puzzles","authors":"Huijia Lin, R. Pass, Pratik Soni","doi":"10.1109/FOCS.2017.59","DOIUrl":"https://doi.org/10.1109/FOCS.2017.59","url":null,"abstract":"Non-malleable commitments are a fundamental cryptographic tool for preventing against (concurrent) man-in-the-middle attacks. Since their invention by Dolev, Dwork, and Naor in 1991, the round-complexity of non-malleable commitments has been extensively studied, leading up to constant-round concurrent non-malleable commitments based only on one-way functions, and even 3-round concurrent non-malleable commitments based on subexponential one-way functions.But constructions of two-round, or non-interactive, non-malleable commitments have so far remained elusive; the only known construction relied on a strong and non-falsifiable assumption with a non-malleability flavor. Additionally, a recent result by Pass shows the impossibility of basing two-round non-malleable commitments on falsifiable assumptions using a polynomial-time black-box security reduction.In this work, we show how to overcome this impossibility, using super-polynomial-time hardness assumptions. Our main result demonstrates the existence of a two-round concurrent non-malleable commitment based on sub-exponential standard-type assumptions—notably, assuming the existence of the following primitives (all with subexponential security): (1) non-interactive commitments, (2) ZAPs (i.e., 2-round witness indistinguishable proofs), (3) collision-resistant hash functions, and (4) a weak time-lock puzzle.Primitives (1),(2),(3) can be based on e.g., the discrete log assumption and the RSA assumption. Time-lock puzzles—puzzles that can be solved by brute-force in time 2^t, but cannot be solved significantly faster even using parallel computers—were proposed by Rivest, Shamir, and Wagner in 1996, and have been quite extensively studied since; the most popular instantiation relies on the assumption that 2^t repeated squarings mod N = pq require roughly 2^t parallel time. Our notion of a weak time-lock puzzle, requires only that the puzzle cannot be solved in parallel time 2^{t^≥ilon} (and thus we only need to rely on the relatively mild assumption that there are no huge} improvements in the parallel complexity of repeated squaring algorithms).We additionally show that if replacing assumption (2) for a non-interactive witness indistinguishable proof (NIWI), and (3) for auniform} collision-resistant hash function, then a non-interactive} (i.e., one-message) version of our protocolsatisfies concurrent non-malleability w.r.t. uniform attackers.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127378369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 49
Quantum Speed-Ups for Solving Semidefinite Programs
2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS) Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.45
F. Brandão, K. Svore
{"title":"Quantum Speed-Ups for Solving Semidefinite Programs","authors":"F. Brandão, K. Svore","doi":"10.1109/FOCS.2017.45","DOIUrl":"https://doi.org/10.1109/FOCS.2017.45","url":null,"abstract":"We give a quantum algorithm for solving semidefinite programs (SDPs). It has worst-case running time n^{frac{1}{2}} m^{frac{1}{2}} s^2 poly(log(n), log(m), R, r, 1/δ), with n and s the dimension and row-sparsity of the input matrices, respectively, m the number of constraints, δ the accuracy of the solution, and R, r upper bounds on the size of the optimal primal and dual solutions, respectively. This gives a square-root unconditional speed-up over any classical method for solving SDPs both in n and m. We prove the algorithm cannot be substantially improved (in terms of n and m) giving a Ω(n^{frac{1}{2}}+m^{frac{1}{2}}) quantum lower bound for solving semidefinite programs with constant s, R, r and δ. The quantum algorithm is constructed by a combination of quantum Gibbs sampling and the multiplicative weight method. In particular it is based on a classical algorithm of Arora and Kale for approximately solving SDPs. We present a modification of their algorithm to eliminate the need for solving an inner linear program which may be of independent interest.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132161588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 141
Optimal Interactive Coding for Insertions, Deletions, and Substitutions
2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS) Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.30
Alexander A. Sherstov, Pei Wu
{"title":"Optimal Interactive Coding for Insertions, Deletions, and Substitutions","authors":"Alexander A. Sherstov, Pei Wu","doi":"10.1109/FOCS.2017.30","DOIUrl":"https://doi.org/10.1109/FOCS.2017.30","url":null,"abstract":"Interactive coding, pioneered by Schulman (FOCS 92, STOC 93), is concerned with making communication protocols resilient to adversarial noise. The canonical model allows the adversary to alter a small constant fraction of symbols, chosen at the adversarys discretion, as they pass through the communication channel. Braverman, Gelles, Mao, and Ostrovsky (2015) proposed a far-reaching generalization of this model, whereby the adversary can additionally manipulate the channel by removing and inserting symbols. They showed how to faithfully simulate any protocol in this model with corruption rate up to 1/18, using a constant-size alphabet and a constant-factor overhead in communication. We give an optimal simulation of any protocol in this generalized model of substitutions, insertions, and deletions, tolerating a corruption rate up to 1/4 while keeping the alphabet to a constant size and the communication overhead to a constant factor. Our corruption tolerance matches an impossibility result for corruption rate 1/4 which holds even for substitutions alone (Braverman and Rao, STOC 11).","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130061042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Obfuscating Compute-and-Compare Programs under LWE
2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS) Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.61
D. Wichs, Giorgos Zirdelis
{"title":"Obfuscating Compute-and-Compare Programs under LWE","authors":"D. Wichs, Giorgos Zirdelis","doi":"10.1109/FOCS.2017.61","DOIUrl":"https://doi.org/10.1109/FOCS.2017.61","url":null,"abstract":"We show how to obfuscate a large and expressive class of programs, which we call compute-and-compare programs, under the learning-with-errors (LWE) assumption. Each such program CC[f,y] is parametrized by an arbitrary polynomial-time computable function f along with a target value y and we define CC[f,y](x) to output 1 if f(x)=y and 0 otherwise. In other words, the program performs an arbitrary {computation} f and then compares its output against a target y. Our obfuscator satisfies distributional virtual-black-box security, which guarantees that the obfuscated program does not reveal any partial information about the function f or the target value y, as long as they are chosen from some distribution where y has sufficient pseudo-entropy given f. We also extend our result to multi-bit compute-and-compare programs MBCC[f,y,z](x) which output a message z if f(x)=y.Compute-and-compare programs are powerful enough to capture many interesting obfuscation tasks as special cases. This includes obfuscating {conjunctions, and therefore we improve on the prior work of Brakerski et al. (ITCS 16) which constructed a conjunction obfuscator under a non-standard entropic ring-LWE assumption, while here we obfuscate a significantly broader class of programs under standard LWE. We show that our obfuscator has several interesting applications. For example, we can take any encryption scheme and publish an obfuscated plaintext equality tester that allows users to check whether a ciphertext decrypts to some target value y; as long as y has sufficient pseudo-entropy this will not harm semantic security. We can also use our obfuscator to generically upgrade attribute-based encryption to predicate encryption with one-sided attribute-hiding security, and to upgrade witness encryption to indistinguishability obfuscation which is secure for all null circuits. Furthermore, we show that our obfuscator gives new circular-security counter-examples for public-key bit encryption and for unbounded length key cycles.Our result uses the graph-induced multi-linear maps of Gentry, Gorbunov and Halevi (TCC 15), but only in a carefully restricted manner which is provably secure under LWE. Our technique is inspired by ideas introduced in a recent work of Goyal, Koppula and Waters (EUROCRYPT 17) in a seemingly unrelated context.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114431389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 116
Fine-Grained Complexity of Analyzing Compressed Data: Quantifying Improvements over Decompress-and-Solve
2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS) Pub Date : 2017-10-01 DOI: 10.1109/FOCS.2017.26
Amir Abboud, A. Backurs, K. Bringmann, Marvin Künnemann
{"title":"Fine-Grained Complexity of Analyzing Compressed Data: Quantifying Improvements over Decompress-and-Solve","authors":"Amir Abboud, A. Backurs, K. Bringmann, Marvin Künnemann","doi":"10.1109/FOCS.2017.26","DOIUrl":"https://doi.org/10.1109/FOCS.2017.26","url":null,"abstract":"Can we analyze data without decompressing it? As our data keeps growing, understanding the time complexity of problems on compressed inputs, rather than in convenient uncompressed forms, becomes more and more relevant. Suppose we are given a compression of size n of data that originally has size N, and we want to solve a problem with time complexity T(⋅). The naïve strategy of decompress-and-solve gives time T(N), whereas the gold standard is time T(n): to analyze the compression as efficiently as if the original data was small.We restrict our attention to data in the form of a string (text, files, genomes, etc.) and study the most ubiquitous tasks. While the challenge might seem to depend heavily on the specific compression scheme, most methods of practical relevance (Lempel-Ziv-family, dictionary methods, and others) can be unified under the elegant notion of Grammar-Compressions. A vast literature, across many disciplines, established this as an influential notion for Algorithm design.We introduce a direly needed framework for proving (conditional) lower bounds in this field, allowing us to assess whether decompress-and-solve can be improved, and by how much. Our main results are:• The O(nN√log(N/n)) bound for LCS and the O(min(N log N, nM)) bound for Pattern Matching with Wildcards are optimal up to N^{o(1)} factors, under the Strong Exponential Time Hypothesis. (Here, M denotes the uncompressed length of the compressed pattern.)• Decompress-and-solve is essentially optimal for Context-Free Grammar Parsing and RNA Folding, under the k-Clique conjecture.• We give an algorithm showing that decompress-and-solve is not optimal for Disjointness.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"156 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121674083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 32
On Small-Depth Frege Proofs for Tseitin for Grids
2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS) Pub Date : 2017-10-01 DOI: 10.1145/3425606
J. Håstad
{"title":"On Small-Depth Frege Proofs for Tseitin for Grids","authors":"J. Håstad","doi":"10.1145/3425606","DOIUrl":"https://doi.org/10.1145/3425606","url":null,"abstract":"We prove a lower bound on the size of a small depth Frege refutation of the Tseitin contradiction on the grid. We conclude that polynomial size such refutations must use formulas of almost logarithmic depth.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"105 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121772515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
Variable-Version Lovász Local Lemma: Beyond Shearer's Bound
2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS) Pub Date : 2017-09-15 DOI: 10.1109/FOCS.2017.48
Kun He, Liangpan Li, Xingwu Liu, Yuyi Wang, Mingji Xia
{"title":"Variable-Version Lovász Local Lemma: Beyond Shearer's Bound","authors":"Kun He, Liangpan Li, Xingwu Liu, Yuyi Wang, Mingji Xia","doi":"10.1109/FOCS.2017.48","DOIUrl":"https://doi.org/10.1109/FOCS.2017.48","url":null,"abstract":"A tight criterion under which the abstract version Lovász Local Lemma (abstract-LLL) holds was given by Shearer [41] decades ago. However, little is known about that of the variable version LLL (variable-LLL) where events are generated by independent random variables, though variable- LLL naturally models and is enough for almost all applications of LLL. We introduce a necessary and sufficient criterion for variable-LLL, in terms of the probabilities of the events and the event-variable graph specifying the dependency among the events. Based on this new criterion, we obtain boundaries for two families of event-variable graphs, namely, cyclic and treelike bigraphs. These are the first two non-trivial cases where the variable-LLL boundary is fully determined. As a byproduct, we also provide a universal constructive method to find a set of events whose union has the maximum probability, given the probability vector and the event-variable graph.Though it is #P-hard in general to determine variable- LLL boundaries, we can to some extent decide whether a gap exists between a variable-LLL boundary and the corresponding abstract-LLL boundary. In particular, we show that the gap existence can be decided without solving Shearer’s conditions or checking our variable-LLL criterion. Equipped with this powerful theorem, we show that there is no gap if the base graph of the event-variable graph is a tree, while gap appears if the base graph has an induced cycle of length at least 4. The problem is almost completely solved except when the base graph has only 3-cliques, in which case we also get partial solutions.A set of reduction rules are established that facilitate to infer gap existence of a event-variable graph from known ones. As an application, various event-variable graphs, in particular combinatorial ones, are shown to be gapful/gapless.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"117 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134393624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
Learning Multi-Item Auctions with (or without) Samples
2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS) Pub Date : 2017-09-01 DOI: 10.1109/FOCS.2017.54
Yang Cai, C. Daskalakis
{"title":"Learning Multi-Item Auctions with (or without) Samples","authors":"Yang Cai, C. Daskalakis","doi":"10.1109/FOCS.2017.54","DOIUrl":"https://doi.org/10.1109/FOCS.2017.54","url":null,"abstract":"We provide algorithms that learn simple auctions whose revenue is approximately optimal in multi-item multi-bidder settings, for a wide range of bidder valuations including unit-demand, additive, constrained additive, XOS, and subadditive. We obtain our learning results in two settings. The first is the commonly studied setting where sample access to the bidders distributions over valuations is given, for both regular distributions and arbitrary distributions with bounded support. Here, our algorithms require polynomially many samples in the number of items and bidders. The second is a more general max-min learning setting that we introduce, where we are given approximate distributions, and we seek to compute a mechanism whose revenue is approximately optimal simultaneously for all true distributions that are close to the ones we were given. These results are more general in that they imply the sample-based results, and are also applicable in settings where we have no sample access to the underlying distributions but have estimated them indirectly via market research or by observation of bidder behavior in previously run, potentially non-truthful auctions.All our results hold for valuation distributions satisfying the standard (and necessary) independence-across-items property. They also generalize and improve upon recent works of Goldner and Karlin cite{GoldnerK16} and Morgenstern and Roughgarden cite{MorgensternR16, which have provided algorithms that learn approximately optimal multi-item mechanisms in more restricted settings with additive, subadditive and unit-demand valuations using sample access to distributions. We generalize these results to the complete unit-demand, additive, and XOS setting, to i.i.d. subadditive bidders, and to the max-min setting.Our results are enabled by new uniform convergence bounds for hypotheses classes under product measures. Our bounds result in exponential savings in sample complexity compared to bounds derived by bounding the VC dimension and are of independent interest.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131943718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 61