Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing: Latest Publications

Data-dependent hashing via nonlinear spectral gaps
Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. Pub Date: 2018-06-20. DOI: 10.1145/3188745.3188846
Alexandr Andoni, A. Naor, Aleksandar Nikolov, Ilya P. Razenshteyn, Erik Waingarten
Abstract: We establish a generic reduction from nonlinear spectral gaps of metric spaces to data-dependent Locality-Sensitive Hashing, yielding a new approach to the high-dimensional Approximate Near Neighbor Search (ANN) problem under various distance functions. Using this reduction, we obtain the following results:
* For general d-dimensional normed spaces and n-point datasets, we obtain a cell-probe ANN data structure with approximation O(log d / ε²), space d^O(1) · n^(1+ε), and d^O(1) · n^ε cell probes per query, for any ε > 0. No non-trivial approximation was known before in this generality other than the O(√d) bound that follows from embedding a general norm into ℓ2.
* For ℓp and Schatten-p norms, we improve the data structure further, obtaining approximation O(p) and sublinear query time. For ℓp, this improves upon the previous best approximation 2^O(p) (which required space polynomial, rather than near-linear, in n). For the Schatten-p norm, no non-trivial ANN data structure was known before this work.
Previous approaches to the ANN problem either exploit the low dimensionality of a metric, requiring space exponential in the dimension, or circumvent the curse of dimensionality by embedding a metric into a "tractable" space, such as ℓ1. Our new generic reduction proceeds differently from both of these approaches, using a novel partitioning method.
Citations: 28
Interactive compression to external information
Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. Pub Date: 2018-06-20. DOI: 10.1145/3188745.3188956
M. Braverman, Gillat Kol
Abstract: We describe a new way of compressing two-party communication protocols to get protocols with potentially smaller communication. We show that every communication protocol that communicates C bits and reveals I bits of information about the participants' private inputs to an observer who watches the communication can be simulated by a new protocol that communicates at most poly(I) · log log(C) bits. Our result is tight up to polynomial factors, as it matches the recent work separating communication complexity from external information cost.
Citations: 6
Hitting sets with near-optimal error for read-once branching programs
Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. Pub Date: 2018-06-20. DOI: 10.1145/3188745.3188780
M. Braverman, Gil Cohen, Sumegha Garg
Abstract: Nisan (Combinatorica '92) constructed a pseudorandom generator for length-n, width-n read-once branching programs (ROBPs) with error ε and seed length O(log²n + log n · log(1/ε)). A major goal in complexity theory is to reduce the seed length, hopefully to the optimal O(log n + log(1/ε)), or to construct improved hitting sets, as these would yield stronger derandomization of BPL and RL, respectively. In contrast to a successful line of work in restricted settings, no progress has been made for general, unrestricted ROBPs. Indeed, Nisan's construction is the best pseudorandom generator and, prior to this work, also the best hitting set for unrestricted ROBPs. In this work, we make the first improvement for the general case by constructing a hitting set with seed length O(log²n + log(1/ε)). That is, we decouple ε and n, and obtain near-optimal dependence on the former. The regime of parameters in which our construction strictly improves upon prior works, namely log(1/ε) ≫ log n, is well motivated by the work of Saks and Zhou (J. CSS '99), who use pseudorandom generators with error ε = 2^(−(log n)²) in their proof that BPL ⊆ L^(3/2). We further suggest a research program towards proving that BPL ⊆ L^(4/3), in which our result achieves one step. As our main technical tool, we introduce and construct a new type of primitive we call pseudorandom pseudo-distributions. Informally, this is a generalization of pseudorandom generators in which one may assign negative and unbounded weights to paths, as opposed to working with probability distributions. We show that such a primitive yields hitting sets and, for derandomization purposes, can be used to derandomize two-sided error algorithms.
Citations: 17
The minimum Euclidean-norm point in a convex polytope: Wolfe's combinatorial algorithm is exponential
Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. Pub Date: 2018-06-20. DOI: 10.1145/3188745.3188820
J. D. Loera, Jamie Haddock, Luis Rademacher
Abstract: The complexity of Philip Wolfe's method for the minimum Euclidean-norm point problem over a convex polytope has remained unknown since he proposed the method in 1974. We present the first example on which Wolfe's method takes exponential time. Additionally, we improve previous results to show that linear programming reduces in strongly polynomial time to the minimum-norm point problem over a simplex.
Citations: 7
A tighter welfare guarantee for first-price auctions
Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. Pub Date: 2018-06-20. DOI: 10.1145/3188745.3188944
D. Hoy, Sam Taggart, Zihe Wang
Abstract: This paper proves that the welfare of the first-price auction in Bayes-Nash equilibrium is at least a 0.743-fraction of the welfare of the optimal mechanism, assuming agents' values are independently distributed. The previous best bound was 1 − 1/e ≈ 0.63, derived using smoothness, the standard technique for reasoning about the welfare of games in equilibrium. In the worst known example, the first-price auction achieves a ≈ 0.869-fraction of the optimal welfare, far better than the theoretical guarantee. Despite this large gap, it was unclear whether the 1 − 1/e bound was tight. We prove that it is not. Our analysis eschews smoothness and instead uses the independence assumption on agents' value distributions to give a more careful accounting of the welfare contribution of agents who win despite not having the highest value.
Citations: 11
Shape of diffusion and size of monochromatic region of a two-dimensional spin system
Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. Pub Date: 2018-06-20. DOI: 10.1145/3188745.3188836
H. Omidvar, M. Franceschetti
Abstract: We consider an agent-based distributed algorithm with exponentially distributed waiting times in which agents with binary states interact locally over a geometric graph and, based on this interaction and on the value of a common intolerance threshold τ, decide whether to change their states. This model is equivalent to an Asynchronous Cellular Automaton (ACA) with extended Moore neighborhoods, a zero-temperature Ising model with Glauber dynamics, or a Schelling model of self-organized segregation in an open system, and has applications in the analysis of social and biological networks and spin glass systems. We prove a shape theorem for the spread of the "affected" nodes during the process dynamics and show that in the steady state, for τ ∈ (τ*, 1 − τ*) ∖ {1/2}, where τ* ≈ 0.488, the size of the "monochromatic region" at the end of the process is at least exponential in the size of the local neighborhood of interaction, with probability approaching one as N grows. Combined with previous results on the expected size of the monochromatic region that provide a matching upper bound, this implies that in the steady state the size of the monochromatic region of any agent is exponential with high probability for the mentioned interval of τ. The shape theorem is based on a novel concentration inequality for the spreading time and provides a precise geometrical description of the process dynamics. The result on the size of the monochromatic region considerably extends our understanding of the steady state. Showing convergence with high probability, it rules out the possibility that only a small fraction of the nodes are eventually contained in large monochromatic regions, which was left open by previous works.
Citations: 5
The adaptive complexity of maximizing a submodular function
Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. Pub Date: 2018-06-20. DOI: 10.1145/3188745.3188752
Eric Balkanski, Yaron Singer
Abstract: In this paper we study the adaptive complexity of submodular optimization. Informally, the adaptive complexity of a problem is the minimal number of sequential rounds required to achieve a constant-factor approximation when polynomially many queries can be executed in parallel in each round. Adaptivity is a fundamental concept that is heavily studied in computer science, largely due to the need for parallelizing computation. Somewhat surprisingly, very little is known about adaptivity in submodular optimization. For the canonical problem of maximizing a monotone submodular function under a cardinality constraint, to the best of our knowledge, all that is known to date is that the adaptive complexity is between 1 and Ω(n). Our main result is a tight characterization showing that the adaptive complexity of maximizing a monotone submodular function under a cardinality constraint is Θ(log n):
- We describe an algorithm which requires O(log n) sequential rounds and achieves an approximation that is arbitrarily close to 1/3;
- We show that no algorithm can achieve an approximation better than O(1/log n) with fewer than O(log n / log log n) rounds.
Thus, when allowing for parallelization, our algorithm achieves a constant-factor approximation exponentially faster than any known existing algorithm for submodular maximization. Importantly, the approximation algorithm is achieved via adaptive sampling and complements a recent line of work on optimization of functions learned from data. In many cases we do not know the functions we optimize and learn them from labeled samples. Recent results show that no algorithm can obtain a constant-factor approximation guarantee using polynomially many labeled samples as in the PAC and PMAC models, drawn from any distribution. Since learning with non-adaptive samples over any distribution results in a sharp impossibility, we consider learning with adaptive samples, where the learner obtains poly(n) samples drawn from a distribution of her choice in every round. Our result implies that in the realizable case, where there is a true underlying function generating the data, Θ(log n) batches of adaptive samples are necessary and sufficient to approximately "learn to optimize" a monotone submodular function under a cardinality constraint.
Citations: 110
Distribution-free junta testing
Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. Pub Date: 2018-06-20. DOI: 10.1145/3188745.3188842
Zhengyang Liu, Xi Chen, R. Servedio, Ying Sheng, Jinyu Xie
Abstract: We study the problem of testing whether an unknown n-variable Boolean function is a k-junta in the distribution-free property testing model, where the distance between functions is measured with respect to an arbitrary and unknown probability distribution over {0,1}^n. Our first main result is that distribution-free k-junta testing can be performed, with one-sided error, by an adaptive algorithm that uses Õ(k²)/ε queries (independent of n). Complementing this, our second main result is a lower bound showing that any non-adaptive distribution-free k-junta testing algorithm must make Ω(2^(k/3)) queries even to test to accuracy ε = 1/3. These bounds establish that while the optimal query complexity of non-adaptive k-junta testing is 2^Θ(k), for adaptive testing it is poly(k), and thus show that adaptivity provides an exponential improvement in the distribution-free query complexity of testing juntas.
Citations: 26
Towards a proof of the 2-to-1 games conjecture?
Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. Pub Date: 2018-06-20. DOI: 10.1145/3188745.3188804
Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, S. Safra
Abstract: We present a polynomial-time reduction from gap-3LIN to label cover with 2-to-1 constraints. In the "yes" case the fraction of satisfied constraints is at least 1 − ε, and in the "no" case we show that this fraction is at most ε, assuming a certain (new) combinatorial hypothesis on the Grassmann graph. In other words, we describe a combinatorial hypothesis that implies the 2-to-1 conjecture with imperfect completeness. The companion submitted paper [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] makes some progress towards proving this hypothesis. Our work builds on earlier work by a subset of the authors [Khot, Minzer and Safra, STOC 2017], where a slightly different hypothesis was used to obtain hardness of approximating vertex cover to within a factor of √2 − ε. The most important implication of this work is (assuming the hypothesis) an NP-hardness gap of 1/2 − ε vs. ε for unique games. In addition, we derive optimal NP-hardness for approximating the max-cut-gain problem, NP-hardness of coloring an almost 4-colorable graph with any constant number of colors, and the same √2 − ε NP-hardness for approximate vertex cover that was already obtained based on a slightly different hypothesis. Recent progress towards proving our hypothesis [Barak, Kothari and Steurer, ECCC TR18-077], [Dinur, Khot, Kindler, Minzer and Safra, STOC 2018] directly implies some new unconditional NP-hardness results. These include new points of NP-hardness for unique games and for 2-to-1 and 2-to-2 games. More recently, the full version of our hypothesis was proven [Khot, Minzer and Safra, ECCC TR18-006].
Citations: 75
Fast algorithms for knapsack via convolution and prediction
Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. Pub Date: 2018-06-20. DOI: 10.1145/3188745.3188876
M. Bateni, M. Hajiaghayi, Saeed Seddighin, Cliff Stein
Abstract: The knapsack problem is a fundamental problem in combinatorial optimization. It has been studied extensively from theoretical as well as practical perspectives, as it is one of the most well-known NP-hard problems. The goal is to pack a knapsack of size t with the maximum value from a collection of n items with given sizes and values. Recent evidence suggests that a classic O(nt) dynamic-programming solution for the knapsack problem might be the fastest in the worst case. In fact, solving the knapsack problem was shown to be computationally equivalent to the (min, +) convolution problem, which is thought to be facing a quadratic-time barrier. This hardness is in contrast to the more famous (+, ·) convolution (generally known as polynomial multiplication), which has an O(n log n)-time solution via the Fast Fourier Transform. Our main results are algorithms with near-linear running times (in terms of the size of the knapsack and the number of items) for the knapsack problem, if either the values or sizes of items are small integers. More specifically, if item sizes are integers bounded by s_max, the running time of our algorithm is Õ((n+t)·s_max); if the item values are integers bounded by v_max, our algorithm runs in time Õ(n + t·v_max). The best previously known running times were O(nt), together with bounds quadratic and linear in n that carry additional factors of the size and value bounds (Pisinger, J. of Alg., 1999). At the core of our algorithms lies the prediction technique: roughly speaking, this new technique enables us to compute the convolution of two vectors in near-linear time when an approximation of the solution to within a bounded additive error is available. Our results also improve the best known strongly polynomial time solutions for knapsack. In the limited-size setting, when the items have multiplicities, the fastest strongly polynomial time algorithms for knapsack run in time quadratic and cubic in n (with further factors depending on the size bound) for the cases of infinite and given multiplicities, respectively. Our results improve both running times by a factor that grows at least linearly in n.
Citations: 30