2011 IEEE 52nd Annual Symposium on Foundations of Computer Science: Latest Publications

Near Optimal Column-Based Matrix Reconstruction
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science Pub Date : 2011-03-04 DOI: 10.1137/12086755X
Christos Boutsidis, P. Drineas, M. Magdon-Ismail
{"title":"Near Optimal Column-Based Matrix Reconstruction","authors":"Christos Boutsidis, P. Drineas, M. Magdon-Ismail","doi":"10.1137/12086755X","DOIUrl":"https://doi.org/10.1137/12086755X","url":null,"abstract":"We consider low-rank reconstruction of a matrix using a subset of its columns and we present asymptotically optimal algorithms for both spectral norm and Frobenius norm reconstruction. The main tools we introduce to obtain our results are: (i) the use of fast approximate SVD-like decompositions for column-based matrix reconstruction, and (ii) two deterministic algorithms for selecting rows from matrices with orthonormal columns, building upon the sparse representation theorem for decompositions of the identity that appeared in [1].","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126287708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 244
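The entry above is about approximating a matrix by the span of a few of its actual columns. As a rough illustration of the setting (not the paper's deterministic or near-optimal algorithms), the sketch below samples columns by rank-k leverage scores computed from an SVD and measures the Frobenius-norm reconstruction error; the function name and sampling scheme are illustrative assumptions.

import numpy as np

def column_reconstruction(A, k, c, rng):
    """Toy column-based reconstruction: sample c columns by rank-k leverage
    scores, then project A onto the span of the sampled columns."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    scores = np.sum(Vt[:k, :] ** 2, axis=0)        # rank-k leverage scores, sum to k
    probs = scores / scores.sum()
    cols = rng.choice(A.shape[1], size=c, replace=False, p=probs)
    C = A[:, cols]
    return C @ np.linalg.pinv(C) @ A, cols         # best approximation of A within span(C)

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 10)) @ rng.standard_normal((10, 60))   # rank-10 matrix
A += 0.01 * rng.standard_normal(A.shape)                             # small noise
A_hat, cols = column_reconstruction(A, k=10, c=20, rng=rng)
print("relative Frobenius error:", np.linalg.norm(A - A_hat) / np.linalg.norm(A))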
Solving Connectivity Problems Parameterized by Treewidth in Single Exponential Time
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science Pub Date : 2011-03-02 DOI: 10.1145/3506707
Marek Cygan, Jesper Nederlof, Marcin Pilipczuk, Michal Pilipczuk, J. M. M. Van Rooij, J. O. Wojtaszczyk
{"title":"Solving Connectivity Problems Parameterized by Treewidth in Single Exponential Time","authors":"Marek Cygan, Jesper Nederlof, Marcin Pilipczuk, Michal Pilipczuk, J. M. M. Van Rooij, J. O. Wojtaszczyk","doi":"10.1145/3506707","DOIUrl":"https://doi.org/10.1145/3506707","url":null,"abstract":"For the vast majority of local problems on graphs of small tree width (where by local we mean that a solution can be verified by checking separately the neighbourhood of each vertex), standard dynamic programming techniques give c^tw |V|^O(1) time algorithms, where tw is the tree width of the input graph G = (V, E) and c is a constant. On the other hand, for problems with a global requirement (usually connectivity) the best -- known algorithms were naive dynamic programming schemes running in at least tw^tw time. We breach this gap by introducing a technique we named Cut&Count that allows to produce c^tw |V|^O(1) time Monte Carlo algorithms for most connectivity-type problems, including Hamiltonian Path, Steiner Tree, Feedback Vertex Set and Connected Dominating Set. These results have numerous consequences in various fields, like parameterized complexity, exact and approximate algorithms on planar and H-minor-free graphs and exact algorithms on graphs of bounded degree. The constant c in our algorithms is in all cases small, and in several cases we are able to show that improving those constants would cause the Strong Exponential Time Hypothesis to fail. In contrast to the problems aiming to minimize the number of connected components that we solve using Cut&Count as mentioned above, we show that, assuming the Exponential Time Hypothesis, the aforementioned gap cannot be breached for some problems that aim to maximize the number of connected components like Cycle Packing.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128753266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 317
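The Cut&Count technique rests on a simple counting fact: if a vertex set X induces cc(X) connected components and one fixed vertex of X is pinned to the left side, then X has exactly 2^(cc(X)-1) consistent cuts (partitions of X with no induced edge crossing), so only connected sets contribute an odd amount to a count taken modulo 2. Below is a brute-force check of that identity on a small graph (illustration only; the paper's contribution is evaluating such counts in c^tw |V|^O(1) time over a tree decomposition, with random weights to avoid cancellation).

import itertools
import networkx as nx

def consistent_cuts(G, X, pin):
    # Count partitions (XL, XR) of X with pin in XL and no edge of G[X] crossing.
    others = [v for v in X if v != pin]
    edges = list(G.subgraph(X).edges())
    count = 0
    for bits in itertools.product([0, 1], repeat=len(others)):
        XL = {pin} | {v for v, b in zip(others, bits) if b == 1}
        count += all((u in XL) == (v in XL) for u, v in edges)
    return count

G = nx.cycle_graph(5)
for r in range(1, 6):
    for X in itertools.combinations(G.nodes(), r):
        cc = nx.number_connected_components(G.subgraph(X))
        assert consistent_cuts(G, X, X[0]) == 2 ** (cc - 1)
print("2^(cc(X)-1) consistent cuts verified for every nonempty subset of C5")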
Privacy Amplification and Non-malleable Extractors via Character Sums
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science Pub Date : 2011-02-26 DOI: 10.1137/120868414
Y. Dodis, Xin Li, T. Wooley, David Zuckerman
{"title":"Privacy Amplification and Non-malleable Extractors via Character Sums","authors":"Y. Dodis, Xin Li, T. Wooley, David Zuckerman","doi":"10.1137/120868414","DOIUrl":"https://doi.org/10.1137/120868414","url":null,"abstract":"In studying how to communicate over a public channel with an active adversary, Dodis and Wichs introduced the notion of a non-malleable extractor. A non-malleable extractor dramatically strengthens the notion of a strong extractor. A strong extractor takes two inputs, a weakly-random $x$ and a uniformly random seed $y$, and outputs a string which appears uniform, even given $y$. For a non-malleable extractor $nm$, the output $nm(x,y)$ should appear uniform given $y$ as well as $nm(x,adv(y))$, where $adv$ is an arbitrary function with $adv(y) neq y$. We show that an extractor introduced by Chor and Gold reich is non-malleable when the entropy rate is above half. It outputs a linear number of bits when the entropy rate is $1/2 + alpha$, for any $alpha>0$. Previously, no nontrivial parameters were known for any non-malleable extractor. To achieve a polynomial running time when outputting many bits, we rely on a widely-believed conjecture about the distribution of prime numbers in arithmetic progressions. Our analysis involves a character sum estimate, which may be of independent interest. Using our non-malleable extractor, we obtain protocols for ``privacy amplification & quot;: key agreement between two parties who share a weakly-random secret. Our protocols work in the presence of an active adversary with unlimited computational power, and have asymptotically optimal entropy loss. When the secret has entropy rate greater than $1/2$, the protocol follows from a result of Dodis and Wichs, and takes two rounds. When the secret has entropy rate $delta$ for any constant~$delta>0$, our new protocol takes a constant (polynomial in $1/delta$) number of rounds. Our protocols run in polynomial time under the above well-known conjecture about primes.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"126 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114489962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 72
A Nearly-m log n Time Solver for SDD Linear Systems
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science Pub Date : 2011-02-23 DOI: 10.1109/FOCS.2011.85
I. Koutis, G. Miller, Richard Peng
{"title":"A Nearly-m log n Time Solver for SDD Linear Systems","authors":"I. Koutis, G. Miller, Richard Peng","doi":"10.1109/FOCS.2011.85","DOIUrl":"https://doi.org/10.1109/FOCS.2011.85","url":null,"abstract":"We present an improved algorithm for solving symmetrically diagonally dominant linear systems. On input of an $ntimes n$ symmetric diagonally dominant matrix $A$ with $m$ non-zero entries and a vector $b$ such that $Abar{x} = b$ for some (unknown) vector $bar{x}$, our algorithm computes a vector $x$ such that $| |{x}-bar{x}| |_A1 in time. O tiled (m log n log (1/epsilon))^2. The solver utilizes in a standard way a 'preconditioning' chain of progressively sparser graphs. To claim the faster running time we make a two-fold improvement in the algorithm for constructing the chain. The new chain exploits previously unknown properties of the graph sparsification algorithm given in [Koutis,Miller,Peng, FOCS 2010], allowing for stronger preconditioning properties.We also present an algorithm of independent interest that constructs nearly-tight low-stretch spanning trees in time Otiled (mlog n), a factor of O (log n) faster than the algorithm in [Abraham,Bartal,Neiman, FOCS 2008]. This speedup directly reflects on the construction time of the preconditioning chain.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125140763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 278
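The heart of the solver above is preconditioned iterative refinement with a chain of graph preconditioners built from low-stretch spanning trees plus sampled off-tree edges. That chain is beyond a snippet, but the outer structure can be sketched with SciPy's conjugate gradient on an SDD matrix; here a plain Jacobi (diagonal) preconditioner stands in for the paper's preconditioning chain, so this is only a shape-of-the-computation sketch.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# A standard SDD test matrix: the 1-D discrete Laplacian (tridiagonal 2 / -1),
# which is symmetric, diagonally dominant and positive definite.
n = 1000
A = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)], [-1, 0, 1], format="csr")
b = np.random.default_rng(0).standard_normal(n)

# Jacobi preconditioner: approximate A^{-1} by the inverse of its diagonal.
d_inv = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: d_inv * v)

x, info = cg(A, b, M=M)
print("cg info:", info, " residual norm:", np.linalg.norm(A @ x - b))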
Approximation Algorithms for Correlated Knapsacks and Non-martingale Bandits
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science Pub Date : 2011-02-17 DOI: 10.1109/FOCS.2011.48
Anupam Gupta, Ravishankar Krishnaswamy, M. Molinaro, R. Ravi
{"title":"Approximation Algorithms for Correlated Knapsacks and Non-martingale Bandits","authors":"Anupam Gupta, Ravishankar Krishnaswamy, M. Molinaro, R. Ravi","doi":"10.1109/FOCS.2011.48","DOIUrl":"https://doi.org/10.1109/FOCS.2011.48","url":null,"abstract":"In the stochastic knapsack problem, we are given a knapsack of size B, and a set of items whose sizes and rewards are drawn from a known probability distribution. To know the actual size and reward we have to schedule the item -- when it completes, we get to know these values. The goal is to schedule the items (possibly making adaptive decisions based on the sizes seen so far) to maximize the expected total reward of items which successfully pack into the knapsack. We know constant-factor approximations when (i) the rewards and sizes are independent, and (ii) we cannot prematurely cancel items after we schedule them. What if either or both assumptions are relaxed? Related stochastic packing problems are the multi-armed bandit (and budgeted learning) problems, here one is given several arms which evolve in a specified stochastic fashion with each pull, and the goal is to (adaptively) decide which arms to pull, in order to maximize the expected reward obtained after B pulls in total. Much recent work on this problem focuses on the case when the evolution of each arm follows a martingale, i.e., when the expected reward from one pull of an arm is the same as the reward at the current state. What if the rewards do not form a martingale? In this paper, we give O(1)-approximation algorithms for the stochastic knapsack problem with correlations and/or cancellations. Extending the ideas developed here, we give O(1)-approximations for MAB problems without the martingale assumption. Indeed, we can show that previously proposed linear programming relaxations for these problems have large integrality gaps. So we propose new time-indexed LP relaxations, using a decomposition and \"gap-filling\" approach, we convert these fractional solutions to distributions over strategies, and then use the LP values and the time ordering information from these strategies to devise randomized adaptive scheduling algorithms.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131911757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 76
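In the correlated model above, an item's size and reward are revealed only when it is scheduled, and an adaptive policy may react to what it has seen so far. For tiny instances the optimal adaptive value can be computed by brute force, which helps make the model concrete. The sketch below is exponential-time, assumes the common convention that an overflowing item yields no reward and ends the process, and is not the paper's LP-based O(1)-approximation.

from functools import lru_cache

# Each item is a tuple of (probability, size, reward) outcomes; size and reward
# may be correlated because they are drawn jointly.
items = (
    ((0.5, 1, 1.0), (0.5, 3, 6.0)),     # the large size comes with the large reward
    ((0.7, 2, 2.0), (0.3, 2, 5.0)),
    ((1.0, 1, 1.5),),
)
B = 4                                    # knapsack capacity

@lru_cache(maxsize=None)
def best(remaining, cap):
    # Optimal expected reward of an adaptive policy over the remaining items.
    value = 0.0                          # the policy may always stop
    for i in remaining:
        rest = tuple(j for j in remaining if j != i)
        ev = 0.0
        for p, size, reward in items[i]:
            if size <= cap:              # fits: collect the reward and continue adaptively
                ev += p * (reward + best(rest, cap - size))
            # otherwise the item overflows and the process ends with nothing more
        value = max(value, ev)
    return value

print("optimal adaptive expected reward:", best(tuple(range(len(items))), B))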
A Constant Factor Approximation Algorithm for Unsplittable Flow on Paths
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science Pub Date : 2011-02-17 DOI: 10.1137/120868360
P. Bonsma, J. Schulz, Andreas Wiese
{"title":"A Constant Factor Approximation Algorithm for Unsplittable Flow on Paths","authors":"P. Bonsma, J. Schulz, Andreas Wiese","doi":"10.1137/120868360","DOIUrl":"https://doi.org/10.1137/120868360","url":null,"abstract":"In this paper, we present a constant-factor approximation algorithm for the unsplittable flow problem on a path. This improves on the previous best known approximation factor of O(log n). The approximation ratio of our algorithm is 7+e for any e>0. In the unsplittable flow problem on a path, we are given a capacitated path P and n tasks, each task having a demand, a profit, and start and end vertices. The goal is to compute a maximum profit set of tasks, such that for each edge e of P, the total demand of selected tasks that use e does not exceed the capacity of e. This is a well-studied problem that occurs naturally in various settings, and therefore it has been studied under alternative names, such as resource allocation, bandwidth allocation, resource constrained scheduling, temporal knapsack and interval packing. Polynomial time constant factor approximation algorithms for the problem were previously known only under the no-bottleneck assumption (in which the maximum task demand must be no greater than the minimum edge capacity). We introduce several novel algorithmic techniques, which might be of independent interest: a framework which reduces the problem to instances with a bounded range of capacities, and a new geometrically inspired dynamic program which solves a special case of the maximum weight independent set of rectangles problem to optimality. In addition, we show that the problem is strongly NP-hard even if all edge capacities are equal and all demands are either 1, 2, or 3.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"185 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125433841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 78
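For very small instances, the unsplittable-flow-on-a-path objective is easy to state as code: choose the maximum-profit set of tasks whose combined demand respects every edge capacity. The brute-force reference below is exponential in the number of tasks and is only meant to pin down the problem, not the constant-factor algorithm of the paper; the instance data is made up.

import itertools

capacity = [4, 3, 5, 3]            # capacities of the path edges 0..3
# Each task: (start_vertex, end_vertex, demand, profit); it uses edges start..end-1.
tasks = [(0, 2, 2, 3.0), (1, 4, 2, 4.0), (0, 4, 3, 5.0), (2, 3, 1, 1.0)]

def feasible(chosen):
    load = [0] * len(capacity)
    for s, t, d, _ in chosen:
        for e in range(s, t):
            load[e] += d
    return all(load[e] <= capacity[e] for e in range(len(capacity)))

best_profit, best_set = 0.0, ()
for r in range(len(tasks) + 1):
    for subset in itertools.combinations(tasks, r):
        if feasible(subset):
            profit = sum(task[3] for task in subset)
            if profit > best_profit:
                best_profit, best_set = profit, subset

print("max profit:", best_profit, "tasks:", best_set)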
On the Complexity of Commuting Local Hamiltonians, and Tight Conditions for Topological Order in Such Systems
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science Pub Date : 2011-02-03 DOI: 10.1109/FOCS.2011.58
D. Aharonov, Lior Eldar
{"title":"On the Complexity of Commuting Local Hamiltonians, and Tight Conditions for Topological Order in Such Systems","authors":"D. Aharonov, Lior Eldar","doi":"10.1109/FOCS.2011.58","DOIUrl":"https://doi.org/10.1109/FOCS.2011.58","url":null,"abstract":"The local Hamiltonian problem plays the equivalent role of SAT in quantum complexity theory. Understanding the complexity of the intermediate case in which the constraints are quantum but all local terms in the Hamiltonian commute, is of importance for conceptual, physical and computational complexity reasons. Bravyi and Vyalyi showed in 2003, using a clever application of the representation theory of C*-algebras, that if the terms in the Hamiltonian are all two-local, the problem is in NP, and the entanglement in the ground states is local. The general case remained open since then. In this paper we extend this result beyond the two-local case, to the case of three-qubit interactions. We then extend our results even further, and show that NP verification is possible for three-wise interaction between qutrits as well, as long as the interaction graph is planar and also \" nearly Euclidean & quot, in some well-defined sense. The proofs imply that in all such systems, the entanglement in the ground states is local. These extensions imply an intriguing sharp transition phenomenon in commuting Hamiltonian systems: the ground spaces of 3-local \" physical & quot, systems based on qubits and qutrits are diagonalizable by a basis whose entanglement is highly local, while even slightly more involved interactions (the particle dimensionality or the locality of the interaction is larger) already exhibit an important long-range entanglement property called Topological Order. Our results thus imply that Kitaev's celebrated Toric code construction is, in a well defined sense, optimal as a construction of Topological Order based on commuting Hamiltonians.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123288099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 62
The Minimum k-way Cut of Bounded Size is Fixed-Parameter Tractable
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science Pub Date : 2011-01-24 DOI: 10.1109/FOCS.2011.53
K. Kawarabayashi, M. Thorup
{"title":"The Minimum k-way Cut of Bounded Size is Fixed-Parameter Tractable","authors":"K. Kawarabayashi, M. Thorup","doi":"10.1109/FOCS.2011.53","DOIUrl":"https://doi.org/10.1109/FOCS.2011.53","url":null,"abstract":"We consider the minimum $k$-way cut problem for unweighted undirected graphs with a size bound $s$ on the number of cut edges allowed. Thus we seek to remove as few edges as possible so as to split a graph into $k$ components, or report that this requires cutting more than $s$ edges. We show that this problem is fixed-parameter tractable (FPT) with the standard parameterization in terms of the solution size $s$. More precisely, for $s=O(1)$, we present a quadratic time algorithm. Moreover, we present a much easier linear time algorithm for planar graphs and bounded genus graphs. Our tractability result stands in contrast to known W[1] hardness of related problems. Without the size bound, Downey et al.~[2003] proved that the minimum $k$-way cut problem is W[1] hard with parameter $k$, and this is even for simple unweighted graphs. Downey et al.~asked about the status for planar graphs. We get linear time with fixed parameter $k$ for simple planar graphs since the minimum $k$-way cut of a planar graph is of size at most $6k$. More generally, we get FPT with parameter $k$ for any graph class with bounded average degree. A simple reduction shows that vertex cuts are at least as hard as edge cuts, so the minimum $k$-way vertex cut is also W[1] hard with parameter $k$. Marx [2004] proved that finding a minimum $k$-way vertex cut of size $s$ is also W[1] hard with parameter $s$. Marx asked about the FPT status with edge cuts, which we prove tractable here. We are not aware of any other cut problem where the vertex version is W[1] hard but the edge version is FPT, e.g., Marx [2004] proved that the $k$-terminal cut problem is FPT parameterized by the cut size, both for edge and vertex cuts.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"365 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122774587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 61
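The parameterized problem above asks: remove at most s edges so that the graph falls apart into at least k components, or report that more than s edges are needed. Here is a brute-force reference implementation for small graphs; it tries every edge subset of size at most s, so it is nowhere near the quadratic-time FPT algorithm of the paper and is only meant to make the definition concrete.

import itertools
import networkx as nx

def k_way_cut_of_size_at_most_s(G, k, s):
    # Return a smallest edge set of size <= s whose removal leaves >= k components,
    # or None if every such cut needs more than s edges.
    edges = list(G.edges())
    for size in range(s + 1):
        for cut in itertools.combinations(edges, size):
            H = G.copy()
            H.remove_edges_from(cut)
            if nx.number_connected_components(H) >= k:
                return cut
    return None

G = nx.grid_2d_graph(3, 3)                       # 3x3 grid graph
print(k_way_cut_of_size_at_most_s(G, k=3, s=4))  # e.g. cuts off two corner vertices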
Maximizing Expected Utility for Stochastic Combinatorial Optimization Problems
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science Pub Date : 2010-12-14 DOI: 10.1109/FOCS.2011.33
J. Li, A. Deshpande
{"title":"Maximizing Expected Utility for Stochastic Combinatorial Optimization Problems","authors":"J. Li, A. Deshpande","doi":"10.1109/FOCS.2011.33","DOIUrl":"https://doi.org/10.1109/FOCS.2011.33","url":null,"abstract":"We study the stochastic versions of a broad class of combinatorial problems where the weights of the elements in the input dataset are uncertain. The class of problems that we study includes shortest paths, minimum weight spanning trees, and minimum weight matchings over probabilistic graphs, and other combinatorial problems like knapsack. We observe that the expected value is inadequate in capturing different types of {em risk-averse} or {em risk-prone} behaviors, and instead we consider a more general objective which is to maximize the {em expected utility} of the solution for some given utility function, rather than the expected weight (expected weight becomes a special case). We show that we can obtain a polynomial time approximation algorithm with {em additive error} $epsilon$ for any $epsilon>0$, if there is a pseudopolynomial time algorithm for the {em exact} version of the problem (This is true for the problems mentioned above)and the maximum value of the utility function is bounded by a constant. Our result generalizes several prior results on stochastic shortest path, stochastic spanning tree, and stochastic knapsack. Our algorithm for utility maximization makes use of the separability of exponential utility and a technique to decompose a general utility function into exponential utility functions, which may be useful in other stochastic optimization problems.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122111362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 26
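The separability mentioned in the last sentence of the abstract is the fact that, for exponential utility and independent element weights, $E[e^{-\lambda \sum_i w_i}] = \prod_i E[e^{-\lambda w_i}]$, so the expected exponential utility of a fixed solution factors over its elements. A quick Monte Carlo check of that identity (an illustration of this building block only, not of the paper's decomposition of general utilities into exponentials):

import numpy as np

rng = np.random.default_rng(42)
lam, n = 0.7, 200_000
# Three independent weights with different distributions.
w = [rng.exponential(2.0, n), rng.uniform(0.0, 3.0, n), rng.poisson(1.5, n)]

lhs = np.mean(np.exp(-lam * (w[0] + w[1] + w[2])))        # E[exp(-lam * sum of weights)]
rhs = np.prod([np.mean(np.exp(-lam * wi)) for wi in w])   # product of E[exp(-lam * w_i)]
print("joint:", lhs, " product of marginals:", rhs)       # agree up to sampling error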
Quantum Query Complexity of State Conversion
2011 IEEE 52nd Annual Symposium on Foundations of Computer Science Pub Date : 2010-11-12 DOI: 10.1109/FOCS.2011.75
Troy Lee, R. Mittal, B. Reichardt, R. Spalek, M. Szegedy
{"title":"Quantum Query Complexity of State Conversion","authors":"Troy Lee, R. Mittal, B. Reichardt, R. Spalek, M. Szegedy","doi":"10.1109/FOCS.2011.75","DOIUrl":"https://doi.org/10.1109/FOCS.2011.75","url":null,"abstract":"State conversion generalizes query complexity to the problem of converting between two input-dependent quantum states by making queries to the input. We characterize the complexity of this problem by introducing a natural information-theoretic norm that extends the Schur product operator norm. The complexity of converting between two systems of states is given by the distance between them, as measured by this norm. In the special case of function evaluation, the norm is closely related to the general adversary bound, a semi-definite program that lower-bounds the number of input queries needed by a quantum algorithm to evaluate a function. We thus obtain that the general adversary bound characterizes the quantum query complexity of any function whatsoever. This generalizes and simplifies the proof of the same result in the case of boolean input and output. Also in the case of function evaluation, we show that our norm satisfies a remarkable composition property, implying that the quantum query complexity of the composition of two functions is at most the product of the query complexities of the functions, up to a constant. Finally, our result implies that discrete and continuous-time query models are equivalent in the bounded-error setting, even for the general state-conversion problem.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133172822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 150