{"title":"Better Sum Estimation via Weighted Sampling","authors":"Lorenzo Beretta, Jakub Tětek","doi":"10.1145/3650030","DOIUrl":"https://doi.org/10.1145/3650030","url":null,"abstract":"<p>Given a large set <i>U</i> where each item <i>a</i> ∈ <i>U</i> has weight <i>w</i>(<i>a</i>), we want to estimate the total weight <i>W</i> = ∑<sub><i>a</i> ∈ <i>U</i></sub><i>w</i>(<i>a</i>) to within factor of 1 ± ε with some constant probability > 1/2. Since <i>n</i> = |<i>U</i>| is large, we want to do this without looking at the entire set <i>U</i>. In the traditional setting in which we are allowed to sample elements from <i>U</i> uniformly, sampling <i>Ω</i>(<i>n</i>) items is necessary to provide any non-trivial guarantee on the estimate. Therefore, we investigate this problem in different settings: in the <i>proportional</i> setting we can sample items with probabilities proportional to their weights, and in the <i>hybrid</i> setting we can sample both proportionally and uniformly. These settings have applications, for example, in sublinear-time algorithms and distribution testing. </p><p>Sum estimation in the proportional and hybrid setting has been considered before by Motwani, Panigrahy, and Xu [ICALP, 2007]. In their paper, they give both upper and lower bounds in terms of <i>n</i>. Their bounds are near-matching in terms of <i>n</i>, but not in terms of ε. In this paper, we improve both their upper and lower bounds. Our bounds are matching up to constant factors in both settings, in terms of both <i>n</i> and ε. No lower bounds with dependency on ε were known previously. In the proportional setting, we improve their (tilde{O}(sqrt {n}/varepsilon ^{7/2}) ) algorithm to (O(sqrt {n}/varepsilon) ). In the hybrid setting, we improve (tilde{O}(sqrt [3]{n}/ varepsilon ^{9/2}) ) to (O(sqrt [3]{n}/varepsilon ^{4/3}) ). Our algorithms are also significantly simpler and do not have large constant factors. </p><p>We then investigate the previously unexplored scenario in which <i>n</i> is not known to the algorithm. In this case, we obtain a (O(sqrt {n}/varepsilon + log n / varepsilon ^2) ) algorithm for the proportional setting, and a (O(sqrt {n}/varepsilon) ) algorithm for the hybrid setting. This means that in the proportional setting, we may remove the need for advice without greatly increasing the complexity of the problem, while there is a major difference in the hybrid setting. We prove that this difference in the hybrid setting is necessary, by showing a matching lower bound. </p><p>Our algorithms have applications in the area of sublinear-time graph algorithms. Consider a large graph <i>G</i> = (<i>V</i>, <i>E</i>) and the task of (1 ± ε)-approximating |<i>E</i>|. We consider the (standard) settings where we can sample uniformly from <i>E</i> or from both <i>E</i> and <i>V</i>. This relates to sum estimation as follows: we set <i>U</i> = <i>V</i> and the weights to be equal to the degrees. Uniform sampling then corresponds to sampling vertices uniformly. Proportional sampling can be simulated by taking a random edge and picking one of its endpoints at random. 
If we can only sample uniformly from <i>E</i>, then our results immediat","PeriodicalId":50922,"journal":{"name":"ACM Transactions on Algorithms","volume":"44 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140586508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
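To make the sampling settings concrete, here is a small Python sketch (our illustration, not the paper's algorithm): a textbook harmonic-mean estimator that uses proportional samples together with knowledge of n, plus the degree-proportional vertex sampler from the edge-counting application, where w(v) = deg(v) and hence W = 2|E|. The function names are ours, and the estimator's sample complexity is not the point here; the paper's contribution is obtaining the optimal dependence on n and ε.

```python
import random

def estimate_total_weight(proportional_sample, weight, n, k):
    """Harmonic-mean estimator under proportional sampling: since
    E[1/w(a)] = n/W when Pr[a] = w(a)/W, we have W ≈ n / mean(1/w(a_i)).
    A textbook estimator for illustration only, not the paper's algorithm."""
    inv = [1.0 / weight(proportional_sample()) for _ in range(k)]
    return n * k / sum(inv)

def make_degree_proportional_sampler(edges):
    """Simulates degree-proportional vertex sampling: a uniform endpoint of a
    uniformly random edge is a vertex drawn with probability deg(v) / (2|E|)."""
    def sample_vertex():
        u, v = random.choice(edges)
        return random.choice((u, v))
    return sample_vertex

if __name__ == "__main__":
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    n = len(deg)                      # here n = |V| is assumed known ("advice")
    est = estimate_total_weight(make_degree_proportional_sampler(edges),
                                lambda v: deg[v], n, k=20000)
    print(f"estimated sum of degrees ≈ {est:.2f}; true 2|E| = {2 * len(edges)}")
```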
The Fine-Grained Complexity of Graph Homomorphism Parameterized by Clique-Width
Robert Ganian, Thekla Hamm, Viktoriia Korchemna, Karolina Okrasa, Kirill Simonov
ACM Transactions on Algorithms, published 2024-03-25. DOI: https://doi.org/10.1145/3652514

Abstract: The generic homomorphism problem, which asks whether an input graph G admits a homomorphism into a fixed target graph H, has been widely studied in the literature. In this article, we provide a fine-grained complexity classification of the running time of the homomorphism problem with respect to the clique-width of G (denoted $\operatorname{cw}$) for virtually all choices of H under the Strong Exponential Time Hypothesis. In particular, we identify a property of H called the signature number s(H) and show that for each H, the homomorphism problem can be solved in time $\mathcal{O}^{*}(s(H)^{\operatorname{cw}})$. Crucially, we then show that this algorithm can be used to obtain essentially tight upper bounds. Specifically, we provide a reduction that yields matching lower bounds for each H that is either a projective core or a graph admitting a factorization with additional properties, allowing us to cover all possible target graphs under long-standing conjectures.
{"title":"Isomorphism Testing for Graphs Excluding Small Topological Subgraphs","authors":"Daniel Neuen","doi":"10.1145/3651986","DOIUrl":"https://doi.org/10.1145/3651986","url":null,"abstract":"<p>We give an isomorphism test that runs in time (n^{operatorname{polylog}(h)} ) on all <i>n</i>-vertex graphs excluding some <i>h</i>-vertex graph as a topological subgraph. Previous results state that isomorphism for such graphs can be tested in time (n^{operatorname{polylog}(n)} ) (Babai, STOC 2016) and <i>n</i><sup><i>f</i>(<i>h</i>)</sup> for some function <i>f</i> (Grohe and Marx, SIAM J. Comp., 2015). </p><p>Our result also unifies and extends previous isomorphism tests for graphs of maximum degree <i>d</i> running in time (n^{operatorname{polylog}(d)} ) (SIAM J. Comp., 2023) and for graphs of Hadwiger number <i>h</i> running in time (n^{operatorname{polylog}(h)} ) (SIAM J. Comp., 2023).</p>","PeriodicalId":50922,"journal":{"name":"ACM Transactions on Algorithms","volume":"12 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140127047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Complexity of Finding Fair Many-to-One Matchings","authors":"Niclas Boehmer, Tomohiro Koana","doi":"10.1145/3649220","DOIUrl":"https://doi.org/10.1145/3649220","url":null,"abstract":"<p>We analyze the (parameterized) computational complexity of “fair” variants of bipartite many-to-one matching, where each vertex from the “left” side is matched to exactly one vertex and each vertex from the “right” side may be matched to multiple vertices. We want to find a “fair” matching, in which each vertex from the right side is matched to a “fair” set of vertices. Assuming that each vertex from the left side has one color modeling its “attribute”, we study two fairness criteria. For instance, in one of them, we deem a vertex set fair if for any two colors, the difference between the numbers of their occurrences does not exceed a given threshold. Fairness is, for instance, relevant when finding many-to-one matchings between students and colleges, voters and constituencies, and applicants and firms. Here colors may model sociodemographic attributes, party memberships, and qualifications, respectively. </p><p>We show that finding a fair many-to-one matching is NP-hard even for three colors and maximum degree five. Our main contribution is the design of fixed-parameter tractable algorithms with respect to the number of vertices on the right side. Our algorithms make use of a variety of techniques including color coding. At the core lie integer linear programs encoding Hall like conditions. We establish the correctness of our integer programs, based on Frank’s separation theorem [Frank, Discrete Math. 1982]. We further obtain complete complexity dichotomies regarding the number of colors and the maximum degree of each side.</p>","PeriodicalId":50922,"journal":{"name":"ACM Transactions on Algorithms","volume":"195 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139948843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Two-Dimensional Knapsack Problem for Convex Polygons","authors":"Arturo Merino, Andreas Wiese","doi":"10.1145/3644390","DOIUrl":"https://doi.org/10.1145/3644390","url":null,"abstract":"<p>We study the two-dimensional geometric knapsack problem for convex polygons. Given a set of weighted convex polygons and a square knapsack, the goal is to select the most profitable subset of the given polygons that fits non-overlappingly into the knapsack. We allow to rotate the polygons by arbitrary angles. We present a quasi-polynomial time <i>O</i>(1)-approximation algorithm for the general case and a pseudopolynomial time <i>O</i>(1)-approximation algorithm if all input polygons are triangles, both assuming polynomially bounded integral input data. Also, we give a quasi-polynomial time algorithm that computes a solution of optimal weight under resource augmentation, i.e., we allow to increase the size of the knapsack by a factor of 1 + <i>δ</i> for some <i>δ</i> > 0 but compare ourselves with the optimal solution for the original knapsack. To the best of our knowledge, these are the first results for two-dimensional geometric knapsack in which the input objects are more general than axis-parallel rectangles or circles and in which the input polygons can be rotated by arbitrary angles.</p>","PeriodicalId":50922,"journal":{"name":"ACM Transactions on Algorithms","volume":"175 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139925795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contraction Decomposition in Unit Disk Graphs and Algorithmic Applications in Parameterized Complexity","authors":"Fahad Panolan, Saket Saurabh, Meirav Zehavi","doi":"10.1145/3648594","DOIUrl":"https://doi.org/10.1145/3648594","url":null,"abstract":"<p>We give a new decomposition theorem in unit disk graphs (UDGs) and demonstrate its applicability in the fields of Structural Graph Theory and Parameterized Complexity. First, our new decomposition theorem shows that the class of UDGs admits an “almost” Contraction Decomposition Theorem. Prior studies on this topic exhibited that the classes of planar graphs [Klein, SICOMP, 2008], graphs of bounded genus [Demaine, Hajiaghayi and Mohar, Combinatorica 2010] and <i>H</i>-minor free graphs [Demaine, Hajiaghayi and Kawarabayashi, STOC 2011] admit a Contraction Decomposition Theorem. Even <i>bounded-degree</i> UDGs can contain arbitrarily large cliques as minors, therefore our result is a significant advance in the study of contraction decompositions. Additionally, this result answers an open question posed by Hajiaghayi (<monospace>www.youtube.com/watch?v=2Bq2gy1N01w</monospace>) regarding the existence of contraction decompositions for classes of graphs beyond <i>H</i>-minor free graphs though under a relaxation of the original formulation. </p><p>Second, we present a “parameteric version” of our new decomposition theorem. We prove that there is an algorithm that given a UDG <i>G</i> and a positive integer <i>k</i>, runs in polynomial time and outputs a collection of (mathcal {O}(k) ) tree decompositions of <i>G</i> with the following properties. Each bag in any of these tree decompositions can be partitioned into (mathcal {O}(k) ) connected pieces (we call this measure the chunkiness of the tree decomposition). Moreover, for any subset <i>S</i> of at most <i>k</i> edges in <i>G</i>, there is a tree decomposition in the collection such that <i>S</i> is <i>well preserved</i> in the decomposition in the following sense. For any bag in the tree decomposition and any edge in <i>S</i> with both endpoints in the bag, either its endpoints lie in different pieces or they lie in a piece which is a clique. Having this decomposition at hand, we show that the design of parameterized algorithms for some cut problems becomes elementary. In particular, our algorithmic applications include single-exponential (or slightly super-exponential) algorithms for well-studied problems such as <span>Min Bisection</span>, <span>Steiner Cut</span>, <i>s</i>-<span>Way Cut</span>, and <span>Edge Multiway Cut-Uncut</span> on UDGs; these algorithms are substantially faster than the best known algorithms for these problems on general graphs.</p>","PeriodicalId":50922,"journal":{"name":"ACM Transactions on Algorithms","volume":"26 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139766902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generic Non-Recursive Suffix Array Construction","authors":"Jannik Olbrich, Enno Ohlebusch, Thomas Büchler","doi":"10.1145/3641854","DOIUrl":"https://doi.org/10.1145/3641854","url":null,"abstract":"<p>The suffix array is arguably one of the most important data structures in sequence analysis and consequently there is a multitude of suffix sorting algorithms. However, to this date the <monospace>GSACA</monospace> algorithm introduced in 2015 is the only known non-recursive linear-time suffix array construction algorithm (SACA). Despite its interesting theoretical properties, there has been little effort in improving <monospace>GSACA</monospace>’s non-competitive real-world performance. There is a super-linear algorithm <monospace>DSH</monospace> which relies on the same sorting principle and is faster than <monospace>DivSufSort</monospace>, the fastest SACA for over a decade. The purpose of this paper is twofold: We analyse the sorting principle used in <monospace>GSACA</monospace> and <monospace>DSH</monospace> and exploit its properties in order to give an optimised linear-time algorithm, and we show that it can be very elegantly used to compute both the original extended Burrows-Wheeler transform ((mathsf {eBWT} )) and a bijective version of the Burrows-Wheeler transform ((mathsf {BBWT} )) in linear time. We call the algorithm “generic” since it can be used to compute the regular suffix array and the variants used for the (mathsf {BBWT} ) and (mathsf {eBWT} ). Our suffix array construction algorithm is not only significantly faster than <monospace>GSACA</monospace> but also outperforms <monospace>DivSufSort</monospace> and <monospace>DSH</monospace>. Our (mathsf {BBWT} )-algorithm is faster than or competitive with all other tested (mathsf {BBWT} ) construction implementations on large or repetitive data, and our (mathsf {eBWT} )-algorithm is faster than all other programs on data that is not extremely repetitive.</p>","PeriodicalId":50922,"journal":{"name":"ACM Transactions on Algorithms","volume":"52 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139751409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ultrasparse Ultrasparsifiers and Faster Laplacian System Solvers","authors":"Arun Jambulapati, Aaron Sidford","doi":"10.1145/3593809","DOIUrl":"https://doi.org/10.1145/3593809","url":null,"abstract":"<p>In this paper we provide an <i>O</i>(<i>m</i>loglog<sup><i>O</i>(1)</sup><i>n</i>log (1/ϵ))-expected time algorithm for solving Laplacian systems on <i>n</i>-node <i>m</i>-edge graphs, improving upon the previous best expected runtime of (O(m sqrt {log n} mathrm{log log}^{O(1)} n log (1/epsilon)) ) achieved by (Cohen, Kyng, Miller, Pachocki, Peng, Rao, Xu 2014). To obtain this result we provide efficient constructions of low spectral stretch graph approximations with improved stretch and sparsity bounds. As motivation for this work, we show that for every set of vectors in (mathbb {R}^d ) (not just those induced by graphs) and all integer <i>k</i> > 1 there exist an ultra-sparsifier with <i>d</i> − 1 + <i>O</i>(<i>d</i>/<i>k</i>) re-weighted vectors of relative condition number at most <i>k</i><sup>2</sup>. For small <i>k</i>, this improves upon the previous best known multiplicative factor of (k cdot tilde{O}(log d) ), which is only known for the graph case. Additionally, in the graph case we employ our low-stretch subgraph construction to obtain <i>n</i> − 1 + <i>O</i>(<i>n</i>/<i>k</i>)-edge ultrasparsifiers of relative condition number <i>k</i><sup>1 + <i>o</i>(1)</sup> for <i>k</i> = <i>ω</i>(log <sup><i>δ</i></sup><i>n</i>) for any <i>δ</i> > 0: this improves upon the previous work for <i>k</i> = <i>o</i>(exp (log <sup>1/2 − <i>δ</i></sup><i>n</i>)).</p>","PeriodicalId":50922,"journal":{"name":"ACM Transactions on Algorithms","volume":"29 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139679245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Map matching queries on realistic input graphs under the Fréchet distance
Joachim Gudmundsson, Martin P. Seybold, Sampson Wong
ACM Transactions on Algorithms, published 2024-01-30. DOI: https://doi.org/10.1145/3643683

Abstract: Map matching is a common preprocessing step for analysing vehicle trajectories. In the theory community, the most popular approach for map matching is to compute a path on the road network that is the most spatially similar to the trajectory, where spatial similarity is measured using the Fréchet distance. A shortcoming of existing map matching algorithms under the Fréchet distance is that every time a trajectory is matched, the entire road network needs to be reprocessed from scratch. An open problem is whether one can preprocess the road network into a data structure so that map matching queries can be answered in sublinear time.

In this paper, we investigate map matching queries under the Fréchet distance. We provide a negative result for geometric planar graphs: we show that, unless SETH fails, there is no data structure that can be constructed in polynomial time that answers map matching queries in $O((pq)^{1-\delta})$ query time for any δ > 0, where p and q are the complexities of the geometric planar graph and the query trajectory, respectively. We provide a positive result for realistic input graphs, which we regard as the main result of this paper: we show that for c-packed graphs, one can construct a data structure of size $\tilde{O}(cp)$ that can answer (1 + ε)-approximate map matching queries in $\tilde{O}(c^4 q \log^4 p)$ time, where $\tilde{O}(\cdot)$ hides lower-order factors and the dependence on ε.
Width Helps and Hinders Splitting Flows
Manuel Cáceres, Massimo Cairo, Andreas Grigorjew, Shahbaz Khan, Brendan Mumey, Romeo Rizzi, Alexandru I. Tomescu, Lucia Williams
ACM Transactions on Algorithms, published 2024-01-22. DOI: https://doi.org/10.1145/3641820

Abstract: Minimum flow decomposition (MFD) is the NP-hard problem of finding a smallest decomposition of a network flow/circulation X on a directed graph G into weighted source-to-sink paths whose weighted sum equals X. We show that, for acyclic graphs, considering the width of the graph (the minimum number of paths needed to cover all of its edges) yields advances in our understanding of its approximability. For the version of the problem that uses only non-negative weights, we identify and characterise a new class of width-stable graphs, for which a popular heuristic is an $O(\log \operatorname{Val}(X))$-approximation ($\operatorname{Val}(X)$ being the total flow of X), and strengthen its worst-case approximation ratio from $\Omega(\sqrt{m})$ to $\Omega(m/\log m)$ for sparse graphs, where m is the number of edges in the graph. We also study a new problem on graphs with cycles, Minimum Cost Circulation Decomposition (MCCD), and show that it generalises MFD through a simple reduction. For the version that also allows negative weights, we give a $(\lceil \log \Vert X \Vert \rceil + 1)$-approximation ($\Vert X \Vert$ being the maximum absolute value of X on any edge) using a power-of-two approach, combined with parity-fixing arguments and a decomposition of unitary circulations ($\Vert X \Vert \le 1$), using a generalised notion of width for this problem. Finally, we disprove a conjecture about the linear independence of minimum (non-negative) flow decompositions posed by [17], but show that its useful implication (polynomial-time assignment of weights to a given set of paths to decompose a flow) holds for the negative version.