{"title":"On the parameterized complexity of computing good edge-labelings","authors":"Davi de Andrade, Júlio Araújo, Laure Morelle, Ignasi Sau, Ana Silva","doi":"arxiv-2408.15181","DOIUrl":"https://doi.org/arxiv-2408.15181","url":null,"abstract":"A good edge-labeling (gel for short) of a graph $G$ is a function $lambda:\u0000E(G) to mathbb{R}$ such that, for any ordered pair of vertices $(x, y)$ of\u0000$G$, there do not exist two distinct increasing paths from $x$ to $y$, where\u0000``increasing'' means that the sequence of labels is non-decreasing. This notion\u0000was introduced by Bermond et al. [Theor. Comput. Sci. 2013] motivated by\u0000practical applications arising from routing and wavelength assignment problems\u0000in optical networks. Prompted by the lack of algorithmic results about the\u0000problem of deciding whether an input graph admits a gel, called GEL, we\u0000initiate its study from the viewpoint of parameterized complexity. We first\u0000introduce the natural version of GEL where one wants to use at most $c$\u0000distinct labels, which we call $c$-GEL, and we prove that it is NP-complete for\u0000every $c geq 2$ on very restricted instances. We then provide several positive\u0000results, starting with simple polynomial kernels for GEL and $c$-GEL\u0000parameterized by neighborhood diversity or vertex cover. As one of our main\u0000technical contributions, we present an FPT algorithm for GEL parameterized by\u0000the size of a modulator to a forest of stars, based on a novel approach via a\u00002-SAT formulation which we believe to be of independent interest. We also\u0000present FPT algorithms based on dynamic programming for $c$-GEL parameterized\u0000by treewidth and $c$, and for GEL parameterized by treewidth and the maximum\u0000degree. Finally, we answer positively a question of Bermond et al. [Theor.\u0000Comput. Sci. 
2013] by proving the NP-completeness of a problem strongly related\u0000to GEL, namely that of deciding whether an input graph admits a so-called\u0000UPP-orientation.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"21 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142202687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
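As a concrete illustration of the definition only (not of the paper's algorithms), a brute-force checker for small graphs could look like the sketch below; the graph encoding, names, and the four-cycle example are our own:

```python
from itertools import permutations

def increasing_paths(adj, labels, x, y):
    """Count simple x->y paths whose edge-label sequence is non-decreasing."""
    count = 0
    def dfs(u, last, visited):
        nonlocal count
        if u == y:
            count += 1
            return
        for v in adj[u]:
            lab = labels[frozenset((u, v))]
            if v not in visited and lab >= last:
                dfs(v, lab, visited | {v})
    dfs(x, float("-inf"), {x})
    return count

def is_gel(adj, labels):
    """A labeling is a gel iff every ordered pair of distinct vertices
    has at most one increasing path."""
    return all(increasing_paths(adj, labels, x, y) <= 1
               for x, y in permutations(adj, 2))
```

On the 4-cycle x-a-y-b-x, labeling the two x-y paths 1,2 and 2,1 yields a gel, while labeling both 1,2 creates two increasing x-to-y paths and does not.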
{"title":"Fully Dynamic Shortest Paths in Sparse Digraphs","authors":"Adam Karczmarz, Piotr Sankowski","doi":"arxiv-2408.14406","DOIUrl":"https://doi.org/arxiv-2408.14406","url":null,"abstract":"We study the exact fully dynamic shortest paths problem. For real-weighted\u0000directed graphs, we show a deterministic fully dynamic data structure with\u0000$tilde{O}(mn^{4/5})$ worst-case update time processing arbitrary\u0000$s,t$-distance queries in $tilde{O}(n^{4/5})$ time. This constitutes the first\u0000non-trivial update/query tradeoff for this problem in the regime of sparse\u0000weighted directed graphs.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"64 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142227803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Power of Proportional Fairness for Non-Clairvoyant Scheduling under Polyhedral Constraints","authors":"Sven Jäger, Alexander Lindermayr, Nicole Megow","doi":"arxiv-2408.14310","DOIUrl":"https://doi.org/arxiv-2408.14310","url":null,"abstract":"The Polytope Scheduling Problem (PSP) was introduced by Im, Kulkarni, and\u0000Munagala (JACM 2018) as a very general abstraction of resource allocation over\u0000time and captures many well-studied problems including classical unrelated\u0000machine scheduling, multidimensional scheduling, and broadcast scheduling. In\u0000PSP, jobs with different arrival times receive processing rates that are\u0000subject to arbitrary packing constraints. An elegant and well-known algorithm\u0000for instantaneous rate allocation with good fairness and efficiency properties\u0000is the Proportional Fairness algorithm (PF), which was analyzed for PSP by Im\u0000et al. We drastically improve the analysis of the PF algorithm for both the general\u0000PSP and several of its important special cases subject to the objective of\u0000minimizing the sum of weighted completion times. We reduce the upper bound on\u0000the competitive ratio from 128 to 27 for general PSP and to 4 for the prominent\u0000class of monotone PSP. For certain heterogeneous machine environments we even\u0000close the substantial gap to the lower bound of 2 for non-clairvoyant\u0000scheduling. Our analysis also gives the first polynomial-time improvements over\u0000the nearly 30-year-old bounds on the competitive ratio of the doubling\u0000framework by Hall, Shmoys, and Wein (SODA 1996) for clairvoyant online\u0000preemptive scheduling on unrelated machines. Somewhat surprisingly, we achieve\u0000this improvement by a non-clairvoyant algorithm, thereby demonstrating that\u0000non-clairvoyance is not a (significant) hurdle. 
Our improvements are based on exploiting monotonicity properties of PSP,\u0000providing tight dual fitting arguments on structured instances, and showing new\u0000additivity properties on the optimal objective value for scheduling on\u0000unrelated machines. Finally, we establish new connections of PF to matching\u0000markets, and thereby provide new insights on equilibria and their computational\u0000complexity.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"9 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142202678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
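At each instant, PF allocates rates maximizing $\sum_j w_j \log r_j$ over the packing polytope. On the simplest conceivable polytope, a single machine with $\sum_j r_j \le 1$, the optimizer has a closed form: rates proportional to weights. A toy sketch of this special case (our own example, not the paper's analysis):

```python
import math

def pf_rates(weights, capacity=1.0):
    """Proportional Fairness on the single-resource polytope
    {r >= 0 : sum(r) <= capacity}: maximizing sum_j w_j*log(r_j)
    gives r_j proportional to w_j."""
    total = sum(weights)
    return [capacity * w / total for w in weights]

def pf_objective(rates, weights):
    """The Nash-welfare-style objective PF maximizes at each instant."""
    return sum(w * math.log(r) for w, r in zip(weights, rates))
```

Any other feasible rate vector achieves a strictly smaller objective, which is the KKT optimality condition behind the closed form.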
{"title":"Dynamic Locality Sensitive Orderings in Doubling Metrics","authors":"An La, Hung Le","doi":"arxiv-2408.14617","DOIUrl":"https://doi.org/arxiv-2408.14617","url":null,"abstract":"In their pioneering work, Chan, Har-Peled, and Jones (SICOMP 2020) introduced\u0000locality-sensitive ordering (LSO), and constructed an LSO with a constant\u0000number of orderings for point sets in the $d$-dimensional Euclidean space.\u0000Furthermore, their LSO could be made dynamic effortlessly under point\u0000insertions and deletions, taking $O(log{n})$ time per update by exploiting\u0000Euclidean geometry. Their LSO provides a powerful primitive to solve a host of\u0000geometric problems in both dynamic and static settings. Filtser and Le (STOC\u00002022) constructed the first LSO with a constant number of orderings in the more\u0000general setting of doubling metrics. However, their algorithm is inherently\u0000static since it relies on several sophisticated constructions in intermediate\u0000steps, none of which is known to have a dynamic version. Making their LSO\u0000dynamic would recover the full generality of LSO and provide a general tool to\u0000dynamize a vast number of static constructions in doubling metrics. In this work, we give a dynamic algorithm that has $O(log{n})$ time per\u0000update to construct an LSO in doubling metrics under point insertions and\u0000deletions. We introduce a toolkit of several new data structures: a pairwise\u0000tree cover, a net tree cover, and a leaf tracker. A key technical is\u0000stabilizing the dynamic net tree of Cole and Gottlieb (STOC 2006), a central\u0000dynamic data structure in doubling metrics. Specifically, we show that every\u0000update to the dynamic net tree can be decomposed into a few simple updates to\u0000trees in the net tree cover. As stability is the key to any dynamic algorithm,\u0000our technique could be useful for other problems in doubling metrics. 
We obtain several algorithmic applications from our dynamic LSO. The most\u0000notably is the first dynamic algorithm for maintaining an $k$-fault tolerant\u0000spanner in doubling metrics with optimal sparsity in optimal $O(log{n})$ time\u0000per update.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"53 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142202677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
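The doubling-metric construction itself is involved, but the Euclidean primitive behind orderings in the style of Chan, Har-Peled, and Jones can be illustrated by the classical Z-order (Morton order) comparator on integer points. The following is a generic textbook sketch, not the paper's data structure; the comparison uses the well-known trick of locating the coordinate with the highest differing bit via XOR, without ever interleaving bits:

```python
from functools import cmp_to_key

def less_msb(a, b):
    """True iff the most significant set bit of a is strictly below that of b."""
    return a < b and a < (a ^ b)

def zorder_cmp(p, q):
    """Compare non-negative integer points in Z-order (Morton order).
    The deciding coordinate is the one whose XOR has the highest set bit."""
    dim, best = 0, 0
    for i, (pi, qi) in enumerate(zip(p, q)):
        y = pi ^ qi
        if less_msb(best, y):
            dim, best = i, y
    return (p[dim] > q[dim]) - (p[dim] < q[dim])

pts = [(1, 1), (0, 0), (1, 0), (0, 1)]
pts.sort(key=cmp_to_key(zorder_cmp))
```

Sorting by such an ordering brings nearby points close together in the sequence, which is the locality property that LSO generalizes.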
{"title":"An Efficient and Exact Algorithm for Locally h-Clique Densest Subgraph Discovery","authors":"Xiaojia Xu, Haoyu Liu, Xiaowei Lv, Yongcai Wang, Deying Li","doi":"arxiv-2408.14022","DOIUrl":"https://doi.org/arxiv-2408.14022","url":null,"abstract":"Detecting locally, non-overlapping, near-clique densest subgraphs is a\u0000crucial problem for community search in social networks. As a vertex may be\u0000involved in multiple overlapped local cliques, detecting locally densest\u0000sub-structures considering h-clique density, i.e., locally h-clique densest\u0000subgraph (LhCDS) attracts great interests. This paper investigates the LhCDS\u0000detection problem and proposes an efficient and exact algorithm to list the\u0000top-k non-overlapping, locally h-clique dense, and compact subgraphs. We in\u0000particular jointly consider h-clique compact number and LhCDS and design a new\u0000\"Iterative Propose-Prune-and-Verify\" pipeline (IPPV) for top-k LhCDS detection.\u0000(1) In the proposal part, we derive initial bounds for h-clique compact\u0000numbers; prove the validity, and extend a convex programming method to tighten\u0000the bounds for proposing LhCDS candidates without missing any. (2) Then a\u0000tentative graph decomposition method is proposed to solve the challenging case\u0000where a clique spans multiple subgraphs in graph decomposition. (3) To deal\u0000with the verification difficulty, both a basic and a fast verification method\u0000are proposed, where the fast method constructs a smaller-scale flow network to\u0000improve efficiency while preserving the verification correctness. The verified\u0000LhCDSes are returned, while the candidates that remained unsure reenter the\u0000IPPV pipeline. (4) We further extend the proposed methods to locally more\u0000general pattern densest subgraph detection problems. We prove the exactness and\u0000low complexity of the proposed algorithm. 
Extensive experiments on real\u0000datasets show the effectiveness and high efficiency of IPPV.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"20 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142202680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
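For intuition, the h-clique density of a vertex set is the number of h-cliques in its induced subgraph divided by its size (the h = 2 case is the classical edge-density of densest-subgraph). A brute-force evaluation for tiny graphs, purely illustrative and far from the paper's efficient pipeline:

```python
from itertools import combinations

def h_clique_density(adj, S, h):
    """Number of h-cliques in the subgraph induced by S, divided by |S|.
    adj maps each vertex to the set of its neighbors."""
    S = list(S)
    cliques = sum(
        1 for cand in combinations(S, h)
        if all(v in adj[u] for u, v in combinations(cand, 2))
    )
    return cliques / len(S)
```

On $K_4$ this gives triangle density $4/4 = 1$ and edge density $6/4 = 1.5$; LhCDS asks for non-overlapping subgraphs that locally maximize this quantity.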
{"title":"New weighted additive spanners","authors":"An La, Hung Le","doi":"arxiv-2408.14638","DOIUrl":"https://doi.org/arxiv-2408.14638","url":null,"abstract":"Ahmed, Bodwin, Sahneh, Kobourov, and Spence (WG 2020) introduced additive\u0000spanners for weighted graphs and constructed (i) a $+2W_{max}$ spanner with\u0000$O(n^{3/2})$ edges and (ii) a $+4W_{max}$ spanner with $tilde{O}(n^{7/5})$\u0000edges, and (iii) a $+8W_{max}$ spanner with $O(n^{4/3})$ edges, for any\u0000weighted graph with $n$ vertices. Here $W_{max} = max_{ein E}w(e)$ is the\u0000maximum edge weight in the graph. Their results for $+2W_{max}$, $+4W_{max}$,\u0000and $+8W_{max}$ match the state-of-the-art bounds for the unweighted\u0000counterparts where $W_{max} = 1$. They left open the question of constructing\u0000a $+6W_{max}$ spanner with $O(n^{4/3})$ edges. Elkin, Gitlitz, and Neiman\u0000(DISC 2021) made significant progress on this problem by showing that there\u0000exists a $+(6+epsilon)W_{max}$ spanner with $O(n^{4/3}/epsilon)$ edges for\u0000any fixed constant $epsilon > 0$. Indeed, their result is stronger as the\u0000additive stretch is local: the stretch for any pair $u,v$ is\u0000$+(6+epsilon)W_{uv}$ where $W_{uv}$ is the maximum weight edge on the shortest\u0000path from $u$ to $v$. In this work, we resolve the problem posted by Ahmed et al. (WG 2020) up to a\u0000poly-logarithmic factor in the number of edges: We construct a $+6W_{max}$\u0000spanner with $tilde{O}(n^{4/3})$ edges. We extend the construction for\u0000$+6$-spanners of Woodruff (ICALP 2010), and our main contribution is an\u0000analysis tailoring to the weighted setting. The stretch of our spanner could\u0000also be made local, in the sense of Elkin, Gitlitz, and Neiman (DISC 2021). We\u0000also study the fast constructions of additive spanners with $+6W_{max}$ and\u0000$+4W_{max}$ stretches. 
We obtain, among other things, an algorithm for\u0000constructing a $+(6+epsilon)W_{max}$ spanner of\u0000$tilde{O}(frac{n^{4/3}}{epsilon})$ edges in $tilde{O}(n^2)$ time.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"28 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142202679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
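While constructing such spanners is the hard part, *verifying* the additive guarantee of a candidate subgraph is straightforward: check that $d_H(u,v) \le d_G(u,v) + \beta$ for all pairs. A toy checker via Floyd-Warshall (our own illustration, unrelated to the constructions above):

```python
def floyd_warshall(n, edges):
    """All-pairs shortest paths of an undirected weighted graph on [0, n)."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def additive_stretch(n, g_edges, h_edges):
    """Worst additive error of subgraph H with respect to G over all pairs."""
    dg = floyd_warshall(n, g_edges)
    dh = floyd_warshall(n, h_edges)
    return max(dh[i][j] - dg[i][j] for i in range(n) for j in range(n))
```

For a unit-weight 4-cycle, dropping one edge yields a subgraph with additive stretch exactly $2 = 2W_{\max}$.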
{"title":"Quantum Speedups for Approximating the John Ellipsoid","authors":"Xiaoyu Li, Zhao Song, Junwei Yu","doi":"arxiv-2408.14018","DOIUrl":"https://doi.org/arxiv-2408.14018","url":null,"abstract":"In 1948, Fritz John proposed a theorem stating that every convex body has a\u0000unique maximal volume inscribed ellipsoid, known as the John ellipsoid. The\u0000John ellipsoid has become fundamental in mathematics, with extensive\u0000applications in high-dimensional sampling, linear programming, and machine\u0000learning. Designing faster algorithms to compute the John ellipsoid is\u0000therefore an important and emerging problem. In [Cohen, Cousins, Lee, Yang COLT\u00002019], they established an algorithm for approximating the John ellipsoid for a\u0000symmetric convex polytope defined by a matrix $A in mathbb{R}^{n times d}$\u0000with a time complexity of $O(nd^2)$. This was later improved to\u0000$O(text{nnz}(A) + d^omega)$ by [Song, Yang, Yang, Zhou 2022], where\u0000$text{nnz}(A)$ is the number of nonzero entries of $A$ and $omega$ is the\u0000matrix multiplication exponent. Currently $omega approx 2.371$ [Alman, Duan,\u0000Williams, Xu, Xu, Zhou 2024]. In this work, we present the first quantum\u0000algorithm that computes the John ellipsoid utilizing recent advances in quantum\u0000algorithms for spectral approximation and leverage score approximation, running\u0000in $O(sqrt{n}d^{1.5} + d^omega)$ time. 
In the tall matrix regime, our\u0000algorithm achieves quadratic speedup, resulting in a sublinear running time and\u0000significantly outperforming the current best classical algorithms.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"28 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142202681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
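As we understand the classical baseline, the COLT 2019 approach approximates the John ellipsoid of the symmetric polytope $\{x : \|Ax\|_\infty \le 1\}$ by a fixed-point iteration on row weights: each weight is replaced by the leverage score of the corresponding row of $\sqrt{W}A$, so the weights always sum to $d$. A hedged numpy sketch in that spirit (a classical toy, not the quantum algorithm, and the iteration count is arbitrary):

```python
import numpy as np

def approx_john_ellipsoid(A, iters=50):
    """Fixed-point iteration in the spirit of Cohen-Cousins-Lee-Yang:
    returns row weights w; the approximate John ellipsoid is then
    E = {x : x^T (A^T diag(w) A) x <= 1}."""
    n, d = A.shape
    w = np.full(n, d / n)  # uniform start; weights sum to d throughout
    for _ in range(iters):
        M = A.T @ (A * w[:, None])          # A^T diag(w) A
        Minv = np.linalg.inv(M)
        # new w_i = leverage score of row i of sqrt(W) A
        w = w * np.einsum("ij,jk,ik->i", A, Minv, A)
    return w
```

For the unit square ($A = I_2$) the fixed point is $w = (1, 1)$, giving the unit disk, which is indeed the John ellipsoid of the square.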
{"title":"Revisit the Partial Coloring Method: Prefix Spencer and Sampling","authors":"Dongrun Cai, Xue Chen, Wenxuan Shu, Haoyu Wang, Guangyi Zou","doi":"arxiv-2408.13756","DOIUrl":"https://doi.org/arxiv-2408.13756","url":null,"abstract":"As the most powerful tool in discrepancy theory, the partial coloring method\u0000has wide applications in many problems including the Beck-Fiala problem and\u0000Spencer's celebrated result. Currently, there are two major algorithmic methods\u0000for the partial coloring method: the first approach uses linear algebraic\u0000tools; and the second is called Gaussian measure algorithm. We explore the\u0000advantages of these two methods and show the following results for them\u0000separately. 1. Spencer conjectured that the prefix discrepancy of any $mathbf{A} in\u0000{0,1}^{m times n}$ is $O(sqrt{m})$. We show how to find a partial coloring\u0000with prefix discrepancy $O(sqrt{m})$ and $Omega(n)$ entries in ${ pm 1}$\u0000efficiently. To the best of our knowledge, this provides the first partial\u0000coloring whose prefix discrepancy is almost optimal. However, unlike the\u0000classical discrepancy problem, there is no reduction on the number of variables\u0000$n$ for the prefix problem. By recursively applying partial coloring, we obtain\u0000a full coloring with prefix discrepancy $O(sqrt{m} cdot log\u0000frac{O(n)}{m})$. Prior to this work, the best bounds of the prefix Spencer\u0000conjecture for arbitrarily large $n$ were $2m$ and $O(sqrt{m log n})$. 2. Our second result extends the first linear algebraic approach to a\u0000sampling algorithm in Spencer's classical setting. On the first hand, Spencer\u0000proved that there are $1.99^m$ good colorings with discrepancy $O(sqrt{m})$.\u0000Hence a natural question is to design efficient random sampling algorithms in\u0000Spencer's setting. On the other hand, some applications of discrepancy theory,\u0000prefer a random solution instead of a fixed one. 
Our second result is an\u0000efficient sampling algorithm whose random output has min-entropy $Omega(n)$\u0000and discrepancy $O(sqrt{m})$. Moreover, our technique extends the linear\u0000algebraic framework by incorporating leverage scores of randomized matrix\u0000algorithms.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142202683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
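The quantity in item 1 can be stated concretely: the prefix discrepancy of a coloring $x \in \{\pm 1\}^n$ is the maximum, over rows $i$ and prefixes $t$, of $|\sum_{k \le t} A_{ik} x_k|$. A brute-force evaluator and exhaustive minimizer for tiny instances (illustrative only; the paper's algorithms are of course far more efficient):

```python
from itertools import product

def prefix_discrepancy(A, x):
    """max over rows i and prefixes t of |sum_{k<=t} A[i][k] * x[k]|."""
    best = 0
    for row in A:
        s = 0
        for a_k, x_k in zip(row, x):
            s += a_k * x_k
            best = max(best, abs(s))
    return best

def best_prefix_coloring(A):
    """Exhaustive search over all 2^n colorings (tiny n only)."""
    n = len(A[0])
    return min(product((-1, 1), repeat=n),
               key=lambda x: prefix_discrepancy(A, x))
```

For a single all-ones row, an alternating coloring keeps every prefix sum in $\{0, 1\}$, matching the $O(\sqrt{m})$ bound with $m = 1$.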
{"title":"A Note On Deterministic Submodular Maximization With Bounded Curvature","authors":"Wenxin Li","doi":"arxiv-2409.02943","DOIUrl":"https://doi.org/arxiv-2409.02943","url":null,"abstract":"We show that the recent breakthrough result of [Buchbinder and Feldman,\u0000FOCS'24] could further lead to a deterministic\u0000$(1-kappa_{f}/e-varepsilon)$-approximate algorithm for maximizing a\u0000submodular function with curvature $kappa_{f}$ under matroid constraint.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"106 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142202684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Parameterized Complexity of Eulerian Strong Component Arc Deletion","authors":"Václav Blažej, Satyabrata Jana, M. S. Ramanujan, Peter Strulo","doi":"arxiv-2408.13819","DOIUrl":"https://doi.org/arxiv-2408.13819","url":null,"abstract":"In this paper, we study the Eulerian Strong Component Arc Deletion problem,\u0000where the input is a directed multigraph and the goal is to delete the minimum\u0000number of arcs to ensure every strongly connected component of the resulting\u0000digraph is Eulerian. This problem is a natural extension of the Directed\u0000Feedback Arc Set problem and is also known to be motivated by certain scenarios\u0000arising in the study of housing markets. The complexity of the problem, when\u0000parameterized by solution size (i.e., size of the deletion set), has remained\u0000unresolved and has been highlighted in several papers. In this work, we answer\u0000this question by ruling out (subject to the usual complexity assumptions) a\u0000fixed-parameter tractable (FPT) algorithm for this parameter and conduct a\u0000broad analysis of the problem with respect to other natural parameterizations.\u0000We prove both positive and negative results. Among these, we demonstrate that\u0000the problem is also hard (W[1]-hard or even para-NP-hard) when parameterized by\u0000either treewidth or maximum degree alone. Complementing our lower bounds, we\u0000establish that the problem is in XP when parameterized by treewidth and FPT\u0000when parameterized either by both treewidth and maximum degree or by both\u0000treewidth and solution size. 
We show that these algorithms have near-optimal\u0000asymptotic dependence on the treewidth assuming the Exponential Time\u0000Hypothesis.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142202682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
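The feasibility test underlying the problem is easy to state: a strongly connected component is Eulerian iff every vertex has equal in- and out-degree counting only intra-component arcs. A small checker for multidigraphs (Kosaraju-style SCCs; an illustration of the target condition, not of the paper's deletion algorithms):

```python
def scc_eulerian(n, arcs):
    """True iff every strongly connected component of the directed
    multigraph on vertices [0, n) is Eulerian, i.e. in-degree equals
    out-degree for each vertex, counting only intra-SCC arcs."""
    adj = [[] for _ in range(n)]
    radj = [[] for _ in range(n)]
    for u, v in arcs:
        adj[u].append(v)
        radj[v].append(u)
    order, seen = [], [False] * n
    def dfs1(u):                      # first pass: finish order on G
        seen[u] = True
        for v in adj[u]:
            if not seen[v]:
                dfs1(v)
        order.append(u)
    for u in range(n):
        if not seen[u]:
            dfs1(u)
    comp = [-1] * n
    def dfs2(u, c):                   # second pass: SCCs on reverse graph
        comp[u] = c
        for v in radj[u]:
            if comp[v] == -1:
                dfs2(v, c)
    c = 0
    for u in reversed(order):
        if comp[u] == -1:
            dfs2(u, c)
            c += 1
    balance = [0] * n                 # out-degree minus in-degree, intra-SCC
    for u, v in arcs:
        if comp[u] == comp[v]:
            balance[u] += 1
            balance[v] -= 1
    return all(b == 0 for b in balance)
```

A directed triangle passes; duplicating a single arc of it breaks the balance; duplicating a full 2-cycle restores it. Arcs between different SCCs are ignored, which is why a DAG is trivially feasible.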