{"title":"Improved Hardness Results of the Cardinality-Based Minimum s-t Cut Problem in Hypergraphs","authors":"Florian Adriaens, Iiro Kumpulainen, Nikolaj Tatti","doi":"arxiv-2409.07201","DOIUrl":"https://doi.org/arxiv-2409.07201","url":null,"abstract":"In hypergraphs an edge that crosses a cut can be split in several ways,\u0000depending on how many nodes are placed on each side of the cut. A\u0000cardinality-based splitting function assigns a nonnegative cost of $w_i$ for\u0000each cut hyperedge $e$ with exactly $i$ nodes on the side of the cut that\u0000contains the minority of nodes from $e$. The cardinality-based minimum $s$-$t$\u0000cut aims to find an $s$-$t$ cut with minimum total cost. Assuming the costs\u0000$w_i$ are polynomially bounded by the input size and $w_0=0$ and $w_1=1$, we\u0000show that if the costs satisfy $w_i > w_{i-j}+w_{j}$ for some $i in {2,\u0000ldots floor*{n/2}}$ and $j in {1,ldots,floor*{i/2}}$, then the problem\u0000becomes NP-hard. Our result also holds for $k$-uniform hypergraphs with $k geq\u00004$. Additionally, we show that the textsc{No-Even-Split} problem in\u0000$4$-uniform hypergraphs is NP-hard.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"21 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142202625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Position Fair Mechanisms Allocating Indivisible Goods","authors":"Ryoga Mahara, Ryuhei Mizutani, Taihei Oki, Tomohiko Yokoyama","doi":"arxiv-2409.06423","DOIUrl":"https://doi.org/arxiv-2409.06423","url":null,"abstract":"In the fair division problem for indivisible goods, mechanisms that output\u0000allocations satisfying fairness concepts, such as envy-freeness up to one good\u0000(EF1), have been extensively studied. These mechanisms usually require an\u0000arbitrary order of agents as input, which may cause some agents to feel unfair\u0000since the order affects the output allocations. In the context of the\u0000cake-cutting problem, Manabe and Okamoto (2012) introduced meta-envy-freeness\u0000to capture such kind of fairness, which guarantees the absence of envy compared\u0000to different orders of agents. In this paper, we introduce position envy-freeness and its relaxation,\u0000position envy-freeness up to $k$ goods (PEF$k$), for mechanisms in the fair\u0000division problem for indivisible goods, analogous to the meta-envy-freeness.\u0000While the round-robin or the envy-cycle mechanism is not PEF1, we propose a\u0000PEF1 mechanism that always outputs an EF1 allocation. In addition, in the case\u0000of two agents, we prove that any mechanism that always returns a maximum Nash\u0000social welfare allocation is PEF1, and propose a modified adjusted winner\u0000mechanism satisfying PEF1. We further investigate the round-robin and the\u0000envy-cycle mechanisms to measure how far they are from position envy-freeness.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"58 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142202631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Coordinated Motion Planning: Multi-Agent Path Finding in a Densely Packed, Bounded Domain","authors":"Sándor P. Fekete, Ramin Kosfeld, Peter Kramer, Jonas Neutzner, Christian Rieck, Christian Scheffer","doi":"arxiv-2409.06486","DOIUrl":"https://doi.org/arxiv-2409.06486","url":null,"abstract":"We study Multi-Agent Path Finding for arrangements of labeled agents in the\u0000interior of a simply connected domain: Given a unique start and target position\u0000for each agent, the goal is to find a sequence of parallel, collision-free\u0000agent motions that minimizes the overall time (the makespan) until all agents\u0000have reached their respective targets. A natural case is that of a simply\u0000connected polygonal domain with axis-parallel boundaries and integer\u0000coordinates, i.e., a simple polyomino, which amounts to a simply connected\u0000union of lattice unit squares or cells. We focus on the particularly\u0000challenging setting of densely packed agents, i.e., one per cell, which\u0000strongly restricts the mobility of agents, and requires intricate coordination\u0000of motion. We provide a variety of novel results for this problem, including (1) a\u0000characterization of polyominoes in which a reconfiguration plan is guaranteed\u0000to exist; (2) a characterization of shape parameters that induce worst-case\u0000bounds on the makespan; (3) a suite of algorithms to achieve asymptotically\u0000worst-case optimal performance with respect to the achievable stretch for cases\u0000with severely limited maneuverability. This corresponds to bounding the ratio\u0000between obtained makespan and the lower bound provided by the max-min distance\u0000between the start and target position of any agent and our shape parameters. Our results extend findings by Demaine et al. (SIAM Journal on Computing,\u00002019) who investigated the problem for solid rectangular domains, and in the\u0000closely related field of Permutation Routing, as presented by Alpert et al.\u0000(Computational Geometry, 2022) for convex pieces of grid graphs.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"9 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142202630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Structured Downsampling for Fast, Memory-efficient Curation of Online Data Streams","authors":"Matthew Andres Moreno, Luis Zaman, Emily Dolson","doi":"arxiv-2409.06199","DOIUrl":"https://doi.org/arxiv-2409.06199","url":null,"abstract":"Operations over data streams typically hinge on efficient mechanisms to\u0000aggregate or summarize history on a rolling basis. For high-volume data steams,\u0000it is critical to manage state in a manner that is fast and memory efficient --\u0000particularly in resource-constrained or real-time contexts. Here, we address\u0000the problem of extracting a fixed-capacity, rolling subsample from a data\u0000stream. Specifically, we explore ``data stream curation'' strategies to fulfill\u0000requirements on the composition of sample time points retained. Our ``DStream''\u0000suite of algorithms targets three temporal coverage criteria: (1) steady\u0000coverage, where retained samples should spread evenly across elapsed data\u0000stream history; (2) stretched coverage, where early data items should be\u0000proportionally favored; and (3) tilted coverage, where recent data items should\u0000be proportionally favored. For each algorithm, we prove worst-case bounds on\u0000rolling coverage quality. We focus on the more practical, application-driven\u0000case of maximizing coverage quality given a fixed memory capacity. As a core\u0000simplifying assumption, we restrict algorithm design to a single update\u0000operation: writing from the data stream to a calculated buffer site -- with\u0000data never being read back, no metadata stored (e.g., sample timestamps), and\u0000data eviction occurring only implicitly via overwrite. Drawing only on\u0000primitive, low-level operations and ensuring full, overhead-free use of\u0000available memory, this ``DStream'' framework ideally suits domains that are\u0000resource-constrained, performance-critical, and fine-grained (e.g., individual\u0000data items as small as single bits or bytes). The proposed approach supports\u0000$mathcal{O}(1)$ data ingestion via concise bit-level operations. To further\u0000practical applications, we provide plug-and-play open-source implementations\u0000targeting both scripted and compiled application domains.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"27 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142202648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Multiple Secrets in Mastermind","authors":"Milind Prabhu, David Woodruff","doi":"arxiv-2409.06453","DOIUrl":"https://doi.org/arxiv-2409.06453","url":null,"abstract":"In the Generalized Mastermind problem, there is an unknown subset $H$ of the\u0000hypercube ${0,1}^d$ containing $n$ points. The goal is to learn $H$ by making\u0000a few queries to an oracle, which, given a point $q$ in ${0,1}^d$, returns\u0000the point in $H$ nearest to $q$. We give a two-round adaptive algorithm for\u0000this problem that learns $H$ while making at most $exp(tilde{O}(sqrt{d log\u0000n}))$ queries. Furthermore, we show that any $r$-round adaptive randomized\u0000algorithm that learns $H$ with constant probability must make\u0000$exp(Omega(d^{3^{-(r-1)}}))$ queries even when the input has $text{poly}(d)$\u0000points; thus, any $text{poly}(d)$ query algorithm must necessarily use\u0000$Omega(log log d)$ rounds of adaptivity. We give optimal query complexity\u0000bounds for the variant of the problem where queries are allowed to be from\u0000${0,1,2}^d$. We also study a continuous variant of the problem in which $H$\u0000is a subset of unit vectors in $mathbb{R}^d$, and one can query unit vectors\u0000in $mathbb{R}^d$. For this setting, we give an $O(n^{d/2})$ query\u0000deterministic algorithm to learn the hidden set of points.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142202627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adversary Resilient Learned Bloom Filters","authors":"Allison Bishop, Hayder Tirmazi","doi":"arxiv-2409.06556","DOIUrl":"https://doi.org/arxiv-2409.06556","url":null,"abstract":"Creating an adversary resilient Learned Bloom Filter\u0000cite{learnedindexstructures} with provable guarantees is an open problem\u0000cite{reviriego1}. We define a strong adversarial model for the Learned Bloom\u0000Filter. We also construct two adversary resilient variants of the Learned Bloom\u0000Filter called the Uptown Bodega Filter and the Downtown Bodega Filter. Our\u0000adversarial model extends an existing adversarial model designed for the\u0000Classical (i.e not ``Learned'') Bloom Filter by Naor Yogev~cite{moni1} and\u0000considers computationally bounded adversaries that run in probabilistic\u0000polynomial time (PPT). We show that if pseudo-random permutations exist, then a\u0000secure Learned Bloom Filter may be constructed with $lambda$ extra bits of\u0000memory and at most one extra pseudo-random permutation in the critical path. We\u0000further show that, if pseudo-random permutations exist, then a textit{high\u0000utility} Learned Bloom Filter may be constructed with $2lambda$ extra bits of\u0000memory and at most one extra pseudo-random permutation in the critical path.\u0000Finally, we construct a hybrid adversarial model for the case where a fraction\u0000of the workload is chosen by an adversary. We show realistic scenarios where\u0000using the Downtown Bodega Filter gives better performance guarantees compared\u0000to alternative approaches in this hybrid model.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"66 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142202650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust Max Selection","authors":"Trung Dang, Zhiyi Huang","doi":"arxiv-2409.06014","DOIUrl":"https://doi.org/arxiv-2409.06014","url":null,"abstract":"We introduce a new model to study algorithm design under unreliable\u0000information, and apply this model for the problem of finding the uncorrupted\u0000maximum element of a list containing $n$ elements, among which are $k$\u0000corrupted elements. Under our model, algorithms can perform black-box\u0000comparison queries between any pair of elements. However, queries regarding\u0000corrupted elements may have arbitrary output. In particular, corrupted elements\u0000do not need to behave as any consistent values, and may introduce cycles in the\u0000elements' ordering. This imposes new challenges for designing correct\u0000algorithms under this setting. For example, one cannot simply output a single\u0000element, as it is impossible to distinguish elements of a list containing one\u0000corrupted and one uncorrupted element. To ensure correctness, algorithms under\u0000this setting must output a set to make sure the uncorrupted maximum element is\u0000included. We first show that any algorithm must output a set of size at least $min{n,\u00002k + 1}$ to ensure that the uncorrupted maximum is contained in the output\u0000set. Restricted to algorithms whose output size is exactly $min{n, 2k + 1}$,\u0000for deterministic algorithms, we show matching upper and lower bounds of\u0000$Theta(nk)$ comparison queries to produce a set of elements that contains the\u0000uncorrupted maximum. On the randomized side, we propose a 2-stage algorithm\u0000that, with high probability, uses $O(n + k operatorname{polylog} k)$\u0000comparison queries to find such a set, almost matching the $Omega(n)$ queries\u0000necessary for any randomized algorithm to obtain a constant probability of\u0000being correct.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"12 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142202629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring monotonic priority queues for Dijkstra optimization","authors":"Jonas Costa, Lucas Castro, Rosiane de Freitas","doi":"arxiv-2409.06061","DOIUrl":"https://doi.org/arxiv-2409.06061","url":null,"abstract":"This paper presents a comprehensive overview of monotone priority queues,\u0000focusing on their evolution and application in shortest path algorithms.\u0000Monotone priority queues are characterized by the property that their minimum\u0000key does not decrease over time, making them particularly effective for\u0000label-setting algorithms like Dijkstra's. Some key data structures within this\u0000category are explored, emphasizing those derived directly from Dial's\u0000algorithm, including variations of multi-level bucket structures and radix\u0000heaps. Theoretical complexities and practical considerations of these\u0000structures are discussed, with insights into their development and refinement\u0000provided through a historical timeline.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"167 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142202628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FPT approximations for Capacitated Sum of Radii and Diameters","authors":"Arnold Filtser, Ameet Gadekar","doi":"arxiv-2409.04984","DOIUrl":"https://doi.org/arxiv-2409.04984","url":null,"abstract":"The Capacitated Sum of Radii problem involves partitioning a set of points\u0000$P$, where each point $pin P$ has capacity $U_p$, into $k$ clusters that\u0000minimize the sum of cluster radii, such that the number of points in the\u0000cluster centered at point $p$ is at most $U_p$. We begin by showing that the\u0000problem is APX-hard, and that under gap-ETH there is no parameterized\u0000approximation scheme (FPT-AS). We then construct a $approx5.83$-approximation\u0000algorithm in FPT time (improving a previous $approx7.61$ approximation in FPT\u0000time). Our results also hold when the objective is a general monotone symmetric\u0000norm of radii. We also improve the approximation factors for the uniform\u0000capacity case, and for the closely related problem of Capacitated Sum of\u0000Diameters.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"38 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142202649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Subexponential Parameterized Algorithms for Hitting Subgraphs","authors":"Daniel Lokshtanov, Fahad Panolan, Saket Saurabh, Jie Xue, Meirav Zehavi","doi":"arxiv-2409.04786","DOIUrl":"https://doi.org/arxiv-2409.04786","url":null,"abstract":"For a finite set $mathcal{F}$ of graphs, the $mathcal{F}$-Hitting problem\u0000aims to compute, for a given graph $G$ (taken from some graph class\u0000$mathcal{G}$) of $n$ vertices (and $m$ edges) and a parameter\u0000$kinmathbb{N}$, a set $S$ of vertices in $G$ such that $|S|leq k$ and $G-S$\u0000does not contain any subgraph isomorphic to a graph in $mathcal{F}$. As a\u0000generic problem, $mathcal{F}$-Hitting subsumes many fundamental\u0000vertex-deletion problems that are well-studied in the literature. The\u0000$mathcal{F}$-Hitting problem admits a simple branching algorithm with running\u0000time $2^{O(k)}cdot n^{O(1)}$, while it cannot be solved in $2^{o(k)}cdot\u0000n^{O(1)}$ time on general graphs assuming the ETH. In this paper, we establish a general framework to design subexponential\u0000parameterized algorithms for the $mathcal{F}$-Hitting problem on a broad\u0000family of graph classes. Specifically, our framework yields algorithms that\u0000solve $mathcal{F}$-Hitting with running time $2^{O(k^c)}cdot n+O(m)$ for a\u0000constant $c<1$ on any graph class $mathcal{G}$ that admits balanced separators\u0000whose size is (strongly) sublinear in the number of vertices and polynomial in\u0000the size of a maximum clique. Examples include all graph classes of polynomial\u0000expansion and many important classes of geometric intersection graphs. Our\u0000algorithms also apply to the textit{weighted} version of\u0000$mathcal{F}$-Hitting, where each vertex of $G$ has a weight and the goal is to\u0000compute the set $S$ with a minimum weight that satisfies the desired\u0000conditions. The core of our framework is an intricate subexponential branching algorithm\u0000that reduces an instance of $mathcal{F}$-Hitting (on the aforementioned graph\u0000classes) to $2^{O(k^c)}$ general hitting-set instances, where the Gaifman graph\u0000of each instance has treewidth $O(k^c)$, for some constant $c<1$ depending on\u0000$mathcal{F}$ and the graph class.","PeriodicalId":501525,"journal":{"name":"arXiv - CS - Data Structures and Algorithms","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142225980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}