{"title":"Approximating Maximin Share Allocations","authors":"J. Garg, Patricia C. McGlaughlin, Setareh Taki","doi":"10.4230/OASIcs.SOSA.2019.20","DOIUrl":"https://doi.org/10.4230/OASIcs.SOSA.2019.20","url":null,"abstract":"We study the problem of fair allocation of M indivisible items among N agents using the popular notion of maximin share as our measure of fairness. The maximin share of an agent is the largest value she can guarantee herself if she is allowed to choose a partition of the items into N bundles (one for each agent), on the condition that she receives her least preferred bundle. A maximin share allocation provides each agent a bundle worth at least their maximin share. While it is known that such an allocation need not exist [Procaccia and Wang, 2014; Kurokawa et al., 2016], a series of work [Procaccia and Wang, 2014; David Kurokawa et al., 2018; Amanatidis et al., 2017; Barman and Krishna Murthy, 2017] provided 2/3 approximation algorithms in which each agent receives a bundle worth at least 2/3 times their maximin share. Recently, [Ghodsi et al., 2018] improved the approximation guarantee to 3/4. Prior works utilize intricate algorithms, with an exception of [Barman and Krishna Murthy, 2017] which is a simple greedy solution but relies on sophisticated analysis techniques. In this paper, we propose an alternative 2/3 maximin share approximation which offers both a simple algorithm and straightforward analysis. In contrast to other algorithms, our approach allows for a simple and intuitive understanding of why it works.","PeriodicalId":93491,"journal":{"name":"Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA)","volume":"1 1","pages":"20:1-20:11"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84069918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On Primal-Dual Circle Representations","authors":"S. Felsner, G. Rote","doi":"10.4230/OASIcs.SOSA.2019.8","DOIUrl":"https://doi.org/10.4230/OASIcs.SOSA.2019.8","url":null,"abstract":"The Koebe-Andreev-Thurston Circle Packing Theorem states that every triangulated planar graph has a circle-contact representation. The theorem has been generalized in various ways. The arguably most prominent generalization assures the existence of a primal-dual circle representation for every 3-connected planar graph. The aim of this note is to give a streamlined proof of this result.","PeriodicalId":93491,"journal":{"name":"Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA)","volume":"24 1","pages":"8:1-8:18"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84117187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simple Contention Resolution via Multiplicative Weight Updates","authors":"Yi-Jun Chang, Wenyu Jin, S. Pettie","doi":"10.4230/OASIcs.SOSA.2019.16","DOIUrl":"https://doi.org/10.4230/OASIcs.SOSA.2019.16","url":null,"abstract":"We consider the classic contention resolution problem, in which devices conspire to share some common resource, for which they each need temporary and exclusive access. To ground the discussion, suppose (identical) devices wake up at various times, and must send a single packet over a shared multiple-access channel. In each time step they may attempt to send their packet; they receive ternary feedback {0, 1, 2+} from the channel, 0 indicating silence (no one attempted transmission), 1 indicating success (one device successfully transmitted), and 2+ indicating noise. We prove that a simple strategy suffices to achieve a channel utilization rate of 1/e − O( ), for any > 0. In each step, device i attempts to send its packet with probability pi, then applies a rudimentary multiplicative weight-type update to pi. pi ← pi · e upon hearing silence (0) pi upon hearing success (1) pi · e− /(e−2) upon hearing noise (2+) This scheme works well even if the introduction of devices/packets is adversarial, and even if the adversary can jam time slots (make noise) at will. We prove that if the adversary jams J time slots, then this scheme will achieve channel utilization 1/e− , excluding O(J) wasted slots. Results similar to these (Bender, Fineman, Gilbert, Young, SODA 2016) were already achieved, but with a lower constant efficiency (less than 0.05) and a more complex algorithm. 2012 ACM Subject Classification Networks → Network protocols, Mathematics of computing → Probabilistic algorithms","PeriodicalId":93491,"journal":{"name":"Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA)","volume":"4 1","pages":"16:1-16:16"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88816300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A New Application of Orthogonal Range Searching for Computing Giant Graph Diameters","authors":"G. Ducoffe","doi":"10.4230/OASIcs.SOSA.2019.12","DOIUrl":"https://doi.org/10.4230/OASIcs.SOSA.2019.12","url":null,"abstract":"A well-known problem for which it is difficult to improve the textbook algorithm is computing the graph diameter. We present two versions of a simple algorithm (one being Monte Carlo and the other deterministic) that for every fixed h and unweighted undirected graph G with n vertices and m edges, either correctly concludes that diam(G) < hn or outputs diam(G), in time O(m + n1+o(1)). The algorithm combines a simple randomized strategy for this problem (Damaschke, IWOCA’16) with a popular framework for computing graph distances that is based on range trees (Cabello and Knauer, Computational Geometry’09). We also prove that under the Strong Exponential Time Hypothesis (SETH), we cannot compute the diameter of a given n-vertex graph in truly subquadratic time, even if the diameter is an Θ(n/ log n). 2012 ACM Subject Classification Theory of computation → Shortest paths, Theory of computation → Problems, reductions and completeness","PeriodicalId":93491,"journal":{"name":"Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA)","volume":"15 1","pages":"12:1-12:7"},"PeriodicalIF":0.0,"publicationDate":"2018-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83449276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simple Concurrent Labeling Algorithms for Connected Components","authors":"S. Liu, R. Tarjan","doi":"10.4230/OASIcs.SOSA.2019.3","DOIUrl":"https://doi.org/10.4230/OASIcs.SOSA.2019.3","url":null,"abstract":"We study a class of simple algorithms for concurrently computing the connected components of an $n$-vertex, $m$-edge graph. Our algorithms are easy to implement in either the COMBINING CRCW PRAM or the MPC computing model. For two related algorithms in this class, we obtain $Theta(lg n)$ step and $Theta(m lg n)$ work bounds. For two others, we obtain $O(lg^2 n)$ step and $O(m lg^2 n)$ work bounds, which are tight for one of them. All our algorithms are simpler than related algorithms in the literature. We also point out some gaps and errors in the analysis of previous algorithms. Our results show that even a basic problem like connected components still has secrets to reveal.","PeriodicalId":93491,"journal":{"name":"Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA)","volume":"72 1","pages":"3:1-3:20"},"PeriodicalIF":0.0,"publicationDate":"2018-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83067699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a Unified Theory of Sparsification for Matching Problems","authors":"Sepehr Assadi, A. Bernstein","doi":"10.4230/OASIcs.SOSA.2019.11","DOIUrl":"https://doi.org/10.4230/OASIcs.SOSA.2019.11","url":null,"abstract":"In this paper, we present a construction of a `matching sparsifier', that is, a sparse subgraph of the given graph that preserves large matchings approximately and is robust to modifications of the graph. We use this matching sparsifier to obtain several new algorithmic results for the maximum matching problem: \u0000* An almost $(3/2)$-approximation one-way communication protocol for the maximum matching problem, significantly simplifying the $(3/2)$-approximation protocol of Goel, Kapralov, and Khanna (SODA 2012) and extending it from bipartite graphs to general graphs. \u0000* An almost $(3/2)$-approximation algorithm for the stochastic matching problem, improving upon and significantly simplifying the previous $1.999$-approximation algorithm of Assadi, Khanna, and Li (EC 2017). \u0000* An almost $(3/2)$-approximation algorithm for the fault-tolerant matching problem, which, to our knowledge, is the first non-trivial algorithm for this problem. \u0000Our matching sparsifier is obtained by proving new properties of the edge-degree constrained subgraph (EDCS) of Bernstein and Stein (ICALP 2015; SODA 2016)---designed in the context of maintaining matchings in dynamic graphs---that identifies EDCS as an excellent choice for a matching sparsifier. This leads to surprisingly simple and non-technical proofs of the above results in a unified way. Along the way, we also provide a much simpler proof of the fact that an EDCS is guaranteed to contain a large matching, which may be of independent interest.","PeriodicalId":93491,"journal":{"name":"Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA)","volume":"7 1","pages":"11:1-11:20"},"PeriodicalIF":0.0,"publicationDate":"2018-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80075260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Submodular Optimization in the MapReduce Model","authors":"Paul Liu, J. Vondrák","doi":"10.4230/OASIcs.SOSA.2019.18","DOIUrl":"https://doi.org/10.4230/OASIcs.SOSA.2019.18","url":null,"abstract":"Submodular optimization has received significant attention in both practice and theory, as a wide array of problems in machine learning, auction theory, and combinatorial optimization have submodular structure. In practice, these problems often involve large amounts of data, and must be solved in a distributed way. One popular framework for running such distributed algorithms is MapReduce. In this paper, we present two simple algorithms for cardinality constrained submodular optimization in the MapReduce model: the first is a $(1/2-o(1))$-approximation in 2 MapReduce rounds, and the second is a $(1-1/e-epsilon)$-approximation in $frac{1+o(1)}{epsilon}$ MapReduce rounds.","PeriodicalId":93491,"journal":{"name":"Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA)","volume":"99 1","pages":"18:1-18:10"},"PeriodicalIF":0.0,"publicationDate":"2018-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82467075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Compressed Sensing with Adversarial Sparse Noise via L1 Regression","authors":"Sushrut Karmalkar, Eric Price","doi":"10.4230/OASIcs.SOSA.2019.19","DOIUrl":"https://doi.org/10.4230/OASIcs.SOSA.2019.19","url":null,"abstract":"We present a simple and effective algorithm for the problem of emph{sparse robust linear regression}. In this problem, one would like to estimate a sparse vector $w^* in mathbb{R}^n$ from linear measurements corrupted by sparse noise that can arbitrarily change an adversarially chosen $eta$ fraction of measured responses $y$, as well as introduce bounded norm noise to the responses. For Gaussian measurements, we show that a simple algorithm based on L1 regression can successfully estimate $w^*$ for any $eta < eta_0 approx 0.239$, and that this threshold is tight for the algorithm. The number of measurements required by the algorithm is $O(k log frac{n}{k})$ for $k$-sparse estimation, which is within constant factors of the number needed without any sparse noise. Of the three properties we show---the ability to estimate sparse, as well as dense, $w^*$; the tolerance of a large constant fraction of outliers; and tolerance of adversarial rather than distributional (e.g., Gaussian) dense noise---to the best of our knowledge, no previous result achieved more than two.","PeriodicalId":93491,"journal":{"name":"Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA)","volume":"91 2 1","pages":"19:1-19:19"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79581764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Relaxed Voronoi: A Simple Framework for Terminal-Clustering Problems","authors":"Arnold Filtser, Robert Krauthgamer, Ohad Trabelsi","doi":"10.4230/OASIcs.SOSA.2019.10","DOIUrl":"https://doi.org/10.4230/OASIcs.SOSA.2019.10","url":null,"abstract":"We reprove three known algorithmic bounds for terminal-clustering problems, using a single framework that leads to simpler proofs. In this genre of problems, the input is a metric space $(X,d)$ (possibly arising from a graph) and a subset of terminals $Ksubset X$, and the goal is to partition the points $X$ such that each part, called a cluster, contains exactly one terminal (possibly with connectivity requirements) so as to minimize some objective. The three bounds we reprove are for Steiner Point Removal on trees [Gupta, SODA 2001], for Metric $0$-Extension in bounded doubling dimension [Lee and Naor, unpublished 2003], and for Connected Metric $0$-Extension [Englert et al., SICOMP 2014]. \u0000A natural approach is to cluster each point with its closest terminal, which would partition $X$ into so-called Voronoi cells, but this approach can fail miserably due to its stringent cluster boundaries. A now-standard fix, which we call the Relaxed-Voronoi framework, is to use enlarged Voronoi cells, but to obtain disjoint clusters, the cells are computed greedily according to some order. This method, first proposed by Calinescu, Karloff and Rabani [SICOMP 2004], was employed successfully to provide state-of-the-art results for terminal-clustering problems on general metrics. However, for restricted families of metrics, e.g., trees and doubling metrics, only more complicated, ad-hoc algorithms are known. Our main contribution is to demonstrate that the Relaxed-Voronoi algorithm is applicable to restricted metrics, and actually leads to relatively simple algorithms and analyses.","PeriodicalId":93491,"journal":{"name":"Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA)","volume":"68 1","pages":"10:1-10:14"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83192228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"One (more) line on the most Ancient Algorithm in History","authors":"Ilya Volkovich","doi":"10.1137/1.9781611976014.3","DOIUrl":"https://doi.org/10.1137/1.9781611976014.3","url":null,"abstract":"We give a new simple and short (\"one-line\") analysis for the runtime of the well-known Euclidean Algorithm. While very short simple, the obtained upper bound in near-optimal.","PeriodicalId":93491,"journal":{"name":"Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA)","volume":"14 1","pages":"15-17"},"PeriodicalIF":0.0,"publicationDate":"2018-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81088023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}