{"title":"Towards a Theory of Randomized Shared Memory Algorithms","authors":"Philipp Woelfel","doi":"10.1145/3293611.3338838","DOIUrl":null,"url":null,"abstract":"Randomization has become an invaluable tool to overcome some of the problems associated with asynchrony and faultiness. Allowing processors to use random bits helps to break symmetry, and to reduce the likelihood of undesirable schedules. As a consequence, randomized techniques can lead to simpler and more efficient algorithms, and sometimes to solutions of otherwise unsolvable computational problems. However, the design and the analysis of randomized shared memory algorithms remains challenging. This talk will give an overview of recent progress towards developing a theory of randomized shared memory algorithms. For many years, linearizability [6] has been the gold standard of distributed correctness conditions, and the corner stone of modular programming. In deterministic algorithms, implemented linearizable methods can be assumed to be atomic. But when processes can make random choices, the situation is not the same: Probability distributions of outcomes of algorithms using linearizable methods may be very different from those using equivalent atomic operations [4]. In general, modular algorithm design is much more difficult for randomized algorithms than for deterministic ones. The first part of the talk will present a correctness condition [2, 5] that is suitable for randomized algorithms in certain settings, and will explain why in other settings no such correctness condition exists [3] and what we can do about that. To this date, almost all randomized shared memory algorithms are Las Vegas, meaning they permit no error. Monte Carlo algorithms, which allow errors to occur with small probability, have been studied thoroughly for sequential systems. But in the shared memory world such algorithms have been neglected. The second part of this talk will discuss recent attempts to devise Monte Carlo algorithms for fundamental shared memory problems (e.g., [1]). It will also present some general techniques, that have proved useful in the design of concurrent randomized algorithms.","PeriodicalId":153766,"journal":{"name":"Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3293611.3338838","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Randomization has become an invaluable tool for overcoming some of the problems associated with asynchrony and failures. Allowing processes to use random bits helps to break symmetry and to reduce the likelihood of undesirable schedules. As a consequence, randomized techniques can lead to simpler and more efficient algorithms, and sometimes to solutions of otherwise unsolvable computational problems. However, the design and analysis of randomized shared memory algorithms remain challenging. This talk will give an overview of recent progress towards developing a theory of randomized shared memory algorithms.

For many years, linearizability [6] has been the gold standard of distributed correctness conditions and the cornerstone of modular programming. In deterministic algorithms, implemented linearizable methods can be assumed to be atomic. But when processes can make random choices, the situation is different: the probability distributions of outcomes of algorithms using linearizable methods may differ greatly from those of algorithms using equivalent atomic operations [4]. In general, modular algorithm design is much more difficult for randomized algorithms than for deterministic ones. The first part of the talk will present a correctness condition [2, 5] that is suitable for randomized algorithms in certain settings, and will explain why in other settings no such correctness condition exists [3] and what we can do about that.

To date, almost all randomized shared memory algorithms are Las Vegas, meaning they permit no error. Monte Carlo algorithms, which allow errors to occur with small probability, have been studied thoroughly for sequential systems, but in the shared memory world such algorithms have been largely neglected. The second part of this talk will discuss recent attempts to devise Monte Carlo algorithms for fundamental shared memory problems (e.g., [1]). It will also present some general techniques that have proven useful in the design of concurrent randomized algorithms.
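To make the Las Vegas versus Monte Carlo distinction concrete, the following is a minimal sketch (a toy illustration, not drawn from the works cited above) of both flavours on the simplest symmetry-breaking task: two parties repeatedly flip coins until their values differ. The Las Vegas version never errs but has only an expected bound on its running time; the Monte Carlo version stops after a fixed number of rounds k and may fail, but only with probability 2^-k. The function names and the parameter k are illustrative choices.

```python
# Toy sketch contrasting Las Vegas and Monte Carlo randomization
# on a two-party symmetry-breaking task (illustrative only).
import random

def las_vegas_symmetry_break():
    """Las Vegas: never wrong, but the running time is a random variable.

    Repeats until the two coin flips differ; the expected number of
    rounds is 2, yet no finite worst-case bound holds."""
    rounds = 0
    while True:
        rounds += 1
        a, b = random.getrandbits(1), random.getrandbits(1)
        if a != b:
            return rounds          # symmetry broken, guaranteed correct

def monte_carlo_symmetry_break(k):
    """Monte Carlo: at most k rounds, but a small failure probability.

    Gives up after k tied rounds, so it can fail to break symmetry,
    though only with probability 2**-k."""
    for r in range(1, k + 1):
        a, b = random.getrandbits(1), random.getrandbits(1)
        if a != b:
            return r               # success within the worst-case bound
    return None                    # failure: symmetry not broken (prob. 2**-k)

if __name__ == "__main__":
    print("Las Vegas rounds:", las_vegas_symmetry_break())
    print("Monte Carlo result (k=20):", monte_carlo_symmetry_break(20))
```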