Boosting Double Coverage for k-Server via Imperfect Predictions
Alexander Lindermayr, Nicole Megow, Bertrand Simon
Algorithmica 87(11): 1477–1517 (2025). DOI: 10.1007/s00453-025-01333-9. Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00453-025-01333-9.pdf
We study the online k-server problem in a learning-augmented setting. Whereas in the traditional online model an algorithm has no information about the request sequence, we assume that some advice on the algorithm's decisions is given, for example in the form of machine-learned predictions. There is, however, no guarantee on the quality of these predictions, and they might be far from correct. Our main result is a learning-augmented variant of the well-known Double Coverage algorithm for k-server on the line (Chrobak et al. in SIAM J Discrete Math 4(2):172–181, 1991), into which we integrate the predictions as well as our trust in their quality. We give an error-dependent worst-case performance guarantee, which is a function of a user-defined confidence parameter and which interpolates smoothly between optimal performance when all predictions are correct and the best possible performance regardless of the prediction quality. When given good predictions, we improve upon known lower bounds for online algorithms without advice. We further show that our algorithm achieves almost optimal guarantees for any k within a class of deterministic learning-augmented algorithms respecting local and memoryless properties. Our algorithm outperforms a previously proposed (more general) learning-augmented algorithm. It is noteworthy that the previous algorithm crucially exploits memory, whereas our algorithm is memoryless. Finally, we demonstrate in experiments the practicability and the superior performance of our algorithm on real-world data.
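For context, the following is a minimal sketch of the classical Double Coverage rule for k-server on the line (Chrobak et al. 1991) that the paper builds on. The function and variable names are our own illustration, not the authors' implementation; the paper's learning-augmented variant additionally biases these moves according to the prediction and a user-defined confidence parameter, which is not reflected in this sketch.

```python
def double_coverage_step(positions, request):
    """Serve one request with the classical Double Coverage rule.

    positions: list of current server positions on the real line.
    request:   position of the next request.
    Returns the updated (sorted) positions and the movement cost incurred.
    """
    servers = sorted(positions)
    if request <= servers[0]:
        # Request lies left of all servers: only the leftmost server moves.
        cost = servers[0] - request
        servers[0] = request
        return servers, cost
    if request >= servers[-1]:
        # Request lies right of all servers: only the rightmost server moves.
        cost = request - servers[-1]
        servers[-1] = request
        return servers, cost
    # Request lies between two adjacent servers: both move towards it at
    # equal speed until the closer one reaches the request point.
    right = next(i for i, p in enumerate(servers) if p >= request)
    left = right - 1
    step = min(request - servers[left], servers[right] - request)
    servers[left] += step
    servers[right] -= step
    return servers, 2 * step


# Example: two servers at 0 and 10; a request at 3 pulls both towards it.
print(double_coverage_step([0, 10], 3))  # ([3, 7], 6)
```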
Journal introduction:
Algorithmica is an international journal which publishes theoretical papers on algorithms that address problems arising in practical areas, and experimental papers of general appeal for practical importance or techniques. The development of algorithms is an integral part of computer science. The increasing complexity and scope of computer applications makes the design of efficient algorithms essential.
Algorithmica covers algorithms in applied areas such as: VLSI, distributed computing, parallel processing, automated design, robotics, graphics, database design, software tools, as well as algorithms in fundamental areas such as sorting, searching, data structures, computational geometry, and linear programming.
In addition, the journal features two special sections: Application Experience, presenting findings obtained from applications of theoretical results to practical situations, and Problems, offering short papers presenting problems on selected topics of computer science.