{"title":"学习增强最大流量","authors":"Adam Polak , Maksym Zub","doi":"10.1016/j.ipl.2024.106487","DOIUrl":null,"url":null,"abstract":"<div><p>We propose a framework for speeding up maximum flow computation by using predictions. A prediction is a flow, i.e., an assignment of non-negative flow values to edges, which satisfies the flow conservation property, but does not necessarily respect the edge capacities of the actual instance (since these were unknown at the time of learning). We present an algorithm that, given an <em>m</em>-edge flow network and a predicted flow, computes a maximum flow in <span><math><mi>O</mi><mo>(</mo><mi>m</mi><mi>η</mi><mo>)</mo></math></span> time, where <em>η</em> is the <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span> error of the prediction, i.e., the sum over the edges of the absolute difference between the predicted and optimal flow values. Moreover, we prove that, given an oracle access to a distribution over flow networks, it is possible to efficiently PAC-learn a prediction minimizing the expected <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span> error over that distribution. Our results fit into the recent line of research on learning-augmented algorithms, which aims to improve over worst-case bounds of classical algorithms by using predictions, e.g., machine-learned from previous similar instances. So far, the main focus in this area was on improving competitive ratios for online problems. Following Dinitz et al. (2021) <span>[6]</span>, our results are among the firsts to improve the running time of an offline problem.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106487"},"PeriodicalIF":0.7000,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Learning-augmented maximum flow\",\"authors\":\"Adam Polak , Maksym Zub\",\"doi\":\"10.1016/j.ipl.2024.106487\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>We propose a framework for speeding up maximum flow computation by using predictions. A prediction is a flow, i.e., an assignment of non-negative flow values to edges, which satisfies the flow conservation property, but does not necessarily respect the edge capacities of the actual instance (since these were unknown at the time of learning). We present an algorithm that, given an <em>m</em>-edge flow network and a predicted flow, computes a maximum flow in <span><math><mi>O</mi><mo>(</mo><mi>m</mi><mi>η</mi><mo>)</mo></math></span> time, where <em>η</em> is the <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span> error of the prediction, i.e., the sum over the edges of the absolute difference between the predicted and optimal flow values. Moreover, we prove that, given an oracle access to a distribution over flow networks, it is possible to efficiently PAC-learn a prediction minimizing the expected <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span> error over that distribution. Our results fit into the recent line of research on learning-augmented algorithms, which aims to improve over worst-case bounds of classical algorithms by using predictions, e.g., machine-learned from previous similar instances. So far, the main focus in this area was on improving competitive ratios for online problems. Following Dinitz et al. 
(2021) <span>[6]</span>, our results are among the firsts to improve the running time of an offline problem.</p></div>\",\"PeriodicalId\":56290,\"journal\":{\"name\":\"Information Processing Letters\",\"volume\":\"186 \",\"pages\":\"Article 106487\"},\"PeriodicalIF\":0.7000,\"publicationDate\":\"2024-02-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Processing Letters\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0020019024000176\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Processing Letters","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0020019024000176","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
We propose a framework for speeding up maximum flow computation by using predictions. A prediction is a flow, i.e., an assignment of non-negative flow values to edges, which satisfies the flow conservation property, but does not necessarily respect the edge capacities of the actual instance (since these were unknown at the time of learning). We present an algorithm that, given an m-edge flow network and a predicted flow, computes a maximum flow in O(mη) time, where η is the ℓ1 error of the prediction, i.e., the sum over the edges of the absolute difference between the predicted and optimal flow values. Moreover, we prove that, given oracle access to a distribution over flow networks, it is possible to efficiently PAC-learn a prediction minimizing the expected ℓ1 error over that distribution. Our results fit into the recent line of research on learning-augmented algorithms, which aims to improve over worst-case bounds of classical algorithms by using predictions, e.g., machine-learned from previous similar instances. So far, the main focus in this area has been on improving competitive ratios for online problems. Following Dinitz et al. (2021) [6], our results are among the first to improve the running time of an offline problem.
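To make the warm-start idea behind such results concrete, below is a minimal Python sketch, not the paper's algorithm: the names (Edge, add_edge, augment_once, warm_start_max_flow) are hypothetical, and the sketch assumes integer capacities and a prediction that is already feasible (respects both conservation and capacities). The paper's O(mη) algorithm additionally repairs capacity violations of the prediction, which this sketch does not attempt. The point illustrated is that augmenting-path max flow started from a feasible flow close to the optimum needs only as many augmentations as the remaining gap in flow value.

```python
from collections import deque


class Edge:
    """One directed edge of the residual graph; rev is its reverse twin."""
    __slots__ = ("to", "cap", "flow", "rev")

    def __init__(self, to, cap):
        self.to, self.cap, self.flow, self.rev = to, cap, 0, None


def add_edge(graph, u, v, cap):
    fwd, bwd = Edge(v, cap), Edge(u, 0)
    fwd.rev, bwd.rev = bwd, fwd
    graph[u].append(fwd)
    graph[v].append(bwd)
    return fwd  # handle used to preload the predicted flow


def push(edge, amount):
    edge.flow += amount
    edge.rev.flow -= amount


def augment_once(graph, s, t):
    """One BFS augmenting step on residual capacities; returns the amount pushed."""
    parent = {s: None}
    queue = deque([s])
    while queue and t not in parent:
        u = queue.popleft()
        for e in graph[u]:
            if e.to not in parent and e.cap - e.flow > 0:
                parent[e.to] = e
                queue.append(e.to)
    if t not in parent:
        return 0
    path, v = [], t
    while parent[v] is not None:          # walk the path back from t to s
        path.append(parent[v])
        v = parent[v].rev.to
    bottleneck = min(e.cap - e.flow for e in path)
    for e in path:
        push(e, bottleneck)
    return bottleneck


def warm_start_max_flow(graph, s, t, preload):
    """preload: (edge handle, predicted value) pairs, assumed to form a feasible flow."""
    for e, value in preload:
        push(e, min(value, e.cap))        # clip defensively; conservation is assumed
    augmentations = 0
    while augment_once(graph, s, t) > 0:
        augmentations += 1
    value = sum(e.flow for e in graph[s] if e.flow > 0)
    return value, augmentations


if __name__ == "__main__":
    # Toy network: two disjoint s-t paths; the prediction already saturates one,
    # so a single augmentation completes the maximum flow.
    g = {v: [] for v in range(4)}
    e_sa = add_edge(g, 0, 1, 10)
    e_at = add_edge(g, 1, 3, 10)
    add_edge(g, 0, 2, 5)
    add_edge(g, 2, 3, 5)
    print(warm_start_max_flow(g, 0, 3, [(e_sa, 10), (e_at, 10)]))  # (15, 1)
```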
About the journal:
Information Processing Letters invites submission of original research articles that focus on fundamental aspects of information processing and computing. This naturally includes work in the broadly understood field of theoretical computer science, although papers in all areas of scientific inquiry will be given consideration, provided that they describe research contributions credibly motivated by applications to computing and involve rigorous methodology. High-quality experimental papers that address topics of sufficiently broad interest may also be considered.
Since its inception in 1971, Information Processing Letters has served as a forum for timely dissemination of short, concise and focused research contributions. Continuing with this tradition, and to expedite the reviewing process, manuscripts are generally limited in length to nine pages when they appear in print.