Pareto Sums of Pareto Sets: Lower Bounds and Algorithms

Daniel Funke, Demian Hespe, Peter Sanders, Sabine Storandt, Carina Truschel

arXiv - CS - Data Structures and Algorithms, published 2024-09-16 (arXiv:2409.10232)

Citation count: 0
Abstract
In bi-criteria optimization problems, the goal is typically to compute the
set of Pareto-optimal solutions. Many algorithms for these types of problems
rely on efficient merging or combining of partial solutions and filtering of
dominated solutions in the resulting sets. In this article, we consider the
task of computing the Pareto sum of two given Pareto sets $A, B$ of size $n$.
The Pareto sum $C$ contains all non-dominated points of the Minkowski sum $M =
\{a+b|a \in A, b\in B\}$. Since the Minkowski sum has a size of $n^2$, but the
Pareto sum $C$ can be much smaller, the goal is to compute $C$ without having
to compute and store all of $M$. We present several new algorithms for
efficient Pareto sum computation, including an output-sensitive successive
algorithm with a running time of $O(n \log n + nk)$ and a space consumption of
$O(n+k)$ for $k=|C|$. If the elements of $C$ are streamed, the space
consumption reduces to $O(n)$. For output sizes $k \geq 2n$, we prove a
conditional lower bound for Pareto sum computation, which excludes running
times in $O(n^{2-\delta})$ for $\delta > 0$ unless the (min,+)-convolution
hardness conjecture fails. The successive algorithm matches this lower bound
for $k \in \Theta(n)$. However, for $k \in \Theta(n^2)$, the successive
algorithm exhibits a cubic running time. We therefore also present an algorithm
with output-sensitive space consumption and a running time of $O(n^2 \log n)$,
which matches the lower bound up to a logarithmic factor even for large $k$.
Furthermore, we describe suitable engineering techniques to improve the
practical running times of our algorithms. Finally, we provide an extensive
comparative experimental study on generated and real-world data. As a showcase
application, we consider preprocessing-based bi-criteria route planning in road
networks.
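To make the problem statement concrete, the sketch below computes a Pareto sum for the bi-criteria minimization setting. It is not the paper's successive algorithm (whose details are not given in the abstract); it is only a minimal illustration that enumerates the Minkowski sum $M$ in lexicographically sorted order via an $n$-way heap merge, filtering dominated sums on the fly, so the working space stays in $O(n)$ plus the output, even though all $n^2$ sums are still visited. The function name and point representation (pairs of numbers, both criteria minimized) are assumptions for this example.

```python
import heapq

def pareto_sum_streaming(A, B):
    """Compute the Pareto sum C of two 2D point sets A and B:
    the non-dominated points of M = {a + b | a in A, b in B},
    minimizing both coordinates. Uses an n-way heap merge so that
    only O(n) candidate sums are held in memory at any time."""
    A = sorted(A)
    B = sorted(B)
    # One heap entry per element of A: its sum with the current partner in B.
    heap = [(a0 + B[0][0], a1 + B[0][1], i, 0) for i, (a0, a1) in enumerate(A)]
    heapq.heapify(heap)
    best_y = float("inf")
    C = []
    while heap:
        # Pops arrive in lexicographic (x, y) order, so a point is
        # non-dominated iff its y strictly improves on everything seen so far.
        x, y, i, j = heapq.heappop(heap)
        if y < best_y:
            C.append((x, y))
            best_y = y
        if j + 1 < len(B):  # advance A[i]'s partner within B
            a0, a1 = A[i]
            b0, b1 = B[j + 1]
            heapq.heappush(heap, (a0 + b0, a1 + b1, i, j + 1))
    return C
```

For example, with the Pareto sets `A = [(1,3), (2,2), (3,1)]` and `B = [(0,2), (1,1), (2,0)]`, the Minkowski sum has nine points but the Pareto sum contains only the five points `(1,5), (2,4), (3,3), (4,2), (5,1)`, illustrating why one wants to avoid materializing all of $M$.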