{"title":"通过加权采样实现更好的总和估计","authors":"Lorenzo Beretta, Jakub Tětek","doi":"10.1145/3650030","DOIUrl":null,"url":null,"abstract":"<p>Given a large set <i>U</i> where each item <i>a</i> ∈ <i>U</i> has weight <i>w</i>(<i>a</i>), we want to estimate the total weight <i>W</i> = ∑<sub><i>a</i> ∈ <i>U</i></sub><i>w</i>(<i>a</i>) to within factor of 1 ± ε with some constant probability > 1/2. Since <i>n</i> = |<i>U</i>| is large, we want to do this without looking at the entire set <i>U</i>. In the traditional setting in which we are allowed to sample elements from <i>U</i> uniformly, sampling <i>Ω</i>(<i>n</i>) items is necessary to provide any non-trivial guarantee on the estimate. Therefore, we investigate this problem in different settings: in the <i>proportional</i> setting we can sample items with probabilities proportional to their weights, and in the <i>hybrid</i> setting we can sample both proportionally and uniformly. These settings have applications, for example, in sublinear-time algorithms and distribution testing. </p><p>Sum estimation in the proportional and hybrid setting has been considered before by Motwani, Panigrahy, and Xu [ICALP, 2007]. In their paper, they give both upper and lower bounds in terms of <i>n</i>. Their bounds are near-matching in terms of <i>n</i>, but not in terms of ε. In this paper, we improve both their upper and lower bounds. Our bounds are matching up to constant factors in both settings, in terms of both <i>n</i> and ε. No lower bounds with dependency on ε were known previously. In the proportional setting, we improve their \\(\\tilde{O}(\\sqrt {n}/\\varepsilon ^{7/2}) \\) algorithm to \\(O(\\sqrt {n}/\\varepsilon) \\). In the hybrid setting, we improve \\(\\tilde{O}(\\sqrt [3]{n}/ \\varepsilon ^{9/2}) \\) to \\(O(\\sqrt [3]{n}/\\varepsilon ^{4/3}) \\). Our algorithms are also significantly simpler and do not have large constant factors. </p><p>We then investigate the previously unexplored scenario in which <i>n</i> is not known to the algorithm. In this case, we obtain a \\(O(\\sqrt {n}/\\varepsilon + \\log n / \\varepsilon ^2) \\) algorithm for the proportional setting, and a \\(O(\\sqrt {n}/\\varepsilon) \\) algorithm for the hybrid setting. This means that in the proportional setting, we may remove the need for advice without greatly increasing the complexity of the problem, while there is a major difference in the hybrid setting. We prove that this difference in the hybrid setting is necessary, by showing a matching lower bound. </p><p>Our algorithms have applications in the area of sublinear-time graph algorithms. Consider a large graph <i>G</i> = (<i>V</i>, <i>E</i>) and the task of (1 ± ε)-approximating |<i>E</i>|. We consider the (standard) settings where we can sample uniformly from <i>E</i> or from both <i>E</i> and <i>V</i>. This relates to sum estimation as follows: we set <i>U</i> = <i>V</i> and the weights to be equal to the degrees. Uniform sampling then corresponds to sampling vertices uniformly. Proportional sampling can be simulated by taking a random edge and picking one of its endpoints at random. If we can only sample uniformly from <i>E</i>, then our results immediately give a \\(O(\\sqrt {|V|} / \\varepsilon) \\) algorithm. When we may sample both from <i>E</i> and <i>V</i>, our results imply an algorithm with complexity \\(O(\\sqrt [3]{|V|}/\\varepsilon ^{4/3}) \\). 
Surprisingly, one of our subroutines provides an (1 ± ε)-approximation of |<i>E</i>| using \\(\\tilde{O}(d/\\varepsilon ^2) \\) expected samples, where <i>d</i> is the average degree, under the mild assumption that at least a constant fraction of vertices are non-isolated. This subroutine works in the setting where we can sample uniformly from both <i>V</i> and <i>E</i>. We find this remarkable since it is <i>O</i>(1/ε<sup>2</sup>) for sparse graphs.</p>","PeriodicalId":50922,"journal":{"name":"ACM Transactions on Algorithms","volume":"44 1","pages":""},"PeriodicalIF":0.9000,"publicationDate":"2024-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Better Sum Estimation via Weighted Sampling\",\"authors\":\"Lorenzo Beretta, Jakub Tětek\",\"doi\":\"10.1145/3650030\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Given a large set <i>U</i> where each item <i>a</i> ∈ <i>U</i> has weight <i>w</i>(<i>a</i>), we want to estimate the total weight <i>W</i> = ∑<sub><i>a</i> ∈ <i>U</i></sub><i>w</i>(<i>a</i>) to within factor of 1 ± ε with some constant probability > 1/2. Since <i>n</i> = |<i>U</i>| is large, we want to do this without looking at the entire set <i>U</i>. In the traditional setting in which we are allowed to sample elements from <i>U</i> uniformly, sampling <i>Ω</i>(<i>n</i>) items is necessary to provide any non-trivial guarantee on the estimate. Therefore, we investigate this problem in different settings: in the <i>proportional</i> setting we can sample items with probabilities proportional to their weights, and in the <i>hybrid</i> setting we can sample both proportionally and uniformly. These settings have applications, for example, in sublinear-time algorithms and distribution testing. </p><p>Sum estimation in the proportional and hybrid setting has been considered before by Motwani, Panigrahy, and Xu [ICALP, 2007]. In their paper, they give both upper and lower bounds in terms of <i>n</i>. Their bounds are near-matching in terms of <i>n</i>, but not in terms of ε. In this paper, we improve both their upper and lower bounds. Our bounds are matching up to constant factors in both settings, in terms of both <i>n</i> and ε. No lower bounds with dependency on ε were known previously. In the proportional setting, we improve their \\\\(\\\\tilde{O}(\\\\sqrt {n}/\\\\varepsilon ^{7/2}) \\\\) algorithm to \\\\(O(\\\\sqrt {n}/\\\\varepsilon) \\\\). In the hybrid setting, we improve \\\\(\\\\tilde{O}(\\\\sqrt [3]{n}/ \\\\varepsilon ^{9/2}) \\\\) to \\\\(O(\\\\sqrt [3]{n}/\\\\varepsilon ^{4/3}) \\\\). Our algorithms are also significantly simpler and do not have large constant factors. </p><p>We then investigate the previously unexplored scenario in which <i>n</i> is not known to the algorithm. In this case, we obtain a \\\\(O(\\\\sqrt {n}/\\\\varepsilon + \\\\log n / \\\\varepsilon ^2) \\\\) algorithm for the proportional setting, and a \\\\(O(\\\\sqrt {n}/\\\\varepsilon) \\\\) algorithm for the hybrid setting. This means that in the proportional setting, we may remove the need for advice without greatly increasing the complexity of the problem, while there is a major difference in the hybrid setting. We prove that this difference in the hybrid setting is necessary, by showing a matching lower bound. </p><p>Our algorithms have applications in the area of sublinear-time graph algorithms. Consider a large graph <i>G</i> = (<i>V</i>, <i>E</i>) and the task of (1 ± ε)-approximating |<i>E</i>|. 
We consider the (standard) settings where we can sample uniformly from <i>E</i> or from both <i>E</i> and <i>V</i>. This relates to sum estimation as follows: we set <i>U</i> = <i>V</i> and the weights to be equal to the degrees. Uniform sampling then corresponds to sampling vertices uniformly. Proportional sampling can be simulated by taking a random edge and picking one of its endpoints at random. If we can only sample uniformly from <i>E</i>, then our results immediately give a \\\\(O(\\\\sqrt {|V|} / \\\\varepsilon) \\\\) algorithm. When we may sample both from <i>E</i> and <i>V</i>, our results imply an algorithm with complexity \\\\(O(\\\\sqrt [3]{|V|}/\\\\varepsilon ^{4/3}) \\\\). Surprisingly, one of our subroutines provides an (1 ± ε)-approximation of |<i>E</i>| using \\\\(\\\\tilde{O}(d/\\\\varepsilon ^2) \\\\) expected samples, where <i>d</i> is the average degree, under the mild assumption that at least a constant fraction of vertices are non-isolated. This subroutine works in the setting where we can sample uniformly from both <i>V</i> and <i>E</i>. We find this remarkable since it is <i>O</i>(1/ε<sup>2</sup>) for sparse graphs.</p>\",\"PeriodicalId\":50922,\"journal\":{\"name\":\"ACM Transactions on Algorithms\",\"volume\":\"44 1\",\"pages\":\"\"},\"PeriodicalIF\":0.9000,\"publicationDate\":\"2024-03-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Algorithms\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1145/3650030\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Algorithms","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3650030","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0
Abstract
Given a large set U where each item a ∈ U has weight w(a), we want to estimate the total weight W = ∑_{a ∈ U} w(a) to within a factor of 1 ± ε with some constant probability > 1/2. Since n = |U| is large, we want to do this without looking at the entire set U. In the traditional setting, in which we are allowed to sample elements from U uniformly, sampling Ω(n) items is necessary to provide any non-trivial guarantee on the estimate. Therefore, we investigate this problem in different settings: in the proportional setting we can sample items with probabilities proportional to their weights, and in the hybrid setting we can sample both proportionally and uniformly. These settings have applications, for example, in sublinear-time algorithms and distribution testing.
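To make the two sampling oracles concrete, here is a minimal Python sketch over made-up toy weights (all names and data below are illustrative, not taken from the paper). It simulates a uniform oracle and a proportional oracle, and runs the naive uniform-sampling estimator n·w(a), which is unbiased but, as noted above, needs Ω(n) samples when a few items carry most of the weight.

    import random

    # Toy universe: the weight list is known here only so we can simulate the
    # oracles; a sublinear algorithm would never read it in full.
    weights = [1.0] * 999 + [10_000.0]   # one heavy item hidden among many light ones
    n = len(weights)
    W = sum(weights)                      # ground truth, kept only for comparison

    def uniform_sample():
        """Uniform oracle: return the weight of a uniformly random item."""
        return weights[random.randrange(n)]

    def proportional_sample():
        """Proportional oracle: item a is returned with probability w(a)/W."""
        return random.choices(weights, weights=weights, k=1)[0]

    print("one proportional draw:", proportional_sample())

    # Naive estimator with the uniform oracle: E[n * w(a)] = W, but its variance
    # blows up on skewed inputs, which is why Omega(n) uniform samples are needed.
    k = 100
    estimate = n * sum(uniform_sample() for _ in range(k)) / k
    print(f"true W = {W:.0f}, naive uniform estimate from {k} samples = {estimate:.0f}")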
Sum estimation in the proportional and hybrid settings has been considered before by Motwani, Panigrahy, and Xu [ICALP, 2007]. In their paper, they give both upper and lower bounds in terms of n. Their bounds are near-matching in terms of n, but not in terms of ε. In this paper, we improve both their upper and lower bounds. Our bounds match up to constant factors in both settings, in terms of both n and ε. No lower bounds with a dependency on ε were previously known. In the proportional setting, we improve their \(\tilde{O}(\sqrt{n}/\varepsilon^{7/2})\) algorithm to \(O(\sqrt{n}/\varepsilon)\). In the hybrid setting, we improve \(\tilde{O}(\sqrt[3]{n}/\varepsilon^{9/2})\) to \(O(\sqrt[3]{n}/\varepsilon^{4/3})\). Our algorithms are also significantly simpler and do not have large constant factors.
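For intuition about the proportional setting when n is known, here is a hedged, textbook-style illustration (this is not the paper's \(O(\sqrt{n}/\varepsilon)\) algorithm, and the function and data names are made up): under proportional sampling, E[1/w(a)] = ∑_a (w(a)/W)·(1/w(a)) = n/W, so, assuming all weights are strictly positive, n divided by the empirical mean of 1/w(a) is a consistent estimate of W.

    import random

    def estimate_sum_proportional(weights, k, seed=0):
        """Estimate W = sum(weights) from k proportional samples, given n = len(weights).

        Uses the identity E[1/w(a)] = n/W under proportional sampling; assumes
        every weight is strictly positive. Illustrative only, not the paper's algorithm.
        """
        rng = random.Random(seed)
        n = len(weights)
        samples = rng.choices(weights, weights=weights, k=k)   # proportional oracle
        mean_inverse = sum(1.0 / w for w in samples) / k       # estimates n/W
        return n / mean_inverse

    weights = [random.random() + 0.01 for _ in range(10_000)]
    print("true W   =", sum(weights))
    print("estimate =", estimate_sum_proportional(weights, k=2_000))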
We then investigate the previously unexplored scenario in which n is not known to the algorithm. In this case, we obtain an \(O(\sqrt{n}/\varepsilon + \log n / \varepsilon^2)\) algorithm for the proportional setting, and an \(O(\sqrt{n}/\varepsilon)\) algorithm for the hybrid setting. This means that in the proportional setting, we may remove the need for advice (knowledge of n) without greatly increasing the complexity of the problem, while there is a major difference in the hybrid setting. We prove that this difference in the hybrid setting is necessary by showing a matching lower bound.
Our algorithms have applications in the area of sublinear-time graph algorithms. Consider a large graph G = (V, E) and the task of (1 ± ε)-approximating |E|. We consider the (standard) settings where we can sample uniformly from E or from both E and V. This relates to sum estimation as follows: we set U = V and the weights equal to the degrees. Uniform sampling then corresponds to sampling vertices uniformly. Proportional sampling can be simulated by taking a random edge and picking one of its endpoints at random. If we can only sample uniformly from E, then our results immediately give an \(O(\sqrt{|V|}/\varepsilon)\) algorithm. When we may sample from both E and V, our results imply an algorithm with complexity \(O(\sqrt[3]{|V|}/\varepsilon^{4/3})\). Surprisingly, one of our subroutines provides a (1 ± ε)-approximation of |E| using \(\tilde{O}(d/\varepsilon^2)\) expected samples, where d is the average degree, under the mild assumption that at least a constant fraction of vertices are non-isolated. This subroutine works in the setting where we can sample uniformly from both V and E. We find this remarkable since it is O(1/ε²) for sparse graphs.
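The reduction mentioned above (simulating degree-proportional vertex sampling via a uniform random edge plus a random endpoint) is easy to check empirically. The sketch below uses a made-up toy edge list and verifies that each vertex is returned with frequency close to deg(v)/(2|E|).

    import random
    from collections import Counter

    # Toy graph, given only as a list of edges we can sample uniformly.
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4)]

    def degree_proportional_vertex(edges, rng):
        """Return a vertex with probability deg(v) / (2|E|) using one uniform edge sample."""
        u, v = rng.choice(edges)      # uniform random edge from E
        return rng.choice((u, v))     # random endpoint

    rng = random.Random(42)
    trials = 100_000
    hits = Counter(degree_proportional_vertex(edges, rng) for _ in range(trials))

    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1

    for vertex in sorted(degree):
        print(f"vertex {vertex}: empirical {hits[vertex] / trials:.3f}, "
              f"expected {degree[vertex] / (2 * len(edges)):.3f}")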
Journal introduction:
ACM Transactions on Algorithms welcomes submissions of original research of the highest quality dealing with algorithms that are inherently discrete and finite, and having mathematical content in a natural way, either in the objective or in the analysis. Most welcome are new algorithms and data structures, new and improved analyses, and complexity results. Specific areas of computation covered by the journal include
combinatorial searches and objects;
counting;
discrete optimization and approximation;
randomization and quantum computation;
parallel and distributed computation;
algorithms for graphs, geometry, arithmetic, number theory, strings;
on-line analysis;
cryptography;
coding;
data compression;
learning algorithms;
methods of algorithmic analysis;
discrete algorithms for application areas such as biology, economics, game theory, communication, computer systems and architecture, hardware design, scientific computing.