{"title":"For unknown-but-bounded errors, interval estimates are often better than averaging","authors":"G. Walster, V. Kreinovich","doi":"10.1145/230922.230926","DOIUrl":null,"url":null,"abstract":"For many measuring devices, the only information that we have about them is their biggest possible error ε > 0. In other words, we know that the error Δ<i>x</i> = <i>x</i> - <i>x</i> (i.e., the difference between the measured value <i>x</i> and the actual values <i>x</i>) is random, that this error can sometimes become as big as ε or - ε, but we do not have any information about the probabilities of different values of error.Methods of statistics enable us to generate a better estimate for <i>x</i> by making several measurements <i>x<sub>1</sub>, ..., x<sub>n</sub>.</i> For example, if the average error is 0 (<i>E</i>(Δ<i>x</i>) = 0), then after <i>n</i> measurements, we can take an average <i>x</i> = (<i>x</i><sub>1</sub> + ... + <i>x</i><sub>n</sub>)/<i>n</i>, and get an estimate whose standard deviation (and the corresponding confidence intervals) are √<i>n</i> times smaller.Another estimate comes from interval analysis: for every measurement <i>x</i><sub>i</sub>, we know that the actual value <i>x</i> belongs to an interval [<i>x</i><sub>i</sub>-ε, <i>x</i><sub>i</sub>+ε]. So, <i>x</i> belongs to the intersection of all these intervals. In one sense, this estimate is better than the one based on traditional engineering statistics (i.e., averaging): interval estimation is <i>guaranteed.</i> In this paper, we show that for many cases, this intersection is also better in the sense that it gives a more <i>accurate</i> estimate for <i>x</i> than averaging: namely, under certain reasonable conditions, the <i>error of this interval estimate decreases faster (as 1/n) than the error of the average (that only decreases as</i> 1/ √n).A similar result is proved for a multi-dimensional case, when we measure several auxiliary quantities, and use the measurement results to estimate the value of the desired quantity <i>y</i>.","PeriodicalId":177516,"journal":{"name":"ACM Signum Newsletter","volume":"127 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1996-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Signum Newsletter","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/230922.230926","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 16
Abstract
For many measuring devices, the only information that we have about them is their biggest possible error ε > 0. In other words, we know that the error Δx = x̃ − x (i.e., the difference between the measured value x̃ and the actual value x) is random, and that this error can sometimes become as big as ε or −ε, but we do not have any information about the probabilities of different values of the error.

Methods of statistics enable us to generate a better estimate for x by making several measurements x̃1, ..., x̃n. For example, if the average error is 0 (E(Δx) = 0), then after n measurements, we can take the average x̄ = (x̃1 + ... + x̃n)/n and get an estimate whose standard deviation (and the corresponding confidence interval) is √n times smaller.

Another estimate comes from interval analysis: for every measurement x̃i, we know that the actual value x belongs to the interval [x̃i − ε, x̃i + ε]. So x belongs to the intersection of all these intervals, namely [max(x̃1, ..., x̃n) − ε, min(x̃1, ..., x̃n) + ε]. In one sense, this estimate is better than the one based on traditional engineering statistics (i.e., averaging): the interval estimate is guaranteed. In this paper, we show that in many cases this intersection is also better in the sense that it gives a more accurate estimate for x than averaging: namely, under certain reasonable conditions, the error of this interval estimate decreases faster (as 1/n) than the error of the average (which only decreases as 1/√n).

A similar result is proved for the multi-dimensional case, when we measure several auxiliary quantities and use the measurement results to estimate the value of the desired quantity y.
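The rate comparison can be illustrated with a small simulation (not part of the paper; a minimal sketch that assumes, purely for illustration, measurement errors uniformly distributed on [−ε, ε], with hypothetical values ε = 1 and true value x = 5). It compares the error of the sample average with the error of the midpoint of the interval intersection [max x̃i − ε, min x̃i + ε] as the number of measurements n grows.

import numpy as np

rng = np.random.default_rng(0)

eps = 1.0      # assumed error bound: |measurement error| <= eps (illustrative value)
x_true = 5.0   # hypothetical actual value of the measured quantity
trials = 2000  # Monte Carlo repetitions per sample size

for n in (10, 100, 1000):
    avg_errors = []
    interval_errors = []
    for _ in range(trials):
        # n measurements with errors assumed uniform on [-eps, eps]
        xs = x_true + rng.uniform(-eps, eps, size=n)

        # averaging estimate: error typically shrinks like 1/sqrt(n)
        avg_errors.append(abs(xs.mean() - x_true))

        # interval estimate: intersection of the intervals [x_i - eps, x_i + eps]
        lo, hi = xs.max() - eps, xs.min() + eps
        interval_errors.append(abs((lo + hi) / 2 - x_true))

    print(f"n={n:5d}  mean |averaging error|={np.mean(avg_errors):.4f}  "
          f"mean |interval error|={np.mean(interval_errors):.4f}")

Under these assumptions, each tenfold increase in n should shrink the averaging error by roughly √10 and the interval-midpoint error by roughly a factor of 10, consistent with the 1/√n versus 1/n rates discussed in the abstract.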