{"title":"Linear approximation by interpretive testing","authors":"Barry Barlow","doi":"10.1145/503643.503676","DOIUrl":null,"url":null,"abstract":"This paper deals with a model designed to fit a curve to data subject to error. In approximating functions, the criterion of goodness of fit is to some degree arbitrary as there are several criterion which may be used. By letting f(~) denote the true functional value at xL, y(~) denote the approximating functional value at x6, and d~ denote (f(x~)-y(~)) in all cases, it is possible to list a few of these criterion as follows: a)Criterion i suggests making~.od ~ a minimum, where n is i less than the number of data points given. This is attractive because of its s~mplicity but is of little use in that it leads to ambiguous results. b)Criterion 2 suggests making~[d~ a minimum. This has some usage but can~ -allow one erroneous value to overly influence the evaluation of the summation value. c)The Mini-Max or Cnebychev criterion suggests that a boundary (d) be placed on the error (~) and one should strive to keep the error within the upper and lower limits of the boundaries. The approach used by this model is known as the Least-Squares criterion. The concept of linear approximation in the Least-Squares approach states that the best approximation in this sense is one where the A~'s are determined such that the sum of the squared difference of the true and approximating functions is made a minimum, where A~ are the coefficients of the approximating function It can also be stated~[f(x~-y(~)]~a minimum. As the title o~'this~=°'--paper implies, this model only deals with approximations of linear curves by Least Squares, ie. functions of the form: f(x)~A,@~(x) where n is the degree of the polynomial, AK is the coeffieient of the term K, and ~(x) is the argument of A~. For example, in the function y(x)=Ao+A,x+ A~x z , the following holds true: ¢o(X):],~,(x)=x, $ ¢~(x):x ~. 
The A~'s will be approximated by deriving a set of simultaneous equations using the following formula and then solving the.matrix: The equations given by this method are termed the Least-Squares equations. The ~} or aggregate notation is used for both discrete and continuous models. Depending upon the approximating function, one would substitute the i for a model having discrete data, and ~ for a model having continuous data. Deriving the LeastSquares normal equations for f(x)=Ao+Aox gives the following:","PeriodicalId":166583,"journal":{"name":"Proceedings of the 16th annual Southeast regional conference","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1978-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 16th annual Southeast regional conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/503643.503676","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
This paper deals with a model designed to fit a curve to data subject to error. In approximating functions, the criterion of goodness of fit is to some degree arbitrary, as there are several criteria which may be used. Letting f(x_i) denote the true functional value at x_i, y(x_i) denote the approximating functional value at x_i, and d_i denote f(x_i) - y(x_i) in all cases, a few of these criteria may be listed as follows. (a) Criterion 1 suggests making \sum_{i=0}^{n} d_i a minimum, where n is 1 less than the number of data points given. This is attractive because of its simplicity, but is of little use in that errors of opposite sign cancel and it leads to ambiguous results. (b) Criterion 2 suggests making \sum_{i=0}^{n} |d_i| a minimum. This has some usage, but can allow one erroneous value to overly influence the value of the summation. (c) The Mini-Max, or Chebyshev, criterion suggests that a bound d be placed on the error d_i, and that one should strive to keep the error within the upper and lower limits of that bound. The approach used by this model is known as the Least-Squares criterion. The concept of linear approximation in the Least-Squares approach states that the best approximation in this sense is the one whose coefficients A_k are determined such that the sum of the squared differences of the true and approximating functions, \sum_{i=0}^{n} [f(x_i) - y(x_i)]^2, is made a minimum. As the title of this paper implies, this model deals only with Least-Squares approximation of linear curves, i.e. functions of the form f(x) \approx \sum_{k=0}^{n} A_k \phi_k(x), where n is the degree of the polynomial, A_k is the coefficient of term k, and \phi_k(x) is the basis function multiplying A_k. For example, in the function y(x) = A_0 + A_1 x + A_2 x^2, the following holds true: \phi_0(x) = 1, \phi_1(x) = x, and \phi_2(x) = x^2. The A_k's will be approximated by deriving a set of simultaneous equations and then solving the resulting matrix system; the equations given by this method are termed the Least-Squares equations.
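The difference among these criteria can be seen numerically. The sketch below (the data points and the candidate line y(x) = 1 + 2x are illustrative examples of my own, not taken from the paper) evaluates each criterion for one fixed approximating function:

```python
# Illustrative comparison of the goodness-of-fit criteria described above.
# The data set and the candidate line are hypothetical examples.
xs = [0.0, 1.0, 2.0, 3.0]
f  = [1.1, 2.9, 5.2, 6.8]           # "true" values, subject to error

def y(x):                            # candidate approximating function
    return 1.0 + 2.0 * x

d = [fi - y(xi) for xi, fi in zip(xs, f)]   # residuals d_i = f(x_i) - y(x_i)

criterion_1 = sum(d)                          # signed sum: opposite errors cancel
criterion_2 = sum(abs(di) for di in d)        # sum of absolute errors
minimax     = max(abs(di) for di in d)        # Chebyshev: largest single error
least_sq    = sum(di * di for di in d)        # sum of squared errors

print(criterion_1, criterion_2, minimax, least_sq)
```

Note how criterion 1 is essentially zero here even though the fit is imperfect, which is exactly the ambiguity the abstract mentions.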
The \sum, or aggregate, notation is used for both discrete and continuous models: depending upon the approximating function, one would substitute \sum_i for a model having discrete data, and \int for a model having continuous data. Deriving the Least-Squares normal equations for f(x) = A_0 + A_1 x gives the following:

\sum_i f(x_i) = A_0 N + A_1 \sum_i x_i
\sum_i x_i f(x_i) = A_0 \sum_i x_i + A_1 \sum_i x_i^2

where N is the number of data points.
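As a sketch of the discrete procedure, the straight-line normal equations can be assembled from the data sums and solved as a 2x2 system. The data points below are a hypothetical example chosen to lie exactly on f(x) = 1 + 2x:

```python
# Sketch: build and solve the Least-Squares normal equations for
# f(x) = A0 + A1*x on discrete data (hypothetical example points).
xs = [0.0, 1.0, 2.0, 3.0]
fs = [1.0, 3.0, 5.0, 7.0]            # lies exactly on f(x) = 1 + 2x

N      = len(xs)
sum_x  = sum(xs)
sum_x2 = sum(x * x for x in xs)
sum_f  = sum(fs)
sum_xf = sum(x * f for x, f in zip(xs, fs))

# Normal equations:
#   sum f   = A0 * N     + A1 * sum x
#   sum x*f = A0 * sum x + A1 * sum x^2
# Solve the 2x2 system by Cramer's rule.
det = N * sum_x2 - sum_x * sum_x
A0  = (sum_f * sum_x2 - sum_x * sum_xf) / det
A1  = (N * sum_xf - sum_x * sum_f) / det

print(A0, A1)                        # -> 1.0 2.0
```

For larger systems of basis functions the same sums form an (n+1) by (n+1) matrix, which would be handed to a general linear solver rather than Cramer's rule.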