{"title":"Approximation properties relative to continuous scale space for hybrid discretizations of Gaussian derivative operators","authors":"Tony Lindeberg","doi":"arxiv-2405.05095","DOIUrl":null,"url":null,"abstract":"This paper presents an analysis of properties of two hybrid discretization\nmethods for Gaussian derivatives, based on convolutions with either the\nnormalized sampled Gaussian kernel or the integrated Gaussian kernel followed\nby central differences. The motivation for studying these discretization\nmethods is that in situations when multiple spatial derivatives of different\norder are needed at the same scale level, they can be computed significantly\nmore efficiently compared to more direct derivative approximations based on\nexplicit convolutions with either sampled Gaussian kernels or integrated\nGaussian kernels. While these computational benefits do also hold for the genuinely discrete\napproach for computing discrete analogues of Gaussian derivatives, based on\nconvolution with the discrete analogue of the Gaussian kernel followed by\ncentral differences, the underlying mathematical primitives for the discrete\nanalogue of the Gaussian kernel, in terms of modified Bessel functions of\ninteger order, may not be available in certain frameworks for image processing,\nsuch as when performing deep learning based on scale-parameterized filters in\nterms of Gaussian derivatives, with learning of the scale levels. In this paper, we present a characterization of the properties of these\nhybrid discretization methods, in terms of quantitative performance measures\nconcerning the amount of spatial smoothing that they imply, as well as the\nrelative consistency of scale estimates obtained from scale-invariant feature\ndetectors with automatic scale selection, with an emphasis on the behaviour for\nvery small values of the scale parameter, which may differ significantly from\ncorresponding results obtained from the fully continuous scale-space theory, as\nwell as between different types of discretization methods.","PeriodicalId":501061,"journal":{"name":"arXiv - CS - Numerical Analysis","volume":"6 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Numerical Analysis","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2405.05095","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
This paper presents an analysis of the properties of two hybrid discretization methods for Gaussian derivatives, based on convolution with either the normalized sampled Gaussian kernel or the integrated Gaussian kernel, followed by central differences. The motivation for studying these discretization methods is that, in situations where multiple spatial derivatives of different orders are needed at the same scale level, they can be computed significantly more efficiently than more direct derivative approximations based on explicit convolutions with either sampled Gaussian derivative kernels or integrated Gaussian derivative kernels.
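To make the two kernel constructions concrete, here is a minimal 1-D sketch of the hybrid scheme using NumPy/SciPy (illustrative only; the function names and parameter choices, such as the 4-sigma truncation radius, are assumptions and not taken from the paper): smooth once with either the normalized sampled Gaussian kernel or the integrated Gaussian kernel, and then reuse the smoothed signal for central-difference stencils of each required derivative order.

```python
# Minimal sketch (illustrative, not code from the paper): hybrid approximation of
# Gaussian derivatives in 1-D, smoothing once and then applying small
# central-difference stencils for each derivative order.
import numpy as np
from scipy.ndimage import convolve1d
from scipy.special import erf

def sampled_gaussian_kernel(sigma, radius=None):
    """Sampled Gaussian, normalized to unit sum (the 'normalized sampled' variant)."""
    if radius is None:
        radius = int(np.ceil(4.0 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

def integrated_gaussian_kernel(sigma, radius=None):
    """Gaussian integrated over each pixel support [n - 1/2, n + 1/2], via erf."""
    if radius is None:
        radius = int(np.ceil(4.0 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    s = sigma * np.sqrt(2.0)
    return 0.5 * (erf((x + 0.5) / s) - erf((x - 0.5) / s))

def hybrid_gaussian_derivatives(signal, sigma, max_order=2,
                                kernel=sampled_gaussian_kernel):
    """Smooth once, then reuse the smoothed signal for derivatives of several orders."""
    smoothed = convolve1d(np.asarray(signal, dtype=float), kernel(sigma), mode='reflect')
    # Central-difference stencils for orders 0, 1 and 2 (unit grid spacing).
    stencils = {0: np.array([1.0]),
                1: np.array([0.5, 0.0, -0.5]),
                2: np.array([1.0, -2.0, 1.0])}
    return {order: convolve1d(smoothed, stencils[order], mode='reflect')
            for order in range(max_order + 1)}
```

The efficiency argument in the abstract is visible here: the expensive large-support convolution is performed once per scale level, while each additional derivative order only costs one small stencil.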
While these computational benefits also hold for the genuinely discrete approach to computing discrete analogues of Gaussian derivatives, based on convolution with the discrete analogue of the Gaussian kernel followed by central differences, the underlying mathematical primitives for that kernel, expressed in terms of modified Bessel functions of integer order, may not be available in certain frameworks for image processing, for example when performing deep learning based on scale-parameterized filters defined in terms of Gaussian derivatives, with learning of the scale levels.
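For comparison, here is a sketch of the genuinely discrete alternative referred to above, under the assumption that SciPy's Bessel-function primitives are available: the discrete analogue of the Gaussian kernel has coefficients T(n, t) = exp(-t) I_n(t), with I_n the modified Bessel function of integer order and t = sigma^2 the scale parameter, and scipy.special.ive evaluates exactly this exponentially scaled form.

```python
# Minimal sketch (illustrative): the genuinely discrete approach, based on the
# discrete analogue of the Gaussian kernel, T(n, t) = exp(-t) * I_n(t), where I_n
# is the modified Bessel function of integer order and t = sigma**2 is the scale.
import numpy as np
from scipy.ndimage import convolve1d
from scipy.special import ive

def discrete_gaussian_kernel(t, radius=None):
    """Discrete analogue of the Gaussian at scale t; ive(n, t) = exp(-t) * I_n(t)."""
    if radius is None:
        radius = int(np.ceil(4.0 * np.sqrt(t))) + 1
    n = np.arange(-radius, radius + 1)
    return ive(np.abs(n), t)

def discrete_gaussian_derivatives(signal, t, max_order=2):
    """Discrete Gaussian smoothing followed by central differences."""
    smoothed = convolve1d(np.asarray(signal, dtype=float),
                          discrete_gaussian_kernel(t), mode='reflect')
    stencils = {0: np.array([1.0]),
                1: np.array([0.5, 0.0, -0.5]),
                2: np.array([1.0, -2.0, 1.0])}
    return {order: convolve1d(smoothed, stencils[order], mode='reflect')
            for order in range(max_order + 1)}
```

In a deep-learning setting where the scale levels are themselves learned, the point made in the abstract is that such Bessel-function primitives may simply not be available in the framework, which is what makes the hybrid schemes attractive despite being approximations.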
In this paper, we present a characterization of the properties of these hybrid discretization methods, in terms of quantitative performance measures concerning the amount of spatial smoothing that they imply, as well as the relative consistency of scale estimates obtained from scale-invariant feature detectors with automatic scale selection. The emphasis is on the behaviour for very small values of the scale parameter, where the results may differ significantly from those obtained from the fully continuous scale-space theory, as well as between the different types of discretization methods.
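As a rough illustration of the kind of scale-selection consistency measure mentioned above, the following hypothetical 1-D check (it reuses the hybrid_gaussian_derivatives sketch above, assumes gamma = 1 scale normalization, and is not the paper's experimental protocol) estimates the scale of a Gaussian blob of variance t0 from the extremum over scale of the scale-normalized second derivative t * L_xx; for this operator, continuous scale-space theory predicts the extremum at t = 2 * t0.

```python
# Hypothetical consistency check (reuses hybrid_gaussian_derivatives from the sketch
# above; not the paper's experimental setup). A 1-D Gaussian blob of variance t0 is
# analysed with the scale-normalized second derivative t * L_xx, whose extremum over
# scale is predicted at t = 2 * t0 by continuous scale-space theory.
import numpy as np

def select_scale(signal, scales, center):
    """Return the scale t maximizing |t * L_xx(center; t)| over the candidate scales."""
    responses = [abs(t * hybrid_gaussian_derivatives(signal, np.sqrt(t))[2][center])
                 for t in scales]
    return scales[int(np.argmax(responses))]

t0 = 4.0                                        # variance of the test blob
x = np.arange(-128, 129, dtype=float)
blob = np.exp(-x**2 / (2.0 * t0)) / np.sqrt(2.0 * np.pi * t0)

scales = np.geomspace(0.25, 64.0, 200)          # candidate scale levels t = sigma**2
t_hat = select_scale(blob, scales, center=128)  # index 128 corresponds to x = 0
print(t_hat, "vs continuous prediction", 2.0 * t0)
```

For a moderate t0 like this, the discrete estimate should land close to the continuous prediction; according to the abstract, it is for very small scale values that the different discretizations start to deviate from the continuous theory, and from each other.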