Analysis of (sub-)Riemannian PDE-G-CNNs
Gijs Bellaard, Daan Bon, Gautam Pai, B. Smets, R. Duits
Journal of Mathematical Imaging and Vision, pp. 1-25, published 2022-10-03
DOI: 10.48550/arXiv.2210.00935
Citations: 1
Abstract
Group equivariant convolutional neural networks (G-CNNs) have been successfully applied in geometric deep learning. Typically, G-CNNs have the advantage over CNNs that they do not waste network capacity on learning symmetries that should have been hard-coded in the network. The recently introduced framework of PDE-based G-CNNs (PDE-G-CNNs) generalizes G-CNNs. PDE-G-CNNs have the core advantages that they simultaneously (1) reduce network complexity, (2) increase classification performance, and (3) provide geometric interpretability. Their implementations primarily consist of linear and morphological convolutions with kernels. In this paper, we show that the previously suggested approximative morphological kernels do not always approximate the exact kernels accurately. More specifically, depending on the spatial anisotropy of the Riemannian metric, we argue that one must resort to sub-Riemannian approximations. We solve this problem by providing a new approximative kernel that works regardless of the anisotropy. We provide new theorems with better error estimates of the approximative kernels, and prove that they all carry the same reflectional symmetries as the exact ones. We test the effectiveness of multiple approximative kernels within the PDE-G-CNN framework on two datasets, and observe an improvement with the new approximative kernels. We report that the PDE-G-CNNs again allow for a considerable reduction of network complexity while having comparable or better performance than G-CNNs and CNNs on the two datasets. Moreover, PDE-G-CNNs have the advantage of better geometric interpretability over G-CNNs, as the morphological kernels are related to association fields from neurogeometry.
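To make the operations named in the abstract concrete, the following sketch writes out the two building blocks that a PDE-G-CNN layer combines, in the form commonly used in the PDE-G-CNN literature. The group G, kernel k, and feature map f are generic placeholders and the notation is illustrative rather than the paper's own; the specific (sub-)Riemannian kernels and their approximations analysed in the paper are not reproduced here.

```latex
% Illustrative sketch (not copied from the paper): linear and morphological
% convolutions on a Lie group G with left-invariant Haar measure dh,
% trainable kernel k : G -> R and feature map f : G -> R.
\[
  (k \ast f)(g) \;=\; \int_{G} k\!\left(h^{-1}g\right) f(h)\,\mathrm{d}h
  \qquad \text{(linear group convolution)}
\]
\[
  (k \ominus f)(g) \;=\; \inf_{h \in G} \bigl[\, f(h) + k\!\left(h^{-1}g\right) \bigr],
  \qquad
  (k \oplus f)(g) \;=\; \sup_{h \in G} \bigl[\, f(h) - k\!\left(h^{-1}g\right) \bigr]
  \qquad \text{(morphological convolutions)}
\]
```

Roughly speaking, the morphological kernels are functions of the (sub-)Riemannian distance from the group identity, which is where the exact-versus-approximative kernel question raised in the abstract enters.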
Journal Introduction
The Journal of Mathematical Imaging and Vision is a technical journal publishing important new developments in mathematical imaging. The journal publishes research articles, invited papers, and expository articles.
Current developments in new image processing hardware, the advent of multisensor data fusion, and rapid advances in vision research have led to explosive growth in the interdisciplinary field of imaging science. This growth has resulted in the development of highly sophisticated mathematical models and theories. The journal emphasizes the role of mathematics as a rigorous basis for imaging science, providing a sound alternative to existing journals in the area. Contributions are judged on the basis of mathematical content. Articles may be physically speculative but need to be mathematically sound. Emphasis is placed on innovative or established mathematical techniques applied to vision and imaging problems in a novel way, as well as new developments and problems in mathematics arising from these applications.
The scope of the journal includes:
computational models of vision
imaging algebra and mathematical morphology
mathematical methods in reconstruction, compactification, and coding
filter theory
probabilistic, statistical, geometric, topological, and fractal techniques and models in imaging science
inverse optics
wave theory.
Specific application areas of interest include, but are not limited to:
all aspects of image formation and representation
medical, biological, industrial, geophysical, astronomical and military imaging
image analysis and image understanding
parallel and distributed computing
computer vision architecture design.