Analysis of (sub-)Riemannian PDE-G-CNNs
Gijs Bellaard, Daan L. J. Bon, Gautam Pai, Bart M. N. Smets, Remco Duits
Journal: Journal of Mathematical Imaging and Vision
DOI: 10.1007/s10851-023-01147-w
Published: 2023-04-16 (Journal Article)
Citations: 1
Abstract
Group equivariant convolutional neural networks (G-CNNs) have been successfully applied in geometric deep learning. Typically, G-CNNs have the advantage over CNNs that they do not waste network capacity on training symmetries that should have been hard-coded in the network. The recently introduced framework of PDE-based G-CNNs (PDE-G-CNNs) generalizes G-CNNs. PDE-G-CNNs have the core advantages that they simultaneously (1) reduce network complexity, (2) increase classification performance, and (3) provide geometric interpretability. Their implementations primarily consist of linear and morphological convolutions with kernels. In this paper, we show that the previously suggested approximative morphological kernels do not always approximate the exact kernels accurately. More specifically, depending on the spatial anisotropy of the Riemannian metric, we argue that one must resort to sub-Riemannian approximations. We solve this problem by providing a new approximative kernel that works regardless of the anisotropy. We provide new theorems with better error estimates of the approximative kernels, and prove that they all carry the same reflectional symmetries as the exact ones. We test the effectiveness of multiple approximative kernels within the PDE-G-CNN framework on two datasets, and observe an improvement with the new approximative kernels. We report that the PDE-G-CNNs again allow for a considerable reduction of network complexity while having comparable or better performance than G-CNNs and CNNs on the two datasets. Moreover, PDE-G-CNNs have the advantage of better geometric interpretability over G-CNNs, as the morphological kernels are related to association fields from neurogeometry.
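The abstract notes that PDE-G-CNN implementations consist primarily of linear and morphological convolutions with kernels. The 1D Python sketch below is purely illustrative and is not the authors' implementation: it contrasts an ordinary (+, *) convolution with a morphological (max, +) dilation, (f (+) k)(x) = sup_y [ f(y) - k(x - y) ], using a toy quadratic structuring function in place of the distance-based kernels analysed in the paper.

import numpy as np

def linear_convolution(f, k):
    # Ordinary (+, *) convolution: (f * k)(x) = sum_y f(y) k(x - y).
    return np.convolve(f, k, mode="same")

def morphological_dilation(f, k):
    # Morphological (max, +) convolution: (f (+) k)(x) = max_y [ f(y) - k(x - y) ].
    # k plays the role of a structuring function; in the PDE-G-CNN framework such
    # kernels are related to (sub-)Riemannian distances, but here k is a toy 1D kernel.
    n, m = len(f), len(k)
    half = m // 2
    out = np.full(n, -np.inf)
    for x in range(n):
        for j in range(m):
            y = x - (j - half)  # chosen so that j - half == x - y, i.e. k[j] corresponds to k(x - y)
            if 0 <= y < n:
                out[x] = max(out[x], f[y] - k[j])
    return out

if __name__ == "__main__":
    f = np.zeros(21)
    f[10] = 1.0                        # a single spike
    offsets = np.linspace(-2.0, 2.0, 9)
    k = offsets**2                     # toy quadratic structuring function (hypothetical stand-in for a distance-based kernel)
    print(np.round(morphological_dilation(f, k), 3))

Whereas the linear convolution spreads the spike by weighted summation, the dilation spreads it in a max-plus sense; this is the kind of morphological convolution the abstract refers to.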
About the journal:
The Journal of Mathematical Imaging and Vision is a technical journal publishing important new developments in mathematical imaging. The journal publishes research articles, invited papers, and expository articles.
Current developments in new image processing hardware, the advent of multisensor data fusion, and rapid advances in vision research have led to an explosive growth in the interdisciplinary field of imaging science. This growth has resulted in the development of highly sophisticated mathematical models and theories. The journal emphasizes the role of mathematics as a rigorous basis for imaging science. This provides a sound alternative to existing journals in this area. Contributions are judged on the basis of mathematical content. Articles may be physically speculative but need to be mathematically sound. Emphasis is placed on innovative or established mathematical techniques applied to vision and imaging problems in a novel way, as well as new developments and problems in mathematics arising from these applications.
The scope of the journal includes:
computational models of vision; imaging algebra and mathematical morphology
mathematical methods in reconstruction, compactification, and coding
filter theory
probabilistic, statistical, geometric, topological, and fractal techniques and models in imaging science
inverse optics
wave theory.
Specific application areas of interest include, but are not limited to:
all aspects of image formation and representation
medical, biological, industrial, geophysical, astronomical and military imaging
image analysis and image understanding
parallel and distributed computing
computer vision architecture design.