Alignable kernel network

Impact Factor 14.7 · CAS Tier 1 (Computer Science) · JCR Q1, Computer Science, Artificial Intelligence
Ying Xie, Jixiang Wang, Zhiqiang Xu, Junnan Shen, Lijie Wen, Rongbin Xu, Hang Xu, Yun Yang
{"title":"Alignable kernel network","authors":"Ying Xie ,&nbsp;Jixiang Wang ,&nbsp;Zhiqiang Xu ,&nbsp;Junnan Shen ,&nbsp;Lijie Wen ,&nbsp;Rongbin Xu ,&nbsp;Hang Xu ,&nbsp;Yun Yang","doi":"10.1016/j.inffus.2024.102758","DOIUrl":null,"url":null,"abstract":"<div><div>To enhance the adaptability and performance of Convolutional Neural Networks (CNN), we present an adaptable mechanism called Alignable Kernel (AliK) unit, which dynamically adjusts the receptive field (RF) dimensions of a model in response to varying stimuli. The branches of AliK unit are integrated through a novel align transformation softmax attention, incorporating prior knowledge through rank ordering constraints. The attention weightings across the branches establish the effective RF scales, leveraged by neurons in the fusion layer. This mechanism is inspired by neuroscientific observations indicating that the RF dimensions of neurons in the visual cortex vary with the stimulus, a feature often overlooked in CNN architectures. By aggregating successive AliK ensembles, we develop a deep network architecture named the Alignable Kernel Network (AliKNet). AliKNet with interdisciplinary design improves the network’s performance and interpretability by taking direct inspiration from the structure and function of human neural systems, especially the visual cortex. Empirical evaluations in the domains of image classification and semantic segmentation have demonstrated that AliKNet excels over numerous state-of-the-art architectures, achieving this without increasing model complexity. Furthermore, we demonstrate that AliKNet can identify target objects across various scales, confirming their ability to dynamically adapt their RF sizes in response to the input data.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"115 ","pages":"Article 102758"},"PeriodicalIF":14.7000,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253524005360","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

To enhance the adaptability and performance of Convolutional Neural Networks (CNNs), we present an adaptive mechanism called the Alignable Kernel (AliK) unit, which dynamically adjusts the receptive field (RF) dimensions of a model in response to varying stimuli. The branches of the AliK unit are integrated through a novel align-transformation softmax attention, which incorporates prior knowledge through rank-ordering constraints. The attention weights across the branches establish the effective RF scales used by neurons in the fusion layer. This mechanism is inspired by neuroscientific observations indicating that the RF dimensions of neurons in the visual cortex vary with the stimulus, a feature often overlooked in CNN architectures. By aggregating successive AliK ensembles, we develop a deep network architecture named the Alignable Kernel Network (AliKNet). AliKNet's interdisciplinary design improves the network's performance and interpretability by taking direct inspiration from the structure and function of human neural systems, especially the visual cortex. Empirical evaluations in image classification and semantic segmentation demonstrate that AliKNet excels over numerous state-of-the-art architectures without increasing model complexity. Furthermore, we demonstrate that AliKNet can identify target objects across various scales, confirming its ability to dynamically adapt its RF sizes in response to the input data.
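The abstract only describes the mechanism at a high level: several convolutional branches with different receptive fields are fused through a channel-wise softmax attention whose weights are subject to a rank-ordering prior. The sketch below is a minimal PyTorch illustration of that general, selective-kernel-style idea; the class name, kernel sizes, reduction ratio, and the sorting trick used as a stand-in for the rank-ordering constraint are illustrative assumptions, not the paper's actual AliK design (the "align transformation" itself is not specified in the abstract).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AliKUnitSketch(nn.Module):
    """Minimal sketch of a multi-branch unit fused by softmax attention.

    NOT the paper's implementation: branch count, kernel sizes, reduction
    ratio, and the ordering constraint below are illustrative assumptions
    based only on the abstract's high-level description.
    """

    def __init__(self, channels: int, kernel_sizes=(3, 5, 7), reduction: int = 4):
        super().__init__()
        # One convolutional branch per receptive-field scale.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, k, padding=k // 2, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for k in kernel_sizes
        ])
        hidden = max(channels // reduction, 8)
        # Shared bottleneck producing one attention-logit vector per branch.
        self.fc = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(inplace=True))
        self.logit_heads = nn.ModuleList([nn.Linear(hidden, channels) for _ in kernel_sizes])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.stack([b(x) for b in self.branches], dim=1)      # (B, K, C, H, W)
        # Global context from the summed branches guides the attention.
        gap = feats.sum(dim=1).mean(dim=(2, 3))                        # (B, C)
        z = self.fc(gap)
        logits = torch.stack([head(z) for head in self.logit_heads], dim=1)  # (B, K, C)
        # Crude stand-in for a rank-ordering prior: sort logits so that, per
        # channel, smaller kernels never receive larger weights than bigger ones.
        logits, _ = torch.sort(logits, dim=1)
        attn = F.softmax(logits, dim=1).unsqueeze(-1).unsqueeze(-1)    # (B, K, C, 1, 1)
        # Fuse branches with attention weights to form the effective RF.
        return (attn * feats).sum(dim=1)                               # (B, C, H, W)
```

In this sketch the softmax over the branch dimension plays the role of selecting an effective receptive-field scale per channel, and stacking several such units would mirror the paper's idea of aggregating successive AliK ensembles into a deeper network.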
Source journal
Information Fusion (Engineering & Technology - Computer Science: Theory & Methods)
CiteScore: 33.20
Self-citation rate: 4.30%
Articles per year: 161
Average review time: 7.9 months
Journal description: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.