{"title":"Dmixnet: a dendritic multi-layered perceptron architecture for image recognition","authors":"Weixiang Xu, Yaotong Song, Shubham Gupta, Dongbao Jia, Jun Tang, Zhenyu Lei, Shangce Gao","doi":"10.1007/s10462-025-11123-y","DOIUrl":null,"url":null,"abstract":"<div><p>In the field of image recognition, the all-MLP architecture (MLP-Mixer) shows superior performance. However, the current MLP-Mixer is solely based on fully connected layers. The nonlinear capability of fully connected layers is relatively weak, and their simple stacked structure has limitations under complex conditions. Therefore, inspired by the diversity of neurons in the human brain, we propose an innovative DMixNet, a dendritic multi-layered perceptron architecture. Rooted in the theory of dendritic neurons from neuroscience, we propose a dendritic neural unit (DNU) that enhances DMixNet with stronger biological interpretability and more robust nonlinear processing capabilities. The flexibility of dendritic structures allows the DNU to adjust its architecture to achieve different functionalities. Based on the DNU, we propose a novel channel fusion network <span>\\(\\text {DNU}_\\text {E}\\)</span> and a dendritic classifier <span>\\(\\text {DNU}_\\text {C}\\)</span>. The <span>\\(\\text {DNU}_\\text {E}\\)</span> substitutes the traditional two fully connected layers as the channel mixer, constructing a dendritic mixer layer to enhance the fusion capability of channel information within the entire framework. Meanwhile, the <span>\\(\\text {DNU}_\\text {C}\\)</span> replaces the traditional linear classifier, effectively improving the model’s classification performance. 
Experimental results demonstrate that DMixNet achieves improvements of 2.13%, 4.79%, 4.71%, 23.14% on the CIFAR-10, CIFAR-100, Tiny-ImageNet and COIL-100 benchmark image recognition datasets, respectively, as well as a 14.78% enhancement on the medical image classification dataset PathMNIST, outperforming other state-of-the-art architectures. Code is available at https://github.com/KarilynXu/DMixNet.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"58 5","pages":""},"PeriodicalIF":13.9000,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11123-y.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence Review","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10462-025-11123-y","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
In the field of image recognition, the all-MLP architecture (MLP-Mixer) shows superior performance. However, the current MLP-Mixer is based solely on fully connected layers, whose nonlinear capability is relatively weak and whose simple stacked structure has limitations under complex conditions. Therefore, inspired by the diversity of neurons in the human brain, we propose DMixNet, an innovative dendritic multi-layered perceptron architecture. Rooted in the theory of dendritic neurons from neuroscience, we propose a dendritic neural unit (DNU) that gives DMixNet stronger biological interpretability and more robust nonlinear processing capabilities. The flexibility of dendritic structures allows the DNU to adjust its architecture to achieve different functionalities. Based on the DNU, we propose a novel channel fusion network \(\text {DNU}_\text {E}\) and a dendritic classifier \(\text {DNU}_\text {C}\). The \(\text {DNU}_\text {E}\) replaces the two fully connected layers of the traditional channel mixer, constructing a dendritic mixer layer that enhances the fusion of channel information within the entire framework. Meanwhile, the \(\text {DNU}_\text {C}\) replaces the traditional linear classifier, effectively improving the model's classification performance. Experimental results demonstrate that DMixNet achieves improvements of 2.13%, 4.79%, 4.71%, and 23.14% on the CIFAR-10, CIFAR-100, Tiny-ImageNet, and COIL-100 benchmark image recognition datasets, respectively, as well as a 14.78% enhancement on the medical image classification dataset PathMNIST, outperforming other state-of-the-art architectures. Code is available at https://github.com/KarilynXu/DMixNet.
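To make the dendritic-neuron idea concrete, the following is a minimal NumPy sketch of a classic dendritic neuron unit of the kind the abstract builds on: a per-connection synaptic sigmoid, multiplicative interaction along each dendritic branch, summation at the membrane, and a final soma nonlinearity. The class name, parameterization (`w`, `theta`, steepness `k`), and layer details are illustrative assumptions, not the paper's exact DNU, \(\text {DNU}_\text {E}\), or \(\text {DNU}_\text {C}\).

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


class DendriticNeuralUnit:
    """Illustrative dendritic neuron (hypothetical parameterization).

    Pipeline: synaptic layer -> dendritic layer -> membrane layer -> soma.
    """

    def __init__(self, in_features, n_dendrites, k=5.0, seed=0):
        rng = np.random.default_rng(seed)
        # One weight and one threshold per (dendrite, input) connection.
        self.w = rng.normal(size=(n_dendrites, in_features))
        self.theta = rng.normal(size=(n_dendrites, in_features))
        self.k = k  # steepness of the sigmoid nonlinearities

    def forward(self, x):
        # Synaptic layer: per-connection sigmoid, shape (n_dendrites, in_features).
        syn = sigmoid(self.k * (self.w * x - self.theta))
        # Dendritic layer: multiplicative interaction within each branch.
        dend = np.prod(syn, axis=1)
        # Membrane layer: sum the branch activations into one potential.
        v = dend.sum()
        # Soma: final firing nonlinearity, output in (0, 1).
        return sigmoid(self.k * (v - 0.5))


dnu = DendriticNeuralUnit(in_features=4, n_dendrites=3)
y = dnu.forward(np.ones(4))
```

The multiplicative dendritic stage is what gives such a unit nonlinear capacity beyond a single fully connected layer; in DMixNet this kind of unit stands in for the two stacked fully connected layers of the MLP-Mixer channel mixer and for the final linear classifier.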
Journal Introduction
Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.