A Compound-Eye-Inspired Multi-Scale Neural Architecture with Integrated Attention Mechanisms

IF 6.4
Ferrante Neri, Mengchen Yang, Yu Xue
{"title":"具有综合注意机制的复合眼启发的多尺度神经结构。","authors":"Ferrante Neri, Mengchen Yang, Yu Xue","doi":"10.1142/S0129065725500650","DOIUrl":null,"url":null,"abstract":"<p><p>In the context of neural system structure modeling and complex visual tasks, the effective integration of multi-scale features and contextual information is critical for enhancing model performance. This paper proposes a biologically inspired hybrid neural network architecture - CompEyeNet - which combines the global modeling capacity of transformers with the efficiency of lightweight convolutional structures. The backbone network, multi-attention transformer backbone network (MATBN), integrates multiple attention mechanisms to collaboratively model local details and long-range dependencies. The neck network, compound eye neck network (CENN), introduces high-resolution feature layers and efficient attention fusion modules to significantly enhance multi-scale information representation and reconstruction capability. CompEyeNet is evaluated on three authoritative medical image segmentation datasets: MICCAI-CVC-ClinicDB, ISIC2018, and MICCAI-tooth-segmentation, demonstrating its superior performance. Experimental results show that compared to models such as Deeplab, Unet, and the YOLO series, CompEyeNet achieves better performance with fewer parameters. Specifically, compared to the baseline model YOLOv11, CompEyeNet reduces the number of parameters by an average of 38.31%. On key performance metrics, the average Dice coefficient improves by 0.87%, the Jaccard index by 1.53%, Precision by 0.58%, and Recall by 1.11%. These findings verify the advantages of the proposed architecture in terms of parameter efficiency and accuracy, highlighting the broad application potential of bio-inspired attention-fusion hybrid neural networks in neural system modeling and image analysis.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550065"},"PeriodicalIF":6.4000,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Compound-Eye-Inspired Multi-Scale Neural Architecture with Integrated Attention Mechanisms.\",\"authors\":\"Ferrante Neri, Mengchen Yang, Yu Xue\",\"doi\":\"10.1142/S0129065725500650\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>In the context of neural system structure modeling and complex visual tasks, the effective integration of multi-scale features and contextual information is critical for enhancing model performance. This paper proposes a biologically inspired hybrid neural network architecture - CompEyeNet - which combines the global modeling capacity of transformers with the efficiency of lightweight convolutional structures. The backbone network, multi-attention transformer backbone network (MATBN), integrates multiple attention mechanisms to collaboratively model local details and long-range dependencies. The neck network, compound eye neck network (CENN), introduces high-resolution feature layers and efficient attention fusion modules to significantly enhance multi-scale information representation and reconstruction capability. CompEyeNet is evaluated on three authoritative medical image segmentation datasets: MICCAI-CVC-ClinicDB, ISIC2018, and MICCAI-tooth-segmentation, demonstrating its superior performance. 
Experimental results show that compared to models such as Deeplab, Unet, and the YOLO series, CompEyeNet achieves better performance with fewer parameters. Specifically, compared to the baseline model YOLOv11, CompEyeNet reduces the number of parameters by an average of 38.31%. On key performance metrics, the average Dice coefficient improves by 0.87%, the Jaccard index by 1.53%, Precision by 0.58%, and Recall by 1.11%. These findings verify the advantages of the proposed architecture in terms of parameter efficiency and accuracy, highlighting the broad application potential of bio-inspired attention-fusion hybrid neural networks in neural system modeling and image analysis.</p>\",\"PeriodicalId\":94052,\"journal\":{\"name\":\"International journal of neural systems\",\"volume\":\" \",\"pages\":\"2550065\"},\"PeriodicalIF\":6.4000,\"publicationDate\":\"2025-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International journal of neural systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1142/S0129065725500650\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International journal of neural systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1142/S0129065725500650","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract


In the context of neural system structure modeling and complex visual tasks, the effective integration of multi-scale features and contextual information is critical for enhancing model performance. This paper proposes a biologically inspired hybrid neural network architecture, CompEyeNet, which combines the global modeling capacity of transformers with the efficiency of lightweight convolutional structures. The backbone, the Multi-Attention Transformer Backbone Network (MATBN), integrates multiple attention mechanisms to collaboratively model local details and long-range dependencies. The neck, the Compound Eye Neck Network (CENN), introduces high-resolution feature layers and efficient attention-fusion modules to significantly enhance multi-scale information representation and reconstruction. CompEyeNet is evaluated on three benchmark medical image segmentation datasets, MICCAI-CVC-ClinicDB, ISIC2018, and MICCAI-tooth-segmentation, where it demonstrates superior performance. Experimental results show that, compared to models such as DeepLab, U-Net, and the YOLO series, CompEyeNet achieves better performance with fewer parameters. Specifically, compared to the baseline model YOLOv11, CompEyeNet reduces the number of parameters by an average of 38.31%, while the average Dice coefficient improves by 0.87%, the Jaccard index by 1.53%, precision by 0.58%, and recall by 1.11%. These findings confirm the advantages of the proposed architecture in terms of parameter efficiency and accuracy, highlighting the broad application potential of bio-inspired attention-fusion hybrid neural networks in neural system modeling and image analysis.
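The abstract describes, but does not detail, how multiple attention mechanisms and high-resolution attention fusion might be combined. The sketch below is a generic PyTorch illustration of that design pattern only, a channel-attention plus self-attention block followed by an attention-weighted fusion of a high-resolution feature map; all module names, layer choices, and dimensions are assumptions for illustration and are not CompEyeNet's published implementation.

```python
# Illustrative sketch only: a generic multi-attention block plus an
# attention-based fusion of a high-resolution feature map. Module names,
# dimensions, and attention types are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiAttentionBlock(nn.Module):
    """Combines channel attention with multi-head self-attention on a feature map."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Channel attention (squeeze-and-excitation style) for local detail weighting.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Multi-head self-attention over flattened spatial positions
        # to capture long-range dependencies.
        self.norm = nn.LayerNorm(channels)
        self.self_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x = x * self.channel_gate(x)              # reweight channels
        tokens = self.norm(x.flatten(2).transpose(1, 2))  # (B, H*W, C)
        attn_out, _ = self.self_attn(tokens, tokens, tokens)
        return x + attn_out.transpose(1, 2).reshape(b, c, h, w)


class AttentionFusionNeck(nn.Module):
    """Fuses a high-resolution feature map into an upsampled low-resolution one."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, low_res: torch.Tensor, high_res: torch.Tensor) -> torch.Tensor:
        up = F.interpolate(low_res, size=high_res.shape[-2:],
                           mode="bilinear", align_corners=False)
        cat = torch.cat([up, high_res], dim=1)
        return self.proj(cat) * self.gate(cat)    # attention-weighted fusion


if __name__ == "__main__":
    block = MultiAttentionBlock(channels=64)
    neck = AttentionFusionNeck(channels=64)
    low = torch.randn(1, 64, 16, 16)
    high = torch.randn(1, 64, 32, 32)
    print(neck(block(low), high).shape)  # torch.Size([1, 64, 32, 32])
```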
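The reported results are given in standard segmentation metrics. For reference, here is a minimal sketch of how the Dice coefficient, Jaccard index, precision, and recall are typically computed for binary masks; the smoothing constant and the toy example are assumptions, not values from the paper.

```python
# Standard binary-segmentation metrics (Dice, Jaccard/IoU, precision, recall).
import numpy as np


def segmentation_metrics(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> dict:
    """Compute Dice, Jaccard (IoU), precision, and recall for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()   # true positives
    fp = np.logical_and(pred, ~target).sum()  # false positives
    fn = np.logical_and(~pred, target).sum()  # false negatives
    return {
        "dice": (2 * tp + eps) / (2 * tp + fp + fn + eps),
        "jaccard": (tp + eps) / (tp + fp + fn + eps),
        "precision": (tp + eps) / (tp + fp + eps),
        "recall": (tp + eps) / (tp + fn + eps),
    }


if __name__ == "__main__":
    pred = np.array([[1, 1, 0], [0, 1, 0]])
    target = np.array([[1, 0, 0], [0, 1, 1]])
    # tp=2, fp=1, fn=1 -> dice~0.667, jaccard=0.5, precision~0.667, recall~0.667
    print(segmentation_metrics(pred, target))
```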
