Global Convolutional Self-Action Module for Fast Brain Tumor Image Segmentation

IF 5.3 · CAS Region 3 (Computer Science) · JCR Q1 · COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Wei-An Yang;Devin Lautan;Tong-Wei Weng;Wan-Chun Lin;Yamin Kao;Chien-Chang Chen
{"title":"用于快速脑肿瘤图像分割的全局卷积自作用模块","authors":"Wei-An Yang;Devin Lautan;Tong-Wei Weng;Wan-Chun Lin;Yamin Kao;Chien-Chang Chen","doi":"10.1109/TETCI.2024.3375075","DOIUrl":null,"url":null,"abstract":"Integrating frameworks of Fermi normalization and fast data density functional transform (fDDFT), we established a new global convolutional self-action module to reduce the computational complexity in modern deep convolutional neural networks (CNNs). The Fermi normalization conflates mathematical properties of sigmoid function and z-score normalization with high efficiency. Global convolutional kernels embedded in the fDDFT simultaneously extract global features from whole input images through long-range dependency. The fDDFT endows the transformed images with a smoothness property, so the images can be substantially down-sampled before the global convolutions and then resized back to the original dimensions without losing accuracy. To inspect the feasibility of the synergy of Fermi normalization and fDDFT and the combinational effect with modern CNNs, we applied the dimension-fusion U-Net as a backbone and utilized the datasets from BraTS 2020. Experimental results exhibited that the model embedded with the module saved 57%–60% computational costs and raised 50%–53% inferencing speeds compared to the naïve D-UNet model. Furthermore, the module enhanced the accuracy of brain tumor image segmentation. The dice scores of the work are 0.9221 for whole tumors, 0.8760 for tumor cores, 0.8659 for enhancing tumors, and 0.8362 for peritumoral edema. These results exhibit comparable performance to the winner of BraTS 2020. Our results also validate that image inputs processed by the module provide aligned and unified bases, establishing a specific space with optimized feature map combinations to reduce computational complexity efficiently. The module significantly boosted the performance of training and inferencing without losing model accuracy.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"8 6","pages":"3848-3859"},"PeriodicalIF":5.3000,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Global Convolutional Self-Action Module for Fast Brain Tumor Image Segmentation\",\"authors\":\"Wei-An Yang;Devin Lautan;Tong-Wei Weng;Wan-Chun Lin;Yamin Kao;Chien-Chang Chen\",\"doi\":\"10.1109/TETCI.2024.3375075\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Integrating frameworks of Fermi normalization and fast data density functional transform (fDDFT), we established a new global convolutional self-action module to reduce the computational complexity in modern deep convolutional neural networks (CNNs). The Fermi normalization conflates mathematical properties of sigmoid function and z-score normalization with high efficiency. Global convolutional kernels embedded in the fDDFT simultaneously extract global features from whole input images through long-range dependency. The fDDFT endows the transformed images with a smoothness property, so the images can be substantially down-sampled before the global convolutions and then resized back to the original dimensions without losing accuracy. To inspect the feasibility of the synergy of Fermi normalization and fDDFT and the combinational effect with modern CNNs, we applied the dimension-fusion U-Net as a backbone and utilized the datasets from BraTS 2020. 
Experimental results exhibited that the model embedded with the module saved 57%–60% computational costs and raised 50%–53% inferencing speeds compared to the naïve D-UNet model. Furthermore, the module enhanced the accuracy of brain tumor image segmentation. The dice scores of the work are 0.9221 for whole tumors, 0.8760 for tumor cores, 0.8659 for enhancing tumors, and 0.8362 for peritumoral edema. These results exhibit comparable performance to the winner of BraTS 2020. Our results also validate that image inputs processed by the module provide aligned and unified bases, establishing a specific space with optimized feature map combinations to reduce computational complexity efficiently. The module significantly boosted the performance of training and inferencing without losing model accuracy.\",\"PeriodicalId\":13135,\"journal\":{\"name\":\"IEEE Transactions on Emerging Topics in Computational Intelligence\",\"volume\":\"8 6\",\"pages\":\"3848-3859\"},\"PeriodicalIF\":5.3000,\"publicationDate\":\"2024-03-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Emerging Topics in Computational Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10475355/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Emerging Topics in Computational Intelligence","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10475355/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Integrating frameworks of Fermi normalization and fast data density functional transform (fDDFT), we established a new global convolutional self-action module to reduce the computational complexity in modern deep convolutional neural networks (CNNs). The Fermi normalization conflates mathematical properties of sigmoid function and z-score normalization with high efficiency. Global convolutional kernels embedded in the fDDFT simultaneously extract global features from whole input images through long-range dependency. The fDDFT endows the transformed images with a smoothness property, so the images can be substantially down-sampled before the global convolutions and then resized back to the original dimensions without losing accuracy. To inspect the feasibility of the synergy of Fermi normalization and fDDFT and the combinational effect with modern CNNs, we applied the dimension-fusion U-Net as a backbone and utilized the datasets from BraTS 2020. Experimental results exhibited that the model embedded with the module saved 57%–60% computational costs and raised 50%–53% inferencing speeds compared to the naïve D-UNet model. Furthermore, the module enhanced the accuracy of brain tumor image segmentation. The dice scores of the work are 0.9221 for whole tumors, 0.8760 for tumor cores, 0.8659 for enhancing tumors, and 0.8362 for peritumoral edema. These results exhibit comparable performance to the winner of BraTS 2020. Our results also validate that image inputs processed by the module provide aligned and unified bases, establishing a specific space with optimized feature map combinations to reduce computational complexity efficiently. The module significantly boosted the performance of training and inferencing without losing model accuracy.
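The abstract describes two mechanisms: a Fermi normalization that combines sigmoid-style squashing with z-score normalization, and a pipeline that down-samples the smoothed (fDDFT-transformed) input, applies a global convolution, and resizes the result back to the original dimensions. The sketch below illustrates these ideas in PyTorch; the function names, the exact form of the normalization, and the kernel/scale choices are illustrative assumptions rather than the paper's implementation, and the Dice helper only shows how the reported scores are conventionally computed.

```python
# Minimal, hypothetical sketch of the mechanisms named in the abstract.
# Assumptions: "fermi_normalize" is taken to be a z-score followed by a
# sigmoid (Fermi-function-shaped) squashing; the paper's definition may differ.
import torch
import torch.nn.functional as F


def fermi_normalize(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Assumed Fermi-style normalization: z-score, then sigmoid squashing into (0, 1)."""
    z = (x - x.mean()) / (x.std() + eps)   # z-score normalization
    return torch.sigmoid(z)                # sigmoid / Fermi-function squashing


def downsample_global_conv_upsample(x: torch.Tensor,
                                     kernel: torch.Tensor,
                                     scale: float = 0.25) -> torch.Tensor:
    """Down-sample -> global convolution -> resize back, as described in the abstract.

    x:      (N, C, H, W) smoothed feature map (e.g. after an fDDFT-like transform)
    kernel: (C, C, kH, kW) large "global" convolution kernel
    """
    h, w = x.shape[-2:]
    x_small = F.interpolate(x, scale_factor=scale, mode="bilinear",
                            align_corners=False)           # coarse grid
    y_small = F.conv2d(x_small, kernel, padding="same")     # long-range mixing on the coarse grid
    return F.interpolate(y_small, size=(h, w), mode="bilinear",
                         align_corners=False)               # back to original resolution


def dice_score(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> float:
    """Dice coefficient of two binary masks (the metric used for the reported scores)."""
    pred, target = pred.bool(), target.bool()
    inter = (pred & target).sum().item()
    return (2.0 * inter + eps) / (pred.sum().item() + target.sum().item() + eps)


if __name__ == "__main__":
    x = torch.randn(1, 4, 160, 160)        # e.g. 4 MRI modalities
    k = torch.randn(4, 4, 21, 21) * 1e-2   # a large, illustrative "global" kernel
    y = downsample_global_conv_upsample(fermi_normalize(x), k)
    print(y.shape)                         # torch.Size([1, 4, 160, 160])
```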
Source Journal
IEEE Transactions on Emerging Topics in Computational Intelligence
CiteScore: 10.30
Self-citation rate: 7.50%
Articles published: 147
Journal description: The IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) publishes original articles on emerging aspects of computational intelligence, including theory, applications, and surveys. TETCI is an electronics only publication. TETCI publishes six issues per year. Authors are encouraged to submit manuscripts in any emerging topic in computational intelligence, especially nature-inspired computing topics not covered by other IEEE Computational Intelligence Society journals. A few such illustrative examples are glial cell networks, computational neuroscience, Brain Computer Interface, ambient intelligence, non-fuzzy computing with words, artificial life, cultural learning, artificial endocrine networks, social reasoning, artificial hormone networks, computational intelligence for the IoT and Smart-X technologies.