TransConver: transformer and convolution parallel network for developing automatic brain tumor segmentation in MRI images.

Junjie Liang, Cihui Yang, Mengjie Zeng, Xixi Wang
{"title":"TransConver: transformer and convolution parallel network for developing automatic brain tumor segmentation in MRI images.","authors":"Junjie Liang, Cihui Yang, Mengjie Zeng, Xixi Wang","doi":"10.21037/qims-21-919","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Medical image segmentation plays a vital role in computer-aided diagnosis (CAD) systems. Both convolutional neural networks (CNNs) with strong local information extraction capacities and transformers with excellent global representation capacities have achieved remarkable performance in medical image segmentation. However, because of the semantic differences between local and global features, how to combine convolution and transformers effectively is an important challenge in medical image segmentation.</p><p><strong>Methods: </strong>In this paper, we proposed TransConver, a U-shaped segmentation network based on convolution and transformer for automatic and accurate brain tumor segmentation in MRI images. Unlike the recently proposed transformer and convolution based models, we proposed a parallel module named transformer-convolution inception (TC-inception), which extracts local and global information via convolution blocks and transformer blocks, respectively, and integrates them by a cross-attention fusion with global and local feature (CAFGL) mechanism. Meanwhile, the improved skip connection structure named skip connection with cross-attention fusion (SCCAF) mechanism can alleviate the semantic differences between encoder features and decoder features for better feature fusion. In addition, we designed 2D-TransConver and 3D-TransConver for 2D and 3D brain tumor segmentation tasks, respectively, and verified the performance and advantage of our model through brain tumor datasets.</p><p><strong>Results: </strong>We trained our model on 335 cases from the training dataset of MICCAI BraTS2019 and evaluated the model's performance based on 66 cases from MICCAI BraTS2018 and 125 cases from MICCAI BraTS2019. Our TransConver achieved the best average Dice score of 83.72% and 86.32% on BraTS2019 and BraTS2018, respectively.</p><p><strong>Conclusions: </strong>We proposed a transformer and convolution parallel network named TransConver for brain tumor segmentation. The TC-Inception module effectively extracts global information while retaining local details. The experimental results demonstrated that good segmentation requires the model to extract local fine-grained details and global semantic information simultaneously, and our TransConver effectively improves the accuracy of brain tumor segmentation.</p>","PeriodicalId":7426,"journal":{"name":"American Entomologist","volume":"47 1","pages":"2397-2415"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8923874/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"American Entomologist","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.21037/qims-21-919","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Agricultural and Biological Sciences","Score":null,"Total":0}
引用次数: 0

Abstract

Background: Medical image segmentation plays a vital role in computer-aided diagnosis (CAD) systems. Convolutional neural networks (CNNs), with strong local feature extraction capabilities, and transformers, with excellent global representation capabilities, have both achieved remarkable performance in medical image segmentation. However, because of the semantic differences between local and global features, effectively combining convolutions and transformers remains an important challenge in medical image segmentation.

Methods: In this paper, we proposed TransConver, a U-shaped segmentation network based on convolution and transformers for automatic and accurate brain tumor segmentation in MRI images. Unlike recently proposed transformer- and convolution-based models, we proposed a parallel module named transformer-convolution inception (TC-Inception), which extracts local and global information via convolution blocks and transformer blocks, respectively, and integrates them through a cross-attention fusion with global and local features (CAFGL) mechanism. Meanwhile, an improved skip-connection structure, skip connection with cross-attention fusion (SCCAF), alleviates the semantic differences between encoder and decoder features for better feature fusion. In addition, we designed 2D-TransConver and 3D-TransConver for 2D and 3D brain tumor segmentation tasks, respectively, and verified the performance and advantages of our model on brain tumor datasets.
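To make the parallel design concrete, the following is a minimal PyTorch sketch of a convolution branch and a transformer branch running side by side, fused by cross-attention in which local features attend to global ones. It is only an illustrative approximation of the TC-Inception/CAFGL idea described above, not the authors' implementation; the module name, layer counts, and hyperparameters are assumptions.

```python
# Illustrative sketch of a parallel convolution + transformer block with
# cross-attention fusion. Names and hyperparameters are assumptions, not
# the TransConver authors' code.
import torch
import torch.nn as nn


class ParallelConvTransformerBlock(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Local branch: stacked 3x3 convolutions capture fine-grained detail.
        self.conv_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Global branch: one transformer encoder layer over flattened tokens.
        self.transformer_branch = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads,
            dim_feedforward=channels * 2, batch_first=True,
        )
        # Cross-attention fusion: local tokens query the global tokens.
        self.cross_attn = nn.MultiheadAttention(
            embed_dim=channels, num_heads=num_heads, batch_first=True
        )
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.conv_branch(x)                       # (B, C, H, W)
        tokens = x.flatten(2).transpose(1, 2)             # (B, H*W, C)
        global_tokens = self.transformer_branch(tokens)   # (B, H*W, C)
        local_tokens = local.flatten(2).transpose(1, 2)   # (B, H*W, C)
        fused, _ = self.cross_attn(local_tokens, global_tokens, global_tokens)
        fused = self.norm(fused + local_tokens)           # residual + norm
        return fused.transpose(1, 2).reshape(b, c, h, w)  # back to (B, C, H, W)


if __name__ == "__main__":
    block = ParallelConvTransformerBlock(channels=64)
    out = block(torch.randn(2, 64, 32, 32))
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```

In a U-shaped network, a block of this kind would sit in the encoder stages; the same cross-attention pattern can also be reused between encoder and decoder features, which is the role SCCAF plays in the skip connections.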

Results: We trained our model on 335 cases from the MICCAI BraTS2019 training dataset and evaluated its performance on 66 cases from MICCAI BraTS2018 and 125 cases from MICCAI BraTS2019. TransConver achieved the best average Dice scores of 83.72% and 86.32% on BraTS2019 and BraTS2018, respectively.
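For reference, the Dice score reported above measures the overlap between a predicted segmentation mask and the ground truth. Below is a minimal example of how it is commonly computed for binary masks; the binarization threshold and smoothing term are illustrative choices, not values taken from the paper.

```python
# Common Dice-score computation for binary segmentation masks.
# Threshold (0.5) and epsilon are illustrative, not from the paper.
import torch


def dice_score(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Dice = 2|P ∩ T| / (|P| + |T|) for binary masks."""
    pred = (pred > 0.5).float()
    target = target.float()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)


if __name__ == "__main__":
    mask = torch.randint(0, 2, (1, 128, 128)).float()
    print(f"Dice: {dice_score(mask, mask).item():.4f}")  # perfect overlap -> 1.0
```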

Conclusions: We proposed a transformer and convolution parallel network named TransConver for brain tumor segmentation. The TC-Inception module effectively extracts global information while retaining local details. The experimental results demonstrated that accurate segmentation requires a model to extract fine-grained local details and global semantic information simultaneously, and that TransConver effectively improves the accuracy of brain tumor segmentation.
