A multi-scale method based on U-Net for brain tumor segmentation

Lei Wang, Mingtao Liu, Yunyu Wang, Xianbiao Bai, Mengjie Zhu, Fuchun Zhang
DOI: 10.1109/CCISP55629.2022.9974427
Published in: 2022 7th International Conference on Communication, Image and Signal Processing (CCISP), November 2022
Citations: 0

Abstract

Accurate segmentation of tumor regions is of great significance for assessing a patient's condition. Existing deep learning methods for medical image segmentation have limited ability to perceive 3D context, and the edge information of tumors is not well preserved. We therefore propose an effective improvement to the 3D U-Net model for segmentation. First, a multi-scale feature extraction module enlarges the range of receptive fields and improves the model's adaptability to features at different scales. Second, a position attention mechanism is added in the decoder after the first upsampling, so that more effective global and local details can be extracted. Trained and tested on the public BraTS 2020 dataset, the proposed network achieves average Dice scores of 88.96%, 86.48%, and 84.32% on the whole tumor, tumor core, and enhancing tumor regions, respectively. These results show that the improved model outperforms the baseline models on the evaluation metrics.
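The Dice scores reported above compare a predicted segmentation mask against the ground-truth mask voxel by voxel. As a minimal sketch (not the authors' evaluation code), the metric can be computed with NumPy as follows; the toy 3D masks and their shapes are illustrative assumptions:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    pred, target: boolean (or 0/1) arrays of the same shape,
    e.g. a predicted and a ground-truth tumor mask.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # 2|A∩B| / (|A| + |B|); eps guards against empty masks.
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 4x4x4 volumes standing in for segmentation masks.
pred = np.zeros((4, 4, 4), dtype=bool)
target = np.zeros((4, 4, 4), dtype=bool)
pred[1:3, 1:3, 1:3] = True    # 8 predicted voxels
target[1:3, 1:3, 2:4] = True  # 8 ground-truth voxels, 4 overlapping

print(round(dice_score(pred, target), 4))  # → 0.5
```

A Dice score of 1.0 means perfect overlap; the reported 88.96% for the whole tumor region corresponds to a score of 0.8896 averaged over test cases.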
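The multi-scale feature extraction idea, aggregating responses computed at several receptive-field sizes, can be illustrated with a toy NumPy sketch. This is a conceptual stand-in, not the paper's module: it uses fixed average pooling at different window sizes where the actual model would use learned 3D convolution branches:

```python
import numpy as np

def avg_pool3d(x, k):
    """Naive 3D average pooling with window k and stride 1,
    keeping the input size via edge padding. x has shape (D, H, W)."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros(x.shape, dtype=float)
    D, H, W = x.shape
    for d in range(D):
        for h in range(H):
            for w in range(W):
                out[d, h, w] = xp[d:d + k, h:h + k, w:w + k].mean()
    return out

def multi_scale_features(x, scales=(1, 3, 5)):
    """Stack copies of the input processed at several window sizes,
    mimicking parallel branches with different receptive fields."""
    branches = [avg_pool3d(x, k) if k > 1 else x.astype(float)
                for k in scales]
    return np.stack(branches, axis=0)  # (len(scales), D, H, W)

x = np.random.rand(8, 8, 8)
feats = multi_scale_features(x)
print(feats.shape)  # → (3, 8, 8, 8)
```

The larger windows respond to coarser structures, so concatenating the branches lets a downstream layer weigh fine edge detail against broader tumor context, which is the intuition behind the multi-scale module.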