Comparison of automatic prostate zones segmentation models in MRI images using U-net-like architectures

Pablo Cesar Quihui-Rubio, G. Ochoa-Ruiz, M. González-Mendoza, Gerardo Rodriguez-Hernandez, Christian Mata
{"title":"Comparison of automatic prostate zones segmentation models in MRI images using U-net-like architectures","authors":"Pablo Cesar Quihui-Rubio, G. Ochoa-Ruiz, M. González-Mendoza, Gerardo Rodriguez-Hernandez, Christian Mata","doi":"10.48550/arXiv.2207.09483","DOIUrl":null,"url":null,"abstract":". Prostate cancer is the second-most frequently diagnosed cancer and the sixth leading cause of cancer death in males worldwide. The main problem that specialists face during the diagnosis of prostate cancer is the localization of Regions of Interest (ROI) containing a tumor tissue. Currently, the segmentation of this ROI in most cases is carried out manually by expert doctors, but the procedure is plagued with low detection rates (of about 27-44%) or over-diagnosis in some patients. Therefore, several research works have tackled the challenge of automatically segmenting and extracting features of the ROI from magnetic resonance images, as this process can greatly facilitate many diagnostic and therapeutic applications. However, the lack of clear prostate boundaries, the heterogeneity inherent to the prostate tissue, and the variety of prostate shapes makes this process very difficult to automate.In this work, six deep learning models were trained and analyzed with a dataset of MRI images obtained from the Centre Hospitalaire de Dijon and Universitat Politecnica de Catalunya. We carried out a comparison of multiple deep learning models (i.e. U-Net, Attention U-Net, Dense-UNet, Attention Dense-UNet, R2U-Net, and Attention R2U-Net) using categorical cross-entropy loss function. The analysis was performed using three metrics commonly used for image segmentation: Dice score, Jaccard index, and mean squared error. 
The model that give us the best result segmenting all the zones was R2U-Net, which achieved 0.869, 0.782, and 0.00013 for Dice, Jaccard and mean squared error, respectively.","PeriodicalId":166595,"journal":{"name":"Mexican International Conference on Artificial Intelligence","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Mexican International Conference on Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2207.09483","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Prostate cancer is the second-most frequently diagnosed cancer and the sixth leading cause of cancer death in males worldwide. The main problem that specialists face during the diagnosis of prostate cancer is the localization of Regions of Interest (ROI) containing tumor tissue. Currently, the segmentation of this ROI is in most cases carried out manually by expert doctors, but the procedure suffers from low detection rates (of about 27-44%) or over-diagnosis in some patients. Therefore, several research works have tackled the challenge of automatically segmenting and extracting features of the ROI from magnetic resonance images, as this process can greatly facilitate many diagnostic and therapeutic applications. However, the lack of clear prostate boundaries, the heterogeneity inherent to the prostate tissue, and the variety of prostate shapes make this process very difficult to automate. In this work, six deep learning models were trained and analyzed with a dataset of MRI images obtained from the Centre Hospitalaire de Dijon and Universitat Politecnica de Catalunya. We carried out a comparison of multiple deep learning models (i.e., U-Net, Attention U-Net, Dense-UNet, Attention Dense-UNet, R2U-Net, and Attention R2U-Net) using the categorical cross-entropy loss function. The analysis was performed using three metrics commonly used for image segmentation: Dice score, Jaccard index, and mean squared error. The model that gave the best results when segmenting all the zones was R2U-Net, which achieved 0.869, 0.782, and 0.00013 for Dice score, Jaccard index, and mean squared error, respectively.
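The three evaluation metrics named above can be sketched for binary segmentation masks as follows. This is a minimal NumPy illustration of the standard definitions; the function names and the toy 4x4 masks are illustrative, not taken from the paper:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred, target, eps=1e-7):
    """Jaccard index (IoU): |A∩B| / |A∪B| for binary masks."""
    inter = np.sum(pred * target)
    union = pred.sum() + target.sum() - inter
    return (inter + eps) / (union + eps)

def mse(pred, target):
    """Mean squared error, averaged over all pixels."""
    return np.mean((pred - target) ** 2)

# Toy masks standing in for a predicted and a ground-truth prostate zone
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=float)
target = np.array([[0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]], dtype=float)

print(dice_score(pred, target))    # 2*3/(4+3) ≈ 0.857
print(jaccard_index(pred, target)) # 3/(4+3-3) = 0.75
print(mse(pred, target))           # 1/16 = 0.0625
```

Note how Dice and Jaccard reward overlap (higher is better) while MSE penalizes disagreement (lower is better), which matches the direction of the reported scores 0.869, 0.782, and 0.00013.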