Deep learning-based automatic prostate segmentation in 3D transrectal ultrasound images from multiple acquisition geometries and systems

N. Orlando, D. Gillies, I. Gyacskov, A. Fenster
{"title":"基于深度学习的多采集几何图形和系统的三维经直肠超声图像自动前列腺分割","authors":"N. Orlando, D. Gillies, I. Gyacskov, A. Fenster","doi":"10.1117/12.2549804","DOIUrl":null,"url":null,"abstract":"Transrectal ultrasound (TRUS) fusion-guided biopsy and brachytherapy (BT) offer promising diagnostic and therapeutic improvements to conventional practice for prostate cancer. One key component of these procedures is accurate segmentation of the prostate in three-dimensional (3D) TRUS images to define margins used for accurate targeting and guidance techniques. However, manual prostate segmentation is a time-consuming and difficult process that must be completed by the physician intraoperatively, often while the patient is under sedation (biopsy) or anesthetic (BT). Providing physicians with a quick and accurate prostate segmentation immediately after acquiring a 3D TRUS image could benefit multiple minimally invasive prostate interventional procedures and greatly reduce procedure time. Our solution to this limitation is the development of a convolutional neural network to segment the prostate in 3D TRUS images using multiple commercial ultrasound systems. Training of a modified U-Net was performed on 84 end-fire and 122 side-fire 3D TRUS images acquired during clinical biopsy and BT procedures. Our approach for 3D segmentation involved prediction on 2D radial slices, which were reconstructed into a 3D geometry. Manual contours provided the annotations needed for the training, validation, and testing datasets, with the testing dataset consisting of 20 unseen 3D side-fire images. Pixel map comparisons (Dice similarity coefficient (DSC), recall, and precision) and volume percent difference (VPD) were computed to assess error in the segmentation algorithm. Our algorithm performed with a 93.5% median DSC and 5.89% median VPD with a <0.7 s computation time, offering the possibility for reduced treatment time during prostate interventional procedures.","PeriodicalId":302939,"journal":{"name":"Medical Imaging: Image-Guided Procedures","volume":"14 5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Deep learning-based automatic prostate segmentation in 3D transrectal ultrasound images from multiple acquisition geometries and systems\",\"authors\":\"N. Orlando, D. Gillies, I. Gyacskov, A. Fenster\",\"doi\":\"10.1117/12.2549804\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Transrectal ultrasound (TRUS) fusion-guided biopsy and brachytherapy (BT) offer promising diagnostic and therapeutic improvements to conventional practice for prostate cancer. One key component of these procedures is accurate segmentation of the prostate in three-dimensional (3D) TRUS images to define margins used for accurate targeting and guidance techniques. However, manual prostate segmentation is a time-consuming and difficult process that must be completed by the physician intraoperatively, often while the patient is under sedation (biopsy) or anesthetic (BT). Providing physicians with a quick and accurate prostate segmentation immediately after acquiring a 3D TRUS image could benefit multiple minimally invasive prostate interventional procedures and greatly reduce procedure time. Our solution to this limitation is the development of a convolutional neural network to segment the prostate in 3D TRUS images using multiple commercial ultrasound systems. 
Training of a modified U-Net was performed on 84 end-fire and 122 side-fire 3D TRUS images acquired during clinical biopsy and BT procedures. Our approach for 3D segmentation involved prediction on 2D radial slices, which were reconstructed into a 3D geometry. Manual contours provided the annotations needed for the training, validation, and testing datasets, with the testing dataset consisting of 20 unseen 3D side-fire images. Pixel map comparisons (Dice similarity coefficient (DSC), recall, and precision) and volume percent difference (VPD) were computed to assess error in the segmentation algorithm. Our algorithm performed with a 93.5% median DSC and 5.89% median VPD with a <0.7 s computation time, offering the possibility for reduced treatment time during prostate interventional procedures.\",\"PeriodicalId\":302939,\"journal\":{\"name\":\"Medical Imaging: Image-Guided Procedures\",\"volume\":\"14 5 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-03-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Medical Imaging: Image-Guided Procedures\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1117/12.2549804\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical Imaging: Image-Guided Procedures","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2549804","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Medical Imaging: Image-Guided Procedures, 2020 · DOI: 10.1117/12.2549804 · Cited by: 3

Abstract

Transrectal ultrasound (TRUS) fusion-guided biopsy and brachytherapy (BT) offer promising diagnostic and therapeutic improvements to conventional practice for prostate cancer. One key component of these procedures is accurate segmentation of the prostate in three-dimensional (3D) TRUS images to define margins used for accurate targeting and guidance techniques. However, manual prostate segmentation is a time-consuming and difficult process that must be completed by the physician intraoperatively, often while the patient is under sedation (biopsy) or anesthetic (BT). Providing physicians with a quick and accurate prostate segmentation immediately after acquiring a 3D TRUS image could benefit multiple minimally invasive prostate interventional procedures and greatly reduce procedure time. Our solution to this limitation is the development of a convolutional neural network to segment the prostate in 3D TRUS images using multiple commercial ultrasound systems. Training of a modified U-Net was performed on 84 end-fire and 122 side-fire 3D TRUS images acquired during clinical biopsy and BT procedures. Our approach for 3D segmentation involved prediction on 2D radial slices, which were reconstructed into a 3D geometry. Manual contours provided the annotations needed for the training, validation, and testing datasets, with the testing dataset consisting of 20 unseen 3D side-fire images. Pixel map comparisons (Dice similarity coefficient (DSC), recall, and precision) and volume percent difference (VPD) were computed to assess error in the segmentation algorithm. Our algorithm performed with a 93.5% median DSC and 5.89% median VPD with a <0.7 s computation time, offering the possibility for reduced treatment time during prostate interventional procedures.
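Two aspects of the method invite a brief illustration. First, the radial-slice strategy: 2D predictions on slices taken at evenly spaced angles about the volume's central axis, reassembled into a 3D mask. The sketch below is a minimal reconstruction of that idea, assuming a rotate-and-scatter resampling scheme and a `model` callable standing in for the trained, modified U-Net; the helper name, the 30-slice default, and the averaging scheme are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): segment a 3D volume by
# running a 2D network on radial slices and reassembling the results.
import numpy as np
from scipy.ndimage import rotate

def predict_3d_from_radial_slices(volume, model, n_slices=30):
    """volume   : (Z, Y, X) 3D TRUS image as a NumPy array.
    model    : callable mapping a 2D slice to a 2D binary mask
               (stands in for the trained, modified U-Net).
    n_slices : number of radial slices over 180 degrees (assumed value).
    """
    angles = np.linspace(0.0, 180.0, n_slices, endpoint=False)
    votes = np.zeros(volume.shape, dtype=np.float32)  # accumulated predictions
    hits = np.zeros(volume.shape, dtype=np.float32)   # sampling density

    mid = volume.shape[2] // 2
    for angle in angles:
        # Rotate about the central (Z) axis so the radial plane of interest
        # becomes the axis-aligned mid-plane, then segment that plane in 2D.
        rotated = rotate(volume, angle, axes=(1, 2), reshape=False, order=1)
        mask_2d = model(rotated[:, :, mid])

        # Scatter the predicted plane back into 3D and rotate it to the
        # original frame, tracking how often each voxel was covered.
        plane = np.zeros(volume.shape, dtype=np.float32)
        plane[:, :, mid] = mask_2d
        cover = np.zeros(volume.shape, dtype=np.float32)
        cover[:, :, mid] = 1.0
        votes += rotate(plane, -angle, axes=(1, 2), reshape=False, order=1)
        hits += rotate(cover, -angle, axes=(1, 2), reshape=False, order=1)

    # Average overlapping slice predictions and threshold to a binary mask.
    with np.errstate(divide="ignore", invalid="ignore"):
        prob = np.where(hits > 0, votes / hits, 0.0)
    return prob > 0.5
```

Second, the reported error metrics have standard definitions. The snippet below computes them from a predicted mask and a manual reference mask, assuming the conventional formulas; in particular, VPD is taken here as the absolute volume difference relative to the manual volume, which may differ from the paper's exact convention.

```python
# Illustrative sketch of the reported evaluation metrics, computed from a
# predicted mask and a manually contoured reference mask.
import numpy as np

def segmentation_metrics(pred, ref):
    """Return (DSC, recall, precision, VPD) for two binary 3D masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()

    dsc = 2.0 * tp / (pred.sum() + ref.sum())  # Dice similarity coefficient
    recall = tp / ref.sum()                    # fraction of manual volume recovered
    precision = tp / pred.sum()                # fraction of prediction that is prostate
    # Assumed convention: absolute volume difference relative to manual volume.
    vpd = 100.0 * abs(int(pred.sum()) - int(ref.sum())) / ref.sum()
    return dsc, recall, precision, vpd
```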