Skull Segmentation from CBCT Images via Voxel-Based Rendering.

Qin Liu, Chunfeng Lian, Deqiang Xiao, Lei Ma, Han Deng, Xu Chen, Dinggang Shen, Pew-Thian Yap, James J Xia
{"title":"基于体素渲染的CBCT图像颅骨分割。","authors":"Qin Liu,&nbsp;Chunfeng Lian,&nbsp;Deqiang Xiao,&nbsp;Lei Ma,&nbsp;Han Deng,&nbsp;Xu Chen,&nbsp;Dinggang Shen,&nbsp;Pew-Thian Yap,&nbsp;James J Xia","doi":"10.1007/978-3-030-87589-3_63","DOIUrl":null,"url":null,"abstract":"<p><p>Skull segmentation from three-dimensional (3D) cone-beam computed tomography (CBCT) images is critical for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Convolutional neural network (CNN)-based methods are currently dominating volumetric image segmentation, but these methods suffer from the limited GPU memory and the large image size (<i>e.g</i>., 512 × 512 × 448). Typical ad-hoc strategies, such as down-sampling or patch cropping, will degrade segmentation accuracy due to insufficient capturing of local fine details or global contextual information. Other methods such as Global-Local Networks (GLNet) are focusing on the improvement of neural networks, aiming to combine the local details and the global contextual information in a GPU memory-efficient manner. However, all these methods are operating on regular grids, which are computationally inefficient for volumetric image segmentation. In this work, we propose a novel VoxelRend-based network (VR-U-Net) by combining a memory-efficient variant of 3D U-Net with a voxel-based rendering (VoxelRend) module that refines local details via voxel-based predictions on non-regular grids. Establishing on relatively coarse feature maps, the VoxelRend module achieves significant improvement of segmentation accuracy with a fraction of GPU memory consumption. We evaluate our proposed VR-U-Net in the skull segmentation task on a high-resolution CBCT dataset collected from local hospitals. Experimental results show that the proposed VR-U-Net yields high-quality segmentation results in a memory-efficient manner, highlighting the practical value of our method.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":" ","pages":"615-623"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8675180/pdf/nihms-1762343.pdf","citationCount":"2","resultStr":"{\"title\":\"Skull Segmentation from CBCT Images via Voxel-Based Rendering.\",\"authors\":\"Qin Liu,&nbsp;Chunfeng Lian,&nbsp;Deqiang Xiao,&nbsp;Lei Ma,&nbsp;Han Deng,&nbsp;Xu Chen,&nbsp;Dinggang Shen,&nbsp;Pew-Thian Yap,&nbsp;James J Xia\",\"doi\":\"10.1007/978-3-030-87589-3_63\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Skull segmentation from three-dimensional (3D) cone-beam computed tomography (CBCT) images is critical for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Convolutional neural network (CNN)-based methods are currently dominating volumetric image segmentation, but these methods suffer from the limited GPU memory and the large image size (<i>e.g</i>., 512 × 512 × 448). Typical ad-hoc strategies, such as down-sampling or patch cropping, will degrade segmentation accuracy due to insufficient capturing of local fine details or global contextual information. Other methods such as Global-Local Networks (GLNet) are focusing on the improvement of neural networks, aiming to combine the local details and the global contextual information in a GPU memory-efficient manner. 
However, all these methods are operating on regular grids, which are computationally inefficient for volumetric image segmentation. In this work, we propose a novel VoxelRend-based network (VR-U-Net) by combining a memory-efficient variant of 3D U-Net with a voxel-based rendering (VoxelRend) module that refines local details via voxel-based predictions on non-regular grids. Establishing on relatively coarse feature maps, the VoxelRend module achieves significant improvement of segmentation accuracy with a fraction of GPU memory consumption. We evaluate our proposed VR-U-Net in the skull segmentation task on a high-resolution CBCT dataset collected from local hospitals. Experimental results show that the proposed VR-U-Net yields high-quality segmentation results in a memory-efficient manner, highlighting the practical value of our method.</p>\",\"PeriodicalId\":74092,\"journal\":{\"name\":\"Machine learning in medical imaging. MLMI (Workshop)\",\"volume\":\" \",\"pages\":\"615-623\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8675180/pdf/nihms-1762343.pdf\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Machine learning in medical imaging. MLMI (Workshop)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/978-3-030-87589-3_63\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2021/9/21 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machine learning in medical imaging. MLMI (Workshop)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/978-3-030-87589-3_63","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2021/9/21 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract


Skull segmentation from three-dimensional (3D) cone-beam computed tomography (CBCT) images is critical for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Convolutional neural network (CNN)-based methods are currently dominating volumetric image segmentation, but these methods suffer from the limited GPU memory and the large image size (e.g., 512 × 512 × 448). Typical ad-hoc strategies, such as down-sampling or patch cropping, will degrade segmentation accuracy due to insufficient capturing of local fine details or global contextual information. Other methods such as Global-Local Networks (GLNet) are focusing on the improvement of neural networks, aiming to combine the local details and the global contextual information in a GPU memory-efficient manner. However, all these methods are operating on regular grids, which are computationally inefficient for volumetric image segmentation. In this work, we propose a novel VoxelRend-based network (VR-U-Net) by combining a memory-efficient variant of 3D U-Net with a voxel-based rendering (VoxelRend) module that refines local details via voxel-based predictions on non-regular grids. Establishing on relatively coarse feature maps, the VoxelRend module achieves significant improvement of segmentation accuracy with a fraction of GPU memory consumption. We evaluate our proposed VR-U-Net in the skull segmentation task on a high-resolution CBCT dataset collected from local hospitals. Experimental results show that the proposed VR-U-Net yields high-quality segmentation results in a memory-efficient manner, highlighting the practical value of our method.
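To make the refinement idea concrete, below is a minimal PyTorch sketch of a VoxelRend-style step as described in the abstract: upsample a coarse 3D segmentation, select the most uncertain voxels, and re-predict only those voxels from high-resolution features with a small shared per-voxel head. This is an illustrative reconstruction only, not the authors' implementation; the function names (`voxelrend_refine`, `uncertainty`), the uncertainty measure, the feature sources, and the number of re-predicted voxels are assumptions.

```python
# Minimal sketch of a VoxelRend-style refinement step (illustrative, not the authors' code).
import torch
import torch.nn.functional as F


def uncertainty(logits):
    # Margin-based uncertainty: second-best minus best class score per voxel,
    # so values closer to zero mean the prediction is less certain.
    top2 = logits.topk(2, dim=1).values          # (B, 2, D, H, W)
    return top2[:, 1] - top2[:, 0]               # (B, D, H, W)


def voxelrend_refine(coarse_logits, fine_feats, head, num_voxels=8192):
    """coarse_logits: (B, C, d, h, w) low-resolution prediction.
    fine_feats:    (B, F, D, H, W) high-resolution feature map.
    head:          per-voxel module mapping F + C channels to C logits.
    """
    B, C = coarse_logits.shape[:2]
    D, H, W = fine_feats.shape[2:]

    # 1. Upsample the coarse prediction to the target resolution.
    up = F.interpolate(coarse_logits, size=(D, H, W),
                       mode="trilinear", align_corners=False)

    # 2. Select the most uncertain voxels on the upsampled grid.
    unc = uncertainty(up).reshape(B, -1)                 # (B, D*H*W)
    idx = unc.topk(num_voxels, dim=1).indices            # (B, N)

    # 3. Gather fine features and coarse logits at the selected voxels.
    flat_feats = fine_feats.reshape(B, fine_feats.shape[1], -1)
    flat_logits = up.reshape(B, C, -1)
    gather = lambda x: x.gather(2, idx.unsqueeze(1).expand(-1, x.shape[1], -1))
    voxel_in = torch.cat([gather(flat_feats), gather(flat_logits)], dim=1)

    # 4. Re-predict only those voxels and scatter them back into the volume.
    voxel_logits = head(voxel_in)                        # (B, C, N)
    refined = flat_logits.scatter(2, idx.unsqueeze(1).expand(-1, C, -1), voxel_logits)
    return refined.reshape(B, C, D, H, W)


if __name__ == "__main__":
    B, C, Fc = 1, 2, 16                                  # e.g., binary skull/background
    head = torch.nn.Conv1d(Fc + C, C, kernel_size=1)     # shared per-voxel MLP
    coarse = torch.randn(B, C, 16, 16, 14)
    fine = torch.randn(B, Fc, 64, 64, 56)
    print(voxelrend_refine(coarse, fine, head).shape)    # torch.Size([1, 2, 64, 64, 56])
```

Because only a small, fixed budget of uncertain voxels is re-predicted rather than the full dense grid, the refinement adds little GPU memory on top of the coarse network, which is the memory argument made in the abstract.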
