Deep Learning-based Virtual Refocusing of Out-of-Plane Images for Ultrasound Computed Tomography

Zhaohui Liu, Xinan Zhu, Jiameng Wang, Shanshan Wang, Mingyue Ding, M. Yuchi
2022 IEEE International Ultrasonics Symposium (IUS) · Published 2022-10-10 · DOI: 10.1109/IUS54386.2022.9957222
Citations: 0

Abstract

Ultrasound computed tomography (USCT) has attracted increasing attention for its potential to quantify the acoustic properties of tissues. A three-dimensional (3D) image can be reconstructed by stacking a group of cross-sectional USCT images with sub-millimeter isotropic spatial resolution. However, the slice interval is set at the millimeter scale as a trade-off between imaging speed and axial resolution, resulting in a loss of axial information and a deviation in volumetric measurements. This paper demonstrates a deep-neural-network framework that virtually refocuses a two-dimensional USCT image onto user-defined 3D surfaces without relying on any additional axial scanning, increasing the number of slice images by 40 times. In the training stage, each input image is appended with a distance matrix (DM) that encodes the axial distance between the target plane and the input-image plane. Each image is refocused to a series of planes ranging from -5 mm to 5 mm with a step size of 0.25 mm (20 planes above and 20 planes below), forming around 16,000 image pairs. A residual U-Net learns to refocus an input image appended with a DM onto the user-defined plane. Once training is complete, a virtual 3D image stack can be generated from a single image appended with a series of DMs. The proposed framework can virtually refocus a single image onto multiple user-defined planes represented by different DMs, enabling USCT 3D imaging with sub-millimeter slice intervals.
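The DM-appended input described in the abstract can be sketched as follows. This is a minimal NumPy illustration of the data-preparation idea only, not the paper's implementation: the array layout, channel order, and function names are assumptions, and the paper's actual image size and network are not reproduced here.

```python
import numpy as np

def append_distance_matrix(image, distance_mm):
    """Append a constant distance-matrix (DM) channel that encodes the
    axial offset (in mm) of the target plane from the input-slice plane.
    Returns a (2, H, W) array: image channel + DM channel."""
    dm = np.full_like(image, distance_mm, dtype=np.float32)
    return np.stack([image.astype(np.float32), dm], axis=0)

def refocusing_inputs(image, d_max_mm=5.0, step_mm=0.25):
    """Build DM-appended inputs for the series of target planes from
    -d_max_mm to +d_max_mm in steps of step_mm, excluding the input
    plane itself (20 planes above + 20 below for the default values)."""
    offsets = [d for d in np.arange(-d_max_mm, d_max_mm + step_mm, step_mm)
               if not np.isclose(d, 0.0)]
    batch = np.stack([append_distance_matrix(image, d) for d in offsets])
    return offsets, batch

slice_2d = np.random.rand(128, 128)       # placeholder for a USCT slice
offsets, batch = refocusing_inputs(slice_2d)
print(len(offsets), batch.shape)          # 40 (40, 2, 128, 128)
```

At inference, each of the 40 inputs in `batch` would be passed through the trained network; stacking the 40 refocused outputs yields the virtual 3D image stack with 0.25 mm slice intervals.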