Title: Deep Learning-based Virtual Refocusing of Out-of-Plane Images for Ultrasound Computed Tomography
Authors: Zhaohui Liu, Xinan Zhu, Jiameng Wang, Shanshan Wang, Mingyue Ding, M. Yuchi
Published in: 2022 IEEE International Ultrasonics Symposium (IUS), 2022-10-10
DOI: 10.1109/IUS54386.2022.9957222 (https://doi.org/10.1109/IUS54386.2022.9957222)
Citations: 0
Abstract
Ultrasound computed tomography (USCT) has attracted increasing attention for its potential to quantify the acoustic properties of tissues. A three-dimensional (3D) image can be reconstructed by stacking a group of cross-sectional USCT images with sub-millimeter isotropic in-plane spatial resolution. However, the interval between slice images is set at the millimeter scale as a trade-off between imaging speed and axial resolution, resulting in a loss of axial information and deviations in volumetric measurements. This paper demonstrates a deep-neural-network-based framework that virtually refocuses a two-dimensional USCT image onto user-defined 3D surfaces without any additional axial scanning, increasing the number of slice images by a factor of 40. In the training stage, each input image is appended with a distance matrix (DM) that represents the axial distance of the target plane from the plane of the input image. Each image is refocused to a series of planes ranging from -5 mm to 5 mm with a step size of 0.25 mm (20 planes above and 20 planes below the input plane), forming around 16,000 image pairs. A residual U-Net learns to refocus an input image appended with a DM onto the corresponding user-defined plane. Once training is complete, a virtual 3D image stack can be generated from a single image appended with a series of DMs. The proposed framework can thus virtually refocus a single image onto multiple user-defined planes represented by different DMs, enabling USCT 3D imaging with sub-millimeter slice intervals.
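The data-preparation scheme described above can be sketched in a few lines: each training input is a 2D slice with an appended distance-matrix channel, and the refocus targets span -5 mm to +5 mm in 0.25 mm steps (20 planes on each side of the input plane). This is a minimal illustration, not the paper's implementation; the helper name `make_dm_input` and the constant-valued DM encoding are assumptions, since the abstract does not specify how the distance matrix is encoded.

```python
import numpy as np

def make_dm_input(image, dz_mm):
    """Append a distance matrix (DM) channel to a 2D slice.

    Hypothetical helper: here the DM is simply a constant plane holding the
    axial offset dz_mm (in mm) of the target plane relative to the input
    plane; the paper's exact DM encoding is not given in the abstract.
    Returns a (2, H, W) array: [image channel, DM channel].
    """
    dm = np.full(image.shape, dz_mm, dtype=np.float32)
    return np.stack([image.astype(np.float32), dm], axis=0)

# Refocus targets: -5 mm .. +5 mm in 0.25 mm steps, excluding 0 mm (the
# input plane itself): 20 planes above and 20 below -> 40 targets per image.
offsets_mm = [round(0.25 * k, 2) for k in range(-20, 21) if k != 0]

image = np.random.rand(64, 64)  # stand-in for one 2D USCT slice
inputs = [make_dm_input(image, dz) for dz in offsets_mm]

print(len(offsets_mm))     # 40 target planes per input slice
print(inputs[0].shape)     # (2, 64, 64)
# With ~400 training slices this yields roughly 400 * 40 = 16,000 image
# pairs, matching the count stated in the abstract.
```

At inference time the same mechanism applies in reverse: pairing one measured slice with the full series of DMs produces 40 network inputs, whose outputs stack into the virtual 3D volume with 0.25 mm slice spacing.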