{"title":"使用基于变压器的深度学习模型在单能量计算机断层扫描系统中生成合成低能量单色图像","authors":"Yuhei Koike, Shingo Ohira, Sayaka Kihara, Yusuke Anetai, Hideki Takegawa, Satoaki Nakamura, Masayoshi Miyazaki, Koji Konishi, Noboru Tanigawa","doi":"10.1007/s10278-024-01111-z","DOIUrl":null,"url":null,"abstract":"<p>While dual-energy computed tomography (DECT) technology introduces energy-specific information in clinical practice, single-energy CT (SECT) is predominantly used, limiting the number of people who can benefit from DECT. This study proposed a novel method to generate synthetic low-energy virtual monochromatic images at 50 keV (sVMI<sub>50keV</sub>) from SECT images using a transformer-based deep learning model, SwinUNETR. Data were obtained from 85 patients who underwent head and neck radiotherapy. Among these, the model was built using data from 70 patients for whom only DECT images were available. The remaining 15 patients, for whom both DECT and SECT images were available, were used to predict from the actual SECT images. We used the SwinUNETR model to generate sVMI<sub>50keV</sub>. The image quality was evaluated, and the results were compared with those of the convolutional neural network-based model, Unet. The mean absolute errors from the true VMI<sub>50keV</sub> were 36.5 ± 4.9 and 33.0 ± 4.4 Hounsfield units for Unet and SwinUNETR, respectively. SwinUNETR yielded smaller errors in tissue attenuation values compared with those of Unet. The contrast changes in sVMI<sub>50keV</sub> generated by SwinUNETR from SECT were closer to those of DECT-derived VMI<sub>50keV</sub> than the contrast changes in Unet-generated sVMI<sub>50keV</sub>. This study demonstrated the potential of transformer-based models for generating synthetic low-energy VMIs from SECT images, thereby improving the image quality of head and neck cancer imaging. It provides a practical and feasible solution to obtain low-energy VMIs from SECT data that can benefit a large number of facilities and patients without access to DECT technology.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":"50 1","pages":""},"PeriodicalIF":2.9000,"publicationDate":"2024-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Synthetic Low-Energy Monochromatic Image Generation in Single-Energy Computed Tomography System Using a Transformer-Based Deep Learning Model\",\"authors\":\"Yuhei Koike, Shingo Ohira, Sayaka Kihara, Yusuke Anetai, Hideki Takegawa, Satoaki Nakamura, Masayoshi Miyazaki, Koji Konishi, Noboru Tanigawa\",\"doi\":\"10.1007/s10278-024-01111-z\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>While dual-energy computed tomography (DECT) technology introduces energy-specific information in clinical practice, single-energy CT (SECT) is predominantly used, limiting the number of people who can benefit from DECT. This study proposed a novel method to generate synthetic low-energy virtual monochromatic images at 50 keV (sVMI<sub>50keV</sub>) from SECT images using a transformer-based deep learning model, SwinUNETR. Data were obtained from 85 patients who underwent head and neck radiotherapy. Among these, the model was built using data from 70 patients for whom only DECT images were available. The remaining 15 patients, for whom both DECT and SECT images were available, were used to predict from the actual SECT images. We used the SwinUNETR model to generate sVMI<sub>50keV</sub>. 
The image quality was evaluated, and the results were compared with those of the convolutional neural network-based model, Unet. The mean absolute errors from the true VMI<sub>50keV</sub> were 36.5 ± 4.9 and 33.0 ± 4.4 Hounsfield units for Unet and SwinUNETR, respectively. SwinUNETR yielded smaller errors in tissue attenuation values compared with those of Unet. The contrast changes in sVMI<sub>50keV</sub> generated by SwinUNETR from SECT were closer to those of DECT-derived VMI<sub>50keV</sub> than the contrast changes in Unet-generated sVMI<sub>50keV</sub>. This study demonstrated the potential of transformer-based models for generating synthetic low-energy VMIs from SECT images, thereby improving the image quality of head and neck cancer imaging. It provides a practical and feasible solution to obtain low-energy VMIs from SECT data that can benefit a large number of facilities and patients without access to DECT technology.</p>\",\"PeriodicalId\":50214,\"journal\":{\"name\":\"Journal of Digital Imaging\",\"volume\":\"50 1\",\"pages\":\"\"},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2024-04-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Digital Imaging\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1007/s10278-024-01111-z\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Digital Imaging","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1007/s10278-024-01111-z","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Synthetic Low-Energy Monochromatic Image Generation in Single-Energy Computed Tomography System Using a Transformer-Based Deep Learning Model
While dual-energy computed tomography (DECT) introduces energy-specific information into clinical practice, single-energy CT (SECT) remains predominant, limiting the number of patients who can benefit from DECT. This study proposed a novel method to generate synthetic low-energy virtual monochromatic images at 50 keV (sVMI50keV) from SECT images using a transformer-based deep learning model, SwinUNETR. Data were obtained from 85 patients who underwent head and neck radiotherapy. The model was built using data from 70 of these patients, for whom only DECT images were available. Data from the remaining 15 patients, for whom both DECT and SECT images were available, were used to evaluate predictions made from the actual SECT images. We used the SwinUNETR model to generate sVMI50keV. Image quality was evaluated, and the results were compared with those of a convolutional neural network-based model, Unet. The mean absolute errors from the true VMI50keV were 36.5 ± 4.9 and 33.0 ± 4.4 Hounsfield units for Unet and SwinUNETR, respectively. SwinUNETR yielded smaller errors in tissue attenuation values than Unet. The contrast changes in SwinUNETR-generated sVMI50keV were closer to those of DECT-derived VMI50keV than were the contrast changes in Unet-generated sVMI50keV. This study demonstrated the potential of transformer-based models for generating synthetic low-energy VMIs from SECT images, thereby improving the image quality of head and neck cancer imaging. It provides a practical solution for obtaining low-energy VMIs from SECT data, which can benefit the many facilities and patients without access to DECT technology.
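The abstract gives no implementation details, but the minimal sketch below illustrates how a SwinUNETR-style SECT-to-sVMI50keV regression, and the Hounsfield-unit mean absolute error reported above, could be set up. It assumes the MONAI implementation of SwinUNETR and PyTorch; the normalization range, patch size, feature size, and L1 training loss are illustrative assumptions, not the authors' settings.

import torch
from monai.networks.nets import SwinUNETR  # assumes the MONAI implementation of SwinUNETR

# Assumed HU normalization range; the paper's preprocessing is not given in the abstract.
HU_MIN, HU_MAX = -1024.0, 3071.0

def to_unit(hu):
    """Scale HU values to [0, 1] for network input/output."""
    return (hu - HU_MIN) / (HU_MAX - HU_MIN)

def to_hu(x):
    """Map network output in [0, 1] back to HU."""
    return x * (HU_MAX - HU_MIN) + HU_MIN

# Single-channel SECT patch in, single-channel sVMI50keV patch out.
# The 96^3 patch size and feature_size=48 are illustrative; `img_size` is
# deprecated/ignored in recent MONAI releases and may need to be dropped.
model = SwinUNETR(img_size=(96, 96, 96), in_channels=1, out_channels=1, feature_size=48)
loss_fn = torch.nn.L1Loss()  # L1 (MAE) loss, a common choice for image-to-image regression

def train_step(sect_hu, vmi50_hu, optimizer):
    """One optimization step on a (SECT, true VMI50keV) patch pair given in HU."""
    model.train()
    optimizer.zero_grad()
    pred = model(to_unit(sect_hu))
    loss = loss_fn(pred, to_unit(vmi50_hu))
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def mae_hu(sect_hu, vmi50_hu):
    """Mean absolute error of the synthetic VMI against the true VMI, in HU."""
    model.eval()
    pred_hu = to_hu(model(to_unit(sect_hu)))
    return torch.mean(torch.abs(pred_hu - vmi50_hu)).item()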
Journal Introduction
The Journal of Digital Imaging (JDI) is the official peer-reviewed journal of the Society for Imaging Informatics in Medicine (SIIM). JDI’s goal is to enhance the exchange of knowledge encompassed by the general topic of Imaging Informatics in Medicine, such as research and practice in clinical, engineering, and information technologies and techniques in all medical imaging environments. JDI topics are of interest to researchers, developers, educators, physicians, and imaging informatics professionals.
Suggested Topics
PACS and component systems; imaging informatics for the enterprise; image-enabled electronic medical records; RIS and HIS; digital image acquisition; image processing; image data compression; 3D, visualization, and multimedia; speech recognition; computer-aided diagnosis; facilities design; imaging vocabularies and ontologies; Transforming the Radiological Interpretation Process (TRIP™); DICOM and other standards; workflow and process modeling and simulation; quality assurance; archive integrity and security; teleradiology; digital mammography; and radiological informatics education.