{"title":"Intra- and inter-scanner CT variability and their impact on diagnostic tasks.","authors":"Isabel Montero, Saman Sotoudeh-Paima, Ehsan Abadi, Ehsan Samei","doi":"10.1117/12.3047016","DOIUrl":"https://doi.org/10.1117/12.3047016","url":null,"abstract":"<p><p>The increased development and production of Computed Tomography (CT) scanner technology has expanded patient access to advanced and affordable medical imaging technologies but has also introduced sources of variability in the clinical imaging landscape, which may influence patient care. This study examines the impact of intra-scanner and inter-scanner variability on image quality and quantitative imaging tasks, with a focus on the detectability index (d') as a measure of patient-specific task performance. We evaluated 813 clinical phantom image sets from the COPDGene study, aggregated by CT scanner make, model, and acquisition and reconstruction protocol. Each phantom image set was assessed for image quality metrics, including the Noise Power Spectrum (NPS) and in-plane Modulation Transfer Function (MTF). The d' index was calculated for 12 hypothetical lesion detection tasks, emulating clinically relevant lung and liver lesions of varying sizes and contrast levels. Qualitative analysis showed intra-scanner variability in NPS and MTF curves measured for identical acquisition and reconstruction settings. Inter-scanner comparisons demonstrated variability in d' measurements across different scanner makes and models with similar acquisition and reconstruction settings. The study showed an intra-scanner variability of up to 13.7% and an inter-scanner variability of up to 19.3% in the d' index. These findings emphasize the need to consider scanner variability in patient-centered care and indicate that CT technology may influence the reliability of imaging tasks.
The results of this study further motivate the development of virtual scanner models to better model and mitigate the variability observed in the clinical imaging landscape.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13405 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12035824/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144037036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
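The d' figure of merit referenced above combines a Fourier-domain task function with the measured MTF and NPS. A minimal sketch of one standard formulation, the non-prewhitening model observer, assuming discrete frequency-domain arrays (the study's exact observer model and sampling are not specified here, and all names are illustrative):

```python
import numpy as np

def npw_detectability(task_w, mtf, nps, du):
    """Non-prewhitening model-observer detectability index d'.

    task_w : 2D array, Fourier-domain task function |W(u,v)| of the lesion
    mtf    : 2D array, in-plane Modulation Transfer Function MTF(u,v)
    nps    : 2D array, Noise Power Spectrum NPS(u,v)
    du     : frequency bin spacing (assumed equal along u and v)
    """
    signal = (task_w * mtf) ** 2
    num = (signal.sum() * du * du) ** 2          # [∫∫ W² MTF² du dv]²
    den = (signal * nps).sum() * du * du         # ∫∫ W² MTF² NPS du dv
    return float(np.sqrt(num / den))
```

Doubling the NPS everywhere lowers d' by a factor of √2, which is why noisier protocols degrade detectability for a fixed task.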
{"title":"Black-box Optimization of CT Acquisition and Reconstruction Parameters: A Reinforcement Learning Approach.","authors":"David Fenwick, Navid NaderiAlizadeh, Vahid Tarokh, Darin Clark, Jayasai Rajagopal, Anuj Kapadia, Nicholas Felice, Ehsan Samei, Ehsan Abadi","doi":"10.1117/12.3046807","DOIUrl":"https://doi.org/10.1117/12.3046807","url":null,"abstract":"<p><p>Protocol optimization is critical in Computed Tomography (CT) for achieving desired diagnostic image quality while minimizing radiation dose. Because the influencing CT parameters interact with one another, traditional optimization methods rely on testing exhaustive combinations of these parameters, which quickly becomes impractical. This study introduces a novel methodology leveraging Virtual Imaging Trials (VITs) and reinforcement learning to more efficiently optimize CT protocols. Computational phantoms with liver lesions were imaged using a validated CT simulator and reconstructed with a novel CT reconstruction Toolkit. The optimization parameter space included tube voltage, tube current, reconstruction kernel, slice thickness, and pixel size. The optimization process was done using a Proximal Policy Optimization (PPO) agent which was trained to maximize the Detectability Index (d') of the liver lesion for each reconstructed image. Results showed that our reinforcement learning approach found the absolute maximum d' across the test cases while requiring 79.7% fewer steps compared to an exhaustive search, demonstrating both accuracy and computational efficiency and offering a robust framework for CT protocol optimization. The flexibility of the proposed technique allows different image quality metrics to serve as the objective to maximize.
Our findings highlight the advantages of combining VIT and reinforcement learning for CT protocol management.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13405 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12035822/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144045469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
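The acquisition/reconstruction search space described above can be made concrete as a toy environment. This is only a schematic sketch: the parameter values, `toy_score` reward, and class names are hypothetical stand-ins for the validated CT simulator and d' reward, and the PPO agent itself is omitted:

```python
import itertools

# Hypothetical discrete protocol parameter space (values illustrative only).
PARAM_SPACE = {
    "kvp": [80, 100, 120, 140],       # tube voltage
    "ma": [100, 200, 400],            # tube current
    "kernel": ["soft", "standard", "sharp"],
    "slice_mm": [0.5, 1.0, 2.0],      # slice thickness
}

class ProtocolEnv:
    """Toy environment: each action selects one full protocol; the reward
    is a stand-in detectability score (the real reward would come from a
    validated CT simulator plus a d' computation)."""

    def __init__(self, score_fn):
        self.protocols = [dict(zip(PARAM_SPACE, v))
                          for v in itertools.product(*PARAM_SPACE.values())]
        self.score_fn = score_fn

    def step(self, action):
        protocol = self.protocols[action]
        reward = self.score_fn(protocol)  # plays the role of d'
        return protocol, reward

def toy_score(p):
    # Arbitrary smooth surrogate so the example is runnable end to end.
    return p["kvp"] / 140 + p["ma"] / 400 - abs(p["slice_mm"] - 1.0)
```

A PPO agent (e.g., from a standard RL library) would interact with `step` to learn which regions of the 108-protocol grid maximize the reward without visiting every combination.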
{"title":"Improving low-contrast liver metastasis detectability in deep-learning CT denoising using adaptive local fusion driven by total uncertainty and predictive mean.","authors":"Hao Gong, Shravani A Kharat, Shuai Leng, Lifeng Yu, Scott S Hsieh, Joel G Fletcher, Cynthia H McCollough","doi":"10.1117/12.3047080","DOIUrl":"10.1117/12.3047080","url":null,"abstract":"<p><p>Emerging deep-learning-based CT denoising techniques have the potential to improve diagnostic image quality in low-dose CT exams. However, aggressive radiation dose reduction and the intrinsic uncertainty in convolutional neural network (CNN) outputs are detrimental to detecting critical lesions (e.g., liver metastases) in CNN-denoised images. To tackle these issues, we characterized CNN output distribution via total uncertainty (i.e., data + model uncertainties) and predictive mean. Local mean-uncertainty-ratio (MUR) was calculated to detect highly unreliable regions in the denoised images. A MUR-driven adaptive local fusion (ALF) process was developed to adaptively merge local predictive means with the original noisy images, thereby improving image robustness. This process was incorporated into a previously validated deep-learning model observer to quantify liver metastasis detectability, using area under localization receiver operating characteristic curve (LAUC) as the figure-of-merit. For proof-of-concept, the proposed method was established and validated for a ResNet-based CT denoising method. A recent patient abdominal CT dataset was used in validation, involving 3 lesion sizes (7, 9, and 11 mm), 3 lesion contrasts (15, 20, and 25 HU), and 3 dose levels (25%, 50%, and 100% dose). Visual inspection and quantitative analyses were conducted. Statistical significance was tested. Total uncertainty at lesions and liver background generally increased as radiation dose decreased. 
At a fixed dose, lesion-wise MUR showed no dependence on lesion size or contrast but exhibited large variance across lesion locations (MUR range ~0.7 to 19). Compared to original ResNet-based denoising, the MUR-driven ALF consistently improved lesion detectability in challenging conditions such as lower dose, smaller lesion size, or lower contrast (range of absolute gain in LAUC: 0.04 to 0.1; P-value 0.008). The proposed method has the potential to improve reliability of deep-learning CT denoising and enhance lesion detection.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13405 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12070600/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144058608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
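The MUR-driven adaptive local fusion described above can be sketched as a per-pixel blend of the CNN predictive mean and the original noisy image. The linear weighting and the `mur_floor` threshold below are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def adaptive_local_fusion(pred_mean, total_uncertainty, noisy, mur_floor=2.0):
    """Blend the CNN predictive mean with the original noisy image where
    the local mean-to-uncertainty ratio (MUR) is low, i.e., where the
    denoiser output is unreliable.

    Where MUR >= mur_floor the denoised mean is kept; below it the weight
    ramps linearly toward the noisy input (an illustrative choice)."""
    mur = np.abs(pred_mean) / np.maximum(total_uncertainty, 1e-6)
    w = np.clip(mur / mur_floor, 0.0, 1.0)
    return w * pred_mean + (1.0 - w) * noisy
```

With tiny uncertainty the output reduces to the predictive mean; with very large uncertainty it falls back to the original noisy pixels.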
{"title":"Ensembled YOLO for multiorgan detection in chest x-rays.","authors":"Sivaramakrishnan Rajaraman, Zhaohui Liang, Zhiyun Xue, Sameer Antani","doi":"10.1117/12.3047210","DOIUrl":"https://doi.org/10.1117/12.3047210","url":null,"abstract":"<p><p>Chest radiographs are a vital tool for identifying pathological changes within the thoracic cavity. Artificial intelligence (AI) and machine learning (ML) driven screening or diagnostic applications require accurate detection of anatomical structures within the Chest X-ray (CXR) image. The You Only Look Once (YOLO) object detection models have recently gained prominence for their efficacy in detecting anatomical structures in medical images. However, state-of-the-art results using them are typically for single-organ detection. Advanced image analysis would benefit from the simultaneous detection of more than one anatomical organ. In this work, we propose a multi-organ detection technique using two recent YOLO versions and their sub-variants. We evaluate their effectiveness in detecting lung and heart regions in CXRs simultaneously. We used the JSRT CXR dataset for internal training, validation, and testing. Further, the generalizability of the models is evaluated using two external test sets, viz., the Montgomery CXR dataset and a subset of the RSNA CXR dataset against available annotations therein. Our evaluation demonstrates that YOLOv9 models notably outperform YOLOv8 variants.
We demonstrated further improvements in detection performance through ensemble approaches.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13407 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11996225/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144060406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
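One common way to ensemble detections pooled from several YOLO variants is confidence-weighted box fusion. A minimal sketch (the IoU threshold and weighting scheme are assumptions; the paper's ensembling details may differ):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def fuse_boxes(detections, iou_thr=0.55):
    """Confidence-weighted fusion of overlapping boxes pooled from several
    detectors (a simplified weighted-box-fusion; thresholds illustrative).

    detections: list of ((x1, y1, x2, y2), score) tuples."""
    detections = sorted(detections, key=lambda d: -d[1])
    clusters = []
    for box, score in detections:
        for c in clusters:
            if iou(box, c["box"]) >= iou_thr:
                c["members"].append((box, score))
                total = sum(s for _, s in c["members"])
                # Recompute the cluster box as a score-weighted average.
                c["box"] = tuple(
                    sum(s * b[i] for b, s in c["members"]) / total
                    for i in range(4))
                break
        else:
            clusters.append({"box": box, "members": [(box, score)]})
    return [(c["box"], max(s for _, s in c["members"])) for c in clusters]
```

Overlapping detections from different models collapse into a single averaged box; spatially distinct detections survive as separate organs.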
{"title":"Simulating scanner- and algorithm-specific 3D CT noise texture using physics-informed 2D and 2.5D generative neural network models.","authors":"Hao Gong, Thomas M Huber, Timothy Winfree, Scott S Hsieh, Lifeng Yu, Shuai Leng, Cynthia H McCollough","doi":"10.1117/12.3047909","DOIUrl":"10.1117/12.3047909","url":null,"abstract":"<p><p>Low-dose CT simulation is needed to assess reconstruction/denoising techniques and optimize dose. Projection-domain noise-insertion methods require manufacturers' proprietary tools. Image-domain noise-insertion methods face various challenges that affect generalizability, and few have been systematically validated for 3D noise synthesis. To improve generalizability, we presented a <b>p</b>hysics-informed model-based gener<b>a</b>tive neura<b>l</b> network for simulating scann<b>e</b>r- and algorithm-specific low-dose C<b>T e</b>xams (PALETTE). PALETTE included a noise-prior-generation process, a Noise2Noisier sub-network, and a noise-texture-synthesis sub-network. Custom regularization terms were developed to enforce 3D noise texture quality. Using PALETTE, one 2D and two 2.5D models (denoted as 2.5D <i>N-N</i> and <i>N-1</i>) were developed to conduct 2D and effective 3D noise modeling, respectively (input/output images: 2D - 1/1, 2.5D <i>N-N</i> - 3/3, 2.5D <i>N-1</i> - 5/1). These models were trained and tested with an open-access abdominal CT dataset, including 20 testing cases reconstructed with two kernels and various fields-of-view. On visual inspection, the 2D and 2.5D <i>N-N</i> models generated realistic local and global noise texture, while 2.5D <i>N-1</i> showed more perceptual difference using the sharper kernel and coronal reformat. In quantitative evaluation, local noise level was compared using mean-absolute-percent-difference (MAPD), and global spectral similarity was assessed using spectral correlation mapper (SCM) and spectral angle mapper (SAM).
The 2D model provided equivalent or relatively better performance than 2.5D models, showing well-matched local noise levels and high spectral similarity compared to the reference (sharper/smoother kernels): MAPD - 2D 1.5%/5.6% (p>0.05), 2.5D <i>N-N</i> 8.5%/7.9% (p<0.05), 2.5D <i>N-1</i> 12.3%/10.9% (p<0.05); mean SCM - 2D 0.97/0.97, 2.5D <i>N-N</i> 0.96/0.97, 2.5D <i>N-1</i> 0.85/0.97; mean SAM - 2D 0.12/0.12, 2.5D <i>N-N</i> 0.14/0.12, 2.5D <i>N-1</i> 0.37/0.12. With tripled model width, the 2.5D <i>N-N</i> outperformed <i>N-1</i>. This indicated 2.5D models need more learning capacity to further enhance 3D noise modeling. Using physics-based prior information, PALETTE can provide high-quality low-dose CT simulation to resemble scanner- and algorithm-specific 3D noise characteristics.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13405 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12070530/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144062947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
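The two global spectral-similarity metrics named above have simple definitions: SAM is the angle between two spectra treated as vectors, and SCM is their Pearson correlation (a mean-removed SAM cosine). A small sketch:

```python
import numpy as np

def spectral_angle_mapper(a, b):
    """SAM: angle (radians) between two spectra; 0 means identical shape."""
    a, b = np.ravel(a).astype(float), np.ravel(b).astype(float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def spectral_correlation_mapper(a, b):
    """SCM: Pearson correlation of the two spectra."""
    a, b = np.ravel(a).astype(float), np.ravel(b).astype(float)
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Because SAM ignores overall scaling, a synthesized NPS that matches the reference shape but not its magnitude still scores a small angle; the MAPD on local noise level catches the magnitude error instead.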
{"title":"Patient-specific Channelized Hotelling observer to estimate lesion detectability in CT.","authors":"Zhongxing Zhou, Jarod Wellinghoff, Cynthia H McCollough, Lifeng Yu","doi":"10.1117/12.3047381","DOIUrl":"10.1117/12.3047381","url":null,"abstract":"<p><p>Task-based image quality assessment is essential for CT protocol and radiation dose optimization. Despite many ongoing efforts, there is still an unmet need to measure and monitor the quality of images acquired from each patient exam. In this work, we developed a patient-specific channelized Hotelling observer (CHO)-based method to estimate the lesion detectability for each individual patient scan. The ensemble of background was created from patient images to include both relatively uniform regions and anatomically varying regions. Signals were modelled from lesions of different sizes and contrast levels after incorporating the effect of contrast-dependent spatial resolution. Index of detectability (d') was estimated using a CHO framework. This method was applied to clinical patient images obtained from a CT scanner at 3 different radiation dose levels. The d' for 5 different lesion size/contrast conditions was calculated across the scan range of each patient exam. 
The average noise levels and the d' values, averaged over the 5 conditions, were 13.2/3.78, 17.1/2.93, and 21.9/2.43 at the 100%, 50%, and 25% dose levels, respectively.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13405 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12086740/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144103308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
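The CHO estimate of d' reduces to a Hotelling statistic in channel space. A minimal sketch, assuming flattened ROI samples and a generic channel matrix (the channel bank and covariance handling in the study may differ):

```python
import numpy as np

def cho_dprime(signal_rois, background_rois, channels):
    """Channelized Hotelling observer detectability index.

    signal_rois, background_rois : (n_samples, n_pixels) flattened ROIs
    channels : (n_pixels, n_channels) channel templates (e.g., a Gabor bank)
    """
    vs = signal_rois @ channels       # channelized signal-present responses
    vb = background_rois @ channels   # channelized signal-absent responses
    dv = vs.mean(axis=0) - vb.mean(axis=0)
    # Pooled intra-class channel covariance.
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vb, rowvar=False))
    S = np.atleast_2d(S)
    # d'^2 = dv^T S^{-1} dv
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))
```

With a single summing channel over 4 unit-variance pixels and a +1 HU signal, the channel response separates by 4 with variance 4, so d' should come out near 2.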
{"title":"High-Fidelity 3D Reconstruction for Accurate Anatomical Measurements in Endoscopic Sinus Surgery.","authors":"Nicole Gunderson, Pengcheng Chen, Jeremy S Ruthberg, Randall A Bly, Eric J Seibel, Waleed M Abuzeid","doi":"10.1117/12.3046391","DOIUrl":"https://doi.org/10.1117/12.3046391","url":null,"abstract":"<p><p>Achieving an accurate representation of the surgical scene is essential, as it enables precise surgical navigation. Surgeons currently rely on preoperative computed tomography (CT) scans to represent the surgical scene and plan sinus procedures. However, as tissue is resected and manipulated, the anatomy represented in preoperative images becomes increasingly inaccurate and outdated. Endoscopic 3D reconstruction provides an alternative solution to this challenge because it captures the current surgical scene. Nevertheless, achieving high reconstruction accuracy is crucial in endoscopic sinus surgery (ESS), where tissue margins lie within submillimeter distances of critical anatomy such as the orbits, cranial nerves, carotid arteries, and dura mater. To fulfill the need for a highly accurate intraoperative method of surgical scene modeling in ESS, we propose a system that generates 3D reconstructions of the sinus, capturing relevant qualitative and quantitative anatomic information even as the anatomy substantially diverges from the preoperative CT images as surgery progresses. To achieve this, the pipeline of Neural Radiance Fields (NeRF) is expanded to include methods that simulate stereoscopic views using only a monocular endoscope to iteratively refine the depth of reconstructions. The presented workflow provides accurate depth maps, global scaling, and geometric information without camera pose-tracking tools or fiducial markers. Additional methods of point cloud denoising, outlier removal, and dropout patching have been developed and implemented to increase point cloud robustness.
This expanded workflow demonstrates the ability to create high-resolution and accurate 3D reconstructions of the surgical scene. In a series of three cadaveric specimens, measurements of critical anatomy were evaluated, with average reconstruction errors of 0.25 mm for ethmoid length and 0.52 mm for ethmoid height.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13408 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12047413/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144014059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
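Among the point-cloud cleanup steps mentioned (denoising, outlier removal, dropout patching), statistical outlier removal is the most standard. A brute-force sketch with illustrative parameters (a production pipeline would use a k-d tree for the neighbor search rather than the full distance matrix):

```python
import numpy as np

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbors is
    more than `std_ratio` standard deviations above the cloud average
    (a common cleanup heuristic; parameters are illustrative)."""
    # Pairwise distances, with self-distances masked out.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    # Mean distance to the k nearest neighbors of each point.
    knn_mean = np.sort(d, axis=1)[:, :k].mean(axis=1)
    keep = knn_mean <= knn_mean.mean() + std_ratio * knn_mean.std()
    return points[keep]
```

A lone point far from the reconstructed surface has a much larger neighbor distance than surface points and is discarded, tightening downstream anatomical measurements.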
{"title":"The potential of scintillator-based photon counting detectors: evaluation using Monte Carlo simulations.","authors":"Scott S Hsieh, Katsuyuki Taguchi, Marlies C Goorden, Dennis R Schaart","doi":"10.1117/12.3045837","DOIUrl":"10.1117/12.3045837","url":null,"abstract":"<p><p>Direct conversion photon counting detectors (PCDs) using CdTe, CZT, or Si for the sensor material are being investigated and manufactured. Indirect-conversion, scintillator-based PCDs have historically been thought to be too slow for the high flux requirements of diagnostic CT. Recent scintillators investigated for applications such as PET are very fast and inspire us to rethink this paradigm. We evaluate the potential of a LaBr<sub>3</sub>:Ce PCD using Monte Carlo simulations. We compared a CdTe PCD and a LaBr<sub>3</sub>:Ce PCD, assuming a pixel density of 9 pixels/mm<sup>2</sup> in each case and a surrounding 2D anti-scatter grid. A 1×1 mm<sup>2</sup> area was illuminated by flat-field X-rays and the lower bound on the noise for varying contrast types and material decomposition scenarios was calculated. For conventional imaging without material decomposition, the LaBr<sub>3</sub>:Ce PCD performed worse than CdTe because of the need to wrap pixels in reflector, which reduces geometric efficiency. For water-bone material decomposition, the two PCDs performed similarly with our assumptions on pulse shape and PCD geometry. For three-material decomposition with a K-edge imaging agent, LaBr<sub>3</sub>:Ce reduced variance by about 35% because of the elimination of charge sharing that is present in CdTe.
These results motivate further exploration of scintillator-based PCDs as an alternative to direct conversion PCDs, especially with future K-edge imaging agents.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13405 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12100487/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144144828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
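The geometric-efficiency penalty from reflector wrapping can be estimated to first order from the pixel pitch (9 pixels/mm² implies roughly a 0.33 mm pitch). The reflector thickness below is an illustrative assumption, not a value from the study:

```python
def geometric_efficiency(pitch_mm, reflector_mm):
    """Active-area fraction of a square scintillator pixel whose sides are
    wrapped in reflector of the given thickness (first-order estimate:
    the active width shrinks by the reflector thickness)."""
    active = pitch_mm - reflector_mm
    return (active / pitch_mm) ** 2
```

For a 0.33 mm pitch and a hypothetical 0.05 mm reflector, roughly 28% of the area is lost, which is the kind of penalty that makes the scintillator PCD lose to CdTe in conventional (non-spectral) imaging.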
{"title":"Automated multi-lesion annotation in chest X-rays: annotating over 450,000 images from public datasets using the AI-based Smart Imagery Framing and Truthing (SIFT) system.","authors":"Lin Guo, Fleming Y M Lure, Teresa Wu, Fulin Cai, Stefan Jaeger, Bin Zheng, Jordan Fuhrman, Hui Li, Maryellen L Giger, Andrei Gabrielian, Alex Rosenthal, Darrell E Hurt, Ziv Yaniv, Li Xia, Weijun Fang, Jingzhe Liu","doi":"10.1117/12.3047189","DOIUrl":"https://doi.org/10.1117/12.3047189","url":null,"abstract":"<p><p>This work utilized an artificial intelligence (AI)-based image annotation tool, Smart Imagery Framing and Truthing (SIFT), to annotate pulmonary lesions and abnormalities and their corresponding boundaries on 452,602 chest X-ray (CXR) images (22 different types of desired lesions) from four publicly available datasets (CheXpert Dataset, ChestX-ray14 Dataset, MIDRC Dataset, and NIAID TB Portals Dataset). SIFT is based on Multi-task, Optimal-recommendation, and Max-predictive Classification and Segmentation (MOM ClaSeg) technologies to identify and delineate 65 different abnormal regions of interest (ROI) on CXR images, provide a confidence score for each labeled ROI, and various recommendations of abnormalities for each ROI, if the confidence score is not high enough. The MOM ClaSeg System integrating Mask R-CNN and Decision Fusion Network is developed on a training dataset of over 300,000 CXRs, containing over 240,000 confirmed abnormal CXRs with over 300,000 confirmed ROIs corresponding to 65 different abnormalities and over 67,000 normal (i.e., \"no finding\") CXRs. After quality control, the CXRs are entered into the SIFT system to automatically predict the abnormality type (\"Predicted Abnormality\") and corresponding boundary locations for the ROIs displayed on each original image. 
The results indicated that the SIFT system can determine the abnormality types of labeled ROIs and their boundary coordinates with high efficiency (a 7.92-fold improvement) when radiologists used SIFT as an aid, compared with radiologists using a traditional semi-automatic method. The SIFT system achieves an average sensitivity of 89.38%±11.46% across four datasets. This can significantly improve the quality and quantity of training and testing sets used to develop AI technologies.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13409 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12034099/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144000263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contrast-guided Virtual Monoenergetic Image Synthesis via Adversarial Learning for Coronary CT Angiography using Photon Counting Detector CT.","authors":"Shaojie Chang, Madeleine Wilson, Emily K Koons, Hao Gong, Scott S Hsieh, Lifeng Yu, Cynthia H McCollough, Shuai Leng","doi":"10.1117/12.3047277","DOIUrl":"https://doi.org/10.1117/12.3047277","url":null,"abstract":"<p><p>Coronary CT angiography (cCTA) is a non-invasive diagnostic test for coronary artery disease (CAD) that often faces challenges with dense calcifications and stents due to blooming artifacts, leading to stenosis overestimation. Virtual monoenergetic images (VMIs) from photon counting detector CT (PCD-CT) provide distinct clinical benefits. Lower keV VMIs enhance iodine and bone contrasts but struggle with blooming artifacts, while higher keV VMIs effectively reduce beam hardening, blooming, and metal artifacts but diminish contrast, presenting a trade-off among different keV levels. To address this, we introduce a contrast-guided virtual monoenergetic image synthesis framework (CITRINE) utilizing adversarial learning to synthesize images by integrating beneficial spectral characteristics from various keV levels. In this study, CITRINE is trained and validated with cardiac PCD-CT images using 100 keV and 70 keV VMIs as examples, showcasing its ability to synthesize images that combine the reduced blooming artifacts of 100 keV VMIs with the high contrast-to-noise features of 70 keV VMIs. CITRINE's performance was evaluated on three patient cCTA cases quantitatively and qualitatively in terms of image quality and assessments of percent diameter luminal stenosis. The synthesized images showed reduced blooming artifacts, similar to those observed at 100 keV VMI, and exhibited high iodine contrast in the coronary lumen, comparable to that of 70 keV VMI. 
Notably, compared to the original 70 keV VMI, CITRINE images achieved an approximately 25% reduction in percent diameter stenosis while maintaining consistent contrast levels. These results confirm CITRINE's effectiveness in improving diagnostic accuracy and efficiency in cCTA by leveraging the full potential of multi-energy and PCD-CT technologies.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13405 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12076251/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144082620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
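CITRINE learns this blend adversarially; as a hand-crafted point of contrast, the same trade-off can be caricatured by switching to the high-keV image only near strong edges, where calcium blooming dominates, and keeping the low-keV image elsewhere for iodine contrast. All parameters here are illustrative, and this is emphatically not the paper's method:

```python
import numpy as np

def edge_guided_blend(vmi_70, vmi_100, grad_thr=50.0):
    """Use the 100 keV image near strong edges (where blooming from dense
    calcium dominates) and the 70 keV image elsewhere (to preserve iodine
    contrast). A crude stand-in for what CITRINE learns adversarially."""
    gy, gx = np.gradient(vmi_100.astype(float))
    edge = np.sqrt(gx ** 2 + gy ** 2) > grad_thr
    w = edge.astype(float)
    return w * vmi_100 + (1.0 - w) * vmi_70
```

The hand-tuned threshold is exactly the kind of brittle heuristic a learned synthesis replaces: the adversarial model can pick up spectral cues beyond local gradient magnitude.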