Intra- and inter-scanner CT variability and their impact on diagnostic tasks.
Authors: Isabel Montero, Saman Sotoudeh-Paima, Ehsan Abadi, Ehsan Samei
Proceedings of SPIE--the International Society for Optical Engineering, vol. 13405, February 2025. DOI: 10.1117/12.3047016. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12035824/pdf/

Abstract: The increased development and production of computed tomography (CT) scanner technology has expanded patient access to advanced and affordable medical imaging, but it has also introduced sources of variability into the clinical imaging landscape that may influence patient care. This study examines the impact of intra-scanner and inter-scanner variability on image quality and quantitative imaging tasks, with a focus on the detectability index (d') as a measure of patient-specific task performance. We evaluated 813 clinical phantom image sets from the COPDGene study, aggregated by CT scanner make, model, and acquisition and reconstruction protocol. Each phantom image set was assessed for image quality metrics, including the noise power spectrum (NPS) and in-plane modulation transfer function (MTF). The d' index was calculated for 12 hypothetical lesion detection tasks, emulating clinically relevant lung and liver lesions of varying sizes and contrast levels. Qualitative analysis showed intra-scanner variability in the NPS and MTF curves measured for identical acquisition and reconstruction settings. Inter-scanner comparisons demonstrated variability in d' measurements across different scanner makes and models under similar acquisition and reconstruction settings. The study showed intra-scanner variability of up to 13.7% and inter-scanner variability of up to 19.3% in the d' index. These findings emphasize the need to consider scanner variability in patient-centered care and indicate that CT technology may influence the reliability of imaging tasks. The results further motivate the development of virtual scanner models to better characterize and mitigate the variability observed in the clinical imaging landscape.
Black-box Optimization of CT Acquisition and Reconstruction Parameters: A Reinforcement Learning Approach.
Authors: David Fenwick, Navid NaderiAlizadeh, Vahid Tarokh, Darin Clark, Jayasai Rajagopal, Anuj Kapadia, Nicholas Felice, Ehsan Samei, Ehsan Abadi
Proceedings of SPIE--the International Society for Optical Engineering, vol. 13405, February 2025. DOI: 10.1117/12.3046807. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12035822/pdf/

Abstract: Protocol optimization is critical in computed tomography (CT) for achieving the desired diagnostic image quality while minimizing radiation dose. Because the influential CT parameters interact, traditional optimization methods rely on testing exhaustive combinations of these parameters, which quickly becomes impractical. This study introduces a methodology that leverages virtual imaging trials (VITs) and reinforcement learning to optimize CT protocols more efficiently. Computational phantoms with liver lesions were imaged using a validated CT simulator and reconstructed with a novel CT reconstruction toolkit. The optimization parameter space included tube voltage, tube current, reconstruction kernel, slice thickness, and pixel size. Optimization was performed by a proximal policy optimization (PPO) agent trained to maximize the detectability index (d') of the liver lesion in each reconstructed image. Results showed that our reinforcement learning approach found the absolute maximum d' across the test cases while requiring 79.7% fewer steps than an exhaustive search, demonstrating both accuracy and computational efficiency and offering an efficient, robust framework for CT protocol optimization. The flexibility of the proposed technique also allows different image quality metrics to serve as the objective to be maximized. Our findings highlight the advantages of combining VITs and reinforcement learning for CT protocol management.
Improving low-contrast liver metastasis detectability in deep-learning CT denoising using adaptive local fusion driven by total uncertainty and predictive mean.
Authors: Hao Gong, Shravani A Kharat, Shuai Leng, Lifeng Yu, Scott S Hsieh, Joel G Fletcher, Cynthia H McCollough
Proceedings of SPIE--the International Society for Optical Engineering, vol. 13405, February 2025. DOI: 10.1117/12.3047080. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12070600/pdf/

Abstract: Emerging deep-learning-based CT denoising techniques have the potential to improve diagnostic image quality in low-dose CT exams. However, aggressive radiation dose reduction and the intrinsic uncertainty in convolutional neural network (CNN) outputs are detrimental to detecting critical lesions (e.g., liver metastases) in CNN-denoised images. To tackle these issues, we characterized the CNN output distribution via its total uncertainty (i.e., data + model uncertainties) and predictive mean. A local mean-uncertainty-ratio (MUR) was calculated to detect highly unreliable regions in the denoised images, and a MUR-driven adaptive local fusion (ALF) process was developed to adaptively merge local predictive means with the original noisy images, thereby improving image robustness. This process was incorporated into a previously validated deep-learning model observer to quantify liver metastasis detectability, using the area under the localization receiver operating characteristic curve (LAUC) as the figure of merit. For proof of concept, the proposed method was established and validated for a ResNet-based CT denoising method. A recent patient abdominal CT dataset was used in validation, involving 3 lesion sizes (7, 9, and 11 mm), 3 lesion contrasts (15, 20, and 25 HU), and 3 dose levels (25%, 50%, and 100% dose). Visual inspection and quantitative analyses were conducted, and statistical significance was tested. Total uncertainty at lesions and in the liver background generally increased as radiation dose decreased. At fixed dose, lesion-wise MUR showed no dependency on lesion size or contrast but exhibited large variance across lesion locations (MUR range ~0.7 to 19). Compared to the original ResNet-based denoising, the MUR-driven ALF consistently improved lesion detectability in challenging conditions such as lower dose, smaller lesion size, or lower contrast (range of absolute gain in LAUC: 0.04 to 0.1; P-value 0.008). The proposed method has the potential to improve the reliability of deep-learning CT denoising and enhance lesion detection.
Fair Text to Medical Image Diffusion Model with Subgroup Distribution Aligned Tuning.
Authors: Xu Han, Fangfang Fan, Jingzhao Rong, Zhen Li, Georges El Fakhri, Qingyu Chen, Xiaofeng Liu
Proceedings of SPIE--the International Society for Optical Engineering, vol. 13411, February 2025. DOI: 10.1117/12.3046450. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12360154/pdf/

Abstract: The text-to-medical-image (T2MedI) approach using latent diffusion models holds significant promise for addressing the scarcity of medical imaging data and elucidating the appearance distribution of lesions corresponding to specific patient status descriptions. However, our investigations reveal that, like natural image synthesis models, the T2MedI model may exhibit biases toward certain subgroups and neglect minority groups present in the training dataset. In this study, we first developed a T2MedI model adapted from the pre-trained Imagen framework. This model employs a fixed Contrastive Language-Image Pre-training (CLIP) text encoder, with its decoder fine-tuned on medical images from the Radiology Objects in Context (ROCO) dataset. We conducted both qualitative and quantitative analyses to examine its gender bias. To address this issue, we propose a subgroup distribution alignment method applied during fine-tuning on a target application dataset. Specifically, the process involves an alignment loss, guided by an off-the-shelf sensitivity-subgroup classifier, which synchronizes the classification probabilities of the generated images with those expected in the target dataset. Image quality is preserved through a CLIP-consistency regularization term based on a knowledge distillation framework. For evaluation, we designated the BraTS18 dataset as the target and developed a gender classifier based on brain magnetic resonance (MR) imaging slices derived from it. Our methodology significantly mitigates gender representation inconsistencies in the generated MR images, aligning them more closely with the gender distribution of the BraTS18 dataset.
A Conditional Generative Diffusion Model of Trabecular Bone with Tunable Microstructure.
Authors: X Wang, G Shi, A Sivakumar, T Ye, A Sylvester, J W Stayman, W Zbijewski
Proceedings of SPIE--the International Society for Optical Engineering, vol. 13410, February 2025. DOI: 10.1117/12.3049125. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12302783/pdf/

Abstract:
Purpose: We developed a generative model capable of producing synthetic trabecular bone that can be precisely tuned to achieve specific structural characteristics, such as bone volume fraction (BV/TV), trabecular thickness (Tb.Th), and trabecular spacing (Tb.Sp).
Methods: The generative model is based on Diffusion Transformers (DiT), a latent diffusion approach employing a transformer architecture in the denoising network. To control the microstructural characteristics of the synthetic trabecular bone samples, the model is conditioned on BV/TV, Tb.Th, and Tb.Sp. The training data comprised 29,898 256×256-pixel regions of interest (ROIs) extracted from micro-CT volumes (50 μm voxel size) of 20 femoral bone specimens, paired with trabecular metrics computed within each ROI; the training/validation split was 9:1. For testing, 3,499 synthetic bone samples were generated over a wide range of condition (target) microstructure metrics. Results were evaluated in terms of (i) the ability to cover the real-world distribution of trabecular structures (coverage), (ii) agreement with target metric values (Pearson correlation), and (iii) consistency of the metrics across multiple realizations of the DiT model with a fixed condition (coefficient of variation, CV).
Results: The model achieved good coverage of real-world bone microstructures and visual similarity to true trabecular ROIs. Pearson correlations against the condition (target) metric values were high: 0.9540 for BV/TV, 0.9618 for Tb.Th, and 0.9835 for Tb.Sp. Microstructural characteristics of the synthetic samples were stable across DiT realizations, with CVs ranging from 3.37% to 11.78% for BV/TV, 2.27% to 3.22% for Tb.Th, and 2.53% to 5.00% for Tb.Sp.
Conclusion: The proposed generative model is capable of generating realistic digital trabecular bone that can be precisely tuned to achieve specified microstructural characteristics. Possible applications include virtual clinical trials of new skeletal image biomarkers and establishing priors for advanced image reconstruction.
Simulating scanner- and algorithm-specific 3D CT noise texture using physics-informed 2D and 2.5D generative neural network models.
Authors: Hao Gong, Thomas M Huber, Timothy Winfree, Scott S Hsieh, Lifeng Yu, Shuai Leng, Cynthia H McCollough
Proceedings of SPIE--the International Society for Optical Engineering, vol. 13405, February 2025. DOI: 10.1117/12.3047909. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12070530/pdf/

Abstract: Low-dose CT simulation is needed to assess reconstruction and denoising techniques and to optimize dose. Projection-domain noise-insertion methods require manufacturers' proprietary tools, while image-domain noise-insertion methods face various challenges that affect generalizability, and few have been systematically validated for 3D noise synthesis. To improve generalizability, we present PALETTE, a physics-informed model-based generative neural network for simulating scanner- and algorithm-specific low-dose CT exams. PALETTE includes a noise-prior-generation process, a Noise2Noisier sub-network, and a noise-texture-synthesis sub-network, with custom regularization terms developed to enforce 3D noise texture quality. Using PALETTE, one 2D and two 2.5D models (denoted N-N and N-1) were developed to conduct 2D and effective 3D noise modeling, respectively (input/output images: 2D - 1/1; 2.5D N-N - 3/3; 2.5D N-1 - 5/1). These models were trained and tested with an open-access abdominal CT dataset, including 20 testing cases reconstructed with two kernels and various fields of view. On visual inspection, the 2D and 2.5D N-N models generated realistic local and global noise texture, while the 2.5D N-1 model showed more perceptual difference with the sharper kernel and coronal reformats. In quantitative evaluation, local noise levels were compared using mean absolute percent difference (MAPD), and global spectral similarity was assessed using the spectral correlation mapper (SCM) and spectral angle mapper (SAM). The 2D model provided equivalent or relatively better performance than the 2.5D models, showing well-matched local noise levels and high spectral similarity compared to the reference (sharper/smoother kernels): MAPD - 2D 1.5%/5.6% (p>0.05), 2.5D N-N 8.5%/7.9% (p<0.05), 2.5D N-1 12.3%/10.9% (p<0.05); mean SCM - 2D 0.97/0.97, 2.5D N-N 0.96/0.97, 2.5D N-1 0.85/0.97; mean SAM - 2D 0.12/0.12, 2.5D N-N 0.14/0.12, 2.5D N-1 0.37/0.12. With tripled model width, the 2.5D N-N model outperformed N-1, indicating that 2.5D models need more learning capacity to further enhance 3D noise modeling. Using physics-based prior information, PALETTE can provide high-quality low-dose CT simulation that resembles scanner- and algorithm-specific 3D noise characteristics.
Patient-specific Channelized Hotelling observer to estimate lesion detectability in CT.
Authors: Zhongxing Zhou, Jarod Wellinghoff, Cynthia H McCollough, Lifeng Yu
Proceedings of SPIE--the International Society for Optical Engineering, vol. 13405, February 2025. DOI: 10.1117/12.3047381. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12086740/pdf/

Abstract: Task-based image quality assessment is essential for CT protocol and radiation dose optimization. Despite many ongoing efforts, there is still an unmet need to measure and monitor the quality of the images acquired from each patient exam. In this work, we developed a patient-specific channelized Hotelling observer (CHO)-based method to estimate lesion detectability for each individual patient scan. An ensemble of backgrounds was created from patient images to include both relatively uniform regions and anatomically varying regions. Signals were modeled from lesions of different sizes and contrast levels after incorporating the effect of contrast-dependent spatial resolution. The index of detectability (d') was estimated within a CHO framework. The method was applied to clinical patient images obtained from a CT scanner at 3 different radiation dose levels, and the d' for 5 different lesion size/contrast conditions was calculated across the scan range of each patient exam. The average noise levels and the d' averaged over the 5 conditions were 13.2/3.78, 17.1/2.93, and 21.9/2.43 at the 100%, 50%, and 25% dose levels, respectively.
{"title":"Ensembled YOLO for multiorgan detection in chest x-rays.","authors":"Sivaramakrishnan Rajaraman, Zhaohui Liang, Zhiyun Xue, Sameer Antani","doi":"10.1117/12.3047210","DOIUrl":"10.1117/12.3047210","url":null,"abstract":"<p><p>Chest radiographs are a vital tool for identifying pathological changes within the thoracic cavity. Artificial intelligence (AI) and machine learning (ML) driven screening or diagnostic applications require accurate detection of anatomical structures within the Chest X-ray (CXR) image. The You Only Look Once (YOLO) object detection models have recently gained prominence for their efficacy in detecting anatomical structures in medical images. However, state-of-the-art results using it are typically for single anatomical organ detection. Advanced image analysis would benefit from simultaneous detection more than one anatomical organ. In this work we propose a multi-organ detection technique through two recent YOLO versions and their sub-variants. We evaluate their effectiveness in detecting lung and heart regions in CXRs simultaneously. We used the JSRT CXR dataset for internal training, validation, and testing. Further, the generalizability of the models is evaluated using two external test sets, viz., the Montgomery CXR dataset and a subset of the RSNA CXR dataset against available annotations therein. Our evaluation demonstrates that YOLOv9 models notably outperform YOLOv8 variants. We demonstrated further improvements in detection performance through ensemble approaches.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13407 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11996225/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144060406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ProtoSAM-2D: 2D Semantic Segment Anything Model with Mask-Level Prototype-Learning and Distillation.
Authors: Yiqing Shen, David Dreizin, Blanca Inigo, Mathias Unberath
Proceedings of SPIE--the International Society for Optical Engineering, vol. 13406, February 2025. DOI: 10.1117/12.3047044. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12270500/pdf/

Abstract: Deep learning methods have improved semantic segmentation of medical images, delineating anatomical structures and pathologies with greater accuracy and efficiency. However, traditional deep learning approaches have relied on fully supervised training with specific datasets and specific image modalities, limiting their adaptability across diverse medical imaging scenarios. The emergence of foundation models such as the Segment Anything Model (SAM) has opened new avenues for interactive instance segmentation, but these models lack semantic understanding, particularly in medical contexts where anatomical knowledge is important. To address this gap, we introduce ProtoSAM-2D, an enhancement of SAM-Med2D that integrates semantic capabilities into the interactive segmentation framework for 2D medical images. Our approach leverages a novel mask-level prototype prediction mechanism to generate and classify feature representations for each segmented instance by comparing them to learned prototypes. This enables efficient categorization of diverse anatomical structures and facilitates rapid adaptation to new classes. To optimize computational efficiency, we implement a distillation method that reduces the complexity of both the SAM architecture and the prototype classification head while maintaining high-quality semantic segmentation. We evaluate ProtoSAM-2D on multi-organ segmentation tasks across two imaging modalities, demonstrating its effectiveness in zero-shot and few-shot learning scenarios. By combining the flexibility of SAM with prototype-based learning, ProtoSAM-2D offers a novel solution for adaptable semantic segmentation across diverse medical imaging tasks.
The potential of scintillator-based photon counting detectors: evaluation using Monte Carlo simulations.
Authors: Scott S Hsieh, Katsuyuki Taguchi, Marlies C Goorden, Dennis R Schaart
Proceedings of SPIE--the International Society for Optical Engineering, vol. 13405, February 2025. DOI: 10.1117/12.3045837. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12100487/pdf/

Abstract: Direct-conversion photon counting detectors (PCDs) using CdTe, CZT, or Si as the sensor material are being investigated and manufactured. Indirect-conversion, scintillator-based PCDs have historically been thought too slow for the high-flux requirements of diagnostic CT. However, recent scintillators investigated for, e.g., PET applications are very fast and inspire us to rethink this paradigm. We evaluate the potential of a LaBr₃:Ce PCD using Monte Carlo simulations, comparing a CdTe PCD and a LaBr₃:Ce PCD, each assuming a pixel density of 9 pixels/mm² and a surrounding 2D anti-scatter grid. A 1×1 mm² area was illuminated by flat-field X-rays, and the lower bound on the noise was calculated for varying contrast types and material decomposition scenarios. For conventional imaging without material decomposition, the LaBr₃:Ce PCD performed worse than CdTe because its pixels must be wrapped in reflector, which reduces geometric efficiency. For water-bone material decomposition, the two PCDs performed similarly under our assumptions on pulse shape and PCD geometry. For three-material decomposition with a K-edge imaging agent, LaBr₃:Ce reduced variance by about 35% owing to the elimination of the charge sharing present in CdTe. These results motivate further exploration of scintillator-based PCDs as an alternative to direct-conversion PCDs, especially with future K-edge imaging agents.