Proceedings of SPIE--the International Society for Optical Engineering: Latest Publications

High-Fidelity 3D Reconstruction for Accurate Anatomical Measurements in Endoscopic Sinus Surgery.
Proceedings of SPIE--the International Society for Optical Engineering. Pub Date: 2025-02-01; Epub Date: 2025-04-07; DOI: 10.1117/12.3046391
Nicole Gunderson, Pengcheng Chen, Jeremy S Ruthberg, Randall A Bly, Eric J Seibel, Waleed M Abuzeid
{"title":"High-Fidelity 3D Reconstruction for Accurate Anatomical Measurements in Endoscopic Sinus Surgery.","authors":"Nicole Gunderson, Pengcheng Chen, Jeremy S Ruthberg, Randall A Bly, Eric J Seibel, Waleed M Abuzeid","doi":"10.1117/12.3046391","DOIUrl":"10.1117/12.3046391","url":null,"abstract":"<p><p>Achieving an accurate representation of the surgical scene is essential, as it enables precise surgical navigation. Surgeons currently rely on preoperative computed tomography (CT) scans to represent the surgical scene and plan sinus procedures. However, as tissue is resected and manipulated, the anatomy represented in preoperative images becomes increasingly inaccurate and outdated. The endoscopic 3D reconstruction provides an alternative solution to this challenge, for it captures the current surgical scene. Nevertheless, achieving high reconstruction accuracy is crucial in endoscopic sinus surgery (ESS), where tissue margins lie within submillimeter distances to critical anatomy such as the orbits, cranial nerves, carotid arteries, and dura mater. To fulfill the need for a highly accurate intraoperative method of surgical scene modeling in ESS, we propose a system to generate 3D reconstructions of the sinus to garner relevant qualitative and quantitative anatomic information that substantially diverges from preoperative CT images as the surgery progresses. To achieve this, the pipeline of Neural Radiance Fields (NeRF) is expanded to include methods that simulate stereoscopic views using only a monocular endoscope to iteratively refine the depth of reconstructions. The presented workflow provides accurate depth maps, global scaling, and geometric information without camera pose-tracking tools or fiducial markers. Additional methods of point cloud denoising, outlier removal, and dropout patching have been developed and implemented to increase point cloud robustness. This expanded workflow demonstrates the ability to create high-resolution and accurate 3D reconstructions of the surgical scene. Using a series of three cadaveric specimens, measurements of critical anatomy were evaluated with average reconstruction errors for ethmoid length and height being 0.25mm and 0.52mm, respectively.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13408 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12047413/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144014059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
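The point-cloud cleanup named in this abstract (denoising, outlier removal, dropout patching) is not specified in detail; below is a minimal, generic sketch of statistical outlier removal with NumPy/SciPy, offered as an assumption about what such a step can look like rather than the authors' pipeline. Points whose mean distance to their k nearest neighbors is far above the cloud-wide average are discarded.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_statistical_outliers(points, k=20, std_ratio=2.0):
    """Drop points whose mean k-NN distance is an outlier.

    points: (N, 3) array of reconstructed surface points.
    k: number of nearest neighbors considered per point.
    std_ratio: points farther than mean + std_ratio * std are removed.
    """
    tree = cKDTree(points)
    # distances to the k nearest neighbors (column 0 is the point itself)
    dists, _ = tree.query(points, k=k + 1)
    mean_knn = dists[:, 1:].mean(axis=1)
    threshold = mean_knn.mean() + std_ratio * mean_knn.std()
    keep = mean_knn <= threshold
    return points[keep], keep

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    surface = rng.normal(size=(5000, 3))          # dense "surface" points
    outliers = rng.uniform(-10, 10, size=(50, 3))  # scattered spurious points
    cloud = np.vstack([surface, outliers])
    cleaned, mask = remove_statistical_outliers(cloud)
    print(cloud.shape, "->", cleaned.shape)
```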
Automated multi-lesion annotation in chest X-rays: annotating over 450,000 images from public datasets using the AI-based Smart Imagery Framing and Truthing (SIFT) system.
Proceedings of SPIE--the International Society for Optical Engineering. Pub Date: 2025-02-01; Epub Date: 2025-04-10; DOI: 10.1117/12.3047189
Lin Guo, Fleming Y M Lure, Teresa Wu, Fulin Cai, Stefan Jaeger, Bin Zheng, Jordan Fuhrman, Hui Li, Maryellen L Giger, Andrei Gabrielian, Alex Rosenthal, Darrell E Hurt, Ziv Yaniv, Li Xia, Weijun Fang, Jingzhe Liu
{"title":"Automated multi-lesion annotation in chest X-rays: annotating over 450,000 images from public datasets using the AI-based Smart Imagery Framing and Truthing (SIFT) system.","authors":"Lin Guo, Fleming Y M Lure, Teresa Wu, Fulin Cai, Stefan Jaeger, Bin Zheng, Jordan Fuhrman, Hui Li, Maryellen L Giger, Andrei Gabrielian, Alex Rosenthal, Darrell E Hurt, Ziv Yaniv, Li Xia, Weijun Fang, Jingzhe Liu","doi":"10.1117/12.3047189","DOIUrl":"10.1117/12.3047189","url":null,"abstract":"<p><p>This work utilized an artificial intelligence (AI)-based image annotation tool, Smart Imagery Framing and Truthing (SIFT), to annotate pulmonary lesions and abnormalities and their corresponding boundaries on 452,602 chest X-ray (CXR) images (22 different types of desired lesions) from four publicly available datasets (CheXpert Dataset, ChestX-ray14 Dataset, MIDRC Dataset, and NIAID TB Portals Dataset). SIFT is based on Multi-task, Optimal-recommendation, and Max-predictive Classification and Segmentation (MOM ClaSeg) technologies to identify and delineate 65 different abnormal regions of interest (ROI) on CXR images, provide a confidence score for each labeled ROI, and various recommendations of abnormalities for each ROI, if the confidence score is not high enough. The MOM ClaSeg System integrating Mask R-CNN and Decision Fusion Network is developed on a training dataset of over 300,000 CXRs, containing over 240,000 confirmed abnormal CXRs with over 300,000 confirmed ROIs corresponding to 65 different abnormalities and over 67,000 normal (i.e., \"no finding\") CXRs. After quality control, the CXRs are entered into the SIFT system to automatically predict the abnormality type (\"Predicted Abnormality\") and corresponding boundary locations for the ROIs displayed on each original image. The results indicated that the SIFT system can determine the abnormality types of labeled ROIs and their boundary coordinates with high efficiency (improved 7.92 times) when radiologists used SIFT as an aide compared to radiologists using a traditional semi-automatic method. The SIFT system achieves an average sensitivity of 89.38%±11.46% across four datasets. This can significantly improve the quality and quantity of training and testing sets to develop AI technologies.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13409 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12034099/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144000263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
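The abstract states that SIFT attaches a confidence score to each labeled ROI and offers alternative abnormality recommendations when the confidence is not high enough. The sketch below illustrates that triage logic only; the threshold, the top-k count, and the data fields are hypothetical and not taken from the SIFT system.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ROI:
    box: Tuple[float, float, float, float]                         # (x0, y0, x1, y1) in pixels
    scores: List[Tuple[str, float]] = field(default_factory=list)  # (abnormality, confidence)

def triage_roi(roi: ROI, accept_threshold: float = 0.8, top_k: int = 3):
    """Accept the top prediction if confident; otherwise return alternatives for review."""
    ranked = sorted(roi.scores, key=lambda s: s[1], reverse=True)
    best_label, best_conf = ranked[0]
    if best_conf >= accept_threshold:
        return {"label": best_label, "confidence": best_conf, "needs_review": False}
    return {"label": best_label,
            "confidence": best_conf,
            "needs_review": True,
            "recommendations": ranked[:top_k]}

roi = ROI(box=(120, 80, 260, 190),
          scores=[("nodule", 0.55), ("infiltration", 0.30), ("mass", 0.10)])
print(triage_roi(roi))
```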
Design of a "3.5th generation" photon counting detector CT architecture for higher spatial resolution and decreased ring artifact. “3.5代”光子计数检测器CT结构设计,提高空间分辨率,减少环形伪影。
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2025-02-01 Epub Date: 2025-04-08 DOI: 10.1117/12.3045834
Scott S Hsieh
{"title":"Design of a \"3.5th generation\" photon counting detector CT architecture for higher spatial resolution and decreased ring artifact.","authors":"Scott S Hsieh","doi":"10.1117/12.3045834","DOIUrl":"10.1117/12.3045834","url":null,"abstract":"<p><p>Fourth generation CT was originally conceived to reduce ring artifacts from inhomogeneities in early energy integrating detector (EID) modules. These inhomogeneities are well controlled in modern EID modules but have reappeared in photon counting detector (PCD) modules, where fabrication techniques are not yet mature. Fourth generation CT was abandoned decades ago because of its high cost and scatter. We propose grafting its central insight into 3rd generation CT using a compact, modified X-ray source that operates with a high-speed flying focal spot over a limited range of travel (e.g., 1 cm). The PCD must be modified so that measured data is rebinned on-the-fly, so that data bandwidth requirements across the slip ring are unchanged. In this geometry, data from each PCD pixel is distributed to a several contiguous radial indices. This reduces ring artifacts that stem from pixel inhomogeneities and also allows recovery of missing data that is due to dead pixels or occlusion by the anti-scatter grid. Finally, if the dwell time at each focal spot location is very short (sub-microsecond), the maximum instantaneous surface temperature at the anode is reduced. This could be used to reduce focal spot size while maintaining the thermal limit of the focal track.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13405 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12108132/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144164144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
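The central idea here is that readings from one physical PCD pixel, taken at different flying-focal-spot positions, are rebinned into several contiguous radial indices so that pixel-specific errors are not pinned to a single radius. The toy sketch below only illustrates that redistribution; the geometry factor, pixel pitch, and offsets are invented for illustration and do not reflect the proposed scanner design.

```python
import numpy as np

def rebin_flying_focal_spot(pixel_counts, focal_offsets_mm, pixel_pitch_mm=0.4):
    """Toy rebinning: readings from each detector pixel, acquired at different
    focal-spot offsets, are accumulated into shifted radial indices.

    pixel_counts: (n_offsets, n_pixels) counts, one row per focal-spot position.
    focal_offsets_mm: (n_offsets,) lateral focal-spot positions.
    Returns the averaged rebinned row and the number of contributions per index.
    """
    n_offsets, n_pixels = pixel_counts.shape
    max_shift = int(np.ceil(np.abs(focal_offsets_mm).max() / pixel_pitch_mm)) + 1
    n_radial = n_pixels + max_shift
    rebinned = np.zeros(n_radial)
    hits = np.zeros(n_radial)
    for i, dx in enumerate(focal_offsets_mm):
        # map the focal-spot shift to an integer radial-index shift;
        # the 0.5 scale factor is an assumed, geometry-dependent placeholder
        shift = int(round(0.5 * dx / pixel_pitch_mm))
        idx = np.arange(n_pixels) + shift
        valid = (idx >= 0) & (idx < n_radial)
        np.add.at(rebinned, idx[valid], pixel_counts[i, valid])
        np.add.at(hits, idx[valid], 1)
    return rebinned / np.maximum(hits, 1), hits

counts = np.random.default_rng(1).poisson(1000, size=(8, 64)).astype(float)
offsets = np.linspace(0.0, 8.0, 8)   # ~1 cm of focal-spot travel
row, hits = rebin_flying_focal_spot(counts, offsets)
print(row.shape, int(hits.max()))
```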
Recovery of GLRLM Features in Degraded Images using Deep Learning and Image Property Models.
Proceedings of SPIE--the International Society for Optical Engineering. Pub Date: 2025-02-01; Epub Date: 2025-04-11; DOI: 10.1117/12.3047257
Yijie Yuan, Huay Din, Grace Hyun Kim, Michael McNitt-Gray, J Webster Stayman, Grace J Gang
{"title":"Recovery of GLRLM Features in Degraded Images using Deep Learning and Image Property Models.","authors":"Yijie Yuan, Huay Din, Grace Hyun Kim, Michael McNitt-Gray, J Webster Stayman, Grace J Gang","doi":"10.1117/12.3047257","DOIUrl":"10.1117/12.3047257","url":null,"abstract":"<p><p>Radiomics models have been extensively used to predict clinical outcomes across various applications. However, their generalizability is often limited by undesirable feature values variability due to diverse imaging conditions. To address this issue, we previously developed a dual-domain deep learning approach to recover ground truth feature values in the presence of known blur and noise. The model consists of a differentiable approximation for radiomics calculation and a dual-domain loss function. We demonstrated model performance for gray-level co-occurrence matrix (GLCM) and histogram-based features. In this work, we extend the method to gray-level run length matrix (GLRLM) feature recovery. We introduce a novel algorithm for the differentiable approximation of GLRLMs. We assessed the performance of the GLRLM feature restoration network using lung CT image patches, with a focus on the accuracy of recovered feature values and classification performance between normal and COVID-positive lungs. The proposed network outperformed the baselines, achieving the lowest MSE in GLRLM feature recovery. Furthermore, a classification model based on the recovered GLRLM features reached an accuracy of 86.65%, closely aligning with the 88.85% accuracy of models using ground truth features, compared to 82.00% accuracy from degraded features. These results demonstrate the potential of our method as a robust tool for radiomics standardization.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13406 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12291091/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144735865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
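For readers unfamiliar with the feature family being recovered, the sketch below computes a plain (non-differentiable) gray-level run length matrix for horizontal runs on a small integer image, plus one derived feature; the paper's differentiable approximation is not reproduced here.

```python
import numpy as np

def glrlm_horizontal(image, n_levels):
    """Gray-level run length matrix for horizontal runs.

    image: 2D integer array with values in [0, n_levels).
    Returns an (n_levels, max_run_length) matrix where entry (g, r-1)
    counts runs of gray level g with length r.
    """
    rows, cols = image.shape
    glrlm = np.zeros((n_levels, cols), dtype=np.int64)
    for row in image:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1
            else:
                glrlm[run_val, run_len - 1] += 1
                run_val, run_len = v, 1
        glrlm[run_val, run_len - 1] += 1  # close the final run of the row
    return glrlm

img = np.array([[0, 0, 1, 1, 1],
                [2, 2, 2, 2, 0],
                [1, 0, 0, 0, 0]])
m = glrlm_horizontal(img, n_levels=3)
print(m)

# Example GLRLM-derived feature: short run emphasis
run_lengths = np.arange(1, m.shape[1] + 1)
sre = (m / (run_lengths ** 2)).sum() / m.sum()
print("short run emphasis:", sre)
```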
Joint Estimation of Anatomy and Implants in X-ray CT using a Mixed Prior Model.
Proceedings of SPIE--the International Society for Optical Engineering. Pub Date: 2025-02-01; Epub Date: 2025-04-08; DOI: 10.1117/12.3046496
Xiao Jiang, Grace J Gang, J Webster Stayman
{"title":"Joint Estimation of Anatomy and Implants in X-ray CT using a Mixed Prior Model.","authors":"Xiao Jiang, Grace J Gang, J Webster Stayman","doi":"10.1117/12.3046496","DOIUrl":"10.1117/12.3046496","url":null,"abstract":"<p><p>Medical implants are often made of dense materials and pose great challenges to accurate CT reconstruction and visualization, especially in regions close to or surrounding implants. Moreover, it is common that diagnostics involving implanted patients require distinct visualization strategies for implants and anatomy indvidually. In this work, we propose a novel approach for joint estimation of anatomy and implants as separate image volumes using a mixed prior model. This prior model leverages a learning-based diffusion prior for the anatomy image and a simple 0-norm sparsity prior for implants to decouple the two volumes. Additionally, a hybrid mono-polyenergetic forward model is employed to effectively accommodate the spectral effects of implants. The proposed reconstruction process alternates between two steps: Diffusion posterior sampling is used to update the anatomy image, and classic optimization updates to the implant image and associated spectral coefficients. Evaluation in spine imaging with metal pedicle screw implants demonstrates that the proposed algorithm can achieve accurate decompositions. Moreover, anatomy reconstruction between the two pedicle screws, an area where all competing algorithms typically fail, is successful in visualizing details. The proposed algorithm also effectively avoids streaking and beam hardening artifacts in soft tissue, achieving 15.25% higher PSNR and 24.29% higher SSIM compared to normalized metal artifacts reduction (NMAR). These results suggest that mixed prior models can help to separate spatially and spectrally distinct objects that differ from standard anatomical features in ordinary single-energy CT to not only improve image quality but to enhance visualization of the two distinct image volumes.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13405 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12306201/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144755326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
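A 0-norm sparsity prior such as the one placed on the implant volume is commonly enforced with a hard-thresholding (proximal) step. The sketch below shows that generic step in a toy 1D setting; it is an assumption about how such a prior can be applied, not the authors' actual update or forward model.

```python
import numpy as np

def prox_l0(x, weight):
    """Proximal operator of weight * ||x||_0: hard thresholding.

    Entries with x_i^2 <= 2 * weight are set to zero; larger entries are kept.
    """
    out = x.copy()
    out[x ** 2 <= 2.0 * weight] = 0.0
    return out

# Toy alternating scheme: quadratic data term ||x - y||^2 / 2 plus an L0 penalty,
# standing in for the implant-image update inside a joint reconstruction.
rng = np.random.default_rng(0)
truth = np.zeros(100)
truth[[10, 40, 75]] = [5.0, -3.0, 4.0]      # a few dense "implant" voxels
y = truth + 0.3 * rng.normal(size=100)      # noisy observation

x = y.copy()
step, penalty = 0.5, 2.0                    # proximal-gradient step and L0 weight
for _ in range(10):
    x = x - step * (x - y)                  # gradient step on the data term
    x = prox_l0(x, weight=step * penalty)   # sparsity step for the implant component
print("non-zero voxels:", np.nonzero(x)[0])
```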
Contrast-guided Virtual Monoenergetic Image Synthesis via Adversarial Learning for Coronary CT Angiography using Photon Counting Detector CT.
Proceedings of SPIE--the International Society for Optical Engineering. Pub Date: 2025-02-01; Epub Date: 2025-04-08; DOI: 10.1117/12.3047277
Shaojie Chang, Madeleine Wilson, Emily K Koons, Hao Gong, Scott S Hsieh, Lifeng Yu, Cynthia H McCollough, Shuai Leng
{"title":"Contrast-guided Virtual Monoenergetic Image Synthesis via Adversarial Learning for Coronary CT Angiography using Photon Counting Detector CT.","authors":"Shaojie Chang, Madeleine Wilson, Emily K Koons, Hao Gong, Scott S Hsieh, Lifeng Yu, Cynthia H McCollough, Shuai Leng","doi":"10.1117/12.3047277","DOIUrl":"https://doi.org/10.1117/12.3047277","url":null,"abstract":"<p><p>Coronary CT angiography (cCTA) is a non-invasive diagnostic test for coronary artery disease (CAD) that often faces challenges with dense calcifications and stents due to blooming artifacts, leading to stenosis overestimation. Virtual monoenergetic images (VMIs) from photon counting detector CT (PCD-CT) provide distinct clinical benefits. Lower keV VMIs enhance iodine and bone contrasts but struggle with blooming artifacts, while higher keV VMIs effectively reduce beam hardening, blooming, and metal artifacts but diminish contrast, presenting a trade-off among different keV levels. To address this, we introduce a contrast-guided virtual monoenergetic image synthesis framework (CITRINE) utilizing adversarial learning to synthesize images by integrating beneficial spectral characteristics from various keV levels. In this study, CITRINE is trained and validated with cardiac PCD-CT images using 100 keV and 70 keV VMIs as examples, showcasing its ability to synthesize images that combine the reduced blooming artifacts of 100 keV VMIs with the high contrast-to-noise features of 70 keV VMIs. CITRINE's performance was evaluated on three patient cCTA cases quantitatively and qualitatively in terms of image quality and assessments of percent diameter luminal stenosis. The synthesized images showed reduced blooming artifacts, similar to those observed at 100 keV VMI, and exhibited high iodine contrast in the coronary lumen, comparable to that of 70 keV VMI. Notably, compared to the original 70 keV VMI, CITRINE images achieved approximately 25% reduction in percent diameter stenosis while maintaining consistent contrast levels. These results confirm CITRINE's effectiveness in improving diagnostic accuracy and efficiency in cCTA by leveraging the full potential of multi-energy and PCD-CT technologies.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13405 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12076251/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144082620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
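Percent diameter stenosis, the endpoint quoted above, is conventionally computed from the minimal lumen diameter and a reference lumen diameter; the sketch below shows that calculation, with the diameter values being hypothetical and chosen only for illustration.

```python
def percent_diameter_stenosis(minimal_lumen_diameter_mm, reference_diameter_mm):
    """Percent diameter stenosis = (1 - MLD / RD) * 100."""
    if reference_diameter_mm <= 0:
        raise ValueError("reference diameter must be positive")
    return (1.0 - minimal_lumen_diameter_mm / reference_diameter_mm) * 100.0

# Hypothetical readings of the same lesion on two image series
print(percent_diameter_stenosis(1.2, 3.0))   # 60.0 %
print(percent_diameter_stenosis(1.65, 3.0))  # 45.0 %
```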
The Role of Harmonization: A Systematic Analysis of Various Task-based Scenarios.
Proceedings of SPIE--the International Society for Optical Engineering. Pub Date: 2025-02-01; Epub Date: 2025-04-08; DOI: 10.1117/12.3047096
Shao-Jun Xia, Liesbeth Vancoillie, Saman Sotoudeh-Paima, Mojtaba Zarei, Fong Chi Ho, Fakrul Islam Tushar, Xiaoyang Chen, Lavsen Dahal, Kyle J Lafata, Ehsan Abadi, Joseph Y Lo, Ehsan Samei
{"title":"The Role of Harmonization: A Systematic Analysis of Various Task-based Scenarios.","authors":"Shao-Jun Xia, Liesbeth Vancoillie, Saman Sotoudeh-Paima, Mojtaba Zarei, Fong Chi Ho, Fakrul Islam Tushar, Xiaoyang Chen, Lavsen Dahal, Kyle J Lafata, Ehsan Abadi, Joseph Y Lo, Ehsan Samei","doi":"10.1117/12.3047096","DOIUrl":"https://doi.org/10.1117/12.3047096","url":null,"abstract":"<p><p>In medical imaging, harmonization plays a crucial role in reducing variability arising from diverse imaging devices and protocols. Patient images obtained under different computed tomography (CT) scan conditions may show varying performance when analyzed using an artificial intelligence model or quantitative assessment. This necessitates the need for harmonization. Virtual imaging trial (VIT) through digital simulation can be used to develop and assess the effectiveness of harmonization models to minimize data variability. The purpose of this study was to assess the utility of a VIT platform for harmonization across a range of lung imaging scenarios. To ensure consistent and reliable analysis across different virtual imaging datasets, we conducted a multi-objective assessment encompassing three typical task-based scenarios: lung structure segmentation, chronic obstructive pulmonary disease (COPD) quantification, and lung nodule quantification. A physics-informed deep neural network was applied as the unified harmonization model for all three tasks. Evaluation results before and after harmonization reveal three findings: 1) modestly improved Dice scores and reduced Hausdorff Distances at 95th Percentile in lung structure segmentation; 2) decreased variation in biomarkers and radiomics features in COPD quantification; and 3) increased number of radiomics features with high intraclass correlation coefficient in lung nodule quantification. The results demonstrate the significant potential of harmonization across various task-based scenarios and provide a benchmark for the design of efficient harmonizers.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13405 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12035823/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144059445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
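Two of the segmentation metrics named above, the Dice score and the 95th-percentile Hausdorff distance, can be computed for binary masks as sketched below; this is a generic NumPy/SciPy implementation (pooled-distance convention for HD95), not the study's evaluation code.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_score(a, b):
    """Dice overlap between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def boundary(mask):
    """Voxels of the mask that touch the background."""
    return mask & ~binary_erosion(mask)

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile Hausdorff distance between the two mask boundaries
    (pooled-distance convention; conventions vary slightly in the literature)."""
    sa, sb = boundary(a.astype(bool)), boundary(b.astype(bool))
    dist_to_sb = distance_transform_edt(~sb, sampling=spacing)  # distance to B's boundary
    dist_to_sa = distance_transform_edt(~sa, sampling=spacing)  # distance to A's boundary
    return np.percentile(np.concatenate([dist_to_sb[sa], dist_to_sa[sb]]), 95)

gt = np.zeros((32, 32, 32), bool); gt[8:24, 8:24, 8:24] = True
pred = np.zeros_like(gt);          pred[9:25, 8:24, 8:24] = True
print("Dice:", dice_score(gt, pred), "HD95 (mm):", hd95(gt, pred))
```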
Quantitative Accuracy of CT Protocols for Cross-sectional and Longitudinal Assessment of COPD: A Virtual Imaging Study.
Proceedings of SPIE--the International Society for Optical Engineering. Pub Date: 2025-02-01; Epub Date: 2025-04-08; DOI: 10.1117/12.3046945
Mridul Bhattarai, Daniel W Shin, Fong Chi Ho, Saman Sotoudeh-Paima, Ilmar Hein, Steven Ross, Naruomi Akino, Kirsten L Boedeker, Ehsan Samei, Ehsan Abadi
{"title":"Quantitative Accuracy of CT Protocols for Cross-sectional and Longitudinal Assessment of COPD: A Virtual Imaging Study.","authors":"Mridul Bhattarai, Daniel W Shin, Fong Chi Ho, Saman Sotoudeh-Paima, Ilmar Hein, Steven Ross, Naruomi Akino, Kirsten L Boedeker, Ehsan Samei, Ehsan Abadi","doi":"10.1117/12.3046945","DOIUrl":"https://doi.org/10.1117/12.3046945","url":null,"abstract":"<p><p>Chronic obstructive pulmonary disease (COPD), encompassing chronic bronchitis and emphysema, requires precise quantification through CT imaging to accurately assess disease severity and progression. However, inconsistencies in imaging protocols often lead to unreliable measurements. This study aims to optimize CT acquisition and reconstruction protocols for cross-sectional and longitudinal CT measurements of COPD using a virtual (<i>in-silico</i>) imaging framework. We developed human models at various stages of emphysema and bronchitis, informed by the COPDGene cohort. The specifications of a clinical CT scanner (Aquilion ONE Prism, Canon Medical Systems) were integrated into a CT simulator. This simulation framework was validated against experimental data. The analysis focused on the impact of tube current and kernel sharpness on two COPD biomarkers: LAA-950 (percentage of lung voxels with attenuation less than -950 HU) and Pi10 (the square root of the wall area around an airway with an internal perimeter of 10 mm) and mean absolute error (MAE; a voxel-wise error metric for emphysema density measurements). The increase in dose level showed minimal impact on the Pi10 measurements, but affected the LAA-950, with a reduction in variability observed at higher dose levels. Increasing kernel sharpness introduced variability in the LAA-950 and Pi10 measurements and higher MAE with sharper kernels. Longitudinal analysis demonstrated that kernel sharpness contributed more to variability in the COPD biomarker measurements over time compared to dose level. Similarly, cross-sectional assessments showed that an increase in MAE, while a decrease in Pi10 measurement error with sharper kernels. The study underlines the need for standardized task-specific imaging protocols to enhance the reliability and accuracy of COPD assessments, thus improving diagnostic precision and patient assessments.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13405 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12035825/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144043551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
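Both biomarkers are defined explicitly in the abstract, so they can be computed directly from a lung-masked HU volume and from per-airway wall measurements; the sketch below follows those definitions (LAA-950 as a voxel fraction, Pi10 via the usual square-root-wall-area regression), with the input arrays being placeholders.

```python
import numpy as np

def laa950(hu_volume, lung_mask):
    """Percentage of lung voxels with attenuation below -950 HU."""
    lung_hu = hu_volume[lung_mask.astype(bool)]
    return 100.0 * np.mean(lung_hu < -950)

def pi10(wall_areas_mm2, internal_perimeters_mm):
    """Pi10: regress sqrt(wall area) on internal perimeter across measured
    airways, then evaluate the fit at a perimeter of 10 mm."""
    sqrt_wa = np.sqrt(np.asarray(wall_areas_mm2, dtype=float))
    peri = np.asarray(internal_perimeters_mm, dtype=float)
    slope, intercept = np.polyfit(peri, sqrt_wa, 1)
    return slope * 10.0 + intercept

# Placeholder data for illustration only
rng = np.random.default_rng(0)
hu = rng.normal(-870, 60, size=(64, 64, 64))   # synthetic "lung" HU values
mask = np.ones_like(hu, dtype=bool)
print("LAA-950 (%):", laa950(hu, mask))

perims = np.array([6.0, 8.0, 10.0, 12.0, 15.0])   # internal perimeters (mm)
walls = np.array([8.0, 11.0, 14.5, 18.0, 23.0])   # wall areas (mm^2)
print("Pi10 (sqrt mm^2):", pi10(walls, perims))
```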
Exploring bias in spectral CT material decomposition: a simulation-based approach.
Proceedings of SPIE--the International Society for Optical Engineering. Pub Date: 2025-02-01; Epub Date: 2025-04-08; DOI: 10.1117/12.3047261
Milan Smulders, Dufan Wu, Rajiv Gupta
{"title":"Exploring bias in spectral CT material decomposition: a simulation-based approach.","authors":"Milan Smulders, Dufan Wu, Rajiv Gupta","doi":"10.1117/12.3047261","DOIUrl":"https://doi.org/10.1117/12.3047261","url":null,"abstract":"<p><strong>Introduction -: </strong>Computed tomography (CT) imaging has seen significant advancements with the introduction of spectral CT, which improves material differentiation by acquiring images at multiple energy levels. Photon-counting CT (PCCT) is an emerging technique to implement spectral CT with photon counting detectors that may discriminate detected photon energies to different energy bins. Material differentiation is achieved by decomposing the acquired data into two-material models such as brain/bone or brain/iodine. However, such decomposition is susceptible to bias due to inaccurate physical modeling. In this study, we aim to study the relationship between the material decomposition bias and the energy thresholds used in PCCT, under ideal, noiseless models.</p><p><strong>Methods -: </strong>A projection-based material decomposition model was used to directly decompose projection data. Bias simulation was performed using a Shepp-Logan phantom with brain/bone and brain/iodine as basis materials. X-ray spectra were generated using a fixed 10 keV threshold and a varying threshold sampled from 20 to 90 keV, with extra sampling points around iodine's k-edge. Virtual monoenergetic images (VMIs) at 60 keV and 140 keV were analyzed to evaluate bias for each material and material pair.</p><p><strong>Results -: </strong>Lower energy thresholds (<40 keV) introduced a larger bias in material decomposition, with peaks observed between 30 and 40 keV, particularly around the k-edge of iodine. The bias generally decreased with increasing thresholds above 50 keV, especially for non-basis materials. This trend was consistent across brain/bone and brain/iodine bases and for both 60 and 140 keV VMIs.</p><p><strong>Conclusion -: </strong>Energy thresholds significantly affect the accuracy of projection-based material decomposition in PCCT. Greater differences between thresholds lead to reduced decomposition bias. Future research should incorporate non-ideal detector responses and noise, as well as explore image-domain decomposition and real phantom studies with possible translation to improve patient care.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13405 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12060251/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144044107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
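Given basis-material line integrals from the decomposition, a virtual monoenergetic projection is their linear combination weighted by each basis material's attenuation at the chosen energy. The sketch below shows only that synthesis step; the attenuation coefficients and basis integrals are made up for illustration, and the nonlinear projection-domain decomposition itself is not shown.

```python
import numpy as np

def vmi_projection(basis_line_integrals, mu_at_energy):
    """Synthesize a monoenergetic projection from basis-material line integrals.

    basis_line_integrals: dict name -> array of line integrals (g/cm^2).
    mu_at_energy: dict name -> mass attenuation coefficient (cm^2/g) at the
                  chosen VMI energy.
    Returns the monoenergetic attenuation line integral (unitless).
    """
    return sum(mu_at_energy[name] * basis_line_integrals[name]
               for name in basis_line_integrals)

# Hypothetical numbers for a brain/iodine basis at 60 keV (illustrative only)
A = {"brain": np.array([10.0, 12.0, 9.5]),    # g/cm^2 along three rays
     "iodine": np.array([0.00, 0.02, 0.05])}
mu60 = {"brain": 0.20, "iodine": 5.0}          # cm^2/g, assumed values
print(vmi_projection(A, mu60))
```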
PRISM Lite: A lightweight model for interactive 3D placenta segmentation in ultrasound.
Proceedings of SPIE--the International Society for Optical Engineering. Pub Date: 2025-02-01; Epub Date: 2025-04-11; DOI: 10.1117/12.3047410
Hao Li, Baris Oguz, Gabriel Arenas, Xing Yao, Jiacheng Wang, Alison Pouch, Brett Byram, Nadav Schwartz, Ipek Oguz
{"title":"PRISM Lite: A lightweight model for interactive 3D placenta segmentation in ultrasound.","authors":"Hao Li, Baris Oguz, Gabriel Arenas, Xing Yao, Jiacheng Wang, Alison Pouch, Brett Byram, Nadav Schwartz, Ipek Oguz","doi":"10.1117/12.3047410","DOIUrl":"10.1117/12.3047410","url":null,"abstract":"<p><p>Placenta volume measured from 3D ultrasound (3DUS) images is an important tool for tracking the growth trajectory and is associated with pregnancy outcomes. Manual segmentation is the gold standard, but it is time-consuming and subjective. Although fully automated deep learning algorithms perform well, they do not always yield high-quality results for each case. Interactive segmentation models could address this issue. However, there is limited work on interactive segmentation models for the placenta. Despite their segmentation accuracy, these methods may not be feasible for clinical use as they require relatively large computational power which may be especially prohibitive in low-resource environments, or on mobile devices. In this paper, we propose a lightweight interactive segmentation model aiming for clinical use to interactively segment the placenta from 3DUS images in real-time. The proposed model adopts the segmentation from our fully automated model for initialization and is designed in a human-in-the-loop manner to achieve iterative improvements. The Dice score and normalized surface Dice are used as evaluation metrics. The results show that our model can achieve superior performance in segmentation compared to state-of-the-art models while using significantly fewer parameters. Additionally, the proposed model is much faster for inference and robust to poor initial masks. The code is available at https://github.com/MedICL-VU/PRISM-placenta.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13406 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12128914/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144217752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
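Both reported metrics compare segmentation boundaries; the sketch below gives a generic, voxel-based version of the normalized surface Dice (the fraction of the two boundaries lying within a distance tolerance of each other), which approximates rather than reproduces the repository's evaluation code.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface_dice(pred, gt, tolerance_mm=1.0, spacing=(1.0, 1.0, 1.0)):
    """Voxel-based normalized surface Dice between two boolean masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    sp = pred & ~binary_erosion(pred)   # boundary voxels of the prediction
    sg = gt & ~binary_erosion(gt)       # boundary voxels of the ground truth
    dist_to_sg = distance_transform_edt(~sg, sampling=spacing)
    dist_to_sp = distance_transform_edt(~sp, sampling=spacing)
    close_pred = (dist_to_sg[sp] <= tolerance_mm).sum()  # prediction boundary near GT
    close_gt = (dist_to_sp[sg] <= tolerance_mm).sum()    # GT boundary near prediction
    return (close_pred + close_gt) / (sp.sum() + sg.sum())

gt = np.zeros((48, 48, 48), bool); gt[12:36, 12:36, 12:36] = True
pred = np.zeros_like(gt);          pred[13:37, 12:36, 12:36] = True
print(surface_dice(pred, gt, tolerance_mm=1.0, spacing=(0.5, 0.5, 0.5)))
```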