{"title":"System design options for a high-resolution, conventional scintillator detector.","authors":"Scott S Hsieh","doi":"10.1117/12.3085929","DOIUrl":"https://doi.org/10.1117/12.3085929","url":null,"abstract":"<p><p>It is commonly believed that energy integrating detector (EID) CT has limited spatial resolution because of reflective septa placed around each pixel. Reducing the pixel size would increase the proportion of detector area lost. We point out that this can be avoided if the detector is tilted, because then rays would penetrate reflective septa and reach buried scintillator. But how should this be accomplished? We consider three options for system design: (1) with the false focal spot geometry, the detector is focused at a \"false\" point about 5 cm from the true focal spot. The anti-scatter grid (ASG) must still point towards the true focal spot. (2) With tilted detector modules, each detector tile is tilted by a few degrees. Slight discontinuities are created at boundaries between tiles, which could cause artifacts if not appropriately handled. The anti-scatter grid would have to float above some pixels. (3) With tilted cuts, the scintillator is structured with angled cuts rather than vertical cuts before the cuts are infilled with reflective septa. This minimizes changes to reconstruction and the ASG. All three options would improve fill factor. Using ray tracing simulations and assuming a reflective septa width of 0.1 mm, we estimate that the effective fill factor is increased from 81% for a conventional, 1 mm pitch detector to 91% with a high-resolution, 0.5 mm pitch detector that is tilted. 
These concepts open up a pathway for high-resolution CT without requiring photon counting technology.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13924 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13095171/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147791584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
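The untilted fill-factor numbers quoted in the abstract follow from simple geometry: a square pixel of a given pitch loses a septa-wide border on each side. A minimal sketch of that arithmetic (the 91% tilted figure comes from the paper's ray tracing, not from this formula; the function name is ours):

```python
def fill_factor(pitch_mm: float, septa_mm: float) -> float:
    """Geometric fill factor of a square scintillator pixel whose
    borders are occupied by reflective septa of the given width."""
    active = pitch_mm - septa_mm
    return (active / pitch_mm) ** 2

# Conventional 1 mm pitch with 0.1 mm septa: the abstract's 81% figure
print(round(fill_factor(1.0, 0.1), 2))  # 0.81
# An untilted 0.5 mm pitch detector would lose far more area,
# which is what the tilted designs are meant to recover.
print(round(fill_factor(0.5, 0.1), 2))  # 0.64
```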
{"title":"Using a Physics-Based Approach to Standardize Radiomics Values: Experimental Validation in an Anthropomorphic Phantom on a Clinical CT Scanner Using a Range of Dose Levels and Reconstruction Kernels.","authors":"Huay Din, Yijie Yuan, Grace Hyun Kim, Michael McNitt-Gray, J Webster Stayman, Grace J Gang","doi":"10.1117/12.3087679","DOIUrl":"https://doi.org/10.1117/12.3087679","url":null,"abstract":"<p><p>Radiomics relies on quantitative features to discern the underlying biological signatures. However, feature dependence on the imaging systems themselves hampers the creation of reproducible and generalizable models. We have previously proposed a novel framework to remove the effects of system blur and image noise on radiomic calculations and performed validation in simulation studies. In this work, we extended the analysis and evaluated the method on CT data acquired of an anthropomorphic phantom with realistic lung textures. Data was acquired at five different dose levels and reconstructed using eight different reconstruction kernels. To test the generalizability of the method, we applied our proposed method to standardize from all possible starting and reference kernel pairs under all measured dose levels for a total of 320 cases (8×8×5). Standardization was performed for radiomics features from four classes, histogram, GLCM, gray-level run length matrix-(GLRLM), and wavelet transforms. Results indicate that standardized radiomics features are closer to the reference and on average, the average absolute percentage difference from reference over all features is improved by a factor of three compared with unstandardized features. In addition, we found that standardization from a smoother to a sharper kernel is a more challenging task and that performance is comparable across all dose levels. 
This work shows that the proposed standardization method is effective in standardizing radiomics feature values across a wide range of imaging conditions in clinical CT.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13924 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13095156/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147791599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
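The 320-case count above is the full cross product of starting kernel, reference kernel, and dose level. A sketch of how that grid enumerates (kernel labels and dose values here are placeholders, not the study's actual settings):

```python
from itertools import product

kernels = [f"K{i}" for i in range(1, 9)]  # 8 reconstruction kernels (hypothetical labels)
doses = [25, 50, 100, 200, 400]           # 5 dose levels (hypothetical values)

# Every (starting kernel, reference kernel) pair, including identity
# pairs, evaluated at every dose level: 8 x 8 x 5 = 320 cases.
cases = list(product(kernels, kernels, doses))
print(len(cases))  # 320
```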
{"title":"Random forest-based out-of-distribution detection for robust lung cancer segmentation.","authors":"Aneesh Rangnekar, Harini Veeraraghavan","doi":"10.1117/12.3088144","DOIUrl":"10.1117/12.3088144","url":null,"abstract":"<p><p>Accurate detection and segmentation of cancerous lesions from computed tomography (CT) scans is essential for automated treatment planning and cancer treatment response assessment. Transformer-based models with self-supervised pretraining can produce reliably accurate segmentation from in-distribution (ID) data but degrade when applied to out-of-distribution (OOD) datasets. We address this challenge with RF-Deep, a random forest classifier that utilizes deep features from a pretrained transformer encoder of the segmentation model to detect OOD scans and enhance segmentation reliability. The segmentation model comprises a Swin Transformer encoder, pretrained with masked image modeling (SimMIM) on 10,432 unlabeled 3D CT scans covering cancerous and non-cancerous conditions, with a convolution decoder, trained to segment lung cancers in 317 3D scans. Independent testing was performed on 603 3D CT public datasets that included one ID dataset and four OOD datasets comprising chest CTs with pulmonary embolism (PE) and COVID-19, and abdominal CTs with kidney cancers and healthy volunteers. RF-Deep detected OOD cases with a FPR95 of 18.26%, 27.66%, and < 1.0% on PE, COVID-19, and abdominal CTs, consistently outperforming established OOD approaches. 
The RF-Deep classifier provides a simple and effective approach to enhance reliability of cancer segmentation in ID and OOD scenarios.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13926 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13055919/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147640720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
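FPR95, the metric reported above, is the false-positive rate on OOD samples when the acceptance threshold is set so that 95% of in-distribution samples are accepted. A minimal sketch with synthetic scores (the score distributions are invented for illustration; higher score means more in-distribution):

```python
import numpy as np

def fpr95(id_scores, ood_scores):
    """FPR at 95% TPR: accept the top 95% of in-distribution scores,
    then measure the fraction of OOD scores that still pass."""
    thresh = np.percentile(id_scores, 5)  # 5th percentile keeps 95% of ID
    return float(np.mean(np.asarray(ood_scores) >= thresh))

rng = np.random.default_rng(0)
id_scores = rng.normal(2.0, 1.0, 1000)    # toy in-distribution scores
ood_scores = rng.normal(-2.0, 1.0, 1000)  # toy well-separated OOD scores
print(fpr95(id_scores, ood_scores) < 0.05)  # easy separation -> low FPR95
```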
{"title":"Topological data analysis visualization for interpretable assessment of AI contouring quality.","authors":"Chloe M S Choi, Aneesh Rangnekar, Jue Jiang, Harini Veeraraghavan","doi":"10.1117/12.3087776","DOIUrl":"https://doi.org/10.1117/12.3087776","url":null,"abstract":"<p><p>Advances in artificial intelligence have increased the availability of auto-segmentation tools. However, conventional accuracy metrics cannot capture regional segmentation differences between AI models or with respect to reference segmentations, necessary to interpret contouring variations. To address this, we developed a novel distance metric based on topological data analysis (TDA) to evaluate 3D point cloud representations of segmentations applied to six organs-at-risk (OARs) and lung gross tumor volume (GTV). A total of 34 CTs and 54 CBCTs were analyzed to compare a bespoke AI segmentation method with reference clinical contours. TDA involved: (1) converting segmentations into 3D point clouds, (2) clustering them into regions via K-means with fixed seeds and cluster numbers determined by the Elbow method, (3) constructing directed graphs for AI and reference clusters using centroids as nodes, and (4) computing distances using unbalanced optimal mass transport. Dice similarity coefficient (DSC) and 95th percentile Hausdorff distance (HD95) were also calculated. TDA successfully identified local regions of high deviation in both OARs and GTVs of varying shapes. It correlated positively with HD95 and negatively with DSC based on Pearson's correlation coefficient. Computation was efficient, averaging 1.72 s, and TDA effectively highlighted regions of greatest mismatch, providing quantitative visualization of poor concordance. In conclusion, we developed a new TDA metric for comparing auto-segmentation of GTV and OARs. 
Importantly, it allows visualization of mismatching regions thus potentially allowing faster contour editing and evaluation.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13929 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13076017/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147693951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
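The first two pipeline stages (mask to point cloud, fixed-seed K-means into regions) can be sketched compactly. This toy version substitutes a simple symmetric nearest-centroid distance for the paper's unbalanced optimal mass transport on centroid graphs, so it only illustrates the regional-comparison idea; all function names are ours:

```python
import numpy as np

def mask_to_points(mask):
    """Binary 3D segmentation mask -> (N, 3) point cloud of voxel coords."""
    return np.argwhere(mask > 0).astype(float)

def kmeans(points, k, iters=25, seed=0):
    """Minimal k-means with a fixed seed, returning cluster centroids."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # skip empty clusters
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids

def regional_mismatch(mask_a, mask_b, k=4):
    """Toy regional score: symmetric mean nearest-centroid distance between
    the cluster centroids of two segmentations (a crude stand-in for
    unbalanced optimal mass transport between centroid graphs)."""
    ca = kmeans(mask_to_points(mask_a), k)
    cb = kmeans(mask_to_points(mask_b), k)
    d = np.linalg.norm(ca[:, None, :] - cb[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

mask = np.zeros((16, 16, 16), dtype=int)
mask[4:12, 4:12, 4:12] = 1
print(regional_mismatch(mask, mask))  # identical masks -> 0.0
```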
{"title":"Forward and Back Projectors for Gradient-based Rigid Motion Estimation in X-ray Imaging.","authors":"Xiao Jiang, Wojciech B Zbijewski, J Webster Stayman","doi":"10.1117/12.3085679","DOIUrl":"10.1117/12.3085679","url":null,"abstract":"<p><p>Accurate rigid motion estimation plays a crucial role in numerous X-ray imaging tasks, including 2D/3D registration, motion-compensated reconstruction, and geometric calibration. Despite its importance, gradient-based optimization for these tasks has been constrained by the absence of efficient and generalizable projectors that are differentiable with respect to motion. In this work, we introduce a framework for differentiable forward- and back-projectors that enables scalable, accurate, and memory-efficient gradient computation. Unlike prior approaches that depend on auto-differentiation or are limited to specific projector algorithms, our method derives a general analytical gradient formulation for both forward and backprojection in the continuous domain. The key insight is that the motion gradients of these operations can be expressed directly in terms of the original projection operators themselves, yielding a unified gradient computation scheme applicable across diverse projector types. Building on this analytical foundation, we implement a discretized version equipped with an acceleration strategy that effectively balances computational efficiency and memory consumption. 
Experimental evaluations demonstrate the capability of the proposed approach: in 2D/3D registration, our method achieves approximately 8× speedup over an existing differentiable forward projector with comparable accuracy, and in motion-compensated analytical reconstruction, it enhances image sharpness and structural fidelity on physical phantom data while offering substantial efficiency gains over existing gradient-based methods.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13925 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13089803/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147724741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of Fluence Reduction versus Sparsity for Diffusion Posterior Sampling Reconstruction in Low-Dose CT.","authors":"Zimo Liu, Xin Wang, Xiao Jiang, Altea Lorenzon, Grace J Gang, J Webster Stayman","doi":"10.1117/12.3087836","DOIUrl":"https://doi.org/10.1117/12.3087836","url":null,"abstract":"<p><p>Low-dose computed tomography (CT) remains a popular research topic with the advent of an increasing number of algorithmic solutions to control noise. One such approach that enforces data consistency through a model-based data likelihood term but that also includes a deep learning generative prior is Diffusion Posterior Sampling (DPS). This technique is formulated within a probabilistic framework and is capable of generating high-quality reconstructions under noisy and/or undersampled conditions. However, one major unanswered question is, given the opportunity to design a low-dose protocol, how should low dose be achieved - through sparse sampling or reduced fluence per projection. In this work, we conducted a simulation study and systematically investigated the impact of acquisition parameters - the number of views <math><mo>(</mo> <msub><mrow><mi>n</mi></mrow> <mrow><mtext>view</mtext></mrow> </msub> <mo>)</mo></math> and incident photons per view <math><mo>(</mo> <msub><mrow><mi>I</mi></mrow> <mrow><mn>0</mn></mrow> </msub> <mo>)</mo></math> - on DPS-based CT reconstruction. We performed a 2D sweep over different combinations of the number of views <math><mo>(</mo> <msub><mrow><mi>n</mi></mrow> <mrow><mtext>view</mtext></mrow> </msub> <mo>)</mo></math> and incident photons per view <math><mo>(</mo> <msub><mrow><mi>I</mi></mrow> <mrow><mn>0</mn></mrow> </msub> <mo>)</mo></math> and compared reconstructions with an equivalent total incident photons (TIP). Reconstruction quality was evaluated in terms of PSNR (Peak Signal-to-Noise Ratio), bias, and posterior sample variability. 
We found that the number of views had a strong influence on image quality and that most performance curves showed a transition where too few views had a large negative impact on performance. We observed that there is an advantage to be gained by jointly optimizing both the fluence per view and the number of views, with a trend of an increasing number of views required for a higher total incident fluence. These findings provide a strategy for optimizing CT acquisition protocols that adapt both fluence per view and sparsity to optimally maintain image quality at reduced radiation doses.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13924 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13095152/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147791561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
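The equal-TIP comparison above means every protocol in the sweep delivers the same photon budget while trading views against fluence per view. A sketch of such a design grid (the budget and view counts are illustrative values, not the study's settings):

```python
# Hold total incident photons (TIP) fixed while trading the number of
# views against fluence per view, as in an equal-dose 2D sweep.
TIP = 1e7  # assumed photon budget for illustration

n_views = [72, 120, 240, 360, 720]
protocols = [(n, TIP / n) for n in n_views]  # (n_view, I0 per view)

for n, i0 in protocols:
    # Every protocol delivers the same total photon budget
    assert abs(n * i0 - TIP) < 1e-3
print(len(protocols))  # 5
```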
{"title":"Design of Multiple Focal Spots for High-Resolution CT.","authors":"J Webster Stayman, Yue Fan, Xiao Jiang, Grace J Gang","doi":"10.1117/12.3087742","DOIUrl":"10.1117/12.3087742","url":null,"abstract":"<p><p>High spatial resolution in computed tomography (CT) is increasingly limited by the x-ray source. While x-ray focal spots can be made very small, such source tend to have very limited fluence. Although longer integration times can be used to deliver the required fluence, this can result in long scan times and resolution losses due to motion. Recent work has suggested that multiple structured x-ray sources may be able to help deliver both the required fluence and the fine resolution details in projection data to enable better high-resolution images. In this work, we develop an analytic predictor of task-based performance for a multi-source CT system with structured focal spots. We adopt an observer model and design a dual source system for high resolution tasks and evaluate that system compared to a traditional small monolithic focal spot system. We demonstrate that the design process can be used to enhance high resolution performance showing the potential efficacy of the analysis and design process.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13924 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13089855/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147724720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prostate Cancer Classification Using Quantum Machine Learning on Multi-parametric MRI.","authors":"Peng Chen, Mojtaba Safari, Rowan Barker-Clarke, Xiaofeng Yang, Jacob G Scott","doi":"10.1117/12.3085665","DOIUrl":"https://doi.org/10.1117/12.3085665","url":null,"abstract":"<p><p>Prostate cancer is one of the most common malignancies in men, and accurate classification of lesions into clinically significant or insignificant categories is essential for patient management. Multiparametric MRI, including T2-weighted (T2W) and apparent diffusion coefficient (ADC) imaging, enables extraction of quantitative radiomic features that can be exploited by machine learning for improved diagnosis. While classical machine-learning models such as support vector machines (SVM), random forests (RF), and extreme gradient boosting (XGBoost) have shown strong performance in radiomics-based classification, quantum machine learning offers a new paradigm that leverages quantum feature spaces, potentially uncovering complex patterns inaccessible to classical kernels. In this study, we systematically compared three classical classifiers (SVM-RBF, RF, and XGBoost) with three quantum support vector machine (QSVM) variants: amplitude encoding, angle encoding, and angle encoding with a projected quantum kernel, for classifying 299 prostate lesions from the PROSTATEx Challenge dataset. Radiomics features were extracted from T2W and ADC images. A nested stratified cross-validation pipeline was employed, with feature selection performed in each outer fold and hyperparameters optimized via grid search. QSVM-amplitude encoding achieved the highest mean AUC (0.799 ± 0.082), outperforming SVM-RBF (0.608 ± 0.244) and matching or exceeding RF (0.728 ± 0.083) and XGBoost (0.720 ± 0.065), while offering higher sensitivity at comparable specificity. 
These findings demonstrate that qubit-efficient QSVMs can deliver competitive or superior performance in small-sample, low-dimensional clinical imaging settings, highlighting their potential for prostate cancer lesion classification.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13930 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13110830/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147791501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
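The leakage-safe evaluation structure described above (feature selection re-fit in each outer fold, hyperparameters tuned by inner grid search) can be sketched with a classical SVM baseline. The data here are synthetic stand-ins for the radiomics features, and only the classical pipeline is shown; a quantum kernel would enter as a precomputed kernel matrix rather than the RBF kernel used below:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import StratifiedKFold, GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Stand-in data: the real study extracted radiomics features for 299 lesions.
X, y = make_classification(n_samples=299, n_features=50, n_informative=8,
                           random_state=0)

# Feature selection lives inside the pipeline so it is re-fit on each
# outer-fold training set, avoiding information leakage into the test fold.
pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),
    ("svm", SVC(kernel="rbf")),
])
inner = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10]}, cv=StratifiedKFold(3))
outer = StratifiedKFold(5, shuffle=True, random_state=0)
scores = cross_val_score(inner, X, y, cv=outer, scoring="roc_auc")
print(round(scores.mean(), 2))
```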
{"title":"Assessing Open-world Foundation Models for Zero-shot Skin Segmentation in Clinical Dermatological Photographs.","authors":"Yihao Liu, Andrew J McNeil, Bohan Jiang, Michael E Kim, Inga Saknite, Aaron T Zhao, Placide Mbala-Kingebeni, Olivier Tshiani Mbaya, Tyra Silaphet, Rachel Weiss, Lori E Dodd, Edward W Cowen, Steven Z Pavletic, Veronique Nussenblatt, Benoit M Dawant, Bennett A Landman, Eric R Tkaczyk","doi":"10.1117/12.3085924","DOIUrl":"https://doi.org/10.1117/12.3085924","url":null,"abstract":"<p><p>Skin segmentation from clinical photography is a crucial step in dermatological image analysis. However, the variability in skin tones, lighting conditions, anatomical regions, and the presence of additional objects introduces significant challenges. Due to these complexities, the segmentation process is often performed manually, as developing an algorithm capable of handling such diverse conditions is particularly difficult. Recently, open-world foundation models have emerged, offering the potential to generalize across diverse and unseen conditions. These models present a promising opportunity for dermatology. In this work, we adopt two such models-Grounding DINO and SAM 2-to construct a pipeline for <i>zero-shot</i> skin segmentation in dermatology. We evaluated our approach on two clinical skin photography datasets comprising 27,378 images. Based on a manual rating protocol, 77.1% of the segmentations were deemed acceptable, demonstrating robustness in handling real-world clinical photographs. 
Our results highlight the potential of open-world foundation models to address a challenging problem in dermatology with minimal human involvement.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13929 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13105278/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147791607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Pilot Study of Multimodal Dosiomics and Longitudinal Delta-Radiomics for Predicting Radiation-Induced Xerostomia in Head-and-Neck Cancer.","authors":"Peng Chen, Kailin Yang, Mojtaba Safari, Junbo Peng, Bingqi Guo, Peng Qi, Arda Durmaz, Jacob Miller, Shlomo Koyfman, Richard L J Qiu, Xiaofeng Yang, Jacob G Scott","doi":"10.1117/12.3086834","DOIUrl":"https://doi.org/10.1117/12.3086834","url":null,"abstract":"<p><p>Radiation-induced xerostomia remains a common and debilitating side effect in head-and-neck cancer radiotherapy, despite advances in volumetric modulated arc therapy (VMAT). Traditional dose-volume histogram (DVH) metrics capture only part of the variation in toxicity, motivating the use of multimodal imaging biomarkers such as dosiomics and radiomics to characterize dose distribution and tissue response better. In this pilot study, we present an integrated framework combining DVH metrics, 3D dosiomics features, baseline planning CT (pCT) radiomics, and novel longitudinal delta-radiomics derived from daily cone-beam CT-based synthetic CT (sCT) images to predict post-treatment xerostomia severity. In a cohort of ten high-risk oropharyngeal cancer patients treated with VMAT at the Cleveland Clinic, wrapper-based feature selection yielded a compact set of 15 predictors (5 DVH, 3 dosiomics, 4 pCT radiomics, 3 Δ-sCT radiomics). Using cross-validation, four classifiers, including support-vector machine (SVM), regularized logistic regression (GLMnet), Naïve Bayes, and k-nearest neighbors, achieved consistently strong performance for discriminating grade I vs. grade II xerostomia, with AUC of 0.97-1.00, accuracy of 0.90-0.93, uniformly high sensitivity (1.00), specificity of 0.75-0.83, and F1 scores of 0.923-0.945. SVM and GLMnet showed the best overall balance of discrimination and robustness. 
These results demonstrate the potential of integrating dosiomics with multiphase radiomics, particularly time-resolved delta-radiomics, for individualized xerostomia risk prediction.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13930 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13110829/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147791515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
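The DVH metrics used as predictors above summarize a structure's dose distribution, e.g. mean dose and V_x, the volume fraction receiving at least x Gy. A minimal sketch (the dose values are invented for illustration):

```python
import numpy as np

def dvh_metrics(dose, threshold_gy):
    """Simple DVH summaries for one structure: mean dose (Gy) and V_x,
    the fraction of voxels receiving at least threshold_gy."""
    dose = np.asarray(dose, dtype=float)
    return dose.mean(), float(np.mean(dose >= threshold_gy))

# Hypothetical parotid-gland voxel doses (Gy)
dose = np.array([10.0, 20.0, 30.0, 40.0])
mean_dose, v30 = dvh_metrics(dose, 30.0)
print(mean_dose, v30)  # 25.0 0.5
```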