Integrating 2D Dermatological Photography with 3D Anatomical Surfaces.
Bohan Jiang, Yihao Liu, Andrew J McNeil, Chloe Cho, Inga Saknite, Steven Z Pavletic, Edward W Cowen, Rachel Weiss, Bennett A Landman, Benoit M Dawant, Eric R Tkaczyk
Proceedings of SPIE--the International Society for Optical Engineering 13926 (2026). doi:10.1117/12.3085708. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13105281/pdf/

Abstract: Standardized 2D photography plays an essential role in dermatologic practice, supporting longitudinal documentation, patient monitoring, and consensus-based clinical scoring. However, photographs taken from limited views often suffer from reduced anatomical context and missing body-location information. 3D representations enable unified spatial interpretation of multi-view imagery. Recent developments in computer vision have made it feasible to infer dense correspondences between 2D images and a 3D human mesh. In this study, we explored integrating 2D dermatological images with a 3D surface model using DensePose, a deep learning-based human dense correspondence framework. This creates an anatomically grounded representation that supports mesh-level analyses and recovers spatial context for each image. We used a dataset of four full-body photographs (front, back, and each side) from each of 147 subjects with chronic graft-versus-host disease, for a total of 588 images. Our method integrates these multiple 2D full-body photographs, captured across varied body shapes and camera angles, into a 3D mesh. We further showed that the resulting 3D mesh enables quantification of the extent to which individual 2D images, or their combinations, represent the complete body surface. On average, a single full-body view captures 28% of the body surface, while adding a second, third, and fourth view increases average coverage to 50%, 72%, and 80%, respectively. To assess spatial consistency, we annotated up to 10 anatomical landmarks per patient on 80 images across 20 patients and reported a median pairwise geodesic distance between corresponding landmarks of 4.6 cm. These findings can guide how dermatology images are captured and support future opportunities in monitoring, education, and communication using existing infrastructure.
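The coverage quantification described above can be illustrated with a minimal sketch: map each photograph's pixels to discretized surface cells via DensePose-style IUV outputs, take the union of covered cells across views, and report the covered fraction. The 24-part layout, 64x64 UV grid, and equal cell weighting below are simplifying assumptions for illustration, not the authors' mesh-level implementation.

```python
"""Sketch: body-surface coverage from DensePose-style IUV maps (assumed layout)."""
import numpy as np

N_PARTS, GRID = 24, 64   # assumed: 24 body parts, each UV square binned 64x64

def covered_cells(iuv):
    """iuv: (H, W, 3) array with channels (part index 0..24, U in [0,1], V in [0,1]).
    Returns the set of (part, u_bin, v_bin) surface cells seen in this image."""
    part = iuv[..., 0].astype(int)
    mask = part > 0                                   # part 0 = background
    u = np.clip((iuv[..., 1][mask] * GRID).astype(int), 0, GRID - 1)
    v = np.clip((iuv[..., 2][mask] * GRID).astype(int), 0, GRID - 1)
    return set(zip(part[mask].tolist(), u.tolist(), v.tolist()))

def coverage(views):
    """Fraction of all surface cells covered by the union of the given views."""
    seen = set()
    for iuv in views:
        seen |= covered_cells(iuv)
    return len(seen) / (N_PARTS * GRID * GRID)

# Toy usage: four random IUV maps standing in for front/back/side photographs.
rng = np.random.default_rng(0)
views = [np.dstack([rng.integers(0, 25, (256, 256)),
                    rng.random((256, 256)), rng.random((256, 256))]) for _ in range(4)]
for k in range(1, 5):
    print(f"{k} view(s): {coverage(views[:k]):.1%} of surface cells covered")
```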
Joint Estimation of Scatter Distribution and Material Maps in Volumetric Dual-Layer Cone-Beam CT.
Xiao Jiang, Yue Fan, Altea Lorenzon, Grace J Gang, J Webster Stayman
Proceedings of SPIE--the International Society for Optical Engineering 13924 (2026). doi:10.1117/12.3086291. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13095153/pdf/

Abstract: Dual-energy cone-beam CT (DE-CBCT) provides a flexible spectral imaging approach for volumetric material quantification, but its accuracy can be severely degraded by photon scatter. We propose a joint scatter estimation and one-step material decomposition framework for DE-CBCT using a dual-layer flat-panel detector. In this method, scatter is explicitly modeled in the physical forward model, while a diffusion model provides a sophisticated anatomical prior for material decomposition and also acts as a scatter-free prior for scatter estimation. The joint estimation framework operates iteratively, alternating between diffusion posterior sampling and prior-guided scatter estimation to jointly recover material density maps and scatter distributions. Experiments using an anthropomorphic head phantom show that direct material decomposition from cone-beam projections leads to significant bias and cross-talk between water and bone channels. In contrast, the proposed joint estimation strategy achieves accurate water-bone separation and substantially improves quantitative accuracy. The proposed method provides a promising framework for accurate spectral CBCT imaging.
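A minimal numpy sketch of the kind of physical forward model in which scatter appears as an additive term: each detector layer integrates a layer-specific spectrum attenuated by water and bone basis materials, plus a per-layer scatter contribution. The spectra, attenuation coefficients, thicknesses, and scatter values below are invented placeholders, not the paper's calibrated system model.

```python
"""Sketch: dual-layer measurement model with an explicit additive scatter term.

y_layer = sum_E S_layer(E) * exp(-mu_water(E)*t_water - mu_bone(E)*t_bone) + scatter_layer
All numbers below are illustrative placeholders.
"""
import numpy as np

E = np.arange(20, 121, 1.0)                        # keV grid
spec_top = np.exp(-0.5 * ((E - 55) / 15) ** 2)     # toy spectrum absorbed in the top layer
spec_bot = np.exp(-0.5 * ((E - 75) / 15) ** 2)     # toy (harder) spectrum reaching the bottom layer
mu_water = 0.4 * (30.0 / E) ** 3 + 0.18            # toy energy-dependent coefficients (1/cm)
mu_bone = 3.0 * (30.0 / E) ** 3 + 0.30

def forward(t_water, t_bone, scatter=(0.0, 0.0)):
    """Expected dual-layer measurements for basis path lengths (cm) plus additive scatter."""
    trans = np.exp(-mu_water * t_water - mu_bone * t_bone)
    return np.array([np.sum(spec_top * trans) + scatter[0],
                     np.sum(spec_bot * trans) + scatter[1]])

# Ignoring scatter that is actually present biases the decomposition:
print("measurement with scatter:", forward(20.0, 2.0, scatter=(0.5, 0.2)))
print("scatter-free prediction: ", forward(20.0, 2.0))
```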
One-Step Material Decomposition Using Spectral Diffusion Posterior Sampling in Sparse-View Dual-Layer CT.
Yue Fan, Xiao Jiang, Zimo Liu, Grace J Gang, J Webster Stayman
Proceedings of SPIE--the International Society for Optical Engineering 13924 (2026). doi:10.1117/12.3087781. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13102010/pdf/

Abstract: Spectral X-ray imaging technology can enhance interventional imaging by providing material-specific information. However, typical interventional systems with cone-beam CT (CBCT) face limitations in scan range and acquisition speed, making sparse-view imaging an attractive option for faster, lower-dose scans, although sparse data pose greater challenges for accurate material quantification. We previously proposed a Spectral Diffusion Posterior Sampling (Spectral DPS) framework for generalizable one-step material decomposition, which integrates a learned diffusion prior with a physics-based forward model. The approach requires training only a single unconditional model while providing flexibility of application across different system configurations. In this work, we investigate the performance of Spectral DPS on a physical dual-layer CBCT system under different sparse-view conditions. Specifically, an anthropomorphic head phantom was scanned on a bench-top CBCT system equipped with a dual-layer flat-panel detector, and one-step material decomposition was performed using Spectral DPS with the number of projections ranging from 720 down to 45 views. Results demonstrate that Spectral DPS achieves accurate water-bone separation and maintains a uniform soft-tissue appearance even though such homogeneous regions represent an out-of-distribution condition. Decomposition results remain robust down to 120 views, with minimal loss of structural detail and quantitative accuracy. At more extreme sparsity, Spectral DPS exhibits increased variability and artifacts, though bony structures remain discernible at as few as 90 views. These findings highlight the potential of Spectral DPS to enable accurate and robust material decomposition under certain sparse-view acquisitions for interventional CBCT imaging.
Automated Detection and Assessment of Post-Operative Eyelid Outcome in Trachomatous Trichiasis Surgery.
Lucie Dole, Juan C Prieto, Jerusha Weaver, Hashiya Kana, Emily W Gower
Proceedings of SPIE--the International Society for Optical Engineering 13929 (2026). doi:10.1117/12.3086682. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13109787/pdf/

Abstract: Trachomatous trichiasis (TT) is an ocular condition in which the eyelid turns inward, causing the eyelashes to scratch the eye and leading to blindness; it affects millions of people worldwide. The standard treatment for TT is surgery, in which an incision is made to rotate the eyelid margin outward, repositioning the eyelashes to their normal position. However, outcomes after surgery are often suboptimal, with a high risk of post-operative trichiasis (PTT). Studies have shown that the appearance of the immediate post-operative eyelid strongly correlates with the success of the procedure after six weeks, emphasizing the importance of early identification and correction of poor surgical results. We propose a mobile application that detects post-operative eyelids at higher risk of poor outcomes, giving surgeons in the field immediate feedback so they can perform the necessary corrections and improve patient outcomes. The algorithm is based on the well-established Faster R-CNN model, which detects and classifies parts of the eyelid into three categories: under-correction, overcorrection, or appropriate correction. The model achieved 75.7% recall for under-correction and 75.6% recall for overcorrection, demonstrating strong sensitivity in identifying cases at risk of adverse outcomes. The application was designed with an intuitive interface that lets users photograph an eyelid and evaluate the surgical result with the model. The algorithm runs in under 12 seconds and has been tested by TT surgeons in the field. This work has the potential to significantly improve post-operative trichiasis outcomes, reducing PTT rates and improving quality of life in resource-limited settings.
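For readers unfamiliar with the detection setup, the sketch below shows how a three-class (plus background) Faster R-CNN can be built and run with torchvision. The class names, image size, and score threshold are illustrative assumptions; the paper's trained weights and mobile deployment are not reproduced here.

```python
"""Sketch: three-class eyelid-correction detector with torchvision Faster R-CNN."""
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

CLASSES = ["background", "under-correction", "overcorrection", "appropriate correction"]

# num_classes includes the background class required by torchvision's detection heads.
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=len(CLASSES))
model.eval()

image = torch.rand(3, 480, 640)          # placeholder for a post-operative eyelid photo
with torch.no_grad():
    pred = model([image])[0]             # dict with 'boxes', 'labels', 'scores'

for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if score >= 0.5:                      # report confident detections only
        print(CLASSES[label], [round(float(c), 1) for c in box], float(score))
```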
Color CT, literally.
Scott S Hsieh
Proceedings of SPIE--the International Society for Optical Engineering 13924 (2026). doi:10.1117/12.3086134. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13105320/pdf/

Abstract: Spectral CT has long been described figuratively as "color CT," with energy bands of invisible X-rays analogous to the wavelength bands of visible light. Here, we propose to take the "color CT" concept literally: what would happen if the photodiodes used in energy-integrating detectors were color sensitive, just like the subpixels in consumer cameras? At first this idea seems to yield no tangible benefit, but we hypothesize that spectral optical imaging can be used to measure spectral X-ray imaging. Using information from publicly available datasheets, we modeled the optical transport of gadolinium oxysulfide (GOS), the most common scintillator used today. GOS emits mostly green light with a minority of red light, and the green light is preferentially self-absorbed. This explains the characteristic yellow color of raw GOS crystals and leads us to predict that low-energy X-rays, which deposit energy at shallower depths, will produce relatively more red photons that reach the photodiode. This effect is weak but occurs after X-ray quanta have been amplified (roughly 1000-fold) into visible-light quanta, so it could be detected with appropriately sensitive digitization electronics. We compare the proposed color detector to conventional dual-layer detectors and find that its spectral separation is worse, primarily because of the GOS K-edge. While its spectral performance would be poor, it could be made widely available because it requires changes in the silicon circuitry only. It is possible that color photodiodes would one day replace monochromatic diodes in energy-integrating detectors, making spectral imaging standard among new scanners.
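A toy Beer-Lambert calculation of the hypothesized effect: light created farther from the photodiode loses relatively more green than red on the way out, so shallow (low-energy) interactions arrive redder. The absorption lengths, screen thickness, and geometry below are invented for illustration and are not the datasheet-derived optical transport model of the paper.

```python
"""Toy model of the depth-dependent red/green ratio hypothesized for GOS.

Green emission is assumed to be preferentially self-absorbed; all lengths are
invented placeholders, not datasheet values.
"""
import numpy as np

ABS_LEN_GREEN = 0.4   # mm, assumed self-absorption length for green emission
ABS_LEN_RED = 2.0     # mm, assumed (weaker) self-absorption for red emission
SCREEN = 0.6          # mm, assumed scintillator thickness; photodiode at the exit side

def red_green_ratio(depth_of_interaction_mm):
    """Relative red/green ratio for light created at a given depth from the entrance side."""
    path = SCREEN - depth_of_interaction_mm          # distance travelled to the photodiode
    green = np.exp(-path / ABS_LEN_GREEN)
    red = np.exp(-path / ABS_LEN_RED)
    return red / green

# Low-energy X-rays interact at shallower depths (longer path to the diode),
# so they should show a larger red/green ratio than high-energy X-rays.
for depth in (0.1, 0.3, 0.5):
    print(f"interaction depth {depth} mm -> red/green ratio {red_green_ratio(depth):.2f}")
```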
Modeling Workflow, Operational, and Financial Implications of AI-Enabled Same-Day Diagnostic Follow-Up for Screening Mammograms.
Yannan Lin, Anne C Hoyt, Nina M Capiro, Olivia Linden, Luoting Zhuang, Mehmet Ulvi Saygi Ayvaci, Mehmet Eren Ahsen, Ya-Chen Tina Shih, William Hsu
Proceedings of SPIE--the International Society for Optical Engineering 13930 (2026). doi:10.1117/12.3087732. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13061406/pdf/

Abstract: While much of the evaluation of artificial intelligence (AI) in healthcare has focused on technical performance metrics such as accuracy or area under the curve, real-world adoption critically depends on how AI reshapes clinical workflows, operations, and revenue streams. Simulation models provide a means to anticipate these impacts before implementation, allowing stakeholders to weigh benefits against potential harms. In this study, we used discrete-event simulation to evaluate an AI-assisted workflow for same-day diagnostic breast imaging following abnormal screening mammograms. The revised workflow captured an additional 1.1% of mammography screening patients who might otherwise be lost to follow-up and eliminated the need for a second diagnostic-workup visit for 11% of screening patients. It also increased daily work relative value units by 4.8%, translating to an estimated $15,979 additional annual gain, while extending clinic operating hours by 2.9%, or 109.5 hours annually. These findings highlight how simulation modeling can inform the operational and financial implications of adopting AI in clinical imaging workflows.
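The structural idea of such a discrete-event simulation can be sketched with SimPy: screening patients arrive, a fraction are flagged as abnormal, and flagged patients either get a same-day diagnostic slot or fall back to a scheduled second visit. All arrival rates, durations, probabilities, and resource counts below are invented placeholders, not the study's calibrated clinic parameters.

```python
"""Sketch: discrete-event simulation of same-day diagnostic follow-up (SimPy)."""
import random
import simpy

ABNORMAL_RATE = 0.10          # assumed fraction of screens flagged for diagnostic workup
SAME_DAY_CAPACITY = 1         # assumed diagnostic rooms available for same-day add-ons
stats = {"screens": 0, "same_day": 0, "second_visit": 0}

def patient(env, diag_room):
    stats["screens"] += 1
    yield env.timeout(random.uniform(10, 20))            # screening exam (minutes)
    if random.random() < ABNORMAL_RATE:                  # screen flagged as abnormal
        with diag_room.request() as req:
            result = yield req | env.timeout(30)         # wait up to 30 min for a slot
            if req in result:
                yield env.timeout(random.uniform(20, 40))  # same-day diagnostic workup
                stats["same_day"] += 1
            else:
                stats["second_visit"] += 1               # no slot; schedule a return visit

def arrivals(env, diag_room):
    while True:
        yield env.timeout(random.expovariate(1 / 15))    # a screening patient every ~15 min
        env.process(patient(env, diag_room))

env = simpy.Environment()
diag_room = simpy.Resource(env, capacity=SAME_DAY_CAPACITY)
env.process(arrivals(env, diag_room))
env.run(until=8 * 60)                                    # one 8-hour clinic day
print(stats)
```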
Cohort-Aware Agents for Individualized Lung Cancer Risk Prediction Using a Retrieval-Augmented Model Selection Framework.
Chongyu Qu, Allen J Luna, Thomas Z Li, Junchao Zhu, Junlin Guo, Juming Xiong, Kim L Sandler, Bennett A Landman, Yuankai Huo
Proceedings of SPIE--the International Society for Optical Engineering 13926 (2026). doi:10.1117/12.3087567. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13077649/pdf/

Abstract: Accurate lung cancer risk prediction remains challenging due to substantial variability across patient populations and clinical settings: no single model performs best for all cohorts. To address this, we propose a personalized lung cancer risk prediction agent that dynamically selects the most appropriate model for each patient by combining cohort-specific knowledge with modern retrieval and reasoning techniques. Given a patient's CT scan and structured metadata (including demographic, clinical, and nodule-level features), the agent first performs cohort retrieval using FAISS-based similarity search across nine diverse real-world cohorts to identify the most relevant patient population from a multi-institutional database. Second, a large language model (LLM) is prompted with the retrieved cohort and its associated performance metrics to recommend the optimal prediction algorithm from a pool of eight representative models, including classical linear risk models (e.g., Mayo, Brock), temporally aware models (e.g., TD-VIT, DLSTM), and multi-modal computer-vision-based approaches (e.g., Liao, Sybil, DLS, DLI). This two-stage pipeline, retrieval via FAISS and reasoning via an LLM, enables dynamic, cohort-aware risk prediction personalized to each patient's profile. Building on this architecture, the agent supports flexible, cohort-driven model selection across diverse clinical populations, offering a practical path toward individualized risk assessment in real-world lung cancer screening.
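A minimal sketch of the retrieval half of such a pipeline: index cohort-level feature vectors with FAISS, find the nearest cohort for a new patient, and assemble a prompt carrying that cohort's model performance for the LLM stage. The feature layout, cohort names, stored metrics, and prompt wording are illustrative assumptions, not the paper's database or prompts.

```python
"""Sketch: FAISS-based cohort retrieval followed by an LLM model-selection prompt."""
import numpy as np
import faiss

# Toy cohort database: one centroid feature vector per cohort plus per-model AUCs.
cohorts = ["screening-A", "incidental-B", "high-risk-C"]
centroids = np.array([[0.2, 0.1, 0.7],
                      [0.6, 0.4, 0.2],
                      [0.9, 0.8, 0.1]], dtype="float32")
model_auc = {"screening-A": {"Mayo": 0.78, "Sybil": 0.86},
             "incidental-B": {"Mayo": 0.74, "Sybil": 0.81},
             "high-risk-C":  {"Mayo": 0.70, "Sybil": 0.83}}

index = faiss.IndexFlatL2(centroids.shape[1])   # exact L2 similarity search
index.add(centroids)

patient = np.array([[0.55, 0.35, 0.25]], dtype="float32")   # toy patient feature vector
_, idx = index.search(patient, 1)
cohort = cohorts[int(idx[0, 0])]

prompt = (
    f"The patient most closely matches cohort '{cohort}'. "
    f"Observed model performance in that cohort: {model_auc[cohort]}. "
    "Recommend the single most appropriate risk model and justify briefly."
)
print(prompt)   # this string would be passed to the LLM in the second stage
```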
AI-assisted six-dimensional CT imaging for joint instability assessment.
Chunming Gu, Aeden K Davis, Andrew R Thoreson, Kristin D Zhao, Nirusha Lachman, Yunhong Shu, Francis Baffour, Cynthia H McCollough, Shuai Leng, Hao Gong
Proceedings of SPIE--the International Society for Optical Engineering 13924 (2026). doi:10.1117/12.3087185. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13105322/pdf/

Abstract: Arthritis can lead to long-term functional disability and morbidity, yet assessing joint instability (the precursor of osteoarthritis) remains challenging with current standard static imaging methods. To tackle this challenge, we propose artificial intelligence (AI)-assisted six-dimensional CT (6DCT) imaging, which integrates 3D spatial data, temporal information, spectral information, and joint kinematics acquired from photon-counting-detector (PCD) CT. A physics-informed motion correction network (ATOM) was developed to reduce motion artifacts and improve quantitative accuracy in dynamic wrist CT images. A second physics-informed, prior-assisted Bayesian network (PMBD) was developed to perform multi-material decomposition and facilitate soft-tissue differentiation. Kinematic metrics were derived from motion-corrected bone images to characterize patterns of joint motion. Patient, cadaver, and phantom scans were used for validation. ATOM reduced motion artifacts (p<0.05; before / after correction): e.g., low-intensity region scores 0.72±0.03 / 0.77±0.03; structural similarity index 0.94±0.05 / 0.99±0.01. PMBD improved material quantification accuracy compared with a conventional iterative algorithm: mean absolute percent error of PMBD [0.6%, 3.4%] versus conventional [0.8%, 5%]. PMBD also improved the differentiation between collagen and water, successfully highlighting pathological features in cadaver ligaments. These techniques were then integrated for the downstream kinematic analyses. The proposed 6DCT has the potential to provide accurate visualization and quantitative assessment of dynamic joint pathology.
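One simple way to derive kinematic metrics from motion-corrected bone images is to track each bone's rigid pose across dynamic frames and report the incremental rotation and translation between consecutive frames. The sketch below uses toy 4x4 transforms and SciPy's rotation utilities; it is an assumed, generic formulation rather than the specific kinematic metrics used in the paper.

```python
"""Sketch: simple kinematic metrics from per-frame rigid bone transforms (toy data)."""
import numpy as np
from scipy.spatial.transform import Rotation

def frame_to_frame_motion(transforms):
    """Rotation angle (deg) and translation (mm) between consecutive frames."""
    metrics = []
    for T_prev, T_next in zip(transforms[:-1], transforms[1:]):
        delta = T_next @ np.linalg.inv(T_prev)          # incremental rigid motion
        angle = np.degrees(Rotation.from_matrix(delta[:3, :3]).magnitude())
        translation = np.linalg.norm(delta[:3, 3])
        metrics.append((angle, translation))
    return metrics

def make_pose(angle_deg, tx):
    """Build a toy 4x4 pose: rotation about z plus translation along x."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("z", angle_deg, degrees=True).as_matrix()
    T[0, 3] = tx
    return T

# Toy trajectory: the bone rotates ~5 degrees and translates ~1 mm per frame.
poses = [make_pose(5 * k, 1.0 * k) for k in range(4)]
for i, (ang, trans) in enumerate(frame_to_frame_motion(poses), start=1):
    print(f"frame {i-1}->{i}: {ang:.1f} deg, {trans:.2f} mm")
```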
A Generative Model of Lung CT Conditioned on Radiomics Features.
Patrick Li, Yijie Yuan, Xin Wang, Huay Din, Cheng Ting Lin, Grace Hyun Kim, Michael F McNitt-Gray, J Webster Stayman, Grace J Gang
Proceedings of SPIE--the International Society for Optical Engineering 13924 (2026). doi:10.1117/12.3088074. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13089804/pdf/

Abstract: Deep learning image generation has been an active area of research across a number of applications. However, traditional generative models cannot control specific properties of the image outputs. In this work, we propose a deep learning model that produces images according to user-specified texture feature values. We adopted a diffusion transformer architecture and used texture features to condition the reverse process. The model was trained on lung patches from a public lung CT database. Two texture features, autocorrelation and inverse difference, derived from the gray-level co-occurrence matrix, were used as conditional inputs. We evaluated the ability of the model to produce samples whose feature values match the conditional inputs, under both in-distribution and out-of-distribution conditions. Results indicate that the model is able to generate image patches resembling lung parenchyma. The autocorrelation and inverse difference of generated images agree well with, and exhibit low variability around, the conditional inputs. The concordance correlation coefficient between real and generated samples is 0.9962 for autocorrelation and 0.9402 for inverse difference. Visual assessment of image samples reveals that real and generated images share similar features, consistent with their radiomic properties. These findings indicate that the diffusion transformer model is able to generate images with texture features closely aligned with the conditional inputs, supporting its utility for highly controlled data generation in a variety of applications.
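For reference, the two conditioning features can be computed from a patch with scikit-image's graycomatrix; since graycoprops does not expose autocorrelation or inverse difference directly, they are evaluated from the normalized co-occurrence matrix. The gray-level binning, offsets, and symmetrization below are illustrative choices, not necessarily those used in the paper.

```python
"""Sketch: GLCM autocorrelation and inverse difference for a 2D CT patch."""
import numpy as np
from skimage.feature import graycomatrix

def glcm_features(patch, levels=32):
    """Return (autocorrelation, inverse difference) for a 2D patch."""
    # Quantize the patch to a fixed number of gray levels.
    q = np.digitize(patch, np.linspace(patch.min(), patch.max() + 1e-6, levels + 1)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    p = glcm.mean(axis=(2, 3))                     # average over distances and angles
    i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    autocorrelation = np.sum(i * j * p)            # sum_ij i*j*P(i,j)
    inverse_difference = np.sum(p / (1.0 + np.abs(i - j)))  # sum_ij P(i,j)/(1+|i-j|)
    return autocorrelation, inverse_difference

# Toy usage on a random patch standing in for lung parenchyma.
rng = np.random.default_rng(0)
patch = rng.integers(0, 255, size=(64, 64))
print(glcm_features(patch))
```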
Noise2Noise Diffusion for Thin-Slice Brain CT Denoising without Clean Training Data.
Zhennong Chen, Siyeop Yoon, Matthew Tivnan, Junyoung Park, Quanzheng Li, Dufan Wu
Proceedings of SPIE--the International Society for Optical Engineering 13924 (2026). doi:10.1117/12.3085968. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13070551/pdf/

Abstract: Thin-slice and ultra-high-resolution (UHR) computed tomography (CT) images usually suffer from excessive noise because a limited radiation dose is distributed over small detector units. In the context of deep learning-based image denoising, it is challenging to obtain clean training data from real patients for thin-slice CT, because it is unethical to apply excessively high doses to patients. Supervised learning with noise insertion can suffer from mismatched noise models and possible domain shift of the training data, leading to degraded model performance. In this work, we proposed a novel method that combines a diffusion model with Noise2Noise, achieving high-quality noise reduction without requiring clean training data. A conditional denoising diffusion probabilistic model (cDDPM) was trained to sample a CT slice from its two adjacent slices. Because the noise in the input and target is independent, the DDPM samples another noise realization of the target slice. During inference, the trained DDPM is sampled multiple times to acquire multiple realizations of the target slice, which are averaged to obtain a slice with lower noise. The method was validated on simulated thin-slice brain CTs, demonstrating improved quantitative metrics and visual impression compared with a Noise2Noise UNet and a supervised DDPM with a slightly mismatched noise model. The mean absolute errors (MAE) in brain tissue were 4.12, 3.27, and 2.62 for the Noise2Noise UNet, the supervised DDPM, and the proposed method, respectively. The perceptual loss (LPIPS) was 0.0917, 0.0635, and 0.0422 for the three methods, respectively.
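The averaging step at inference rests on a simple statistical fact: independent zero-mean noise realizations of the same underlying slice average toward the clean signal, with the residual noise shrinking roughly as 1/sqrt(N). The tiny numpy illustration below uses a synthetic 1D profile in place of DDPM samples; it demonstrates the premise, not the paper's diffusion pipeline.

```python
"""Tiny illustration: averaging independent noise realizations reduces noise ~ 1/sqrt(N)."""
import numpy as np

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 3 * np.pi, 512))          # stand-in for a clean CT profile
sigma = 0.3                                             # stand-in for thin-slice noise level

for n_samples in (1, 4, 16):
    # Each "sample" plays the role of one DDPM draw: same signal, independent noise.
    samples = clean + rng.normal(0.0, sigma, size=(n_samples, clean.size))
    averaged = samples.mean(axis=0)
    residual = np.std(averaged - clean)
    print(f"{n_samples:2d} samples: residual noise std {residual:.3f} "
          f"(theory ~ {sigma / np.sqrt(n_samples):.3f})")
```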