Radiology advances · Pub Date: 2025-01-28 · eCollection Date: 2025-01-01 · DOI: 10.1093/radadv/umaf004
Mingwei Xie, Haonan Wang, Zehong Yang, Ming Gao, Guangzi Shi, Xingnan Liao, Zhongqiang Luo, Xiaomeng Li, Jun Shen

Artificial intelligence model for automatic 3-dimensional reconstruction of ossicular chain and bony labyrinth from high-resolution CT.

Background: Three-dimensional (3D) reconstruction of the ossicular chain and bony labyrinth from temporal bone high-resolution CT (HRCT) is useful for the diagnosis and treatment guidance of middle and inner ear diseases. However, these structures are small and irregular, making manual reconstruction time-consuming.

Purpose: To develop and validate an artificial intelligence (AI) model based on semisupervised learning for automated 3D reconstruction of the ossicular chain and bony labyrinth on HRCT images.

Methods: HRCT images from 304 ears of 152 consecutive patients, retrospectively collected from a single center, were randomly divided into training (246 ears), validation (28 ears), and internal test (30 ears) cohorts for model development. A novel semisupervised ear bone segmentation framework was used to train the AI model, and its performance was evaluated with Dice similarity coefficients. The trained algorithm was applied to a temporally independent test dataset of 30 ears of 15 patients from the same center and compared with manual 3D reconstruction for processing time, target volume, and visual assessment of segmentation.

Results: The AI model achieved a Dice score of 0.948 (95% CI, 0.940-0.955) on the internal test set and 0.979 (95% CI, 0.973-0.986) on the temporally independent test set. In the latter dataset, the AI model required 2% or less of the processing time of manual 3D reconstruction for each ear (17.7 seconds ± 10.1 vs 1080.5 seconds ± 149.8; P < .001) and had accuracy comparable to human experts in the volume and visual assessment of segmentation targets (P = .237-1.000). In a subgroup analysis, the model achieved accurate segmentation (Dice scores of 0.98-0.99) across various diseases (eg, otitis media, mastoiditis, otosclerosis, middle and inner ear malformations, Ménière disease).

Conclusion: The AI model enables robust, efficient, and accurate 3D reconstruction of small structures such as the ossicular chain and bony labyrinth on HRCT images.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12429245/pdf/
Radiology advances · Pub Date: 2025-01-06 · eCollection Date: 2025-01-01 · DOI: 10.1093/radadv/umaf001
Binsheng Zhao, Nancy Obuchowski, Hao Yang, Yen Chou, Hong Ma, Pingzhen Guo, Ying Tang, Lawrence Schwartz, Daniel Sullivan

Comparing quantitative imaging biomarker alliance volumetric CT classifications with RECIST response categories.

Purpose: To assess agreement between CT volumetry change classifications derived from Quantitative Imaging Biomarker Alliance Profile cut-points (ie, QIBA CTvol classifications) and the Response Evaluation Criteria in Solid Tumors (RECIST) categories.

Materials and methods: Target lesions in lung, liver, and lymph nodes were randomly chosen from patients in 10 historical clinical trials for various cancers, ensuring a balanced representation of lesion types, the diameter ranges described in the QIBA Profile, and variations in change magnitude. Three radiologists independently segmented these lesions on baseline and follow-up scans using 2 software tools. Two types of predefined disagreement were assessed. Type I: substantive disagreement, where the disagreement between QIBA CTvol classifications and RECIST categories could not be attributed to the improved sensitivity of volumetry in detecting changes; and Type II: disagreement potentially arising from that improved sensitivity. The proportion of lesions with disagreement between QIBA CTvol and RECIST, as well as the type of disagreement, was reported along with 95% CIs, both overall and within subgroups representing various factors.

Results: A total of 2390 measurements from 478 lesions (158 lung, 170 liver, 150 lymph node) in 281 patients were included. QIBA CTvol agreed with RECIST in 66.6% of interpretations. Of the 33.4% of interpretations with discrepancies, substantive disagreement (Type I) occurred in only 1.5% (95% CI: 0.8%, 2.1%). Factors such as scanner vendor (P = .584), segmentation tool (P = .331), and lesion type (P = .492) were not significant predictors of disagreement. Significantly more disagreements were observed for larger lesions (≥50 mm, as defined in the QIBA Profile).

Conclusion: QIBA CTvol classifications agree with RECIST categories, with substantive disagreement in only 1.5% of interpretations.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11739520/pdf/
Radiology advances · Pub Date: 2025-01-01 · Epub Date: 2025-02-18 · DOI: 10.1093/radadv/umaf009
Yifan Peng, Qingyu Chen, George Shih

DeepSeek is open-access and the next AI disrupter for radiology.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12363671/pdf/
Radiology advances · Pub Date: 2024-12-17 · eCollection Date: 2025-01-01 · DOI: 10.1093/radadv/umae035
Alexander Zhou, Zelong Liu, Andrew Tieu, Nikhil Patel, Sean Sun, Anthony Yang, Peter Choi, Hao-Chih Lee, Mickael Tordjman, Louisa Deyer, Yunhao Mei, Valentin Fauveau, Georgios Soultanidis, Bachir Taouli, Mingqian Huang, Amish Doshi, Zahi A Fayad, Timothy Deyer, Xueyan Mei

MRAnnotator: multi-anatomy and many-sequence MRI segmentation of 44 structures.

Purpose: To develop a deep learning model for multi-anatomy segmentation of diverse anatomic structures on MRI.

Materials and methods: In this retrospective study, 44 structures were annotated using a model-assisted workflow with manual human finalization in 2 curated datasets: an internal dataset of 1518 MRI sequences (843 patients) from various clinical sites within a health system, and an external dataset of 397 MRI sequences (263 patients) from an independent imaging center for benchmarking. The internal dataset was used to train an nnU-Net model (MRAnnotator), while the external dataset evaluated MRAnnotator's generalizability across significant image acquisition distribution shifts. MRAnnotator was further benchmarked against an nnU-Net model trained on the AMOS dataset and 2 current multi-anatomy MRI segmentation models, TotalSegmentator MRI (TSM) and MRSegmentator (MRS). Performance throughout was quantified using the Dice score.

Results: MRAnnotator achieved an overall average Dice score of 0.878 (95% CI: 0.873, 0.884) on the internal dataset test set and 0.875 (95% CI: 0.869, 0.880) on the external dataset benchmark, demonstrating strong generalization (P = .899). On the AMOS test set, MRAnnotator achieved performance on the relevant classes (0.889 [0.866, 0.909]) comparable to an AMOS-trained nnU-Net (0.895 [0.871, 0.915]) (P = .361) and outperformed TSM (0.822 [0.800, 0.842], P < .001) and MRS (0.867 [0.844, 0.887], P < .001). TSM and MRS were also evaluated on the relevant classes from the internal and external datasets and were unable to achieve performance comparable to MRAnnotator.

Conclusion: MRAnnotator achieves robust and generalizable MRI segmentation across 44 anatomic structures. Future work will incorporate additional anatomic structures into the datasets and model. Model weights are publicly available on GitHub. The external test set with annotations is available upon request.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12429175/pdf/
Radiology advances · Pub Date: 2024-12-12 · eCollection Date: 2025-01-01 · DOI: 10.1093/radadv/umae034
Leon M Bischoff, Sebastian Nowak, Maximilian Mader, Maike Theis, Thomas Vollbrecht, Alexander Isaak, Daniel Kuetting, Claus C Pieper, Annegret Geipel, Florian Kipfmueller, Brigitte Strizek, Alois M Sprinkart, Julian A Luetkens

Fetal MRI deep learning segmentation of body and lung in congenital diaphragmatic hernia.

Purpose: To determine whether deep learning (DL) segmentation of total fetal body volume (TFBV) and total fetal lung volume (TFLV) in fetuses with congenital diaphragmatic hernia performs comparably to manual segmentation.

Materials and methods: A total of 208 fetal MRI studies with congenital diaphragmatic hernia, acquired between August 2007 and September 2023, were retrospectively included. TFBV and TFLV were extracted from manual tissue segmentations in balanced gradient echo and single-shot T2-weighted turbo spin echo sequences. MRI studies were split into training (n = 188) and hold-out test data (n = 20). The Wilcoxon signed-rank test was used to compare manual and DL-based segmentations by 2 U-Nets. Manual and DL segmentation times were recorded and compared using Student's t-test. The observed/expected ratio of the total lung volume (O/E TLV), a prognostic marker for postnatal survival, was calculated. Outcome predictions of O/E TLV for postnatal death were assessed with univariate regression analysis.

Results: Manual segmentation times were higher than DL segmentation times (30 ± 7 minutes versus 0.25 ± 0.05 minutes, P < .001). Manual and DL-based TFBV were similar (1317 ± 498 mL versus 1306 ± 491 mL; P = .04; Dice score: 0.98 ± 0.01). TFLV (19.4 ± 11.5 mL versus 18.7 ± 12.4 mL; P = .11; Dice score: 0.84 ± 0.09) and O/E TLV (39.3 ± 18.1 versus 37.7 ± 19.1, P = .13) were not significantly different. Postnatal mortality was negatively associated with higher manual O/E TLV (odds ratio: 0.97; 95% CI, 0.96-0.98; P < .001) and DL O/E TLV (odds ratio: 0.97; 95% CI, 0.96-0.98; P < .001).

Conclusion: DL-based body and lung segmentation in fetuses with congenital diaphragmatic hernia allows reliable and rapid calculation of the observed/expected ratio and predicts prognostic outcome equally well.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12429181/pdf/
Radiology advances · Pub Date: 2024-12-09 · eCollection Date: 2025-01-01 · DOI: 10.1093/radadv/umae032
Taehee Kim, Heejun Shin, Yong Sub Song, Jong Hyuk Lee, Hyungjin Kim, Dongmyung Shin

Artificial intelligence software for detecting unsuspected lung cancer on chest radiographs in an asymptomatic population.

Background: Detecting clinically unsuspected lung cancer on chest radiographs is challenging. Artificial intelligence (AI) software that performs comparably to radiologists may serve as a useful tool.

Purpose: To evaluate the lung cancer detection performance of commercially available AI software and compare it to that of radiologists in a healthy population.

Materials and methods: This retrospective study used chest radiographs from the Prostate, Lung, Colorectal, and Ovarian (PLCO) cancer screening trial, acquired in the United States between November 1993 and July 2001, with pathological cancer diagnosis follow-up to 2009 (median 11.3 years). The software's predictions were compared to the PLCO radiologists' reads. A reader study was performed on a subset, comparing the software to 3 experienced radiologists.

Results: The analysis included 24 370 individuals (mean age 62.6 ± 5.4; median age 62; cancer rate 2%), with 213 individuals (mean age 63.6 ± 5.5; median age 63; cancer rate 46%) in the reader study. AI achieved higher specificity (0.910 for AI vs 0.803 for radiologists, P < .001) and positive predictive value (0.054 vs 0.032, P < .001) but lower sensitivity (0.326 vs 0.412, P = .001) than the PLCO radiologists. When the sensitivity of the AI was calibrated to match that of the PLCO radiologists, the AI had higher specificity (0.815 vs 0.803, P < .001). In the reader study, AI achieved higher sensitivity than readers 1 and 3 (0.608 for AI vs 0.588 for both readers; P = .789 and P = .803) but lower specificity than reader 1 (0.888 vs 0.905, P = .814). Compared to reader 2, AI showed higher specificity (0.888 vs 0.819, P = .153) but lower sensitivity.

Conclusion: AI detects lung cancer on chest radiographs among asymptomatic individuals with performance comparable to experienced radiologists.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12429178/pdf/
Radiology advances · Pub Date: 2024-11-25 · eCollection Date: 2024-10-01 · DOI: 10.1093/radadv/umae031
Aaron So, Ki Seok Choo, Ji Won Lee, Yun-Hyeon Kim, Mustafa Haider, Mahmud Hasan, Serag El-Ganga, Akshaye Goela, Patrick Teefy, Yeon Hyeon Choe

Fractional flow reserve measurement using dynamic CT perfusion imaging in patients with coronary artery disease.

Purpose: To evaluate the accuracy of a novel CT dynamic angiographic imaging (CT-DAI) algorithm for rapid fractional flow reserve (FFR) measurement in patients with coronary artery disease (CAD).

Materials and methods: This retrospective study included 14 patients (age 58.5 ± 10.6 years; 11 men) with CAD who underwent stress dynamic CT myocardial perfusion scanning on a dual-source CT scanner. Included patients had analyzable proximal and distal coronary artery segments adjacent to the stenosis in the perfusion images and corresponding invasive catheter-based FFR measurements for that stenosis. In-house software based on the CT-DAI algorithm was used to compute FFR from the pre- and post-lesion coronary time-enhancement curves obtained from the stress myocardial perfusion images. The CT-DAI-derived FFR values were then compared to the corresponding invasive catheter-based FFR values. A coronary artery stenosis was considered functionally significant at an FFR value <0.8.

Results: The CT-DAI-derived FFR values agreed with the invasive FFR values in all 15 coronary arteries of the 14 patients, yielding 100% per-vessel and per-patient diagnostic accuracy. FFR derived using CT-DAI (M = 0.768, SD = 0.156) showed an excellent linear correlation (R = 0.910, P < .001) and no statistically significant difference (P = .655) compared with FFR measured using the invasive catheter-based method (M = 0.796, SD = 0.149). Bland-Altman analysis showed no significant proportional bias.

Conclusion: The novel CT-DAI algorithm can reliably compute FFR across a coronary artery stenosis directly from dynamic CT myocardial perfusion images, facilitating rapid on-site hemodynamic assessment of epicardial coronary artery stenosis in patients with CAD.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11706786/pdf/
Radiology advances · Pub Date: 2024-11-15 · eCollection Date: 2024-10-01 · DOI: 10.1093/radadv/umae030
Ozkan Cigdem, Shengjia Chen, Chaojie Zhang, Kyunghyun Cho, Richard Kijowski, Cem M Deniz

Estimating time-to-total knee replacement on radiographs and MRI: a multimodal approach using self-supervised deep learning.

Purpose: Accurately predicting the expected time until total knee replacement (time-to-TKR) is crucial for patient management and health care planning. Predicting when surgery may be needed, especially within shorter windows such as 3 years, allows clinicians to plan timely interventions and health care systems to allocate resources more effectively. Existing models lack the precision for such time-based predictions. A survival analysis model for predicting time-to-TKR was developed using features from medical images and clinical measurements.

Methods: From the Osteoarthritis Initiative dataset, all knees with clinical variables, MRI scans, radiographs, and quantitative and semiquantitative image assessments were identified. This yielded 895 knees that underwent TKR within the 9-year follow-up period specified by the Osteoarthritis Initiative study design and 786 control knees that did not undergo TKR (right-censored, as their status beyond the 9-year follow-up is unknown). These knees were used for model training and testing. Additionally, 518 and 164 subjects from the Multi-Center Osteoarthritis Study and internal hospital data, respectively, were used for external testing. Deep learning models were used to extract features from radiographs and MR scans. The extracted features, clinical variables, and image assessments were used in survival analysis with Lasso Cox feature selection and a random survival forest model to predict time-to-TKR.

Results: The proposed model exhibited strong discriminative power by integrating self-supervised deep learning features with clinical variables (eg, age, body mass index, pain score) and image assessment measurements (eg, Kellgren-Lawrence grade, joint space narrowing, bone marrow lesion size, cartilage morphology) from multiple modalities. The model achieved an area under the curve of 94.5 (95% CI, 94.0-95.1) for predicting time-to-TKR.

Conclusions: The proposed model demonstrated the potential of self-supervised learning and multimodal data fusion for accurately predicting time-to-TKR, which may assist physicians in developing personalized treatment strategies.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11687945/pdf/
{"title":"Journal editing and peer review in the international setting.","authors":"Niraj Nirmal Pandey, Vivianne Aguilera Freitas, Haidara Almansour","doi":"10.1093/radadv/umae028","DOIUrl":"10.1093/radadv/umae028","url":null,"abstract":"","PeriodicalId":519940,"journal":{"name":"Radiology advances","volume":"1 4","pages":"umae028"},"PeriodicalIF":0.0,"publicationDate":"2024-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12429172/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145246297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}