Journal of imaging informatics in medicine: Latest Articles

Determination of Fungiform Papilla Number Using Deep Learning Methods.
Journal of imaging informatics in medicine Pub Date : 2026-05-07 DOI: 10.1007/s10278-026-01984-2
Sümeyye Çelik, Alican Kuran, Kerem Kayabay, Umut Seki, Enver Alper Sinanoğlu
{"title":"Determination of Fungiform Papilla Number Using Deep Learning Methods.","authors":"Sümeyye Çelik, Alican Kuran, Kerem Kayabay, Umut Seki, Enver Alper Sinanoğlu","doi":"10.1007/s10278-026-01984-2","DOIUrl":"https://doi.org/10.1007/s10278-026-01984-2","url":null,"abstract":"<p><p>This study aimed to develop a deep learning-based method for the automatic detection and counting of fungiform papillae (FP) on the dorsal surface of the human tongue. FP density and morphology may serve as biomarkers for taste function and systemic disease diagnosis. Manual counting is time-consuming and subjective; therefore, an objective and reproducible artificial intelligence (AI) method was designed to provide a reliable quantitative assessment. A deeplearning object detection model was constructed using the Ultralytics YOLOv11 architecture. A dataset of 177 high-resolution toluidine blue-stained tongue images was manually annotated and dividedin to training, validation, and test sets. Three-foldnestedcross-validation was employed for hyperparameter optimization. Transfer learning was applied by freezing 22 backbone layers, and the detection heads were trained using tuned learning rates and decay factors. Early stopping was used to prevent overfitting. Model performance was evaluated on the independent test set. The model achieved 0.678 precision, 0.740 recall, and 0.707 F1 score, reflecting balanced detection performance. Compared with existing studies, our model demonstrated improved generalization and robustness. The mean absolute error (37.52; 19.48% of the true mean) and root mean square error (43.83) indicated reliable counting accuracy given the natural variability of FP counts (192.56 ± 63.14). The proposed YOLOv11-based model provides a fast, accurate, and reproducible alternative to manual FP counting. This approach may support large-scale clinical and research applications where FP analysis serves as a potential biomarker of health status.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147848159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Spinal Cord Radiomics-Driven Machine Learning Predicts Meaningful Clinical Improvement After Surgery for Degenerative Cervical Myelopathy: A Pilot Study.
Journal of imaging informatics in medicine Pub Date : 2026-05-06 DOI: 10.1007/s10278-026-01987-z
Ramesh M Arnest, Kevin M Koch, Matthew D Budde, Amulya Setlur, Anjishnu Banerjee, Aditya Vedantam
{"title":"Spinal Cord Radiomics-Driven Machine Learning Predicts Meaningful Clinical Improvement After Surgery for Degenerative Cervical Myelopathy: A Pilot Study.","authors":"Ramesh M Arnest, Kevin M Koch, Matthew D Budde, Amulya Setlur, Anjishnu Banerjee, Aditya Vedantam","doi":"10.1007/s10278-026-01987-z","DOIUrl":"https://doi.org/10.1007/s10278-026-01987-z","url":null,"abstract":"<p><p>A prospective observational cohort study. To determine whether machine learning models using radiomic features derived from preoperative MRI, clinical variables, or their combination can predict achievement of the minimum clinically important difference (MCID) in function and quality of life after surgery for degenerative cervical myelopathy (DCM). Predicting surgical outcomes in DCM remains challenging, as conventional MRI and clinical scores incompletely reflect spinal cord pathology. Radiomics quantifies voxel-level intensity and texture patterns from routine MRI, providing quantitative measures of tissue heterogeneity that may serve as imaging biomarkers of recovery potential. Forty-six patients with DCM underwent preoperative 3D T2-weighted MRI and surgical decompression. Spinal cord radiomic features (Shape3D, First-Order, GLCM, and GLSZM) were extracted using PyRadiomics. Baseline clinical variables included age, sex, duration of symptoms, T2 hyperintensity, and functional scores assessed with the baseline mJOA and SF-36 PCS scores. Three-month MCID achievement was defined using established thresholds. Predictive models were developed using radiomic features, clinical variables, or their combination. For mJOA MCID, the combined radiomics-clinical model achieved the best performance (AUC = 0.88 ± 0.13). For SF-36 PCS MCID, the combined model achieved an AUC = 0.78 ± 0.17 and an AUCPR of 0.82 ± 0.14. SHapley Additive exPlanations identified texture-based radiomic features and age as dominant predictors for mJOA MCID, whereas first-order radiomic features and baseline SF-36 PCS were most influential for SF-36 PCS MCID. MRI-based spinal cord radiomics improves prediction of meaningful postoperative recovery beyond clinical data, supporting their potential as imaging biomarkers for individualized prognostication in DCM.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147848384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
S-MEOD: A Novel Evaluation Metric for Frame-Based Medical Object Detection.
Journal of imaging informatics in medicine Pub Date : 2026-05-06 DOI: 10.1007/s10278-026-01957-5
Isaac Honarmand Rad, Seyedreza Taghizadeh
{"title":"S-MEOD: A Novel Evaluation Metric for Frame-Based Medical Object Detection.","authors":"Isaac Honarmand Rad, Seyedreza Taghizadeh","doi":"10.1007/s10278-026-01957-5","DOIUrl":"https://doi.org/10.1007/s10278-026-01957-5","url":null,"abstract":"<p><p>Traditional metrics such as precision, recall, mean Average Precision (mAP), and F-score are widely used to evaluate object detection models. However, in some frame-based medical scenarios, these metrics often fail to capture the true effectiveness of models. For instance, in frame-based data, an object detection model may detect true positives in just a few frames, resulting in a perfect precision, but miss the same targets in other frames, leading to a high number of false negatives and, as a result, a very low recall. In practice, this model may still function effectively as a medical assistant and accurately identify critical features. Yet, the traditional metrics do not reflect this acceptable performance. This study aims to address this limitation by introducing a new evaluation metric tailored for frame-based medical object detection tasks. We propose the S-MEOD (Sequential Method of Evaluation for Object Detection), a novel metric that combines Sequence-aware Precision (SaP) and Sequence-oriented Detection (SoD) to provide a more comprehensive assessment of model performance. The metric was evaluated on frame-based sequences using object detection models, including YOLO-based architectures, with experiments on medical data. Experimental evaluations showed that S-MEOD provides a more accurate and intuitive reflection of model effectiveness in frame-based detection compared to traditional metrics. In our experimental evaluation on coronary angiography data, increasing the confidence threshold led to higher precision (up to 0.964) and mAP50 ( <math><mrow><mo>≈</mo> <mn>0.49</mn></mrow> </math> ), but caused recall to drop from ( <math><mrow><mo>≈</mo> <mn>0.22</mn></mrow> </math> ) to 0.028 and the F1-score from ( <math><mrow><mo>≈</mo> <mn>0.29</mn></mrow> </math> ) to 0.055; correspondingly, S-MEOD, where lower values indicate better performance, increased from 1.30 at low thresholds to 2.06 at high thresholds, indicating a substantial deterioration in temporal detection performance. Compared to traditional metrics, S-MEOD more accurately reflects clinically relevant detection behavior by distinguishing between sparse high-precision detections and genuine sequence-level detection failure. The S-MEOD offers an easy-to-interpret and reliable alternative to existing metrics for evaluating frame-based medical object detection models. Its adoption could improve the assessment of clinical applicability and redefine performance standards in medical imaging research.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147848187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Evaluating Large Language Models for Turkish Emergency CT Impression Drafting: Quality, Critical Omissions, and Readability.
Journal of imaging informatics in medicine Pub Date : 2026-05-06 DOI: 10.1007/s10278-026-01989-x
Halil Tekdemir, Esra Çıvgın, Şebnem Akpınar, Büşra Nur Tekdemir, Ali Bahadır Özdemir, Erdal Altaş, Alperen Sefa Toker, Ebru Şengül Parlak, İzzet Selçuk Parlak
{"title":"Evaluating Large Language Models for Turkish Emergency CT Impression Drafting: Quality, Critical Omissions, and Readability.","authors":"Halil Tekdemir, Esra Çıvgın, Şebnem Akpınar, Büşra Nur Tekdemir, Ali Bahadır Özdemir, Erdal Altaş, Alperen Sefa Toker, Ebru Şengül Parlak, İzzet Selçuk Parlak","doi":"10.1007/s10278-026-01989-x","DOIUrl":"https://doi.org/10.1007/s10278-026-01989-x","url":null,"abstract":"<p><p>The purpose of the study is to compare large language models (LLMs) for drafting Turkish emergency CT impression text and to quantify quality, critical omission risk, and readability across anatomical regions. In this retrospective observational study, 1374 emergency CT reports were screened; 802 met inclusion criteria (abdomen 204, chest 200, cranial 198, head, and neck 200). Sections were provided to four LLMs (Grok-2, ChatGPT-4o-Latest, Gemini-2.0-Flash, DeepSeek-V3-FW) to generate impression drafts. Two radiologists rated impressions on a 4-point Likert scale. We recorded omissions of predefined critical findings by region and calculated the Ateşman readability index. Likert scores varied by model and region, with higher mean scores in head and neck and cranial examinations. Critical omissions were uncommon overall but showed model- and region-specific patterns; the highest omission rate occurred in abdominal CT for one model. Readability differed by text type, with radiologist impressions and higher-performing models generally showing similar and relatively high readability. LLM-generated Turkish CT impressions can reach acceptable quality in selected settings, but occasional critical omissions persist. These tools should be used as decision-support and require clinician oversight rather than standalone deployment.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147848183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Performance of an Artificial Intelligence-Based Automated System for Identifying Primary and Permanent Teeth in Mixed Dentition Panoramic Radiographs.
Journal of imaging informatics in medicine Pub Date : 2026-05-04 DOI: 10.1007/s10278-026-01990-4
Everton Flaiban, Elaine Dinardi Barioni, Lana Ferreira Santos, Sérgio Lúcio Pereira de Castro Lopes, Andre Luiz Ferreira Costa
{"title":"Performance of an Artificial Intelligence-Based Automated System for Identifying Primary and Permanent Teeth in Mixed Dentition Panoramic Radiographs.","authors":"Everton Flaiban, Elaine Dinardi Barioni, Lana Ferreira Santos, Sérgio Lúcio Pereira de Castro Lopes, Andre Luiz Ferreira Costa","doi":"10.1007/s10278-026-01990-4","DOIUrl":"https://doi.org/10.1007/s10278-026-01990-4","url":null,"abstract":"<p><p>This study aimed to assess the diagnostic performance of the Brazilian-developed artificial intelligence system DIO Inteligência® for automatic detection and classification of primary and permanent teeth in panoramic radiographs of patients in mixed dentition, using expert radiologist consensus as the reference standard. In this retrospective diagnostic accuracy study, 110 digital panoramic radiographs from patients aged 6-12 years were analyzed. The AI system automatically identified and classified individual teeth according to FDI notation. A total of 4622 teeth with definitive reference classification were included. The system's output was compared with a gold standard established by consensus of two experienced dentomaxillofacial radiologists. Diagnostic performance metrics, including accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), were calculated. Overall, the AI system demonstrated high diagnostic performance, achieving an accuracy of 91%, sensitivity of 92%, and specificity of 72%. The PPV was 99%, whereas the NPV was 29%. Performance remained consistently high across most permanent tooth groups, with accuracy values around 96% and PPVs close to 100%. Third molars showed slightly lower metrics compared with other permanent groups. Primary teeth also demonstrated favorable classification performance, with high sensitivity and PPV. These findings may suggest that the DIO Inteligência® system shows robust performance in detecting and classifying primary and permanent teeth in mixed dentition panoramic radiographs, supporting its potential role as a reliable adjunct tool in pediatric dental imaging interpretation.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147848209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Recent Advances in Generative AI for Healthcare Applications.
Journal of imaging informatics in medicine Pub Date : 2026-05-01 DOI: 10.1007/s10278-026-01908-0
Yasin Shokrollahi, Jose Colmenarez, Wenxi Liu, Sahar Yarmohammadtoosky, Matthew M Nikahd, Pengfei Dong, Xianqi Li, Linxia Gu
{"title":"Recent Advances in Generative AI for Healthcare Applications.","authors":"Yasin Shokrollahi, Jose Colmenarez, Wenxi Liu, Sahar Yarmohammadtoosky, Matthew M Nikahd, Pengfei Dong, Xianqi Li, Linxia Gu","doi":"10.1007/s10278-026-01908-0","DOIUrl":"https://doi.org/10.1007/s10278-026-01908-0","url":null,"abstract":"<p><p>Artificial intelligence (AI) has catalyzed revolutionary changes across various sectors, notably in healthcare. In particular, generative AI-led by diffusion models and transformer architectures-has enabled significant breakthroughs in medical imaging (including image reconstruction, image-to-image translation, generation, and classification), protein structure prediction, clinical documentation, diagnostic assistance, radiology interpretation, clinical decision support, medical coding, and billing, as well as drug design and molecular representation. These innovations have enhanced clinical diagnosis, data reconstruction, and drug synthesis. This review paper aims to offer a comprehensive synthesis of recent advances in healthcare applications of generative AI, with an emphasis on diffusion and transformer models. Moreover, we discuss current capabilities, limitations, and outline promising research directions. Serving as both a reference for researchers and a guide for practitioners, this work offers an integrated view of the state of the art, its impact on healthcare, and its future potential.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147825537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Feasibility of No-Code Deep Learning for Diagnosing Bone Metastasis in Bone Scans: A Comparative Study of Teachable Machine and ResNet.
Journal of imaging informatics in medicine Pub Date : 2026-05-01 DOI: 10.1007/s10278-026-01981-5
Sehyun Pak, Ji Young Woo, Ik Yang, Hye Joo Son, Soo-Jong Kim, Suk Hyun Lee
{"title":"Feasibility of No-Code Deep Learning for Diagnosing Bone Metastasis in Bone Scans: A Comparative Study of Teachable Machine and ResNet.","authors":"Sehyun Pak, Ji Young Woo, Ik Yang, Hye Joo Son, Soo-Jong Kim, Suk Hyun Lee","doi":"10.1007/s10278-026-01981-5","DOIUrl":"https://doi.org/10.1007/s10278-026-01981-5","url":null,"abstract":"<p><p>This study explored the feasibility of developing a model that can diagnose positive and negative bone metastasis from bone scan images using Teachable Machine by Google, a no-code AI platform that does not require programming skills or a GPU environment. A fourth-year medical student trained deep learning models using a Teachable Machine on a dataset of 4626 bone scan images from patients with cancer (mean age 65.1 ± 11.3 years; 50.5% female). Because of severe class imbalance (bone metastasis positive:negative = 400:4226), we compared the diagnostic performance of two strategies (original set and augmented dataset with tenfold data augmentation applied to positive images). We investigated the diagnostic performance using various hyperparameters (epochs 50-1000, batch sizes 16-32) with a learning rate of 0.001. The final model generated by Teachable Machine was compared with a conventional deep learning model based on ResNet50. The combination of epoch = 150 and batch size = 16 showed the optimal diagnostic performance. The overall sensitivity, specificity, and positive and negative predictive values were 57.1%, 93.9%, 90.4%, and 68.7%, respectively. Both Teachable Machine and ResNet50 showed good diagnostic performance (area under the curve = 0.812 and 0.869, respectively), although the diagnostic performance of Teachable Machine was inferior to that of the conventional ResNet50 model (p < 0.001). Given its convenience, Teachable Machine represents a valuable and accessible tool for medical education and preliminary model development. It allows researchers without programming skills or GPU resources to construct feasibility models for medical image classification.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147825569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparative Clinical Evaluation of "Memory-Efficient" Synthetic 3D Generative Adversarial Networks (GAN) Head-to-Head to State of Art: Results on Computed Tomography of the Chest.
Journal of imaging informatics in medicine Pub Date : 2026-05-01 DOI: 10.1007/s10278-025-01516-4
Mahshid Shiri, Chandra Bortolotto, Alessandro Bruno, Alessio Consonni, Daniela Maria Grasso, Leonardo Brizzi, Daniele Loiacono, Lorenzo Preda
{"title":"Comparative Clinical Evaluation of \"Memory-Efficient\" Synthetic 3D Generative Adversarial Networks (GAN) Head-to-Head to State of Art: Results on Computed Tomography of the Chest.","authors":"Mahshid Shiri, Chandra Bortolotto, Alessandro Bruno, Alessio Consonni, Daniela Maria Grasso, Leonardo Brizzi, Daniele Loiacono, Lorenzo Preda","doi":"10.1007/s10278-025-01516-4","DOIUrl":"https://doi.org/10.1007/s10278-025-01516-4","url":null,"abstract":"<p><p>Generative adversarial networks (GANs) are increasingly used to generate synthetic medical images, addressing the critical shortage of annotated data for training artificial intelligence (AI) systems. This study introduces conditional random field (CRF)-GAN, a novel memory-efficient GAN architecture that enhances structural consistency in 3D medical image synthesis. Integrating conditional random fields (CRFs) within a two-step generation process, allows CRF-GAN improving spatial coherence while maintaining high-resolution image quality. The model is designed to be computationally efficient, avoiding the need for additional GANs or post-processing. Its performance is evaluated against the state-of-the-art hierarchical (HA)-GAN model. We evaluate the performance of CRF-GAN against the state-of-the-art hierarchical (HA)-GAN model. The comparison between the two models was made through a quantitative evaluation, using Fréchet Inception distance (FID) and maximum mean discrepancy (MMD) metrics, and a qualitative evaluation, through a two-alternative forced choice (2AFC) test completed by a pool of 12 resident radiologists, in order to assess the realism of the generated images. CRF-GAN outperformed HA-GAN with lower FID (0.047 vs. 0.061) and MMD (0.084 vs. 0.086) scores, indicating better image fidelity. The 2AFC test showed a significant preference for images generated by CRF-Gan over those generated by HA-GAN with a p-value of 1.93e - 05. Additionally, CRF-GAN demonstrated 9.34% lower memory usage at 256<sup>3</sup> resolution and achieved up to 14.6% faster training speeds, offering substantial computational savings. CRF-GAN model successfully generates high-resolution 3D medical images with non-inferior quality to conventional models, while being more memory-efficient and faster. The key objective was not only to lower the computational cost but also to reallocate the freed-up resources towards the creation of higher-resolution 3D imaging, which is still a critical factor limiting their direct clinical applicability. Moreover, unlike many previous studies, we combined qualitative and quantitative assessments to obtain a more holistic feedback of model's performance.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147825571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Vision Transformer-Based Segmentation of Abdominal Subcutaneous and Visceral Fat on MRI.
Journal of imaging informatics in medicine Pub Date : 2026-05-01 DOI: 10.1007/s10278-026-01970-8
Sara Hosseinzadeh Kassani, Kavya Patel, Paul K Commean, Mahshid Naghashzadeh, Mahsa Dolatshahi, Farzaneh Rahmani, Shuang Wu, Jingxia Liu, LaKisha Lloyd, Caitlyn Nguyen, Nancy Hantler, Abby McBee-Kemper, Suzanne Schindler, Matthew R Brier, Joseph E Ippolito, Claude Sirlin, Bettina Mittendorfer, John C Morris, Tammie L S Benzinger, Cyrus A Raji
{"title":"Vision Transformer-Based Segmentation of Abdominal Subcutaneous and Visceral Fat on MRI.","authors":"Sara Hosseinzadeh Kassani, Kavya Patel, Paul K Commean, Mahshid Naghashzadeh, Mahsa Dolatshahi, Farzaneh Rahmani, Shuang Wu, Jingxia Liu, LaKisha Lloyd, Caitlyn Nguyen, Nancy Hantler, Abby McBee-Kemper, Suzanne Schindler, Matthew R Brier, Joseph E Ippolito, Claude Sirlin, Bettina Mittendorfer, John C Morris, Tammie L S Benzinger, Cyrus A Raji","doi":"10.1007/s10278-026-01970-8","DOIUrl":"10.1007/s10278-026-01970-8","url":null,"abstract":"<p><p>The purpose of this study is to validate a deep learning-based vision transformer for automated quantification and segmentation of abdominal adipose tissue from T1-weighted MRI. This study included abdominal T1 MRI volumes from 107 participants (mean age, 49.9 years; 35 males, 72 females; BMI range, 18.2-49.6) who were midlife adults enrolled in a prospective study assessing the link between abdominal adiposity and biomarkers of dementia. For each abdominal image, visceral and subcutaneous adipose tissues were annotated by an expert reader as the ground truth. Inter- and intra-reader reliability were assessed to establish ground truth validity. A deep learning-based vision transformer was trained using a fivefold cross-validation scheme, and its performance was evaluated using various segmentation metrics against the manually annotated ground truths. The SwinUNETR48 model achieved average Dice coefficients of 96.56% ± 2.38% (p < .001) for SAT and 88.35% ± 8.82% (p < .001) for VAT in cross-validation. The model generalized well to different adiposity and body sizes within the abdominal cavity. Automated segmentation of abdominal adipose tissue provides a promising option for facilitating large-scale investigation of abdominal fat distribution on MRI.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147825592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Semi-supervised Segmentation Network Based on Prototype-Oriented Local Contrastive Learning for Pregnancy Tissue in MR Images.
Journal of imaging informatics in medicine Pub Date : 2026-04-29 DOI: 10.1007/s10278-026-01973-5
Ping Lou, Jie Ying, Feng Gao, Yu Wang, Haima Yang, Le Fu
{"title":"Semi-supervised Segmentation Network Based on Prototype-Oriented Local Contrastive Learning for Pregnancy Tissue in MR Images.","authors":"Ping Lou, Jie Ying, Feng Gao, Yu Wang, Haima Yang, Le Fu","doi":"10.1007/s10278-026-01973-5","DOIUrl":"https://doi.org/10.1007/s10278-026-01973-5","url":null,"abstract":"<p><p>Cesarean scar pregnancy (CSP) is a severe form of ectopic pregnancy, where early screening and monitoring are critical to reducing the risk of prolonged uterine bleeding and other serious complications. Accurate segmentation of pregnancy tissue plays a vital role in clinical assessment and treatment planning. However, the segmentation of pregnancy tissue is particularly challenging due to the diverse morphology and small size of target regions, which causes limited accuracy in existing studies. In addition, large-scale annotated datasets are lacking, and manual annotation is costly and time-consuming. To address these issues, we propose a prototype-oriented local contrastive learning framework for semi-supervised pregnancy tissue segmentation, which addresses the informatics challenges of limited labeled data and fine-grained feature extraction in medical image segmentation. Specifically, representative prototypes are first extracted to characterize the distribution of features in different images. Then, a prototype-guided local contrastive strategy is introduced to incorporate supervised signals into the contrastive learning process. This guides unlabeled data to align with supervised prototype centers, thereby improving segmentation accuracy. Experiments conducted on self-constructed pregnancy tissue dataset demonstrated that the proposed method achieved Dice coefficients of 86.91% at a 50% labeling rate. To further evaluate the generalizability of the method, we also validated it on the public cardiac dataset, achieving a Dice coefficient of 87.34%. These results not only advance semi-supervised learning in medical imaging informatics but also provide a reliable tool for accurate CSP tissue segmentation, supporting clinical decision-making in early ectopic pregnancy management.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147794201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0