A fully automated, expert-perceptive image quality assessment system for whole-body [18F]FDG PET/CT
Cong Zhang, Xin Gao, Xuebin Zheng, Jun Xie, Gang Feng, Yunchao Bao, Pengchen Gu, Chuan He, Ruimin Wang, Jiahe Tian
EJNMMI Research, volume 15, issue 1, article 42 (published 2025-04-18). DOI: 10.1186/s13550-025-01238-2. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12008089/pdf/
Citations: 0
Abstract
Background: The quality of clinical PET/CT images is critical for both accurate diagnosis and image-based research. However, current image quality assessment (IQA) methods predominantly rely on handcrafted features and region-specific analyses, thereby limiting automation in whole-body and multicenter evaluations. This study aims to develop an expert-perceptive deep learning-based IQA system for [18F]FDG PET/CT to tackle the lack of automated, interpretable assessments of clinical whole-body PET/CT image quality.
Methods: This retrospective multicenter study included clinical whole-body [18F]FDG PET/CT scans from 718 patients. Automated identification and localization algorithms were applied to select predefined pairs of PET and CT slices from whole-body images. Fifteen experienced experts, trained to conduct blinded slice-level subjective assessments, provided average visual scores as reference standards. Using the MANIQA framework, the developed IQA model integrates the Vision Transformer, Transposed Attention, and Scale Swin Transformer Blocks to categorize PET and CT images into five quality classes. The model's correlation, consistency, and accuracy with expert evaluations on both PET and CT test sets were statistically analysed to assess the system's IQA performance. Additionally, the model's ability to distinguish high-quality images was evaluated using receiver operating characteristic (ROC) curves.
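The abstract names the MANIQA framework with Vision Transformer, Transposed Attention, and Scale Swin Transformer blocks, but gives no implementation detail. As a purely illustrative sketch, and not the authors' architecture, the snippet below wires a five-class slice-quality classifier onto a plain torchvision ViT backbone; the backbone choice, the grayscale-to-RGB input handling, and the class mapping are all assumptions.

```python
# Illustrative only: a five-class slice-quality classifier on a standard ViT
# backbone. This is NOT the MANIQA-based model described in the paper; it merely
# shows the classification setup implied by "five quality classes".
import torch
from torchvision.models import vit_b_16

NUM_CLASSES = 5  # quality classes 1 (worst) .. 5 (best), as assumed here

model = vit_b_16(weights=None, num_classes=NUM_CLASSES)

# A hypothetical preprocessed grayscale PET or CT slice, resized to the ViT
# input resolution and replicated to three channels (a common workaround for
# backbones that expect RGB input).
slice_gray = torch.rand(1, 1, 224, 224)
logits = model(slice_gray.repeat(1, 3, 1, 1))    # shape: (1, NUM_CLASSES)
pred_class = logits.argmax(dim=1) + 1            # map index 0..4 -> class 1..5
print(pred_class)
```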
Results: The IQA model demonstrated high accuracy in predicting image quality categories and showed strong concordance with expert evaluations of PET/CT image quality. In predicting slice-level image quality across all body regions, the model achieved an average accuracy of 0.832 for PET and 0.902 for CT. The model's scores showed substantial agreement with expert assessments, achieving average Spearman coefficients (ρ) of 0.891 for PET and 0.624 for CT, while the average Intraclass Correlation Coefficient (ICC) reached 0.953 for PET and 0.92 for CT. The PET IQA model demonstrated strong discriminative performance, achieving an area under the curve (AUC) of ≥ 0.88 for both the thoracic and abdominal regions.
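The agreement statistics reported above (accuracy over the five quality classes, Spearman's ρ, ICC, and ROC AUC for discriminating high-quality slices) can be computed from paired model and expert scores with standard tooling. The sketch below is a minimal illustration under assumed inputs: the score arrays, the rounding of continuous scores into five classes, and the "high-quality" threshold are hypothetical, not taken from the paper.

```python
# Minimal sketch (not the authors' code) of the slice-level agreement metrics
# named in the abstract: classification accuracy, Spearman's rho, ICC, and AUC.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import accuracy_score, roc_auc_score


def icc_2_1(y: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measurement
    (Shrout & Fleiss). `y` has shape (n_subjects, n_raters)."""
    n, k = y.shape
    grand = y.mean()
    row_means = y.mean(axis=1)
    col_means = y.mean(axis=0)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)    # between-subject MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)    # between-rater MS
    sse = np.sum((y - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                          # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)


# Hypothetical data: continuous quality scores in [1, 5] for each evaluated slice.
rng = np.random.default_rng(0)
expert_scores = rng.uniform(1, 5, size=200)
model_scores = np.clip(expert_scores + rng.normal(0, 0.3, size=200), 1, 5)

# Five quality classes obtained by rounding to the nearest integer
# (an assumed discretisation; the paper's actual class definition may differ).
expert_cls = np.rint(expert_scores).astype(int)
model_cls = np.rint(model_scores).astype(int)

acc = accuracy_score(expert_cls, model_cls)
rho, _ = spearmanr(expert_scores, model_scores)
icc = icc_2_1(np.column_stack([expert_scores, model_scores]))
auc = roc_auc_score(expert_cls >= 4, model_scores)   # assumed high-quality cutoff
print(f"accuracy={acc:.3f}  spearman={rho:.3f}  ICC={icc:.3f}  AUC={auc:.3f}")
```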
Conclusions: This fully automated IQA system provides a robust and comprehensive framework for the objective evaluation of clinical image quality. Furthermore, it demonstrates significant potential as an impartial, expert-level tool for standardised multicenter clinical IQA.
EJNMMI Research (Radiology, Nuclear Medicine & Medical Imaging)
CiteScore: 5.90
Self-citation rate: 3.10%
Articles published per year: 72
Average review time: 13 weeks
Journal description:
EJNMMI Research publishes new basic, translational and clinical research in the field of nuclear medicine and molecular imaging. Regular features include original research articles, rapid communication of preliminary data on innovative research, interesting case reports, editorials, and letters to the editor. Educational articles on basic sciences, fundamental aspects and controversy related to pre-clinical and clinical research or ethical aspects of research are also welcome. Timely reviews provide updates on current applications, issues in imaging research and translational aspects of nuclear medicine and molecular imaging technologies.
The main emphasis is placed on the development of targeted imaging with radiopharmaceuticals within the broader context of molecular probes to enhance understanding and characterisation of the complex biological processes underlying disease and to develop, test and guide new treatment modalities, including radionuclide therapy.