N. Clerkin , C. Ski , M. Suleiman , Z. Gandomkar , P. Brennan , R. Strudwick
{"title":"分析具有挑战性的乳房x光病例显示了微妙的读者群体差异","authors":"N. Clerkin , C. Ski , M. Suleiman , Z. Gandomkar , P. Brennan , R. Strudwick","doi":"10.1016/j.radi.2025.103134","DOIUrl":null,"url":null,"abstract":"<div><h3>Introduction</h3><div>High quality image interpretation is essential to detect early abnormalities on mammograms. A better understanding of the types of image characteristics that are most challenging to readers would support future education, as well as underpin advancements in AI modelling. This current work focuses on radiography advanced practitioners (RAP) to establish if RAPs and radiologists are challenged by the same characteristics.</div></div><div><h3>Methods</h3><div>This was a prospective, comparison study of radiographer and radiologist mammography readings. Using a cloud-based image interpretative platform and a 5 MP display, 16 radiographers and 24 radiologists read a test set of 60 mammograms with 20 confirmed cancer cases. Difficulty indices were calculated for each group based on error rates for each mammographic case. Unpaired Mann–Whitney tests were employed to compare error rates between various image characteristics. Spearman correlation analysis was used to establish if difficulty indices were associated with each cohort.</div></div><div><h3>Results</h3><div>Strong correlations for cancer and normal cases difficulty indices respectively (r = 0.83 CI:0.61–0.93) and (r = 0.73; CI:0.54–0.85) were shown between both groups. Greatest difficulty scores were shown for cases with soft tissue appearances as opposed calcifications (p = 0.003) and for cases without prior images, compared to those with (p = 0.03). No significant image characteristic differences were noted for the radiologists.</div></div><div><h3>Conclusion</h3><div>This early study acknowledges a strong correlation between radiologists and radiographers when determining which mammographic cases are difficult to interpret. 
However, radiographers appear to be more susceptible to varying cancer appearances as well as the non-availability of prior images with normal cases.</div></div><div><h3>Implications for practice</h3><div>The results should be helpful when tailoring educational strategies and developing augmented artificial intelligence (AI) solutions to support human readers.</div></div>","PeriodicalId":47416,"journal":{"name":"Radiography","volume":"31 6","pages":"Article 103134"},"PeriodicalIF":2.8000,"publicationDate":"2025-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Analysis of challenging mammographic cases demonstrates subtle reader group discrepancies\",\"authors\":\"N. Clerkin , C. Ski , M. Suleiman , Z. Gandomkar , P. Brennan , R. Strudwick\",\"doi\":\"10.1016/j.radi.2025.103134\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Introduction</h3><div>High quality image interpretation is essential to detect early abnormalities on mammograms. A better understanding of the types of image characteristics that are most challenging to readers would support future education, as well as underpin advancements in AI modelling. This current work focuses on radiography advanced practitioners (RAP) to establish if RAPs and radiologists are challenged by the same characteristics.</div></div><div><h3>Methods</h3><div>This was a prospective, comparison study of radiographer and radiologist mammography readings. Using a cloud-based image interpretative platform and a 5 MP display, 16 radiographers and 24 radiologists read a test set of 60 mammograms with 20 confirmed cancer cases. Difficulty indices were calculated for each group based on error rates for each mammographic case. Unpaired Mann–Whitney tests were employed to compare error rates between various image characteristics. 
Spearman correlation analysis was used to establish if difficulty indices were associated with each cohort.</div></div><div><h3>Results</h3><div>Strong correlations for cancer and normal cases difficulty indices respectively (r = 0.83 CI:0.61–0.93) and (r = 0.73; CI:0.54–0.85) were shown between both groups. Greatest difficulty scores were shown for cases with soft tissue appearances as opposed calcifications (p = 0.003) and for cases without prior images, compared to those with (p = 0.03). No significant image characteristic differences were noted for the radiologists.</div></div><div><h3>Conclusion</h3><div>This early study acknowledges a strong correlation between radiologists and radiographers when determining which mammographic cases are difficult to interpret. However, radiographers appear to be more susceptible to varying cancer appearances as well as the non-availability of prior images with normal cases.</div></div><div><h3>Implications for practice</h3><div>The results should be helpful when tailoring educational strategies and developing augmented artificial intelligence (AI) solutions to support human readers.</div></div>\",\"PeriodicalId\":47416,\"journal\":{\"name\":\"Radiography\",\"volume\":\"31 6\",\"pages\":\"Article 103134\"},\"PeriodicalIF\":2.8000,\"publicationDate\":\"2025-08-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Radiography\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1078817425002780\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL 
IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Radiography","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1078817425002780","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Analysis of challenging mammographic cases demonstrates subtle reader group discrepancies
Introduction
High-quality image interpretation is essential for detecting early abnormalities on mammograms. A better understanding of the image characteristics that readers find most challenging would support future education and underpin advances in AI modelling. The current work focuses on radiography advanced practitioners (RAPs) to establish whether RAPs and radiologists are challenged by the same characteristics.
Methods
This was a prospective comparative study of radiographer and radiologist mammography readings. Using a cloud-based image interpretation platform and a 5 MP display, 16 radiographers and 24 radiologists each read a test set of 60 mammograms, 20 of which were confirmed cancer cases. Difficulty indices were calculated for each group from the error rates on each mammographic case. Unpaired Mann–Whitney tests were used to compare error rates across image characteristics, and Spearman correlation analysis was used to establish whether the two cohorts' difficulty indices were associated.
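The analysis described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the error data below are randomly generated stand-ins, and the split of cases used for the Mann–Whitney comparison is hypothetical; only the cohort sizes (16 radiographers, 24 radiologists, 60 cases) come from the study.

```python
# Illustrative sketch (not the authors' code): per-case difficulty
# indices from reader error rates, compared across groups with
# Spearman correlation and a Mann-Whitney U test.
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical binary reading outcomes: rows = readers, cols = cases.
# 1 = error on that case, 0 = correct. Shapes mirror the study design.
radiographer_errors = rng.integers(0, 2, size=(16, 60))
radiologist_errors = rng.integers(0, 2, size=(24, 60))

# Difficulty index per case = proportion of readers who erred on it.
diff_rap = radiographer_errors.mean(axis=0)
diff_rad = radiologist_errors.mean(axis=0)

# Association between the two groups' case-level difficulty indices.
rho, p_rho = spearmanr(diff_rap, diff_rad)

# Example characteristic comparison within one group: an assumed split
# of the first 20 cases vs the remaining 40 (e.g. cancers vs normals).
u_stat, p_u = mannwhitneyu(diff_rap[:20], diff_rap[20:])

print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
print(f"Mann-Whitney U = {u_stat:.1f} (p = {p_u:.3f})")
```

With real data, each entry of the error matrices would record whether a given reader misclassified a given case, so the column means directly reproduce the per-case error rates on which the study's difficulty indices are based.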
Results
Difficulty indices for cancer and normal cases correlated strongly between the two groups (r = 0.83, CI: 0.61–0.93 and r = 0.73, CI: 0.54–0.85, respectively). Among radiographers, the greatest difficulty scores were seen for cancer cases with soft-tissue appearances as opposed to calcifications (p = 0.003), and for normal cases without prior images compared with those with priors (p = 0.03). No significant differences across image characteristics were noted for the radiologists.
Conclusion
This early study demonstrates a strong correlation between radiologists and radiographers in which mammographic cases they find difficult to interpret. However, radiographers appear more susceptible to varying cancer appearances, and to the absence of prior images in normal cases.
Implications for practice
The results should help tailor educational strategies and inform the development of augmented artificial intelligence (AI) solutions that support human readers.
Journal: Radiography (RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING)
CiteScore: 4.70
Self-citation rate: 34.60%
Annual articles: 169
Review time: 63 days
Journal description:
Radiography is an International, English language, peer-reviewed journal of diagnostic imaging and radiation therapy. Radiography is the official professional journal of the College of Radiographers and is published quarterly. Radiography aims to publish the highest quality material, both clinical and scientific, on all aspects of diagnostic imaging and radiation therapy and oncology.