Jakub Dandár, Tomáš Jindra, Daniel Kvak
Casopis lekaru ceskych. 2025;164(3):125-140. Journal Article.
Comparison of performance between artificial intelligence and radiologists in detecting abnormalities on chest X-rays.
Artificial intelligence (AI) is increasingly applied in radiology, where it offers the potential to improve diagnostic accuracy and efficiency, particularly in the evaluation of conventional imaging modalities such as chest X-rays. This study analyzes the performance of a commercial machine-learning-based software product (Carebot AI CXR; Carebot s.r.o.) in detecting abnormalities on chest radiographs, compared with independent evaluations by three radiologists of differing levels of experience. The study was conducted in collaboration with Hospital Tabor, which provided a dataset of 207 anonymised radiographs, of which 196 were assessed as relevant. The sensitivity and specificity of the AI were compared with human assessment in five categories of abnormality: atelectasis (ATE), consolidation (CON), cardiac shadow enlargement (CMG), pleural effusion (EFF), and pulmonary lesions (LES). The Carebot AI CXR software achieved high sensitivity in all evaluated categories (e.g., ATE: 0.909, CMG: 0.889, EFF: 0.951), and its performance was consistent across findings. In contrast, its specificity was lower in some categories (e.g., EFF: 0.792, CON: 0.895), whereas the radiologists achieved specificity values approaching 1.000 in most cases (e.g., RAD 1 and RAD 2 EFF: 1.000). The AI demonstrated consistently higher sensitivity than the less experienced radiologists (e.g., RAD 1 ATE: 0.087, CMG: 0.327), and in some cases than the more experienced assessors, at the cost of a modest decrease in specificity. The study also presents case reports, including false-positive and false-negative findings, which contribute to a deeper understanding of AI performance in clinical practice. The results suggest that AI can effectively complement the work of radiologists, particularly less experienced ones, and improve the sensitivity of diagnosis on chest radiographs.
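The sensitivity and specificity figures quoted in the abstract are standard confusion-matrix rates computed per abnormality category. A minimal sketch of how such per-category values are derived is shown below; the counts used are hypothetical illustrations, not the study's actual data.

```python
def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: share of radiographs with the finding
    that the reader (AI or radiologist) correctly flagged."""
    return tp / (tp + fn)


def specificity(tn: int, fp: int) -> float:
    """True negative rate: share of radiographs without the finding
    that the reader correctly called negative."""
    return tn / (tn + fp)


# Hypothetical confusion-matrix counts for one abnormality category
# (illustrative only; not taken from the study's dataset):
tp, fn, tn, fp = 45, 5, 90, 10

print(f"sensitivity = {sensitivity(tp, fn):.3f}")  # 45 / 50
print(f"specificity = {specificity(tn, fp):.3f}")  # 90 / 100
```

With the counts above, both rates come out at 0.900; in the study, each of the five categories (ATE, CON, CMG, EFF, LES) would yield its own pair of values per reader.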