Jonas M Getzmann, Kitija Nulle, Cinzia Mennini, Umberto Viglino, Francesca Serpi, Domenico Albano, Carmelo Messina, Stefano Fusco, Salvatore Gitto, Luca Maria Sconfienza
Insights into Imaging, vol. 16, no. 1, p. 173. Published 2025-08-09. DOI: 10.1186/s13244-025-02046-x. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12335413/pdf/
Deep learning in rib fracture imaging: study quality assessment using the Must AI Criteria-10 (MAIC-10) checklist for artificial intelligence in medical imaging.
Objectives: To analyze the methodological quality of studies on deep learning (DL) in rib fracture imaging with the Must AI Criteria-10 (MAIC-10) checklist, and to report insights and experiences regarding the applicability of the MAIC-10 checklist.
Materials and methods: An electronic literature search was conducted on the PubMed database. After article selection, three radiologists independently rated the articles according to MAIC-10. Inter-rater agreement on the MAIC-10 score for each checklist item was assessed using Fleiss' kappa coefficient.
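For readers unfamiliar with the agreement statistic named above, the following is a minimal, self-contained sketch of how Fleiss' kappa is computed for multiple raters; the example data are purely illustrative and are not the study's ratings.

```python
def fleiss_kappa(ratings):
    """Compute Fleiss' kappa for inter-rater agreement.

    ratings: one row per rated item; each row holds the count of raters
    assigning that item to each category. Every row must sum to the same
    number of raters (here, e.g., three radiologists).
    """
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    n_cats = len(ratings[0])

    # Mean observed per-item agreement P_bar
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ) / n_items

    # Chance agreement P_e from the category marginals
    totals = [sum(row[j] for row in ratings) for j in range(n_cats)]
    p_e = sum((t / (n_items * n_raters)) ** 2 for t in totals)

    return (p_bar - p_e) / (1 - p_e)

# Three raters, two categories ("item addressed" / "not addressed"),
# three items, perfect agreement on every item:
print(fleiss_kappa([[3, 0], [0, 3], [3, 0]]))  # -> 1.0
```

A value of 1 indicates perfect agreement beyond chance, 0 indicates chance-level agreement, and negative values indicate systematic disagreement; the study's average of 0.771 would conventionally be read as substantial agreement.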
Results: A total of 25 original articles discussing DL applications in rib fracture imaging were identified. Most studies focused on fracture detection (n = 21, 84%). Internal cross-validation of the dataset was performed in most studies (n = 16, 64%), while only six studies (24%) conducted external validation. The mean MAIC-10 score of the 25 studies was 5.63 (SD, 1.84; range 1-8), with the item "clinical need" reported most consistently (100%) and the item "study design" most frequently reported incompletely (94.8%). The average inter-rater agreement for the MAIC-10 score was 0.771.
Conclusions: The MAIC-10 checklist is a valid tool for assessing the quality of AI research in medical imaging with good inter-rater agreement. With regard to rib fracture imaging, items such as "study design", "explainability", and "transparency" were often not comprehensively addressed.
Critical relevance statement: AI in medical imaging has become increasingly common. Therefore, quality control systems for published literature, such as the MAIC-10 checklist, are needed to ensure high-quality research output.
Key points: Quality control systems are needed for research on AI in medical imaging. The MAIC-10 checklist is a valid tool for assessing the quality of AI research in medical imaging. Checklist items such as "study design", "explainability", and "transparency" are frequently not addressed comprehensively.
About the journal:
Insights into Imaging (I³) is a peer-reviewed open access journal published under the brand SpringerOpen. All content published in the journal is freely available online to anyone, anywhere!
I³ continuously updates scientific knowledge and progress in best-practice standards in radiology through the publication of original articles and state-of-the-art reviews and opinions, along with recommendations and statements from the leading radiological societies in Europe.
Founded by the European Society of Radiology (ESR), I³ creates a platform for educational material, guidelines and recommendations, and a forum for topics of controversy.
A balanced combination of review articles, original papers, short communications from European radiological congresses and information on society matters makes I³ an indispensable source for current information in this field.
I³ is owned by the ESR; however, authors retain copyright to their articles under the Creative Commons Attribution License (see Copyright and License Agreement). All articles can be read, redistributed, and reused for free, as long as the author of the original work is cited properly.
The open access fees (article-processing charges) for this journal are kindly sponsored by ESR for all Members.
The journal went open access in 2012, which means that all articles published since then are freely available online.