A statistical approach to automated analysis of the low-contrast object detectability test for the large ACR MRI phantom

IF 2.2 | CAS Tier 4 (Medicine) | JCR Q3: RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Ali M. Golestani, Julia M. Gee
{"title":"大ACR MRI幻像低对比度目标可检测性测试自动分析的统计方法","authors":"Ali M. Golestani,&nbsp;Julia M. Gee","doi":"10.1002/acm2.70173","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>Regular quality control checks are essential to ensure the quality of MRI systems. The American College of Radiology (ACR) has developed a standardized large phantom test protocol for this purpose. However, the ACR protocol recommends manual measurements, which are time-consuming, labor-intensive, and prone to variability, impacting accuracy and reproducibility. Although some aspects of the ACR evaluation have been automated or semi-automated, tests like low-contrast object detectability (LCOD), remain challenging to automate. LCOD involves assessing the visibility of objects at various contrast levels.</p>\n </section>\n \n <section>\n \n <h3> Purpose</h3>\n \n <p>The purpose of this research is to propose and evaluate an automated approach for LCOD testing in MRI.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>The automated Python code generates a one-dimensional profile of image intensities along radial paths from the center of the contrast disk. These profiles are compared to templates created from the disc's geometric information using general linear model statistical tests. A total of 80 image volumes (40 T1- and 40 T2-weighted) were assessed twice by two human evaluators and the proposed Python code.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>Human raters showed intra-rater variability (Cohen's Kappa 0.941, 0.962), while the Python code exhibited perfect intra-rater agreement. Inter-rater agreement between the code and humans was comparable to human-to-human agreement (Cohen's Kappa 0.878 between the two human raters vs. 0.945, and 0.783 between the code and human raters). A stress test revealed both human raters and the code assigned higher scores to lower bandwidth images and lower scores to higher bandwidth images.</p>\n </section>\n \n <section>\n \n <h3> Conclusion</h3>\n \n <p>The proposed automated method eliminates intra-rater variability and achieves strong inter-rater agreement with human raters. These findings suggest the method is reliable and suitable for clinical settings, showing high concordance with human assessments.</p>\n </section>\n </div>","PeriodicalId":14989,"journal":{"name":"Journal of Applied Clinical Medical Physics","volume":"26 7","pages":""},"PeriodicalIF":2.2000,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/acm2.70173","citationCount":"0","resultStr":"{\"title\":\"A statistical approach to automated analysis of the low-contrast object detectability test for the large ACR MRI phantom\",\"authors\":\"Ali M. Golestani,&nbsp;Julia M. Gee\",\"doi\":\"10.1002/acm2.70173\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <h3> Background</h3>\\n \\n <p>Regular quality control checks are essential to ensure the quality of MRI systems. The American College of Radiology (ACR) has developed a standardized large phantom test protocol for this purpose. However, the ACR protocol recommends manual measurements, which are time-consuming, labor-intensive, and prone to variability, impacting accuracy and reproducibility. 
Although some aspects of the ACR evaluation have been automated or semi-automated, tests like low-contrast object detectability (LCOD), remain challenging to automate. LCOD involves assessing the visibility of objects at various contrast levels.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Purpose</h3>\\n \\n <p>The purpose of this research is to propose and evaluate an automated approach for LCOD testing in MRI.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Methods</h3>\\n \\n <p>The automated Python code generates a one-dimensional profile of image intensities along radial paths from the center of the contrast disk. These profiles are compared to templates created from the disc's geometric information using general linear model statistical tests. A total of 80 image volumes (40 T1- and 40 T2-weighted) were assessed twice by two human evaluators and the proposed Python code.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Results</h3>\\n \\n <p>Human raters showed intra-rater variability (Cohen's Kappa 0.941, 0.962), while the Python code exhibited perfect intra-rater agreement. Inter-rater agreement between the code and humans was comparable to human-to-human agreement (Cohen's Kappa 0.878 between the two human raters vs. 0.945, and 0.783 between the code and human raters). A stress test revealed both human raters and the code assigned higher scores to lower bandwidth images and lower scores to higher bandwidth images.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Conclusion</h3>\\n \\n <p>The proposed automated method eliminates intra-rater variability and achieves strong inter-rater agreement with human raters. These findings suggest the method is reliable and suitable for clinical settings, showing high concordance with human assessments.</p>\\n </section>\\n </div>\",\"PeriodicalId\":14989,\"journal\":{\"name\":\"Journal of Applied Clinical Medical Physics\",\"volume\":\"26 7\",\"pages\":\"\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2025-07-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/acm2.70173\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Applied Clinical Medical Physics\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/acm2.70173\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Applied Clinical Medical Physics","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/acm2.70173","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract



Background

Regular quality control checks are essential to ensure the quality of MRI systems. The American College of Radiology (ACR) has developed a standardized large phantom test protocol for this purpose. However, the ACR protocol recommends manual measurements, which are time-consuming, labor-intensive, and prone to variability, impacting accuracy and reproducibility. Although some aspects of the ACR evaluation have been automated or semi-automated, tests such as low-contrast object detectability (LCOD) remain challenging to automate. LCOD involves assessing the visibility of objects at various contrast levels.

Purpose

The purpose of this research is to propose and evaluate an automated approach for LCOD testing in MRI.

Methods

The automated Python code generates one-dimensional profiles of image intensities along radial paths from the center of the contrast disk. These profiles are compared to templates created from the disk's geometric information using general linear model (GLM) statistical tests. A total of 80 image volumes (40 T1-weighted and 40 T2-weighted) were each assessed twice by two human evaluators and by the proposed Python code.
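The abstract does not include implementation details, but the described procedure, sampling intensities along radial spokes and regressing each profile against a geometric template, can be illustrated with a minimal Python sketch. Everything below (the spoke geometry, hole positions, and function names) is an illustrative assumption, not the authors' actual code.

import numpy as np
from scipy import ndimage, stats

def radial_profile(image, center, angle_deg, n_samples=100, radius=40.0):
    # Sample image intensities along one radial spoke from the disk center (row, col order).
    radii = np.linspace(0.0, radius, n_samples)
    rows = center[0] + radii * np.sin(np.deg2rad(angle_deg))
    cols = center[1] + radii * np.cos(np.deg2rad(angle_deg))
    return ndimage.map_coordinates(image, np.vstack([rows, cols]), order=1)

def spoke_template(n_samples, hole_bounds):
    # Boxcar regressor: 1 where the spoke is expected to cross a low-contrast hole, 0 elsewhere.
    template = np.zeros(n_samples)
    for start, stop in hole_bounds:
        template[start:stop] = 1.0
    return template

def glm_spoke_test(profile, template):
    # Fit profile = b0 + b1 * template + noise and test whether b1 > 0.
    X = np.column_stack([np.ones_like(template), template])
    beta, *_ = np.linalg.lstsq(X, profile, rcond=None)
    resid = profile - X @ beta
    dof = len(profile) - X.shape[1]
    sigma2 = resid @ resid / dof
    se_b1 = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    t_stat = beta[1] / se_b1
    return t_stat, stats.t.sf(t_stat, dof)  # one-sided p-value

In a scheme like this, a spoke would count as "detected" when its template regressor is statistically significant, and a disk could be scored by the number of detected spokes; the significance threshold and scoring rule here are assumptions rather than the published method.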

Results

Human raters showed some intra-rater variability (Cohen's kappa 0.941 and 0.962), while the Python code exhibited perfect intra-rater agreement. Agreement between the code and the human raters was comparable to human-to-human agreement (Cohen's kappa 0.878 between the two human raters vs. 0.945 and 0.783 between the code and each human rater). A stress test revealed that both the human raters and the code assigned higher scores to lower-bandwidth images and lower scores to higher-bandwidth images.
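For context, the kappa values quoted above measure chance-corrected agreement between two sets of categorical scores. A minimal sketch of how such agreement could be computed is shown below; the per-slice scores are made-up placeholders, not the study's data.

from sklearn.metrics import cohen_kappa_score

# Hypothetical per-slice LCOD scores; placeholders only, not the study's data.
rater_a_session1 = [9, 10, 8, 9, 10, 7]
rater_a_session2 = [9, 10, 8, 8, 10, 7]
code_scores      = [9, 10, 8, 9, 10, 8]

print("intra-rater kappa:", cohen_kappa_score(rater_a_session1, rater_a_session2))
print("code vs. rater kappa:", cohen_kappa_score(rater_a_session1, code_scores))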

Conclusion

The proposed automated method eliminates intra-rater variability and achieves strong inter-rater agreement with human raters. These findings suggest the method is reliable and suitable for clinical settings, showing high concordance with human assessments.

Source journal: Journal of Applied Clinical Medical Physics
CiteScore: 3.60
Self-citation rate: 19.00%
Articles published: 331
Review turnaround: 3 months
Journal description: Journal of Applied Clinical Medical Physics is an international Open Access publication dedicated to clinical medical physics. JACMP welcomes original contributions dealing with all aspects of medical physics from scientists working in clinical medical physics around the world. JACMP accepts only online submission. JACMP will publish:
- Original Contributions: Peer-reviewed investigations that represent new and significant contributions to the field. Recommended word count: up to 7500.
- Review Articles: Reviews of major areas or sub-areas in the field of clinical medical physics. These articles may be of any length and are peer reviewed.
- Technical Notes: These should be no longer than 3000 words, including key references.
- Letters to the Editor: Comments on papers published in JACMP or on any other matters of interest to clinical medical physics. These should be no more than 1250 words (including references), and their publication is based solely on the decision of the editor, who occasionally consults experts on the merit of the contents.
- Book Reviews: The editorial office solicits book reviews.
- Announcements of Forthcoming Meetings: The Editor may provide notice of forthcoming meetings, course offerings, and other events relevant to clinical medical physics.
- Parallel Opposed Editorials: We welcome topics relevant to clinical practice and the medical physics profession. The contents can be a controversial debate or opposed aspects of an issue. One author argues for the position and the other against. Each side of the debate contains an opening statement of up to 800 words, followed by a rebuttal of up to 500 words. Readers interested in participating in this series should contact the moderator with a proposed title and a short description of the topic.