Performance of automated machine learning in detecting fundus diseases based on ophthalmologic B-scan ultrasound images.

BMJ Open Ophthalmology · Impact Factor 2.0 · JCR Q2 (Ophthalmology)
Qiaoling Wei, Qian Chen, Chen Zhao, Rui Jiang
{"title":"Performance of automated machine learning in detecting fundus diseases based on ophthalmologic B-scan ultrasound images.","authors":"Qiaoling Wei, Qian Chen, Chen Zhao, Rui Jiang","doi":"10.1136/bmjophth-2024-001873","DOIUrl":null,"url":null,"abstract":"<p><strong>Aim: </strong>To evaluate the efficacy of automated machine learning (AutoML) models in detecting fundus diseases using ocular B-scan ultrasound images.</p><p><strong>Methods: </strong>Ophthalmologists annotated two B-scan ultrasound image datasets to develop three AutoML models-single-label, multi-class single-label and multi-label-on the Vertex artificial intelligence (AI) platform. Performance of these models was compared among themselves and against existing bespoke models for binary classification tasks.</p><p><strong>Results: </strong>The training set involved 3938 images from 1378 patients, while batch predictions used an additional set of 336 images from 180 patients. The single-label AutoML model, trained on normal and abnormal fundus images, achieved an area under the precision-recall curve (AUPRC) of 0.9943. The multi-class single-label model, focused on single-pathology images, recorded an AUPRC of 0.9617, with performance metrics of these two single-label models proving comparable to those of previously published models. The multi-label model, designed to detect both single and multiple pathologies, posted an AUPRC of 0.9650. Pathology classification AUPRCs for the multi-class single-label model ranged from 0.9277 to 1.0000 and from 0.8780 to 0.9980 for the multi-label model. Batch prediction accuracies ranged from 86.57% to 97.65% for various fundus conditions in the multi-label AutoML model. Statistical analysis demonstrated that the single-label model significantly outperformed the other two models in all evaluated metrics (p<0.05).</p><p><strong>Conclusion: </strong>AutoML models, developed by clinicians, effectively detected multiple fundus lesions with performance on par with that of deep-learning models crafted by AI specialists. This underscores AutoML's potential to revolutionise ophthalmologic diagnostics, facilitating broader accessibility and application of sophisticated diagnostic technologies.</p>","PeriodicalId":9286,"journal":{"name":"BMJ Open Ophthalmology","volume":"9 1","pages":""},"PeriodicalIF":2.0000,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11647328/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"BMJ Open Ophthalmology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1136/bmjophth-2024-001873","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}
引用次数: 0

Abstract

Aim: To evaluate the efficacy of automated machine learning (AutoML) models in detecting fundus diseases using ocular B-scan ultrasound images.

Methods: Ophthalmologists annotated two B-scan ultrasound image datasets to develop three AutoML models (single-label, multi-class single-label and multi-label) on the Vertex artificial intelligence (AI) platform. Model performance was compared across the three AutoML models and against existing bespoke models for binary classification tasks.
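The paper does not publish code; as a rough illustration of the workflow described above, the sketch below shows how a multi-label image classification AutoML model might be trained on Vertex AI with the Python SDK (google-cloud-aiplatform). The project ID, bucket path, dataset names and training budget are placeholders, not values from the study.

```python
# Hypothetical sketch of the Vertex AI AutoML workflow; identifiers are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

# Import annotated B-scan images; the CSV maps each GCS image URI to its label(s).
dataset = aiplatform.ImageDataset.create(
    display_name="fundus-bscan-multilabel",
    gcs_source="gs://my-bucket/annotations/multilabel.csv",
    import_schema_uri=aiplatform.schema.dataset.ioformat.image.multi_label_classification,
)

# AutoML image classification job; multi_label=True allows several pathologies
# per image, while multi_label=False would give the single-label variants.
job = aiplatform.AutoMLImageTrainingJob(
    display_name="fundus-bscan-automl",
    prediction_type="classification",
    multi_label=True,
)

model = job.run(
    dataset=dataset,
    model_display_name="fundus-bscan-automl-model",
    training_fraction_split=0.8,
    validation_fraction_split=0.1,
    test_fraction_split=0.1,
    budget_milli_node_hours=8000,  # 8 node-hours; an illustrative budget only
)
```

In this setup the data split fractions and node-hour budget are tunable; the study does not report which values were used, so the ones above are defaults chosen for illustration.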

Results: The training set comprised 3938 images from 1378 patients, and batch predictions used an additional 336 images from 180 patients. The single-label AutoML model, trained on normal and abnormal fundus images, achieved an area under the precision-recall curve (AUPRC) of 0.9943. The multi-class single-label model, trained on single-pathology images, achieved an AUPRC of 0.9617; the performance of these two single-label models was comparable to that of previously published bespoke models. The multi-label model, designed to detect both single and multiple pathologies, achieved an AUPRC of 0.9650. Per-pathology AUPRCs ranged from 0.9277 to 1.0000 for the multi-class single-label model and from 0.8780 to 0.9980 for the multi-label model. In the multi-label AutoML model, batch prediction accuracies for the various fundus conditions ranged from 86.57% to 97.65%. Statistical analysis showed that the single-label model significantly outperformed the other two models on all evaluated metrics (p<0.05).
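AUPRC here is the per-label area under the precision-recall curve reported by the Vertex AI evaluation. As a point of reference only (not the authors' evaluation code), a minimal sketch of computing the same metric from exported prediction scores with scikit-learn; all label and score values below are made up for illustration.

```python
# Minimal sketch: AUPRC (average precision) from predicted scores with scikit-learn.
# y_true / y_score values are illustrative, not study data.
import numpy as np
from sklearn.metrics import average_precision_score, precision_recall_curve, auc

# Binary case (e.g. normal vs abnormal fundus): one score per image.
y_true = np.array([0, 0, 1, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.85, 0.7, 0.95, 0.2, 0.6])

auprc = average_precision_score(y_true, y_score)   # step-wise area under the PR curve
precision, recall, _ = precision_recall_curve(y_true, y_score)
auprc_trapezoid = auc(recall, precision)            # trapezoidal alternative

# Multi-label case: one column per pathology, macro-averaged across labels.
Y_true = np.array([[1, 0, 0], [0, 1, 1], [1, 0, 1]])
Y_score = np.array([[0.9, 0.2, 0.1], [0.1, 0.8, 0.7], [0.7, 0.3, 0.9]])
macro_auprc = average_precision_score(Y_true, Y_score, average="macro")

print(auprc, auprc_trapezoid, macro_auprc)
```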

Conclusion: AutoML models, developed by clinicians, effectively detected multiple fundus lesions with performance on par with that of deep-learning models crafted by AI specialists. This underscores AutoML's potential to revolutionise ophthalmologic diagnostics, facilitating broader accessibility and application of sophisticated diagnostic technologies.
