Automated Machine Learning versus Expert-Designed Models in Ocular Toxoplasmosis: Detection and Lesion Localization Using Fundus Images.

Impact Factor 2.6 · CAS Tier 4 (Medicine) · JCR Q2 (Ophthalmology)
Ocular Immunology and Inflammation · Pub Date: 2024-11-01 · Epub Date: 2024-02-27 · DOI: 10.1080/09273948.2024.2319281
Daniel Milad, Fares Antaki, Allison Bernstein, Samir Touma, Renaud Duval

Abstract

Purpose: Automated machine learning (AutoML) allows clinicians without coding experience to build their own deep learning (DL) models. This study assesses the performance of AutoML in detecting and localizing ocular toxoplasmosis (OT) lesions in fundus images and compares it to expert-designed models.

Methods: Ophthalmology trainees without coding experience designed AutoML models using 304 labelled fundus images. We designed a binary model to differentiate OT from normal and an object detection model to visually identify OT lesions.

Results: The AutoML binary model had an area under the precision-recall curve (AuPRC) of 0.945, sensitivity of 100%, specificity of 83% and accuracy of 93.5% (vs. 94%, 86% and 91% for the bespoke models). The AutoML object detection model had an AuPRC of 0.600 with a precision of 93.3% and recall of 56%. Using a diversified external validation dataset, our model correctly labelled 15 normal fundus images (100%) and 15 OT fundus images (100%), with mean confidence scores of 0.965 and 0.963, respectively.
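As a quick illustration of how such headline numbers arise, the sketch below computes sensitivity, specificity, precision and accuracy from a binary confusion matrix. The counts used here are hypothetical (the abstract does not report raw true/false positive counts); they are chosen only so the formulas land near the reported values.

```python
def binary_metrics(tp, fp, tn, fn):
    """Return (sensitivity, specificity, precision, accuracy) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                  # true-positive rate (recall)
    specificity = tn / (tn + fp)                  # true-negative rate
    precision = tp / (tp + fp)                    # positive predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)    # overall fraction correct
    return sensitivity, specificity, precision, accuracy

# Hypothetical hold-out split: 20 OT images, 12 normal images.
sens, spec, prec, acc = binary_metrics(tp=20, fp=2, tn=10, fn=0)
print(f"sensitivity={sens:.0%} specificity={spec:.1%} accuracy={acc:.1%}")
```

With these illustrative counts, sensitivity is 100% and specificity is about 83%, in the same range as the AutoML binary model's reported figures; the AuPRC additionally summarizes precision and recall across all confidence thresholds rather than at a single operating point.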

Conclusion: AutoML models created by ophthalmologists without coding experience were comparable to or better than expert-designed bespoke models trained on the same dataset. By creatively using AutoML to identify OT lesions on fundus images, our approach brings the whole spectrum of DL model design into the hands of clinicians.

Source journal metrics:
CiteScore: 6.20
Self-citation rate: 15.20%
Articles per year: 285
Review time: 6-12 weeks
Journal description: Ocular Immunology & Inflammation ranks 18th out of 59 journals in the Ophthalmology category. It is a peer-reviewed scientific publication that welcomes the submission of original, previously unpublished manuscripts directed to ophthalmologists and vision scientists. Published bimonthly, the journal provides an international medium for basic and clinical research reports on the ocular inflammatory response and its control by the immune system. The journal publishes original research papers, case reports, reviews, letters to the editor, meeting abstracts, and invited editorials.