Advancing artificial intelligence applicability in endoscopy through source-agnostic camera signal extraction from endoscopic images.

IF 2.6 · CAS Tier 3, multidisciplinary journal · Q1 MULTIDISCIPLINARY SCIENCES
PLoS ONE · Pub Date: 2025-06-11 · eCollection Date: 2025-01-01 · DOI: 10.1371/journal.pone.0325987
Ioannis Kafetzis, Philipp Sodmann, Robert Hüneburg, Jacob Nattermann, Nora Martens, Daniel R Englmann, Wolfram G Zoller, Alexander Meining, Alexander Hann
{"title":"通过从内窥镜图像中提取源不可知的相机信号,推进人工智能在内窥镜中的适用性。","authors":"Ioannis Kafetzis, Philipp Sodmann, Robert Hüneburg, Jacob Nattermann, Nora Martens, Daniel R Englmann, Wolfram G Zoller, Alexander Meining, Alexander Hann","doi":"10.1371/journal.pone.0325987","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>Successful application of artificial intelligence (AI) in endoscopy requires effective image processing. Yet, the plethora of sources for endoscopic images, such as different processor-endoscope combinations or capsule endoscopy devices, results in images that vastly differ in appearance. These differences hinder the generalizability of AI models in endoscopy.</p><p><strong>Methods: </strong>We developed an AI-based method for extracting the camera signal from raw endoscopic images in a source-agnostic manner. Additionally, we created a diverse dataset of standardized endoscopic images, named Endoscopic Processor Image Collection (EPIC), from 4 different endoscopy centers. Included data were recorded using 9 different processors from 4 manufacturers with 45 endoscopes. Furthermore, images recorded with 4 capsule endoscopy devices from 2 manufacturers are included. We evaluated the camera signal extraction method using 641 manually annotated images from 5 different, publicly available endoscopic image datasets, as well as on the EPIC dataset. Results were compared it with a published baseline in terms of Intersection over Union (IoU) and Hausdorff distance (HD).</p><p><strong>Results: </strong>In segmenting the camera signal on images from public datasets, our method achieved mean IoU of 0.97 which was significantly higher than that of the baseline method and mean HD of 21 pixels which was significantly lower compared to the baseline. On the standardized images of the EPIC dataset, there was no significant difference between IoU but our method achieved a significantly lower HD. Both the developed AI-based method and the generated dataset are made publicly available.</p><p><strong>Conclusion: </strong>This work introduces an AI-based method that effectively segments the endoscope camera signal from the raw endoscopic data in a source-agnostic way. Utilizing the proposed method as a preprocessing step allows existing AI models to use any endoscopic image, independent of its source, without compromising performance. Additionally, EPIC, a dataset of diverse endoscopic images, is generated. The proposed method, trained AI model weights, and the EPIC dataset are made publicly available.</p>","PeriodicalId":20189,"journal":{"name":"PLoS ONE","volume":"20 6","pages":"e0325987"},"PeriodicalIF":2.6000,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12157078/pdf/","citationCount":"0","resultStr":"{\"title\":\"Advancing artificial intelligence applicability in endoscopy through source-agnostic camera signal extraction from endoscopic images.\",\"authors\":\"Ioannis Kafetzis, Philipp Sodmann, Robert Hüneburg, Jacob Nattermann, Nora Martens, Daniel R Englmann, Wolfram G Zoller, Alexander Meining, Alexander Hann\",\"doi\":\"10.1371/journal.pone.0325987\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Introduction: </strong>Successful application of artificial intelligence (AI) in endoscopy requires effective image processing. 
Yet, the plethora of sources for endoscopic images, such as different processor-endoscope combinations or capsule endoscopy devices, results in images that vastly differ in appearance. These differences hinder the generalizability of AI models in endoscopy.</p><p><strong>Methods: </strong>We developed an AI-based method for extracting the camera signal from raw endoscopic images in a source-agnostic manner. Additionally, we created a diverse dataset of standardized endoscopic images, named Endoscopic Processor Image Collection (EPIC), from 4 different endoscopy centers. Included data were recorded using 9 different processors from 4 manufacturers with 45 endoscopes. Furthermore, images recorded with 4 capsule endoscopy devices from 2 manufacturers are included. We evaluated the camera signal extraction method using 641 manually annotated images from 5 different, publicly available endoscopic image datasets, as well as on the EPIC dataset. Results were compared it with a published baseline in terms of Intersection over Union (IoU) and Hausdorff distance (HD).</p><p><strong>Results: </strong>In segmenting the camera signal on images from public datasets, our method achieved mean IoU of 0.97 which was significantly higher than that of the baseline method and mean HD of 21 pixels which was significantly lower compared to the baseline. On the standardized images of the EPIC dataset, there was no significant difference between IoU but our method achieved a significantly lower HD. Both the developed AI-based method and the generated dataset are made publicly available.</p><p><strong>Conclusion: </strong>This work introduces an AI-based method that effectively segments the endoscope camera signal from the raw endoscopic data in a source-agnostic way. Utilizing the proposed method as a preprocessing step allows existing AI models to use any endoscopic image, independent of its source, without compromising performance. Additionally, EPIC, a dataset of diverse endoscopic images, is generated. The proposed method, trained AI model weights, and the EPIC dataset are made publicly available.</p>\",\"PeriodicalId\":20189,\"journal\":{\"name\":\"PLoS ONE\",\"volume\":\"20 6\",\"pages\":\"e0325987\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2025-06-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12157078/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"PLoS ONE\",\"FirstCategoryId\":\"103\",\"ListUrlMain\":\"https://doi.org/10.1371/journal.pone.0325987\",\"RegionNum\":3,\"RegionCategory\":\"综合性期刊\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q1\",\"JCRName\":\"MULTIDISCIPLINARY SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"PLoS ONE","FirstCategoryId":"103","ListUrlMain":"https://doi.org/10.1371/journal.pone.0325987","RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Citations: 0

Abstract



Introduction: Successful application of artificial intelligence (AI) in endoscopy requires effective image processing. Yet, the plethora of sources for endoscopic images, such as different processor-endoscope combinations or capsule endoscopy devices, results in images that vastly differ in appearance. These differences hinder the generalizability of AI models in endoscopy.

Methods: We developed an AI-based method for extracting the camera signal from raw endoscopic images in a source-agnostic manner. Additionally, we created a diverse dataset of standardized endoscopic images, named the Endoscopic Processor Image Collection (EPIC), from 4 different endoscopy centers. The included data were recorded using 9 different processors from 4 manufacturers with 45 endoscopes. Furthermore, images recorded with 4 capsule endoscopy devices from 2 manufacturers are included. We evaluated the camera signal extraction method on 641 manually annotated images from 5 different publicly available endoscopic image datasets, as well as on the EPIC dataset. The results were compared with a published baseline in terms of Intersection over Union (IoU) and Hausdorff distance (HD).
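
The abstract reports IoU and Hausdorff distance as the two segmentation metrics. As a rough illustration only (this is not the authors' published evaluation code; the function names and the use of NumPy/SciPy are assumptions), the following Python sketch shows how both metrics could be computed for a predicted and a ground-truth binary mask:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union of two boolean masks of equal shape."""
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return intersection / union if union > 0 else 1.0

def hausdorff(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff distance (in pixels) between the foreground
    point sets of two non-empty boolean masks."""
    pred_pts = np.argwhere(pred)  # (N, 2) row/column coordinates
    gt_pts = np.argwhere(gt)
    d_fwd = directed_hausdorff(pred_pts, gt_pts)[0]
    d_bwd = directed_hausdorff(gt_pts, pred_pts)[0]
    return max(d_fwd, d_bwd)
```

Whether the published evaluation computes HD over all foreground pixels or only over mask boundaries is not stated in the abstract; the sketch above uses all foreground pixels.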

Results: In segmenting the camera signal on images from the public datasets, our method achieved a mean IoU of 0.97, which was significantly higher than that of the baseline method, and a mean HD of 21 pixels, which was significantly lower than the baseline. On the standardized images of the EPIC dataset, there was no significant difference in IoU, but our method achieved a significantly lower HD. Both the developed AI-based method and the generated dataset are made publicly available.

Conclusion: This work introduces an AI-based method that effectively segments the endoscope camera signal from the raw endoscopic data in a source-agnostic way. Utilizing the proposed method as a preprocessing step allows existing AI models to use any endoscopic image, independent of its source, without compromising performance. Additionally, EPIC, a dataset of diverse endoscopic images, is generated. The proposed method, trained AI model weights, and the EPIC dataset are made publicly available.
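
To make the preprocessing idea concrete, here is a minimal Python sketch of how a predicted camera-signal mask could be applied before handing a frame to an existing AI model. The helper name and cropping strategy are hypothetical illustrations, not part of the published method:

```python
import numpy as np

def crop_to_camera_signal(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Blank everything outside the predicted camera-signal mask and crop
    the frame to the mask's bounding box. `frame` is (H, W) or (H, W, C),
    `mask` is a (H, W) boolean array from the segmentation model."""
    mask = mask.astype(bool)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        # No camera signal detected: pass the frame through unchanged.
        return frame
    masked = frame.copy()
    masked[~mask] = 0  # suppress UI overlays and black borders outside the signal
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    return masked[y0:y1, x0:x1]
```

A downstream model would then receive only the camera signal region, regardless of which processor-endoscope combination or capsule device produced the original frame.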

Source journal: PLoS ONE (Biology)
CiteScore: 6.20
Self-citation rate: 5.40%
Annual publications: 14242
Review turnaround: 3.7 months
Journal description: PLOS ONE is an international, peer-reviewed, open-access, online publication. PLOS ONE welcomes reports on primary research from any scientific discipline. It provides:
* Open-access—freely accessible online, authors retain copyright
* Fast publication times
* Peer review by expert, practicing researchers
* Post-publication tools to indicate quality and impact
* Community-based dialogue on articles
* Worldwide media coverage