Ioannis Kafetzis, Philipp Sodmann, Robert Hüneburg, Jacob Nattermann, Nora Martens, Daniel R Englmann, Wolfram G Zoller, Alexander Meining, Alexander Hann
Advancing artificial intelligence applicability in endoscopy through source-agnostic camera signal extraction from endoscopic images.
Introduction: Successful application of artificial intelligence (AI) in endoscopy requires effective image processing. Yet, the plethora of sources for endoscopic images, such as different processor-endoscope combinations or capsule endoscopy devices, results in images that vastly differ in appearance. These differences hinder the generalizability of AI models in endoscopy.
Methods: We developed an AI-based method for extracting the camera signal from raw endoscopic images in a source-agnostic manner. Additionally, we created a diverse dataset of standardized endoscopic images, named the Endoscopic Processor Image Collection (EPIC), from 4 different endoscopy centers. The included data were recorded using 9 different processors from 4 manufacturers with 45 endoscopes. Furthermore, images recorded with 4 capsule endoscopy devices from 2 manufacturers are included. We evaluated the camera signal extraction method on 641 manually annotated images from 5 different publicly available endoscopic image datasets, as well as on the EPIC dataset. Results were compared with a published baseline in terms of Intersection over Union (IoU) and Hausdorff distance (HD).
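The two evaluation metrics named above can be stated concretely. As an illustration only (this is not the authors' published code), a minimal NumPy sketch of IoU and the symmetric Hausdorff distance for binary segmentation masks:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union of two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union

def hausdorff(pred: np.ndarray, target: np.ndarray) -> float:
    """Symmetric Hausdorff distance, in pixels, between the
    foreground pixel sets of two binary masks."""
    a = np.argwhere(pred.astype(bool))   # (N, 2) foreground coordinates
    b = np.argwhere(target.astype(bool)) # (M, 2) foreground coordinates
    # pairwise Euclidean distances between all foreground pixels
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # largest nearest-neighbour distance, taken in both directions
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

The brute-force pairwise distance matrix is fine for small masks; for full-resolution frames a distance-transform or KD-tree formulation (e.g. `scipy.spatial.distance.directed_hausdorff`) would be the practical choice.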
Results: In segmenting the camera signal on images from the public datasets, our method achieved a mean IoU of 0.97, significantly higher than that of the baseline method, and a mean HD of 21 pixels, significantly lower than the baseline. On the standardized images of the EPIC dataset, there was no significant difference in IoU between the two methods, but our method achieved a significantly lower HD. Both the developed AI-based method and the generated dataset are made publicly available.
Conclusion: This work introduces an AI-based method that effectively segments the endoscope camera signal from the raw endoscopic data in a source-agnostic way. Utilizing the proposed method as a preprocessing step allows existing AI models to use any endoscopic image, independent of its source, without compromising performance. Additionally, EPIC, a dataset of diverse endoscopic images, is generated. The proposed method, trained AI model weights, and the EPIC dataset are made publicly available.
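To illustrate how such a segmentation can act as a preprocessing step for downstream AI models, the sketch below crops a frame to the bounding box of a predicted camera-signal mask. The function name and behavior are assumptions for illustration, not the authors' published API:

```python
import numpy as np

def crop_to_camera_signal(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Crop an endoscopic frame to the bounding box of the predicted
    camera-signal mask, discarding black borders and UI overlays.
    (Illustrative helper; not from the paper's code release.)"""
    ys, xs = np.where(mask.astype(bool))
    if ys.size == 0:
        return image  # no signal detected: pass the frame through unchanged
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```

Applied before inference, a crop like this presents every source (processor-endoscope combinations, capsule devices) to the downstream model in a comparable, overlay-free form.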
Journal introduction:
PLOS ONE is an international, peer-reviewed, open-access, online publication. PLOS ONE welcomes reports on primary research from any scientific discipline. It provides:
* Open-access—freely accessible online, authors retain copyright
* Fast publication times
* Peer review by expert, practicing researchers
* Post-publication tools to indicate quality and impact
* Community-based dialogue on articles
* Worldwide media coverage