Deep learning for automated boundary detection and segmentation in organ donation photography.

Innovative Surgical Sciences · IF 1.7 · Q2 (Surgery)
Georgios Kourounis, Ali Ahmed Elmahmudi, Brian Thomson, Robin Nandi, Samuel J Tingle, Emily K Glover, Emily Thompson, Balaji Mahendran, Chloe Connelly, Beth Gibson, Lucy Bates, Neil S Sheerin, James Hunter, Hassan Ugail, Colin Wilson
{"title":"Deep learning for automated boundary detection and segmentation in organ donation photography.","authors":"Georgios Kourounis, Ali Ahmed Elmahmudi, Brian Thomson, Robin Nandi, Samuel J Tingle, Emily K Glover, Emily Thompson, Balaji Mahendran, Chloe Connelly, Beth Gibson, Lucy Bates, Neil S Sheerin, James Hunter, Hassan Ugail, Colin Wilson","doi":"10.1515/iss-2024-0022","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>Medical photography is ubiquitous and plays an increasingly important role in the fields of medicine and surgery. Any assessment of these photographs by computer vision algorithms requires first that the area of interest can accurately be delineated from the background. We aimed to develop deep learning segmentation models for kidney and liver organ donation photographs where accurate automated segmentation has not yet been described.</p><p><strong>Methods: </strong>Two novel deep learning models (Detectron2 and YoloV8) were developed using transfer learning and compared against existing tools for background removal (macBGRemoval, remBGisnet, remBGu2net). Anonymised photograph datasets comprised training/internal validation sets (821 kidney and 400 liver images) and external validation sets (203 kidney and 208 liver images). Each image had two segmentation labels: whole organ and clear view (parenchyma only). Intersection over Union (IoU) was the primary outcome, as the recommended metric for assessing segmentation performance.</p><p><strong>Results: </strong>In whole kidney segmentation, Detectron2 and YoloV8 outperformed other models with internal validation IoU of 0.93 and 0.94, and external validation IoU of 0.92 and 0.94, respectively. Other methods - macBGRemoval, remBGisnet and remBGu2net - scored lower, with highest internal validation IoU at 0.54 and external validation at 0.59. Similar results were observed in liver segmentation, where Detectron2 and YoloV8 both showed internal validation IoU of 0.97 and external validation of 0.92 and 0.91, respectively. The other models showed a maximum internal validation and external validation IoU of 0.89 and 0.59 respectively. All image segmentation tasks with Detectron2 and YoloV8 completed within 0.13-1.5 s per image.</p><p><strong>Conclusions: </strong>Accurate, rapid and automated image segmentation in the context of surgical photography is possible with open-source deep-learning software. These outperform existing methods and could impact the field of surgery, enabling similar advancements seen in other areas of medical computer vision.</p>","PeriodicalId":44186,"journal":{"name":"Innovative Surgical Sciences","volume":" ","pages":""},"PeriodicalIF":1.7000,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7617812/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Innovative Surgical Sciences","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1515/iss-2024-0022","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"SURGERY","Score":null,"Total":0}
引用次数: 0

Abstract

Objectives: Medical photography is ubiquitous and plays an increasingly important role in medicine and surgery. Any assessment of these photographs by computer vision algorithms first requires that the area of interest be accurately delineated from the background. We aimed to develop deep learning segmentation models for kidney and liver organ donation photographs, a setting in which accurate automated segmentation has not yet been described.

Methods: Two novel deep learning models (Detectron2 and YoloV8) were developed using transfer learning and compared against existing background removal tools (macBGRemoval, remBGisnet, remBGu2net). Anonymised photograph datasets comprised training/internal validation sets (821 kidney and 400 liver images) and external validation sets (203 kidney and 208 liver images). Each image had two segmentation labels: whole organ and clear view (parenchyma only). Intersection over Union (IoU), the recommended metric for assessing segmentation performance, was the primary outcome.
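For reference, IoU measures the overlap between a predicted mask and its ground-truth label as the ratio of their intersection to their union. The snippet below is a minimal NumPy sketch of this calculation for binary masks; the function name and example arrays are illustrative and are not taken from the study's evaluation code.

```python
import numpy as np

def intersection_over_union(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Compute IoU between two binary segmentation masks of equal shape."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    if union == 0:
        # Both masks empty: treat agreement on "no object" as perfect overlap.
        return 1.0
    return float(intersection / union)

# Illustrative example: two partially overlapping 4x4 masks.
pred = np.zeros((4, 4), dtype=np.uint8)
true = np.zeros((4, 4), dtype=np.uint8)
pred[1:3, 1:4] = 1   # predicted organ region (6 pixels)
true[1:3, 0:3] = 1   # ground-truth organ region (6 pixels)
print(intersection_over_union(pred, true))  # 0.5 (4 shared pixels / 8 in union)
```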

Results: In whole kidney segmentation, Detectron2 and YoloV8 outperformed the other models, with internal validation IoU of 0.93 and 0.94 and external validation IoU of 0.92 and 0.94, respectively. The other methods (macBGRemoval, remBGisnet and remBGu2net) scored lower, with a best internal validation IoU of 0.54 and a best external validation IoU of 0.59. Similar results were observed in liver segmentation, where Detectron2 and YoloV8 both achieved an internal validation IoU of 0.97, with external validation IoU of 0.92 and 0.91, respectively. The other models reached a maximum internal validation IoU of 0.89 and a maximum external validation IoU of 0.59. All segmentation tasks with Detectron2 and YoloV8 completed in 0.13-1.5 s per image.

Conclusions: Accurate, rapid and automated image segmentation of surgical photographs is possible with open-source deep-learning software. These models outperform existing background removal methods and could impact the field of surgery, enabling advancements similar to those seen in other areas of medical computer vision.
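To illustrate how such segmentation can be approached with open-source tooling, the sketch below shows a YOLOv8 segmentation model being fine-tuned via transfer learning and applied to a photograph using the open-source ultralytics Python package. This is a generic example under assumed placeholders (the dataset config "organ_seg.yaml", the pretrained weights "yolov8n-seg.pt" and the image "kidney_photo.jpg" are illustrative); it does not reproduce the study's actual training configuration.

```python
# Minimal transfer-learning and inference sketch with the open-source
# ultralytics YOLOv8 segmentation API. Paths and hyperparameters are
# placeholders, not the study's settings.
from ultralytics import YOLO

# Start from COCO-pretrained segmentation weights (transfer learning).
model = YOLO("yolov8n-seg.pt")

# Fine-tune on a custom segmentation dataset described by a YAML config.
model.train(data="organ_seg.yaml", epochs=50, imgsz=640)

# Run inference on a single photograph and inspect the predicted masks.
results = model("kidney_photo.jpg")
masks = results[0].masks  # per-instance pixel masks, or None if nothing detected
if masks is not None:
    print(masks.data.shape)  # (num_instances, H, W) binary mask tensor
```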
