Deep learning implementation for extrahepatic bile duct detection during indocyanine green fluorescence-guided laparoscopic cholecystectomy: pilot study.

BJS Open · IF 3.5 · CAS Tier 3 (Medicine) · Q1 (Surgery)
Publication date: 2025-03-04 · DOI: 10.1093/bjsopen/zraf013
Shih-Min Yin, Jenn-Jier J Lien, I Min Chiu
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11928939/pdf/

Abstract

Background: A real-time deep learning system was developed to identify the extrahepatic bile ducts during indocyanine green fluorescence-guided laparoscopic cholecystectomy.

Methods: Two expert surgeons annotated surgical videos from 113 patients, labelling six structure classes. YOLOv7, a real-time object detection model that balances speed and accuracy in identifying and localizing objects within images, was trained for structure identification. The model's performance was evaluated with single-frame and short video clip validations. The primary outcomes were average precision and mean average precision in single-frame validation; secondary outcomes were accuracy and related metrics in short video clip validation. An intraoperative prototype was developed for the verification experiments.
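Single-frame validation of a detector like YOLOv7 typically scores each predicted box against the annotation by overlap. The sketch below is a generic illustration of the standard intersection-over-union (IoU) criterion, not the authors' code; the corner-coordinate box format and the 0.5 threshold are conventional assumptions.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    # Corners of the overlap rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction is usually counted as a true positive when its IoU with a
# ground-truth annotation of the same class exceeds a threshold such as 0.5.
hit = iou((0, 0, 10, 10), (5, 5, 15, 15)) >= 0.5
```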

Results: A total of 3993 images were extracted to train the YOLOv7 model. In single-frame validation, the mean average precision across all classes was 0.846, and the average precision for the common bile duct and cystic duct was 0.864 and 0.698 respectively. In short video clip validation, the six-class model exhibited the best overall performance, with an accuracy of 94.39% for the common bile duct and 84.97% for the cystic duct.
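The two headline metrics relate simply: average precision (AP) is the area under a class's precision-recall curve, and mean average precision (mAP) is the unweighted mean of the per-class APs. A minimal sketch of the standard all-point-interpolation computation, generic rather than the authors' evaluation code:

```python
def average_precision(is_tp, n_gt):
    """AP for one class from detections sorted by descending confidence.

    is_tp: true-positive flag per detection; n_gt: number of ground truths.
    Uses all-point interpolation over the precision-recall curve.
    """
    tp = fp = 0
    recalls, precisions = [], []
    for flag in is_tp:
        tp += flag
        fp += not flag
        recalls.append(tp / n_gt)
        precisions.append(tp / (tp + fp))
    # Make precision non-increasing from right to left before integrating
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recalls, precisions):
        ap += (r - prev_r) * p  # rectangle under the interpolated curve
        prev_r = r
    return ap

def mean_average_precision(per_class_ap):
    """mAP is the unweighted mean of the per-class AP values."""
    return sum(per_class_ap) / len(per_class_ap)
```

For example, a detector whose first and third detections are correct out of two ground truths scores AP 5/6; averaging such per-class APs over the six structure classes yields the mAP figure reported above.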

Conclusion: This model could potentially assist surgeons in identifying the critical landmarks during laparoscopic cholecystectomy, thereby minimizing the risk of bile duct injuries.
