Artificial intelligence model for automated surgical instrument detection and counting: an experimental proof-of-concept study

Impact Factor: 2.6 · Q1 (Surgery)
Ekamjit S Deol, Grant Henning, Spyridon Basourakos, Ranveer M S Vasdev, Vidit Sharma, Nicholas L Kavoussi, R Jeffrey Karnes, Bradley C Leibovich, Stephen A Boorjian, Abhinav Khanna
{"title":"用于自动检测和计数手术器械的人工智能模型:概念验证实验研究。","authors":"Ekamjit S Deol, Grant Henning, Spyridon Basourakos, Ranveer M S Vasdev, Vidit Sharma, Nicholas L Kavoussi, R Jeffrey Karnes, Bradley C Leibovich, Stephen A Boorjian, Abhinav Khanna","doi":"10.1186/s13037-024-00406-y","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Retained surgical items (RSI) are preventable events that pose a significant risk to patient safety. Current strategies for preventing RSIs rely heavily on manual instrument counting methods, which are prone to human error. This study evaluates the feasibility and performance of a deep learning-based computer vision model for automated surgical tool detection and counting.</p><p><strong>Methods: </strong>A novel dataset of 1,004 images containing 13,213 surgical tools across 11 categories was developed. The dataset was split into training, validation, and test sets at a 60:20:20 ratio. An artificial intelligence (AI) model was trained on the dataset, and the model's performance was evaluated using standard object detection metrics, including precision and recall. To simulate a real-world surgical setting, model performance was also evaluated in a dynamic surgical video of instruments being moved in real-time.</p><p><strong>Results: </strong>The model demonstrated high precision (98.5%) and recall (99.9%) in distinguishing surgical tools from the background. It also exhibited excellent performance in differentiating between various surgical tools, with precision ranging from 94.0 to 100% and recall ranging from 97.1 to 100% across 11 tool categories. The model maintained strong performance on a subset of test images containing overlapping tools (precision range: 89.6-100%, and recall range 97.2-98.2%). In a real-time surgical video analysis, the model maintained a correct surgical tool count in all non-transition frames, with a median inference speed of 40.4 frames per second (interquartile range: 4.9).</p><p><strong>Conclusion: </strong>This study demonstrates that using a deep learning-based computer vision model for automated surgical tool detection and counting is feasible. The model's high precision and real-time inference capabilities highlight its potential to serve as an AI safeguard to potentially improve patient safety and reduce manual burden on surgical staff. Further validation in clinical settings is warranted.</p>","PeriodicalId":46782,"journal":{"name":"Patient Safety in Surgery","volume":"18 1","pages":"24"},"PeriodicalIF":2.6000,"publicationDate":"2024-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11265075/pdf/","citationCount":"0","resultStr":"{\"title\":\"Artificial intelligence model for automated surgical instrument detection and counting: an experimental proof-of-concept study.\",\"authors\":\"Ekamjit S Deol, Grant Henning, Spyridon Basourakos, Ranveer M S Vasdev, Vidit Sharma, Nicholas L Kavoussi, R Jeffrey Karnes, Bradley C Leibovich, Stephen A Boorjian, Abhinav Khanna\",\"doi\":\"10.1186/s13037-024-00406-y\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Retained surgical items (RSI) are preventable events that pose a significant risk to patient safety. Current strategies for preventing RSIs rely heavily on manual instrument counting methods, which are prone to human error. 
This study evaluates the feasibility and performance of a deep learning-based computer vision model for automated surgical tool detection and counting.</p><p><strong>Methods: </strong>A novel dataset of 1,004 images containing 13,213 surgical tools across 11 categories was developed. The dataset was split into training, validation, and test sets at a 60:20:20 ratio. An artificial intelligence (AI) model was trained on the dataset, and the model's performance was evaluated using standard object detection metrics, including precision and recall. To simulate a real-world surgical setting, model performance was also evaluated in a dynamic surgical video of instruments being moved in real-time.</p><p><strong>Results: </strong>The model demonstrated high precision (98.5%) and recall (99.9%) in distinguishing surgical tools from the background. It also exhibited excellent performance in differentiating between various surgical tools, with precision ranging from 94.0 to 100% and recall ranging from 97.1 to 100% across 11 tool categories. The model maintained strong performance on a subset of test images containing overlapping tools (precision range: 89.6-100%, and recall range 97.2-98.2%). In a real-time surgical video analysis, the model maintained a correct surgical tool count in all non-transition frames, with a median inference speed of 40.4 frames per second (interquartile range: 4.9).</p><p><strong>Conclusion: </strong>This study demonstrates that using a deep learning-based computer vision model for automated surgical tool detection and counting is feasible. The model's high precision and real-time inference capabilities highlight its potential to serve as an AI safeguard to potentially improve patient safety and reduce manual burden on surgical staff. Further validation in clinical settings is warranted.</p>\",\"PeriodicalId\":46782,\"journal\":{\"name\":\"Patient Safety in Surgery\",\"volume\":\"18 1\",\"pages\":\"24\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2024-07-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11265075/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Patient Safety in Surgery\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1186/s13037-024-00406-y\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"SURGERY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Patient Safety in Surgery","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1186/s13037-024-00406-y","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"SURGERY","Score":null,"Total":0}
Citations: 0

Abstract


Background: Retained surgical items (RSI) are preventable events that pose a significant risk to patient safety. Current strategies for preventing RSIs rely heavily on manual instrument counting methods, which are prone to human error. This study evaluates the feasibility and performance of a deep learning-based computer vision model for automated surgical tool detection and counting.

Methods: A novel dataset of 1,004 images containing 13,213 surgical tools across 11 categories was developed. The dataset was split into training, validation, and test sets at a 60:20:20 ratio. An artificial intelligence (AI) model was trained on the dataset, and the model's performance was evaluated using standard object detection metrics, including precision and recall. To simulate a real-world surgical setting, model performance was also evaluated on a dynamic surgical video of instruments being moved in real time.
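As a rough illustration of the pipeline described in the Methods, the sketch below splits an image list at the stated 60:20:20 ratio and computes precision and recall from detection counts. It is a minimal sketch, not the authors' implementation: the file names, the example counts, and the IoU-based matching mentioned in the comment are assumptions, since the abstract does not name the model architecture or the matching criteria.

```python
import random


def split_dataset(image_paths, seed=42):
    """Shuffle and split image paths into train/validation/test sets
    at a 60:20:20 ratio, mirroring the split described in the Methods."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    n_train = int(0.6 * len(paths))
    n_val = int(0.2 * len(paths))
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])


def precision_recall(true_positives, false_positives, false_negatives):
    """Standard object-detection precision and recall. A detection counts as a
    true positive when it matches a ground-truth box of the same class
    (commonly via an IoU threshold such as 0.5 -- an assumption here)."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall


# Hypothetical usage: 1,004 image identifiers split 60:20:20.
images = [f"image_{i:04d}.jpg" for i in range(1004)]
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))  # 602 200 202

# Illustrative metric computation for one tool category (counts are made up).
p, r = precision_recall(true_positives=97, false_positives=2, false_negatives=1)
print(f"precision={p:.3f}, recall={r:.3f}")
```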

Results: The model demonstrated high precision (98.5%) and recall (99.9%) in distinguishing surgical tools from the background. It also exhibited excellent performance in differentiating between various surgical tools, with precision ranging from 94.0% to 100% and recall ranging from 97.1% to 100% across the 11 tool categories. The model maintained strong performance on a subset of test images containing overlapping tools (precision: 89.6-100%; recall: 97.2-98.2%). In real-time surgical video analysis, the model maintained a correct surgical tool count in all non-transition frames, with a median inference speed of 40.4 frames per second (interquartile range: 4.9).
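To make the real-time evaluation concrete, here is a minimal sketch of how per-frame tool counts and inference speed (a median in frames per second with an interquartile range, as reported above) could be gathered from a video. The `detect_tools` function is a hypothetical stand-in for the trained detector and the video path is a placeholder; only OpenCV's standard video-reading API and Python's statistics module are relied on.

```python
import statistics
import time

import cv2  # OpenCV, used only to read video frames


def detect_tools(frame):
    """Hypothetical stand-in for the trained detector: should return a list of
    (class_label, bounding_box) tuples for the given frame."""
    raise NotImplementedError("Replace with the actual model's inference call.")


def count_tools_in_video(video_path):
    """Run the detector on every frame, recording the per-frame tool count and
    the per-frame inference speed in frames per second."""
    cap = cv2.VideoCapture(video_path)
    counts, fps_samples = [], []
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        start = time.perf_counter()
        detections = detect_tools(frame)
        elapsed = time.perf_counter() - start
        counts.append(len(detections))
        fps_samples.append(1.0 / elapsed)
    cap.release()

    median_fps = statistics.median(fps_samples)
    q1, _, q3 = statistics.quantiles(fps_samples, n=4)
    return counts, median_fps, q3 - q1  # per-frame counts, median FPS, IQR


# Hypothetical usage (requires a real detector and video file):
# counts, median_fps, iqr = count_tools_in_video("surgical_video.mp4")
# print(median_fps, iqr)
```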

Conclusion: This study demonstrates that using a deep learning-based computer vision model for automated surgical tool detection and counting is feasible. The model's high precision and real-time inference capabilities highlight its potential to serve as an AI safeguard that could improve patient safety and reduce the manual counting burden on surgical staff. Further validation in clinical settings is warranted.

Source journal: Patient Safety in Surgery
CiteScore: 6.80
Self-citation rate: 8.10%
Articles published: 37
Review time: 9 weeks