Artificial intelligence model for automated surgical instrument detection and counting: an experimental proof-of-concept study

Ekamjit S Deol, Grant Henning, Spyridon Basourakos, Ranveer M S Vasdev, Vidit Sharma, Nicholas L Kavoussi, R Jeffrey Karnes, Bradley C Leibovich, Stephen A Boorjian, Abhinav Khanna

Patient Safety in Surgery 18(1):24, published 2024-07-21. DOI: 10.1186/s13037-024-00406-y
Abstract
Background: Retained surgical items (RSIs) are preventable events that pose a significant risk to patient safety. Current strategies for preventing RSIs rely heavily on manual instrument counting, which is prone to human error. This study evaluates the feasibility and performance of a deep learning-based computer vision model for automated surgical tool detection and counting.
Methods: A novel dataset of 1,004 images containing 13,213 surgical tools across 11 categories was developed. The dataset was split into training, validation, and test sets at a 60:20:20 ratio. An artificial intelligence (AI) model was trained on this dataset, and its performance was evaluated using standard object detection metrics, including precision and recall. To simulate a real-world surgical setting, model performance was also evaluated on a dynamic surgical video of instruments being moved in real time.
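The abstract does not name the detection framework or data tooling the authors used. Purely as a hedged illustration of the data-preparation step described above, the Python sketch below reproduces the stated 60:20:20 split on a 1,004-image dataset; the function name, seed, and file-name pattern are all hypothetical.

import random

def split_dataset(image_paths, seed=42):
    # Shuffle deterministically, then slice at the 60:20:20 ratio described above.
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    n_train = int(0.6 * len(paths))
    n_val = int(0.2 * len(paths))
    return {
        "train": paths[:n_train],
        "val": paths[n_train:n_train + n_val],
        "test": paths[n_train + n_val:],
    }

# With the 1,004 images reported in the Methods, this yields 602/200/202 images.
splits = split_dataset([f"img_{i:04d}.jpg" for i in range(1004)])  # hypothetical file names
print({name: len(s) for name, s in splits.items()})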
Results: The model demonstrated high precision (98.5%) and recall (99.9%) in distinguishing surgical tools from the background. It also differentiated reliably between tool types, with precision ranging from 94.0% to 100% and recall from 97.1% to 100% across the 11 tool categories. Performance remained strong on a subset of test images containing overlapping tools (precision: 89.6-100%; recall: 97.2-98.2%). In a real-time analysis of surgical video, the model maintained a correct surgical tool count in all non-transition frames, with a median inference speed of 40.4 frames per second (interquartile range: 4.9).
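The Results report precision, recall, and median frames per second with an interquartile range. As a minimal, framework-agnostic sketch of how such metrics are computed (the authors' actual evaluation code is not described in the abstract, and the count-matching helper below is an assumption), consider:

import statistics

def precision_recall(tp, fp, fn):
    # Standard detection metrics, computed after matching predicted boxes to
    # ground truth: tp = true positives, fp = false positives, fn = false negatives.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def fps_summary(per_frame_seconds):
    # Median inference speed and interquartile range from per-frame latencies.
    fps = [1.0 / t for t in per_frame_seconds]
    q1, _, q3 = statistics.quantiles(fps, n=4)
    return statistics.median(fps), q3 - q1

def counts_match(detected_labels, expected_counts):
    # A frame's tool count is "correct" when per-category detections match the
    # expected tray contents, e.g. counts_match(["forceps", "scalpel"], ...).
    counts = {}
    for label in detected_labels:
        counts[label] = counts.get(label, 0) + 1
    return counts == expected_counts

For instance, precision_recall(985, 15, 1) returns roughly (0.985, 0.999), consistent with the headline tool-versus-background figures above.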
Conclusion: This study demonstrates that automated surgical tool detection and counting with a deep learning-based computer vision model is feasible. The model's high precision and real-time inference speed highlight its potential to serve as an AI safeguard that improves patient safety and reduces the manual counting burden on surgical staff. Further validation in clinical settings is warranted.