Artificial intelligence in the operating room: A systematic review of AI models for surgical phase, instruments and anatomical structure identification.
Sara Paracchini, Cristina Taliento, Giulia Pellecchia, Veronica Tius, Madalena Tavares, Chiara Borghi, Alessandro Antonio Buda, Adrien Bartoli, Nicolas Bourdel, Giuseppe Vizzielli
DOI: 10.1111/aogs.70045
Journal: Acta Obstetricia et Gynecologica Scandinavica (Q1, Obstetrics & Gynecology)
Published: 2025-08-27
Citations: 0
Abstract
Introduction: This systematic review examines the application of multiple deep learning algorithms in the analysis of intraoperative videos to enable feature extraction and pattern recognition of surgical phases, anatomical structures, and surgical instruments.
Material and methods: A comprehensive literature search was conducted across PubMed, Web of Science, and EBSCO, covering studies published until March 2024. This review includes studies that applied AI models in the operating room for surgical-phase recognition and/or identification of anatomical structures and instruments. Only studies utilizing machine learning or deep learning for surgical video analysis were considered. The primary outcome measures were accuracy, precision, recall, and F1 score.
Results: A total of 21 studies were included. Multilayer architectures of interconnected neural networks were predominantly used. The deep learning models demonstrated promising results, with accuracy ranging from 81% to 93.2% for surgical-phase recognition. Anatomical structure recognition models achieved accuracy between 71.4% and 98.1%.
Conclusions: Artificial intelligence has the potential to significantly improve surgical precision and workflow, with demonstrated success in phase recognition and anatomical structure identification. However, further research is needed to address dataset limitations, standardize annotation protocols, and minimize biases.
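The four outcome measures named in the abstract (accuracy, precision, recall, F1 score) all derive from confusion-matrix counts of a recognition task. As an illustration only — the counts below are hypothetical and not taken from any of the reviewed studies — a minimal sketch of how these metrics are computed for one surgical phase treated as the positive class:

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall, and F1 from binary
    confusion-matrix counts (true/false positives and negatives)."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical counts for frames of one surgical phase:
m = classification_metrics(tp=90, fp=10, fn=20, tn=80)
print(m)  # accuracy 0.85, precision 0.90, recall ≈ 0.818, F1 ≈ 0.857
```

For multi-class problems such as surgical-phase recognition, these per-class values are typically averaged (macro or weighted) across phases before being reported as a single figure.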
Journal description:
Published monthly, Acta Obstetricia et Gynecologica Scandinavica is an international journal dedicated to providing the very latest information on the results of clinical, basic, and translational research related to all aspects of women's health from around the globe. The journal regularly publishes commentaries, reviews, and original articles on a wide variety of topics, including gynecology, pregnancy, birth, female urology, gynecologic oncology, fertility, and reproductive biology.