S.J.-Y. Ohtani-Kim, J. Samejima, M. Wakabayashi, M. Tada, T. Miyoshi, Y. Matsumura, K. Tane, K. Aokage, Y. Ishikawa, K. Hayashi, T. Ogane, K. Sasaki, S. Takenaka, Y. Kinebuchi, M. Ito, M. Tsuboi
{"title":"人工智能驱动的相位识别在肺外科手术中的应用:一项单中心试点研究","authors":"S.J.-Y. Ohtani-Kim , J. Samejima , M. Wakabayashi , M. Tada , T. Miyoshi , Y. Matsumura , K. Tane , K. Aokage , Y. Ishikawa , K. Hayashi , T. Ogane , K. Sasaki , S. Takenaka , Y. Kinebuchi , M. Ito , M. Tsuboi","doi":"10.1016/j.esmorw.2025.100194","DOIUrl":null,"url":null,"abstract":"<div><h3>Introduction</h3><div>Effective surgical specimen management during lung resection is crucial for accurate analyses and treatment. Minimally invasive techniques complicate workflows; thus, artificial intelligence (AI)-based solutions are needed to improve their safety and efficiency. We assessed the feasibility of AI for the automated classification of surgical phases in thoracoscopic wedge resection, examining the link between classification accuracy and surgical complexity, particularly during specimen extraction.</div></div><div><h3>Patients and methods</h3><div>This single-centre retrospective observational study from Japan included 73 video recordings of video-assisted thoracic surgery lung wedge resections with extraction of a single specimen carried out from January 2021 to December 2023. A Swin Transformer AI model was used to classify five distinct surgical phases: preparatory actions, lesion identification, resection, and specimen extraction. Pre- and postprocessing techniques improved model performance across different phases. The primary outcome was AI model performance in classifying surgical phases using metrics such as accuracy, precision, recall, and F1 score.</div></div><div><h3>Results</h3><div>The modified AI model achieved an overall accuracy of 0.778, with phase-specific accuracies ranging from 0.574 to 0.911. Significant improvements were observed in critical phases for specimen management (accuracy: 0.816). Clinical factors, including the number of access ports and phase duration, were key determinants of accuracy.</div></div><div><h3>Conclusions</h3><div>Our AI-driven phase recognition model for thoracoscopic lung surgery videos shows potential for optimizing operating room workflow, enhancing real-time decision-making, and improving efficiency by automating surgical phase classification.</div></div>","PeriodicalId":100491,"journal":{"name":"ESMO Real World Data and Digital Oncology","volume":"10 ","pages":"Article 100194"},"PeriodicalIF":0.0000,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Artificial intelligence-driven phase recognition in lung surgery: a single-centre pilot study☆\",\"authors\":\"S.J.-Y. Ohtani-Kim , J. Samejima , M. Wakabayashi , M. Tada , T. Miyoshi , Y. Matsumura , K. Tane , K. Aokage , Y. Ishikawa , K. Hayashi , T. Ogane , K. Sasaki , S. Takenaka , Y. Kinebuchi , M. Ito , M. Tsuboi\",\"doi\":\"10.1016/j.esmorw.2025.100194\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Introduction</h3><div>Effective surgical specimen management during lung resection is crucial for accurate analyses and treatment. Minimally invasive techniques complicate workflows; thus, artificial intelligence (AI)-based solutions are needed to improve their safety and efficiency. 
We assessed the feasibility of AI for the automated classification of surgical phases in thoracoscopic wedge resection, examining the link between classification accuracy and surgical complexity, particularly during specimen extraction.</div></div><div><h3>Patients and methods</h3><div>This single-centre retrospective observational study from Japan included 73 video recordings of video-assisted thoracic surgery lung wedge resections with extraction of a single specimen carried out from January 2021 to December 2023. A Swin Transformer AI model was used to classify five distinct surgical phases: preparatory actions, lesion identification, resection, and specimen extraction. Pre- and postprocessing techniques improved model performance across different phases. The primary outcome was AI model performance in classifying surgical phases using metrics such as accuracy, precision, recall, and F1 score.</div></div><div><h3>Results</h3><div>The modified AI model achieved an overall accuracy of 0.778, with phase-specific accuracies ranging from 0.574 to 0.911. Significant improvements were observed in critical phases for specimen management (accuracy: 0.816). Clinical factors, including the number of access ports and phase duration, were key determinants of accuracy.</div></div><div><h3>Conclusions</h3><div>Our AI-driven phase recognition model for thoracoscopic lung surgery videos shows potential for optimizing operating room workflow, enhancing real-time decision-making, and improving efficiency by automating surgical phase classification.</div></div>\",\"PeriodicalId\":100491,\"journal\":{\"name\":\"ESMO Real World Data and Digital Oncology\",\"volume\":\"10 \",\"pages\":\"Article 100194\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-10-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ESMO Real World Data and Digital Oncology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2949820125000839\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ESMO Real World Data and Digital Oncology","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949820125000839","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Artificial intelligence-driven phase recognition in lung surgery: a single-centre pilot study
Introduction
Effective surgical specimen management during lung resection is crucial for accurate analyses and treatment. Minimally invasive techniques complicate workflows; thus, artificial intelligence (AI)-based solutions are needed to improve the safety and efficiency of these workflows. We assessed the feasibility of AI for the automated classification of surgical phases in thoracoscopic wedge resection, examining the link between classification accuracy and surgical complexity, particularly during specimen extraction.
Patients and methods
This single-centre retrospective observational study from Japan included 73 video recordings of video-assisted thoracic surgery lung wedge resections with extraction of a single specimen, carried out from January 2021 to December 2023. A Swin Transformer AI model was used to classify five distinct surgical phases, including preparatory actions, lesion identification, resection, and specimen extraction. Pre- and post-processing techniques improved model performance across the different phases. The primary outcome was AI model performance in classifying surgical phases, assessed using metrics such as accuracy, precision, recall, and F1 score.
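The abstract does not describe the implementation, so the following is only a minimal sketch of how a per-frame phase classifier of this kind could be assembled: a Swin Transformer backbone (here timm's swin_tiny_patch4_window7_224, an assumed variant) with a five-class head, followed by a sliding-window majority vote as one plausible form of the post-processing mentioned above. All names and parameters are illustrative, not the authors' method.

# Hypothetical sketch only: the study's actual architecture, preprocessing,
# and post-processing steps are not specified in the abstract.
import torch
import timm

NUM_PHASES = 5  # number of surgical phases in the study

# Swin Transformer backbone with a classification head over the phases.
model = timm.create_model(
    "swin_tiny_patch4_window7_224",  # assumed variant
    pretrained=True,
    num_classes=NUM_PHASES,
)
model.eval()

@torch.no_grad()
def classify_frames(frames: torch.Tensor) -> list[int]:
    """Predict a phase ID for each preprocessed frame of shape (N, 3, 224, 224)."""
    logits = model(frames)
    return logits.argmax(dim=1).tolist()

def smooth_phases(phase_ids: list[int], window: int = 15) -> list[int]:
    """Post-processing example: majority vote over a sliding window to
    suppress spurious single-frame phase switches."""
    half = window // 2
    smoothed = []
    for i in range(len(phase_ids)):
        neighbourhood = phase_ids[max(0, i - half):i + half + 1]
        smoothed.append(max(set(neighbourhood), key=neighbourhood.count))
    return smoothed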
Results
The modified AI model achieved an overall accuracy of 0.778, with phase-specific accuracies ranging from 0.574 to 0.911. Significant improvements were observed in critical phases for specimen management (accuracy: 0.816). Clinical factors, including the number of access ports and phase duration, were key determinants of accuracy.
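For context, metrics of this kind can be computed from frame-level ground-truth and predicted phase labels; the short sketch below uses scikit-learn with toy labels in place of the study's data.

# Hypothetical evaluation sketch: overall accuracy plus per-phase precision,
# recall, and F1. The label arrays are toy examples, not study data.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

true_phases = [0, 0, 1, 1, 2, 3, 3, 4]  # ground-truth phase per frame (toy)
pred_phases = [0, 0, 1, 2, 2, 3, 4, 4]  # model predictions (toy)

print(f"overall accuracy: {accuracy_score(true_phases, pred_phases):.3f}")
precision, recall, f1, _ = precision_recall_fscore_support(
    true_phases, pred_phases, labels=list(range(5)), zero_division=0
)
for phase, (p, r, f) in enumerate(zip(precision, recall, f1)):
    print(f"phase {phase}: precision={p:.3f} recall={r:.3f} F1={f:.3f}")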
Conclusions
Our AI-driven phase recognition model for thoracoscopic lung surgery videos shows potential for optimizing operating room workflow, enhancing real-time decision-making, and improving efficiency by automating surgical phase classification.