International Journal of Computer Assisted Radiology and Surgery: Latest Articles

Automated vision-based assistance tools in bronchoscopy: stenosis severity estimation.
IF 2.3 · CAS Zone 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery · Pub Date: 2025-05-15 · DOI: 10.1007/s11548-025-03398-x
Clara Tomasini, Javier Rodriguez-Puigvert, Dinora Polanco, Manuel Viñuales, Luis Riazuelo, Ana C Murillo
{"title":"Automated vision-based assistance tools in bronchoscopy: stenosis severity estimation.","authors":"Clara Tomasini, Javier Rodriguez-Puigvert, Dinora Polanco, Manuel Viñuales, Luis Riazuelo, Ana C Murillo","doi":"10.1007/s11548-025-03398-x","DOIUrl":"https://doi.org/10.1007/s11548-025-03398-x","url":null,"abstract":"<p><strong>Purpose: </strong>Subglottic stenosis refers to the narrowing of the subglottis, the airway between the vocal cords and the trachea. Its severity is typically evaluated by estimating the percentage of obstructed airway. This estimation can be obtained from CT data or through visual inspection by experts exploring the region. However, visual inspections are inherently subjective, leading to less consistent and robust diagnoses. No public methods or datasets are currently available for automated evaluation of this condition from bronchoscopy video.</p><p><strong>Methods: </strong>We propose a pipeline for automated subglottic stenosis severity estimation during the bronchoscopy exploration, without requiring the physician to traverse the stenosed region. Our approach exploits the physical effect of illumination decline in endoscopy to segment and track the lumen and obtain a 3D model of the airway. This 3D model is obtained from a single frame and is used to measure the airway narrowing.</p><p><strong>Results: </strong>Our pipeline is the first to enable automated and robust subglottic stenosis severity measurement using bronchoscopy images. The results show consistency with ground-truth estimations from CT scans and expert estimations and reliable repeatability across multiple estimations on the same patient. Our evaluation is performed on our new Subglottic Stenosis Dataset of real bronchoscopy procedures data.</p><p><strong>Conclusion: </strong>We demonstrate how to automate evaluation of subglottic stenosis severity using only bronchoscopy. Our approach can assist with and shorten diagnosis and monitoring procedures, with automated and repeatable estimations and less exploration time, and save radiation exposure to patients as no CT is required. Additionally, we release the first public benchmark for subglottic stenosis severity assessment.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144080525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A methodological framework for integrating model-guided medicine and multidimensional information management systems: application in anti-aging healthcare.
IF 2.3 · CAS Zone 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery · Pub Date: 2025-05-15 · DOI: 10.1007/s11548-025-03337-w
Hugo Herrero Antón de Vez, Esteban Felez, Mario A Cypko
{"title":"A methodological framework for integrating model-guided medicine and multidimensional information management systems: application in anti-aging healthcare.","authors":"Hugo Herrero Antón de Vez, Esteban Felez, Mario A Cypko","doi":"10.1007/s11548-025-03337-w","DOIUrl":"https://doi.org/10.1007/s11548-025-03337-w","url":null,"abstract":"<p><strong>Purpose: </strong>This study introduces a methodological framework for integrating model-guided medicine (MGM) with multidimensional information management systems (MIMMS) to address healthcare's digital transformation challenges. Anti-aging medicine is used as a case study to illustrate the framework's adaptability and practical application in terms of data integration, workflow transparency, and AI auditability.</p><p><strong>Methods: </strong>The methodology combines MGM and MIMMS to manage complex healthcare data through patient-specific, semantic, and syntactic models. Automated workflows streamline processes from data acquisition to decision-making. Integration is demonstrated with metabolic assessments and patient-specific modeling.</p><p><strong>Results: </strong>The framework effectively integrates multi-domain data, enhancing interoperability, workflow transparency, and AI auditability. A case study in anti-aging medicine illustrates its practical utility and scalability, addressing limitations of existing systems and highlighting potential for broader applications.</p><p><strong>Conclusion: </strong>This methodological framework offers a novel approach to advancing digital healthcare transformation by enabling integrated, patient-centric workflows. While not yet applied in a clinical setting, its conceptual application to anti-aging medicine illustrates the framework's adaptability and potential to enhance healthcare standards across various domains. Future work will focus on real-world validation and refinement to further demonstrate its practical impact.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144081971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automated multimodel segmentation and tracking for AR-guided open liver surgery using scene-aware self-prompting.
IF 2.3 · CAS Zone 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery · Pub Date: 2025-05-14 · DOI: 10.1007/s11548-025-03381-6
Serouj Khajarian, Michael Schwimmbeck, Konstantin Holzapfel, Johannes Schmidt, Christopher Auer, Stefanie Remmele, Oliver Amft
{"title":"Automated multimodel segmentation and tracking for AR-guided open liver surgery using scene-aware self-prompting.","authors":"Serouj Khajarian, Michael Schwimmbeck, Konstantin Holzapfel, Johannes Schmidt, Christopher Auer, Stefanie Remmele, Oliver Amft","doi":"10.1007/s11548-025-03381-6","DOIUrl":"https://doi.org/10.1007/s11548-025-03381-6","url":null,"abstract":"<p><strong>Purpose: </strong>We introduce a multimodel, real-time semantic segmentation and tracking approach for Augmented Reality (AR)-guided open liver surgery. Our approach leverages foundation models and scene-aware re-prompting strategies to balance segmentation accuracy and inference time as required for real-time AR-assisted surgery applications.</p><p><strong>Methods: </strong>Our approach integrates a domain-specific RGBD model (ESANet), a foundation model for semantic segmentation (SAM), and a semi-supervised video object segmentation model (DeAOT). Models were combined in an auto-promptable pipeline with a scene-aware re-prompting algorithm that adapts to surgical scene changes. We evaluated our approach on intraoperative RGBD videos from 10 open liver surgeries using a head-mounted AR device. Segmentation accuracy (IoU), temporal resolution (FPS), and the impact of re-prompting strategies were analyzed. Comparisons to individual models were performed.</p><p><strong>Results: </strong>Our multimodel approach achieved a median IoU of 71% at 13.2 FPS without re-prompting. Performance of our multimodel approach surpasses that of individual models, yielding better segmentation accuracy than ESANet and better temporal resolution compared to SAM. Our scene-aware re-prompting method reaches the DeAOT performance, with an IoU of 74.7% at 11.5 FPS, even when the DeAOT model uses an ideal reference frame.</p><p><strong>Conclusion: </strong>Our scene-aware re-prompting strategy provides a trade-off between segmentation accuracy and temporal resolution, thus addressing the requirements of real-time AR-guided open liver surgery. The integration of complementary models resulted in robust and accurate segmentation in a complex, real-world surgical settings.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144042828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
When do they StOP?: A first step toward automatically identifying team communication in the operating room.
IF 2.3 · CAS Zone 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery · Pub Date: 2025-05-14 · DOI: 10.1007/s11548-025-03393-2
Keqi Chen, Lilien Schewski, Vinkle Srivastav, Joël Lavanchy, Didier Mutter, Guido Beldi, Sandra Keller, Nicolas Padoy
{"title":"When do they StOP?: A first step toward automatically identifying team communication in the operating room.","authors":"Keqi Chen, Lilien Schewski, Vinkle Srivastav, Joël Lavanchy, Didier Mutter, Guido Beldi, Sandra Keller, Nicolas Padoy","doi":"10.1007/s11548-025-03393-2","DOIUrl":"https://doi.org/10.1007/s11548-025-03393-2","url":null,"abstract":"<p><strong>Purpose: </strong>Surgical performance depends not only on surgeons' technical skills, but also on team communication within and across the different professional groups present during the operation. Therefore, automatically identifying team communication in the OR is crucial for patient safety and advances in the development of computer-assisted surgical workflow analysis and intra-operative support systems. To take the first step, we propose a new task of detecting communication briefings involving all OR team members, i.e., the team Time-out and the StOP?-protocol, by localizing their start and end times in video recordings of surgical operations.</p><p><strong>Methods: </strong>We generate an OR dataset of real surgeries, called Team-OR, with more than one hundred hours of surgical videos captured by the multi-view camera system in the OR. The dataset contains temporal annotations of 33 Time-out and 22 StOP?-protocol activities in total. We then propose a novel group activity detection approach, where we encode both scene context and action features, and use an efficient neural network model to output the results.</p><p><strong>Results: </strong>The experimental results on the Team-OR dataset show that our approach outperforms existing state-of-the-art temporal action detection approaches. It also demonstrates the lack of research on group activities in the OR, proving the significance of our dataset.</p><p><strong>Conclusion: </strong>We investigate the Team Time-Out and the StOP?-protocol in the OR, by presenting the first OR dataset with temporal annotations of group activities protocols, and introducing a novel group activity detection approach that outperforms existing approaches. Code is available at https://github.com/CAMMA-public/Team-OR .</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144040322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimizing registration uncertainty visualization to support intraoperative decision-making during brain tumor resection.
IF 2.3 · CAS Zone 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery · Pub Date: 2025-05-13 · DOI: 10.1007/s11548-025-03407-z
M Geshvadi, R Dorent, C Galvin, L Rigolo, N Haouchine, T Kapur, S Pieper, M Vangel, W M Wells, A J Golby, D Haehn, S Frisken
{"title":"Optimizing registration uncertainty visualization to support intraoperative decision-making during brain tumor resection.","authors":"M Geshvadi, R Dorent, C Galvin, L Rigolo, N Haouchine, T Kapur, S Pieper, M Vangel, W M Wells, A J Golby, D Haehn, S Frisken","doi":"10.1007/s11548-025-03407-z","DOIUrl":"https://doi.org/10.1007/s11548-025-03407-z","url":null,"abstract":"<p><strong>Purpose: </strong>Neurosurgeons need to precisely localize and resect tumors without damaging critical brain tissue. However, deformation of the brain (i.e., 'brain shift') and other factors introduce uncertainty during image-guided surgery. We present a new visualization software that supports qualitative and quantitative exploration of the effectiveness of a broad range of methods for communicating uncertainty. We expect that the ability to visualize uncertainty during surgery will help surgeons better understand uncertainty in neuronavigation and make more informed decisions.</p><p><strong>Methods: </strong>We developed UVisExplore, a software module for exploring various visualization techniques for understanding the spatial distribution of uncertainty in image registration. UVisExplore incorporates multiple classic uncertainty visualization techniques and introduces two novel paradigms appropriate for surgical environments. We also introduce a novel game-based approach to evaluate visualization effectiveness before surgery. The game scenario emulates the cognitive decision-making process during tumor resection allowing quantitative evaluation of visualization effectiveness in a non-threatening environment while training neurosurgeons to better understand uncertainty.</p><p><strong>Results: </strong>Six clinicians and three computer scientists participated in a study using our game. Participants explored different uncertainty visualization techniques in a tumor resection task and provided feedback. Surgeon-participants preferred surgeon-centric approaches, which emphasize uncertainty near the surgical probe. They also preferred explicit numerical measures of uncertainty displayed in millimeters. The game provided valuable insights into uncertainty visualization preferences and interpretation.</p><p><strong>Conclusions: </strong>We provide an open-source 3D Slicer module for visualizing registration uncertainty and a game that allows users to explore uncertainty visualization for tumor resection surgery. UVisExplore provides a platform for exploring and comparing various uncertainty visualization techniques while simulating the decision-making process during surgery. The visualization module and the game proved to be a valuable communication tool and helped narrow the field of candidate visualizations that we plan to test during surgical procedures in the next phase of our research.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144059351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Interactive pipeline for mandible reconstruction surgery planning using fibula free flap.
IF 2.3 · CAS Zone 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery · Pub Date: 2025-05-12 · DOI: 10.1007/s11548-025-03392-3
Sylvain Leclerc, Bianca Jansen Van Rensburg, Thibault De Villèle, Marie De Boutray, Nabil Zemiti, Noura Faraj
{"title":"Interactive pipeline for mandible reconstruction surgery planning using fibula free flap.","authors":"Sylvain Leclerc, Bianca Jansen Van Rensburg, Thibault De Villèle, Marie De Boutray, Nabil Zemiti, Noura Faraj","doi":"10.1007/s11548-025-03392-3","DOIUrl":"https://doi.org/10.1007/s11548-025-03392-3","url":null,"abstract":"<p><strong>Purpose: </strong>Mandible reconstruction surgery using fibula free flap is a long and expensive process requiring extensive surgical experience. Indeed, the planning stage, mandible shaping, and therefore osteotomy positioning on the fibula are tedious, often done by hand, and can take months. This delay is unacceptable when mandible deterioration is caused by a time-sensitive disease such as cancer. In this paper, we propose an interactive pipeline for an easy-to-use and time-efficient surgical planning tool tailored to be used directly by the surgeon.</p><p><strong>Methods: </strong>From CT scans of patient's mandible and fibula, we propose to register a cutting structure to the mandible and to segment and mesh the fibula; then, respecting anatomical constraints (mandible curvature, flap size, vessel preservation, etc.), we generate a surgery plan. Next, in a 3D interactive environment, the surgeon can intuitively shape the mandible by cutting, moving, and modifying bone fragments nondestructively. This stage allows surgeons to express their expertise, and the resulting cutting plane positions are then sent to a robot serving as a cutting guide for the surgery.</p><p><strong>Results: </strong>We demonstrate the efficiency of our method through patient-specific surgery planning for two different pathologic cases. We show our results are comparable to a commercial solution away from cutting guides design.</p><p><strong>Conclusion: </strong>Our proposed pipeline allows for a patient-specific precise planning and to cut down the preoperative planning phase of the mandible reconstruction surgery from days to minutes.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144057826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automatic trajectory planning for stereotactic radiofrequency ablation in non-discrete search space.
IF 2.3 · CAS Zone 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery · Pub Date: 2025-05-11 · DOI: 10.1007/s11548-025-03386-1
Adela Lukes, Reto Bale, Wolfgang Freysinger
{"title":"Automatic trajectory planning for stereotactic radiofrequency ablation in non-discrete search space.","authors":"Adela Lukes, Reto Bale, Wolfgang Freysinger","doi":"10.1007/s11548-025-03386-1","DOIUrl":"https://doi.org/10.1007/s11548-025-03386-1","url":null,"abstract":"<p><strong>Purpose: </strong>Radiofrequency ablation is a well established minimally invasive procedure to treat tumors in solid organs. During the procedure applicators are inserted into the tumor and cells around their tips are destroyed by heat-induced denaturation. Manual trajectory planning requires a trained interventionalist, and its complexity and planning time rise significantly with an increasing number of trajectories.</p><p><strong>Methods: </strong>We propose a trajectory planning method using a genetic algorithm to accelerate the planning process by automatically generating multiple safe plans. Our method uses a non-discrete search space to find the best entry and target points and does not need any prior calculation of such candidate's points sets. The method offers multiple plans, allowing the interventionalists to choose the most appropriate one. We tested on an open-source and in-house dataset, comparing with related work and retrospectively with the in-house clinical planning.</p><p><strong>Results: </strong>Our method, tested on 154 liver tumors across all segments using a 10 mm ablation radius, achieves a mean coverage of over 99% of the tumors including a 5 mm safety margin. The method provides safe trajectories for all solutions and is on average 4 <math><mo>×</mo></math> faster than related approaches.</p><p><strong>Conclusion: </strong>To the best of our knowledge, we are the first to propose a fast and accurate planning technique using multiple applicators with 10 mm ablation radius. Our algorithm can deliver solutions optimizing more than ten trajectories, approaching the clinical practice at our institution, where large tumors are treated with multiple overlapping ablation zones rather than resection.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144007343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Accuracy of a novel, automated 2D-3D registration software for measuring cup position in total hip arthroplasty.
IF 2.3 · CAS Zone 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery · Pub Date: 2025-05-11 · DOI: 10.1007/s11548-025-03389-y
Jevan Arulampalam, Moritz F Ehlke, Christopher Plaskos, Qing Li, Catherine Z Stambouzou, James A Sullivan, Michael I Solomon, Eric M Slotkin
{"title":"Accuracy of a novel, automated 2D-3D registration software for measuring cup position in total hip arthroplasty.","authors":"Jevan Arulampalam, Moritz F Ehlke, Christopher Plaskos, Qing Li, Catherine Z Stambouzou, James A Sullivan, Michael I Solomon, Eric M Slotkin","doi":"10.1007/s11548-025-03389-y","DOIUrl":"https://doi.org/10.1007/s11548-025-03389-y","url":null,"abstract":"<p><strong>Purpose: </strong>This study evaluated the accuracy of an automated 2D-3D registration software for matching preoperative 3D models of the pelvis and acetabular component to intraoperative 2D fluoroscopy images in total hip arthroplasty (THA).</p><p><strong>Methods: </strong>We developed a 2D-3D registration software that registers a 3D model of the pelvis from preoperative CT and a 3D model of the acetabular implant to intraoperative fluoroscopic imaging, thereby calculating the implant position relative to the 3D pelvic reference frame. A total of 145 datasets were used including 65 digitally reconstructed radiographs, 20 dry bone phantoms datasets and 60 clinical datasets with preoperative CT and intraoperative fluoroscopy imaging. Achieved acetabular positions from the clinical images were determined from post-operative CT using a 3D/3D registration method. Accuracy was assessed by comparing the calculated acetabular position from the 2D-3D software to the ground truth data.</p><p><strong>Results: </strong>Mean absolute difference between ground truth and the 2D-3D software was 1.9° [signed error range: -4.4, 4.8] for inclination, 1.5° [-7.3, 4.1] for anteversion, 1.6 mm [-5, 3.8] for cup height and 1.8 mm [-7.3, 4.1] for depth across all datasets. In total, 100% of inclination results and 98% of anteversion results were within 5° while 90% of height and 81% of depth results were within 3 mm.</p><p><strong>Conclusion: </strong>We validated the accuracy of an automated 2D-3D registration software for use in THA. While our method requires preoperative data from CT, the results are comparable to robotics and image-based navigation, and present a promising, simple technology that can be easily integrated into an operating room for THA.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144008373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing segmentation accuracy of the common iliac vein in OLIF51 surgery in intraoperative endoscopic video through gamma correction: a deep learning approach.
IF 2.3 · CAS Zone 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery · Pub Date: 2025-05-11 · DOI: 10.1007/s11548-025-03388-z
Kaori Yamamoto, Reoto Ueda, Kazuhide Inage, Yawara Eguchi, Miyako Narita, Yasuhiro Shiga, Masahiro Inoue, Noriyasu Toshi, Soichiro Tokeshi, Kohei Okuyama, Shuhei Ohyama, Satoshi Maki, Takeo Furuya, Seiji Ohtori, Sumihisa Orita
{"title":"Enhancing segmentation accuracy of the common iliac vein in OLIF51 surgery in intraoperative endoscopic video through gamma correction: a deep learning approach.","authors":"Kaori Yamamoto, Reoto Ueda, Kazuhide Inage, Yawara Eguchi, Miyako Narita, Yasuhiro Shiga, Masahiro Inoue, Noriyasu Toshi, Soichiro Tokeshi, Kohei Okuyama, Shuhei Ohyama, Satoshi Maki, Takeo Furuya, Seiji Ohtori, Sumihisa Orita","doi":"10.1007/s11548-025-03388-z","DOIUrl":"https://doi.org/10.1007/s11548-025-03388-z","url":null,"abstract":"<p><strong>Purpose: </strong>The principal objective of this study was to develop and evaluate a deep learning model for segmenting the common iliac vein (CIV) from intraoperative endoscopic videos during oblique lateral interbody fusion for L5/S1 (OLIF51), a minimally invasive surgical procedure for degenerative lumbosacral spine diseases. The study aimed to address the challenge of intraoperative differentiation of the CIV from surrounding tissues to minimize the risk of vascular damage during the surgery.</p><p><strong>Methods: </strong>We employed two convolutional neural network (CNN) architectures: U-Net and U-Net++ with a ResNet18 backbone, for semantic segmentation. Gamma correction was applied during image preprocessing to improve luminance contrast between the CIV and adjacent tissues. We used a dataset of 614 endoscopic images from OLIF51 surgeries for model training, validation, and testing.</p><p><strong>Results: </strong>The U-Net++/ResNet18 model outperformed, achieving a Dice score of 0.70, indicating superior ability in delineating the position and shape of the CIV compared to the U-Net/ResNet18 model, which achieved a Dice score of 0.59. Gamma correction increased the differentiation between the CIV and the artery, improving the Dice score from 0.44 to 0.70.</p><p><strong>Conclusion: </strong>The findings demonstrate that deep learning models, especially the U-Net++ with ResNet18 enhanced by gamma correction preprocessing, can effectively segment the CIV in intraoperative videos. This approach has the potential to significantly improve intraoperative assistance and reduce the risk of vascular injury during OLIF51 procedures, despite the need for further research and refinement of the model for clinical application.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144042846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improving neck ultrasound image retrieval using intra-sweep representation learning.
IF 2.3 · CAS Zone 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery · Pub Date: 2025-05-10 · DOI: 10.1007/s11548-025-03394-1
Wanwen Chen, Adam Schmidt, Eitan Prisman, Septimiu E Salcudean
{"title":"Improving neck ultrasound image retrieval using intra-sweep representation learning.","authors":"Wanwen Chen, Adam Schmidt, Eitan Prisman, Septimiu E Salcudean","doi":"10.1007/s11548-025-03394-1","DOIUrl":"https://doi.org/10.1007/s11548-025-03394-1","url":null,"abstract":"<p><strong>Purpose: </strong>Intraoperative ultrasound (US) can enhance real-time visualization in transoral robotic surgery (TORS) and improve the safety of the surgery. To develop a US guidance system for TORS, US probe localization and US-preoperative image registration are essential. Image retrieval has the potential to solve these two problems in the same framework, and learning a discriminative US representation is key to successful image retrieval.</p><p><strong>Methods: </strong>We propose a self-supervised contrastive learning approach to match intraoperative US views to a preoperative image database. We introduce a novel contrastive learning strategy that leverages intra-sweep similarity and US probe location to improve feature encoding. Additionally, our model incorporates a flexible threshold to reject unsatisfactory matches.</p><p><strong>Results: </strong>Our method achieves 92.30% retrieval accuracy on simulated data and outperforms state-of-the-art temporal-based contrastive learning approaches. We also test our approach on real patient data with preoperative US-CT registration to show the feasibility of the proposed US probe localization system, despite tissue deformation due to tongue retraction.</p><p><strong>Conclusion: </strong>Our contrastive learning method, which utilizes intra-sweep similarity and US probe location, enhances US image representation learning. We also demonstrate the feasibility of using our image retrieval method to provide neck US localization on real patients US after tongue retraction. Total number of words: 2414 words.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144045190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0