Computer Assisted Surgery: Latest Articles

ArthroPhase: a novel dataset and method for phase recognition in arthroscopic video.
IF 1.5 · Zone 4 · Medicine
Computer Assisted Surgery Pub Date: 2025-12-01, Epub Date: 2025-05-31, DOI: 10.1080/24699322.2025.2508144
Ali Bahari Malayeri, Matthias Seibold, Nicola A Cavalcanti, Jonas Hein, Sascha Jecklin, Lazaros Vlachopoulos, Sandro Fucentese, Sandro Hodel, Philipp Fürnstahl
This study advances surgical phase recognition in arthroscopic procedures, specifically anterior cruciate ligament (ACL) reconstruction, by introducing the first arthroscopy dataset and a novel transformer-based model. We establish a benchmark for arthroscopic surgical phase recognition by leveraging spatio-temporal features to address challenges such as a limited field of view, occlusions, and visual distortions. We developed the ACL27 dataset, comprising 27 videos of ACL surgeries, each labeled with surgical phases. Our model employs a transformer-based architecture, extracting temporally aware frame-wise features through ResNet-50 and transformer layers. This approach integrates spatio-temporal features and introduces a Surgical Progress Index (SPI) to quantify surgery progression. The model's performance was evaluated using accuracy, precision, recall, and the Jaccard index on the ACL27 and Cholec80 datasets. The proposed model achieved an overall accuracy of 72.9% on the ACL27 dataset. On Cholec80, it achieved performance comparable to state-of-the-art methods, with an accuracy of 92.4%. The SPI showed an output error of 10.6% and 9.8% on the ACL27 and Cholec80 datasets, respectively, indicating reliable estimation of surgery progression. This study introduces a significant advance in surgical phase recognition for arthroscopy, providing a comprehensive dataset and a robust transformer-based model. The results validate the model's effectiveness and generalizability, highlighting its potential to improve surgical training, real-time assistance, and operational efficiency in orthopedic surgery. The publicly available dataset and code will facilitate future research in this field.
Computer Assisted Surgery, 30(1):2508144.
Citations: 0
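The evaluation above scores frame-wise phase predictions with overall accuracy and the Jaccard index (per-phase intersection over union). A minimal sketch of the two metrics on hypothetical phase labels — not the paper's data or code:

```python
def phase_metrics(true_phases, pred_phases):
    """Frame-wise accuracy and mean per-phase Jaccard index for
    surgical phase recognition. Illustrative only: real pipelines
    evaluate per video and may weight phases differently."""
    assert len(true_phases) == len(pred_phases)
    pairs = list(zip(true_phases, pred_phases))
    accuracy = sum(t == p for t, p in pairs) / len(pairs)
    jaccards = []
    for phase in sorted(set(true_phases) | set(pred_phases)):
        intersection = sum(t == p == phase for t, p in pairs)
        union = sum(t == phase or p == phase for t, p in pairs)
        jaccards.append(intersection / union if union else 0.0)
    return accuracy, sum(jaccards) / len(jaccards)
```

Averaging Jaccard over phases (rather than frames) keeps short phases from being swamped by long ones, which matters when phase durations are highly imbalanced.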
SurgPointTransformer: transformer-based vertebra shape completion using RGB-D imaging.
IF 1.5 · Zone 4 · Medicine
Computer Assisted Surgery Pub Date: 2025-12-01, Epub Date: 2025-06-03, DOI: 10.1080/24699322.2025.2511126
Aidana Massalimova, Florentin Liebmann, Sascha Jecklin, Fabio Carrillo, Mazda Farshad, Philipp Fürnstahl
State-of-the-art computer- and robot-assisted surgery systems rely on intraoperative imaging technologies such as computed tomography and fluoroscopy to provide detailed 3D visualizations of patient anatomy. However, these methods expose both patients and clinicians to ionizing radiation. This study introduces a radiation-free approach to 3D spine reconstruction using RGB-D data. Inspired by the "mental map" surgeons form during procedures, we present SurgPointTransformer, a shape completion method that reconstructs unexposed spinal regions from sparse surface observations. The method begins with a vertebra segmentation step that extracts vertebra-level point clouds for subsequent shape completion. SurgPointTransformer then uses an attention mechanism to learn the relationship between visible surface features and the complete spine structure. The approach is evaluated on an ex vivo dataset of nine samples, with CT-derived data used as ground truth. SurgPointTransformer significantly outperforms state-of-the-art baselines, achieving a Chamfer distance of 5.39 mm, an F-score of 0.85, an Earth mover's distance of 11.00, and a signal-to-noise ratio of 22.90 dB. These results demonstrate the potential of the method to reconstruct 3D vertebral shapes without exposing patients to ionizing radiation. This work contributes to the advancement of computer-aided and robot-assisted surgery by enhancing system perception and intelligence.
Computer Assisted Surgery, 30(1):2511126.
Citations: 0
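The headline metric above, Chamfer distance, measures how far each point of one cloud is from its nearest neighbour in the other. Conventions vary (sum vs. mean, squared vs. plain distances); the sketch below uses the symmetric mean form with brute-force nearest-neighbour search, which is not necessarily the paper's exact variant:

```python
import math

def chamfer_distance(pts_a, pts_b):
    """Symmetric mean Chamfer distance between two 3-D point clouds
    (brute force, O(n*m); real implementations use k-d trees/GPU)."""
    def one_way(src, dst):
        # average nearest-neighbour Euclidean distance from src to dst
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (one_way(pts_a, pts_b) + one_way(pts_b, pts_a))
```

With point coordinates in millimetres, the result is directly comparable to a value like the 5.39 mm reported above.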
Three-dimensional (3D)-printed custom-made titanium ribs for chest wall reconstruction post-desmoid fibromatosis resection.
IF 1.5 · Zone 4 · Medicine
Computer Assisted Surgery Pub Date: 2025-12-01, Epub Date: 2025-01-22, DOI: 10.1080/24699322.2025.2456303
Chen Yang, Lei Chen, Xiangyu Xie, Changping Wu, Qianyun Wang
Desmoid fibromatosis (DF) is a rare, low-grade, benign myofibroblastic neoplasm that originates from fascia and muscle striae. For giant chest wall DF, surgical resection offers a radical form of treatment, and the resulting defects usually require repair and reconstruction to restore the structural integrity and rigidity of the thoracic cage. The past decade has seen rapid advances in the application of various prosthetic materials in thoracic surgery. However, three-dimensional (3D)-printed custom-made titanium ribs have not previously been reported for chest wall reconstruction after DF resection. Here, we report the successful implantation of individualized 3D-printed titanium ribs to repair the chest wall defect in a patient with DF.
Computer Assisted Surgery, 30(1):2456303.
Citations: 0
Retrospective case control study on the evaluation of the impact of augmented reality in gynecological laparoscopy on patients operated for myomectomy or adenomyomectomy.
IF 1.5 · Zone 4 · Medicine
Computer Assisted Surgery Pub Date: 2025-12-01, Epub Date: 2025-05-24, DOI: 10.1080/24699322.2025.2509686
Aurélie Comptour, Pauline Chauvet, Anne-Sophie Grémeau, Claire Figuier, Bruno Pereira, Matthieu Rouland, Prasad Samarakoon, Adrien Bartoli, Marie De Antonio, Nicolas Bourdel
The objective of this study is to evaluate the safety of using augmented reality (AR) in laparoscopic (adeno)myomectomy, defined as an increase in operating time of less than 15 min. A total of 17 AR cases underwent laparoscopic myomectomy or adenomyomectomy with AR, and 17 controls underwent resection of (adeno)myomas without AR. The non-inferiority assumption was defined as operative overtime not exceeding 15 min, representing 10% of the typical operative time. The 17 AR cases were matched to 17 controls by lesion type, size, and placement. The mean operative time was 135 ± 39 min for AR cases and 149 ± 62 min for controls. The non-inferiority margin was expressed as a difference in operative time of 15 min between the case and control groups. The mean difference observed between AR cases and controls was -14 min (90% CI [-38.3; 11.3]) and was significantly lower than the non-inferiority margin of 15 min (p = 0.03). This negative difference means that operative time was shorter in the AR group. Intraoperative data revealed a bleeding volume ≤200 mL in 82.3% of AR cases and 75% of controls (p = 0.62). No intra- or postoperative complications were reported in either group. The use of augmented reality in laparoscopic (adeno)myomectomy does not introduce additional constraints for the surgeon. It appears to be safe for patients, with no additional adverse events and no significantly prolonged operative time.
Computer Assisted Surgery, 30(1):2509686.
Citations: 0
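The non-inferiority logic above amounts to checking whether the upper bound of a 90% confidence interval on the case-minus-control time difference stays below the 15-min margin. A sketch with made-up numbers and a plain normal approximation — the study's exact statistical procedure (matching, test choice) may differ:

```python
import math
import statistics

def noninferiority_90ci(diffs, margin=15.0):
    """Non-inferiority check on operative-time differences in minutes
    (AR case minus matched control). Normal-approximation 90% CI;
    hypothetical data, not the study's."""
    n = len(diffs)
    mean = statistics.fmean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(n)
    z = 1.645  # two-sided 90% CI
    ci = (mean - z * se, mean + z * se)
    # non-inferior if even the upper bound stays below the margin
    return mean, ci, ci[1] < margin
```

A negative mean difference, as reported above (-14 min), would make the check pass comfortably as long as the interval is not too wide.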
Imageless optical navigation system is clinically valid for total knee arthroplasty.
IF 1.5 · Zone 4 · Medicine
Computer Assisted Surgery Pub Date: 2025-12-01, Epub Date: 2025-02-16, DOI: 10.1080/24699322.2025.2466424
Taylor B Winberg, Sheila Wang, James L Howard
Achieving optimal implant position and orientation during total knee arthroplasty (TKA) is a pivotal factor in long-term survival. Computer-assisted navigation (CAN) is a trusted technology that improves the accuracy and consistency of femoral and tibial bone cuts. Imageless CAN offers advantages over image-based CAN by reducing cost, radiation exposure, and time. The purpose of this study was to evaluate the accuracy of an imageless optical navigation system for TKA in a clinical setting. Forty-two consecutive patients who underwent primary TKA with CAN were retrospectively reviewed. Femoral and tibial component coronal alignment was assessed via post-operative radiographs by two independent reviewers and compared against coronal alignment angles from the CAN. The primary outcome was the mean absolute difference in femoral and tibial varus/valgus angles between radiographic and intra-operative device measurements. Bland-Altman plots were used to assess agreement between the methods and to analyze potential systematic bias. The mean absolute differences between navigation-guided cut measurements and post-operative radiographs were 1.16 ± 1.03° for femoral and 1.76 ± 1.38° for tibial alignment. About 88% of coronal measurements were within ±3°, and 99% were within ±5°. Bland-Altman analysis demonstrated a bias between CAN and radiographic measurements, with CAN values averaging 0.52° (95% CI: 0.11°-0.93°) less than their paired radiographic measurements. This study demonstrated the ability of an optical imageless navigation system to measure femoral and tibial coronal cuts to within, on average, 2.0° of post-operative radiographic measurements in a clinical setting.
Computer Assisted Surgery, 30(1):2466424.
Citations: 0
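The Bland-Altman analysis above summarizes agreement between two measurement methods as a bias (mean of paired differences) and 95% limits of agreement (bias ± 1.96 SD). A minimal sketch on hypothetical angle pairs, not the study's data:

```python
import statistics

def bland_altman(method_a, method_b):
    """Bland-Altman bias and 95% limits of agreement between two
    paired measurement methods (e.g. navigation vs radiographic
    coronal angles, in degrees). Illustrative only."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A nonzero bias, like the 0.52° reported above, indicates one method systematically reads lower than the other even when the scatter of differences is small.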
Deep learning methods for clinical workflow phase-based prediction of procedure duration: a benchmark study.
IF 1.5 · Zone 4 · Medicine
Computer Assisted Surgery Pub Date: 2025-12-01, Epub Date: 2025-02-24, DOI: 10.1080/24699322.2025.2466426
Emanuele Frassini, Teddy S Vijfvinkel, Rick M Butler, Maarten van der Elst, Benno H W Hendriks, John J van den Dobbelsteen
This study evaluates the performance of deep learning models in predicting the end time of procedures performed in the cardiac catheterization laboratory (cath lab). We employed only the clinical phases derived from video analysis as input to the algorithms. Our results show that InceptionTime and LSTM-FCN yielded the most accurate predictions. InceptionTime achieves mean absolute error (MAE) values below 5 min and symmetric mean absolute percentage error (SMAPE) under 6% at 60-s sampling intervals. In contrast, LSTM with an attention mechanism and standard LSTM models have higher error rates, indicating challenges in handling both long-term and short-term dependencies. CNN-based models, especially InceptionTime, excel at feature extraction across different scales, making them effective for time-series prediction. We also analyzed training and testing times. CNN models, despite higher computational costs, significantly reduce prediction errors. The Transformer model has the fastest inference time, making it ideal for real-time applications. An ensemble model derived by averaging the two best-performing algorithms reported low MAE and SMAPE, although it required longer training. Future research should validate these findings across different procedural contexts and explore ways to optimize training times without losing accuracy. Integrating these models into clinical scheduling systems could improve efficiency in cath labs. Our research demonstrates that the implemented models can form the basis of an automated tool that predicts the optimal time to call the next patient with an average error of approximately 30 s. These findings show the effectiveness of deep learning models, especially CNN-based architectures, in accurately predicting procedure end times.
Computer Assisted Surgery, 30(1):2466426.
Citations: 0
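The benchmark above reports MAE and SMAPE. Several SMAPE variants exist; the sketch below uses the common mean-of-ratios form (absolute error over the mean of the absolute values, in percent), which is not necessarily the paper's exact definition:

```python
def mae(y_true, y_pred):
    """Mean absolute error (same units as the inputs)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error, in percent.
    Common mean-of-ratios variant; undefined when t == p == 0."""
    return 100.0 / len(y_true) * sum(
        abs(t - p) / ((abs(t) + abs(p)) / 2)
        for t, p in zip(y_true, y_pred))
```

With remaining-duration targets in minutes, an MAE below 5 and a SMAPE under 6%, as above, can be read directly off these two numbers.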
Risk prediction and analysis of gallbladder polyps with deep neural network.
IF 2.1 · Zone 4 · Medicine
Computer Assisted Surgery Pub Date: 2024-12-01, Epub Date: 2024-03-23, DOI: 10.1080/24699322.2024.2331774
Kerong Yuan, Xiaofeng Zhang, Qian Yang, Xuesong Deng, Zhe Deng, Xiangyun Liao, Weixin Si
The aim of this study is to analyze the risk factors associated with the development of adenomatous and malignant polyps in the gallbladder. Adenomatous polyps of the gallbladder are considered precancerous and have a high likelihood of progressing to malignancy. Preoperatively, distinguishing between benign gallbladder polyps, adenomatous polyps, and malignant polyps is challenging. The objective was therefore to develop a neural network model that uses these risk factors to accurately predict the nature of polyps before surgery, enhancing diagnostic accuracy. A retrospective study was conducted on patients who underwent cholecystectomy at the Department of Hepatobiliary Surgery of the Second People's Hospital of Shenzhen between January 2017 and December 2022. The patients' clinical characteristics, laboratory results, and ultrasonographic indices were examined, and a neural network model for predicting polyp type was built from the risk factors for adenomatous and malignant gallbladder polyps. A normalized confusion matrix and PR and ROC curves were used to evaluate model performance. In total, we analyzed 287 cases of benign gallbladder polyps, 15 cases of adenomatous polyps, and 27 cases of malignant polyps. Hepatitis B core antibody (95% CI -0.237 to 0.061, p < 0.001), number of polyps (95% CI -0.214 to -0.052, p = 0.001), polyp size (95% CI 0.038 to 0.051, p < 0.001), wall thickness (95% CI 0.042 to 0.081, p < 0.001), and gallbladder size (95% CI 0.185 to 0.367, p < 0.001) emerged as independent predictors of gallbladder adenomatous and malignant polyps. Based on these findings, we developed a predictive classification model for gallbladder polyps (GBPs): score = -0.149 × core antibody - 0.033 × number of polyps + 0.045 × polyp size + 0.061 × wall thickness + 0.276 × gallbladder size - 4.313. The area under the curve (AUC) for the prediction model was 0.945 and 0.930 on the PR and ROC curves, respectively, indicating excellent predictive capability. A polyp size of 10 mm was the optimal cutoff value for diagnosing gallbladder adenoma, with a sensitivity of 81.5% and a specificity of 60.0%. For the diagnosis of gallbladder cancer, the sensitivity and specificity were 81.5% and 92.5%, respectively. These findings highlight the potential of the predictive model and provide valuable insights into accurate diagnosis and risk assessment of gallbladder polyps. We identified several risk factors associated with the development of adenomatous and malignant polyps in the gallbladder.
Computer Assisted Surgery, 29(1):2331774.
Citations: 0
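The linear classification score above can be written directly as a function using the coefficients reported in the abstract. Note the abstract does not fully specify the input encodings (units, and how the hepatitis B core antibody variable is coded), so this transcription is illustrative only:

```python
def gbp_risk_score(core_antibody, n_polyps, polyp_size,
                   wall_thickness, gallbladder_size):
    """Linear predictive score for gallbladder polyp classification,
    transcribed from the coefficients reported in the abstract.
    Input encodings are assumptions; illustrative only."""
    return (-0.149 * core_antibody
            - 0.033 * n_polyps
            + 0.045 * polyp_size
            + 0.061 * wall_thickness
            + 0.276 * gallbladder_size
            - 4.313)
```

The signs mirror the reported predictors: larger polyp size, wall thickness, and gallbladder size push the score up, while more polyps and a positive core-antibody value push it down.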
Feasibility of proton dosimetry overriding planning CT with daily CBCT elaborated through generative artificial intelligence tools.
IF 2.1 · Zone 4 · Medicine
Computer Assisted Surgery Pub Date: 2024-12-01, Epub Date: 2024-03-11, DOI: 10.1080/24699322.2024.2327981
Matteo Rossi, Gabriele Belotti, Luca Mainardi, Guido Baroni, Pietro Cerveri
Radiotherapy commonly utilizes cone beam computed tomography (CBCT) for patient positioning and treatment monitoring. CBCT is considered safe for patients, making it suitable for the delivery of fractional doses. However, limitations such as a narrow field of view (FOV), beam hardening, scattered-radiation artifacts, and variability in pixel intensity prevent the direct use of raw CBCT for dose recalculation during treatment. Reliable correction techniques are therefore needed to remove artifacts and remap pixel intensities to Hounsfield unit (HU) values. This study proposes a deep learning framework for calibrating CBCT images acquired with narrow-FOV systems and demonstrates its potential use in proton treatment planning updates. A cycle-consistent generative adversarial network (cGAN) processes raw CBCT to reduce scatter and remap HU. Monte Carlo simulation is used to generate CBCT scans, making it possible to focus solely on the algorithm's ability to reduce artifacts and cupping effects, without intra-patient longitudinal variability, and producing a fair comparison between planning CT (pCT) and calibrated CBCT dosimetry. To show the viability of the approach on real-world data, experiments were also conducted using real CBCT. Tests were performed on a publicly available dataset of 40 patients who received ablative radiation therapy for pancreatic cancer. The simulated CBCT calibration led to a difference in proton dosimetry of less than 2% compared to the planning CT. The potential toxicity effect on the organs at risk decreased from about 50% (uncalibrated) to about 2% (calibrated). The gamma pass rate at 3%/2 mm improved by about 37% in replicating the prescribed dose before versus after calibration (53.78% vs 90.26%). Real data confirmed this, with slightly lower performance on the same criteria (65.36% vs 87.20%). These results suggest that generative artificial intelligence brings the use of narrow-FOV CBCT scans incrementally closer to clinical translation in proton therapy planning updates.
Computer Assisted Surgery, 29(1):2327981.
Citations: 0
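The gamma pass rate above combines a dose-difference tolerance (3%) with a distance-to-agreement tolerance (2 mm): a reference point passes if some nearby evaluated point agrees within the combined criterion. A deliberately simplified 1-D global-normalisation sketch — clinical gamma tools work on 3-D dose grids with interpolation, so treat this as an illustration of the formula only:

```python
def gamma_pass_rate(ref_dose, eval_dose, spacing_mm=1.0,
                    dd=0.03, dta_mm=2.0):
    """Simplified 1-D global gamma analysis (3%/2 mm by default):
    percentage of reference points with gamma <= 1.
    Illustrative only, not a clinical implementation."""
    d_max = max(ref_dose)  # global dose-difference normalisation
    passed = 0
    for i, dr in enumerate(ref_dose):
        gamma_sq = min(
            ((i - j) * spacing_mm / dta_mm) ** 2
            + ((de - dr) / (dd * d_max)) ** 2
            for j, de in enumerate(eval_dose))
        passed += gamma_sq <= 1.0
    return 100.0 * passed / len(ref_dose)
```

Identical dose profiles score 100%, and the rate drops as dose errors exceed what the spatial tolerance can absorb — the mechanism behind the 53.78% vs 90.26% comparison above.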
A decade of progress: bringing mixed reality image-guided surgery systems in the operating room.
IF 2.1 · Zone 4 · Medicine
Computer Assisted Surgery Pub Date: 2024-12-01, Epub Date: 2024-05-24, DOI: 10.1080/24699322.2024.2355897
Zahra Asadi, Mehrdad Asadi, Negar Kazemipour, Étienne Léger, Marta Kersten-Oertel
Advancements in mixed reality (MR) have led to innovative approaches in image-guided surgery (IGS). In this paper, we provide a comprehensive analysis of the current state of MR in image-guided procedures across various surgical domains. Using the Data Visualization View (DVV) taxonomy, we analyze the progress made since a 2013 literature review of MR IGS systems. In addition to examining the surgical domains currently using MR systems, we explore trends in the types of MR hardware used, the types of data visualized, visualizations of virtual elements, and interaction methods in use. Our analysis also covers the metrics used to evaluate these systems in the operating room (OR), both qualitative and quantitative assessments, and clinical studies that have demonstrated the potential of MR technologies to enhance surgical workflows and outcomes. We also address current challenges and future directions that would further establish the use of MR in IGS.
Computer Assisted Surgery, 29(1):2355897.
Citations: 0
SwinD-Net: a lightweight segmentation network for laparoscopic liver segmentation.
IF 2.1 · Zone 4 · Medicine
Computer Assisted Surgery Pub Date: 2024-12-01, Epub Date: 2024-03-20, DOI: 10.1080/24699322.2024.2329675
Shuiming Ouyang, Baochun He, Huoling Luo, Fucang Jia
The real-time requirement for image segmentation in laparoscopic surgical assistance systems is extremely high. Although traditional deep learning models can ensure high segmentation accuracy, they carry a large computational burden. In the practical setting of most hospitals, where powerful computing resources are lacking, these models cannot meet real-time computational demands. We propose a novel network, SwinD-Net, based on skip connections, incorporating depthwise separable convolutions and Swin Transformer blocks. To reduce computational overhead, we eliminate the skip connection in the first layer and reduce the number of channels in shallow feature maps. Additionally, we introduce Swin Transformer blocks, which have a larger computational and parameter footprint, to extract global information and capture high-level semantic features. Through these modifications, our network achieves desirable performance while maintaining a lightweight design. Experiments on the CholecSeg8k dataset validate the effectiveness of the approach. Compared to other models, ours achieves high accuracy while significantly reducing computational and parameter overhead. Specifically, the model requires only 98.82 M floating-point operations (FLOPs) and 0.52 M parameters, with an inference time of 47.49 ms per image on a CPU. Compared to the recently proposed lightweight segmentation network UNeXt, our model not only outperforms it on the Dice metric but also has only 1/3 of the parameters and 1/22 of the FLOPs. In addition, it achieves 2.4 times faster inference than UNeXt, demonstrating comprehensive improvements in both accuracy and speed. Our model effectively reduces parameter count and computational complexity, improving inference speed while maintaining comparable accuracy. The source code will be available at https://github.com/ouyangshuiming/SwinDNet.
Computer Assisted Surgery, 29(1):2329675.
Citations: 0
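The lightweight design above rests largely on depthwise separable convolutions, which factor a standard k×k convolution into a per-channel k×k depthwise step plus a 1×1 pointwise step. A sketch of the resulting weight-count saving (generic arithmetic, not SwinD-Net's actual layer sizes):

```python
def conv_params(c_in, c_out, k):
    """Weight counts (biases ignored) for a standard k x k convolution
    versus a depthwise separable one (depthwise k x k + pointwise 1x1).
    Generic illustration, not SwinD-Net's configuration."""
    standard = k * k * c_in * c_out
    separable = k * k * c_in + c_in * c_out
    return standard, separable
```

For a 3×3 layer mapping 64 to 128 channels, the separable form needs 8,768 weights against 73,728 for the standard form, roughly an 8x reduction, which is how parameter budgets like 0.52 M become feasible.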