Adaptive infrared patterns for microscopic surface reconstructions
Srdjan Milosavljevic, Zoltan Bardosi, Yusuf Oezbek, Wolfgang Freysinger
International Journal of Computer Assisted Radiology and Surgery (impact factor 2.3), pp. 2311-2319, published 2024-12-01. DOI: 10.1007/s11548-024-03242-8. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11607032/pdf/

Purpose: Multi-zoom microscopic surface reconstructions of operating sites, especially in ENT surgeries, would allow multimodal image fusion for determining the amount of resected tissue, for recognizing critical structures, and for novel intraoperative quality-assurance tools. State-of-the-art three-dimensional model creation of the surgical scene is challenged by the surgical environment, illumination, and the homogeneous structures of skin, muscle, bones, etc., which lack the invariant features needed for stereo reconstruction.

Methods: An adaptive near-infrared (NIR) pattern projector illuminates the surgical scene with optimized patterns to yield accurate, dense, multi-zoom stereoscopic surface reconstructions. The approach does not impact the clinical workflow. The new method is compared to state-of-the-art approaches and is validated by determining its reconstruction errors relative to a high-resolution 3D reconstruction of CT data.

Results: 200 surface reconstructions were generated at 5 zoom levels, with 10 reconstructions for each object illumination method (standard operating room light, microscope light, random pattern, and adaptive NIR pattern). For the adaptive pattern, surface reconstruction errors ranged from 0.5 to 0.7 mm, compared to 1-1.9 mm for the other approaches. Local reconstruction differences are visualized in heat maps.

Conclusion: Adaptive NIR pattern projection in microscopic surgery allows dense and accurate microscopic surface reconstructions at variable zoom levels for small and homogeneous surfaces. This could aid microscopic interventions at the lateral skull base and potentially open up new possibilities for combining quantitative intraoperative surface reconstructions with preoperative radiologic imagery.
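The entry above reports stereo reconstruction errors in millimetres. As a minimal illustration of the underlying geometry only (not the authors' implementation), the standard rectified-stereo relation recovers depth from disparity; the focal length, baseline, and disparity values in the test are arbitrary examples.

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Depth of a matched point from a rectified stereo pair.

    Z = f * B / d, where f is the focal length in pixels, B the baseline
    between the two cameras (mm), and d the disparity in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px
```

Denser, more reliable disparity maps (which projected patterns provide on textureless tissue) translate directly into more accurate depth estimates through this relation.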
Generalisation capabilities of machine-learning algorithms for the detection of the subthalamic nucleus in micro-electrode recordings
Thibault Martin, Pierre Jannin, John S H Baxter
International Journal of Computer Assisted Radiology and Surgery, pp. 2445-2451, published 2024-12-01. DOI: 10.1007/s11548-024-03202-2

Purpose: Micro-electrode recordings (MERs) are a key intra-operative modality used during deep brain stimulation (DBS) electrode implantation, allowing a trained neurophysiologist to infer the anatomy in which the electrode is placed. As DBS targets are small, such inference is necessary to confirm that the electrode is correctly positioned. Recently, machine learning techniques have been used to augment the neurophysiologist's capability. The goal of this paper is to investigate the generalisability of these methods with respect to different clinical centres and training paradigms.

Methods: Five deep learning algorithms for binary classification of MER signals were implemented. Three databases from two different clinical centres were collected, differing in size, acquisition hardware, and annotation protocol. Each algorithm was initially trained on the largest database, then either directly tested or fine-tuned on the smaller databases to estimate its generalisability. As a reference, the algorithms were also trained from scratch on the smaller databases to estimate the effect of the differing database sizes and annotation systems.

Results: Each network shows significantly reduced performance (on the order of a 6.5% to 16.0% reduction in balanced accuracy) when applied out-of-distribution. This reduction can be ameliorated by fine-tuning the network on the new database through transfer learning. Even for these small databases, however, retraining from scratch appears to offer performance equivalent to fine-tuning with transfer learning, although at the expense of significantly longer training times.

Conclusion: Generalisability is an important criterion for the success of machine learning algorithms in the clinic. We have demonstrated that a variety of recent machine learning algorithms for MER classification are negatively affected by domain shift, but that this can be quickly ameliorated through simple transfer learning procedures that can be readily performed for new centres.
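Balanced accuracy, the metric used in the results above, averages sensitivity and specificity so that class imbalance (common in binary MER labels) does not inflate the score. A minimal sketch of the metric itself, not of the authors' classifiers:

```python
def balanced_accuracy(y_true, y_pred):
    """Balanced accuracy for binary labels (0/1): mean of sensitivity and specificity."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)  # recall on the positive class
    specificity = tn / (tn + fp)  # recall on the negative class
    return 0.5 * (sensitivity + specificity)
```

A plain accuracy score on a skewed test set can look high even when one class is ignored; balanced accuracy makes a 6.5-16% out-of-distribution drop directly comparable across databases with different class mixes.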
TraumaFlow: development of a workflow-based clinical decision support system for the management of severe trauma cases
Juliane Neumann, Christoph Vogel, Lisa Kießling, Gunther Hempel, Christian Kleber, Georg Osterhoff, Thomas Neumuth
International Journal of Computer Assisted Radiology and Surgery, pp. 2399-2409, published 2024-12-01. DOI: 10.1007/s11548-024-03191-2. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11607099/pdf/

Purpose: The treatment of severely injured patients in the resuscitation room of an emergency department requires numerous critical decisions, often under immense time pressure, which places very high demands on the facility and the interdisciplinary team. Computer-based cognitive aids are a valuable tool, especially in the education and training of medical professionals. For the management of polytrauma cases, TraumaFlow, a workflow-management-based clinical decision support system, was developed. The system supports the registration and coordination of activities in the resuscitation room and actively recommends diagnostic and treatment actions.

Methods: Based on medical guidelines, a resuscitation room algorithm was developed according to the cABCDE scheme. The algorithm was then modeled using the process description language BPMN 2.0 and implemented in a workflow management system. In addition, a web-based user interface providing assistance functions was developed. An evaluation study was conducted with 11 final-year medical students and three residents to assess the applicability of TraumaFlow in a case-based training scenario.

Results: TraumaFlow significantly improved guideline-based decision-making, led to more complete therapy, and reduced treatment errors. The system proved beneficial not only for the education of low- and medium-experienced users but also for the training of highly experienced physicians. 92% of the participants felt more confident with computer-aided decision support and considered TraumaFlow useful for training in polytrauma treatment. In addition, 62% acknowledged a higher training effect.

Conclusion: TraumaFlow enables real-time decision support for the treatment of polytrauma patients. It improves guideline-based decision-making in complex and critical situations and reduces treatment errors. Supporting functions, such as automatic treatment documentation and the calculation of medical scores, enable the trauma team to focus on the primary task. TraumaFlow was developed to support the training of medical students and experienced professionals. Each training session is documented and can be objectively and qualitatively evaluated.
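The cABCDE scheme named above is an ordered assessment sequence; a workflow engine enforcing it reduces, at its core, to a state machine that only accepts steps in guideline order. The following is a hypothetical sketch of that idea, not TraumaFlow's actual BPMN 2.0 implementation; the class and step names are illustrative.

```python
# Illustrative step names following the cABCDE scheme (not taken from the paper).
CABCDE_STEPS = [
    "critical bleeding",  # c - control of catastrophic haemorrhage
    "airway",             # A
    "breathing",          # B
    "circulation",        # C
    "disability",         # D
    "exposure",           # E
]

class ResusWorkflow:
    """Minimal state machine: steps must be completed in guideline order."""

    def __init__(self, steps=CABCDE_STEPS):
        self.steps = list(steps)
        self.index = 0

    @property
    def current(self):
        """The step the guideline currently recommends, or None when finished."""
        return self.steps[self.index] if self.index < len(self.steps) else None

    def complete(self, step):
        """Mark a step done; out-of-order steps are rejected as protocol deviations."""
        if step != self.current:
            raise ValueError(f"expected '{self.current}', got '{step}'")
        self.index += 1
        return self.current  # next recommended action, or None when done
```

A production system adds branching, timers, and documentation on top, but rejecting out-of-order steps is the mechanism by which such a system "actively recommends" the next action.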
The sound of surgery: development of an acoustic trocar system enabling laparoscopic sound analysis
Daniel Ostler-Mildner, Luca Wegener, Jonas Fuchtmann, Hubertus Feussner, Dirk Wilhelm, Nassir Navab
International Journal of Computer Assisted Radiology and Surgery, pp. 2389-2397, published 2024-12-01. DOI: 10.1007/s11548-024-03183-2. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11607030/pdf/

Purpose: Acoustic information can carry viable information in medicine and specifically in surgery. While laparoscopy depends mainly on visual information, our goal is to develop the means to capture and process acoustic information during laparoscopic surgery.

Methods: To achieve this, we iteratively developed three prototypes that overcome the abdominal wall as a sound barrier and can be used with standard trocars. We evaluated them in terms of clinical applicability and sound transmission quality. Furthermore, the applicability of each prototype for sound classification based on machine learning was evaluated.

Results: Our prototypes for recording airborne sound from the intraperitoneal cavity represent a promising solution suitable for real-world clinical usage. All three prototypes fulfill our requirements in terms of clinical applicability (i.e., air-tightness, invasiveness, sterility) and show promising results regarding their acoustic characteristics and the associated results on ML-based sound classification.

Conclusion: In summary, our prototypes for capturing acoustic information during laparoscopic surgeries integrate seamlessly with existing procedures and have the potential to augment the surgeon's perception. This advancement could change how surgeons interact with and understand the surgical field.
Audio-based event detection in the operating room
Jonas Fuchtmann, Thomas Riedel, Maximilian Berlet, Alissa Jell, Luca Wegener, Lars Wagner, Simone Graf, Dirk Wilhelm, Daniel Ostler-Mildner
International Journal of Computer Assisted Radiology and Surgery, pp. 2381-2387, published 2024-12-01. DOI: 10.1007/s11548-024-03211-1. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11607025/pdf/

Purpose: Even though workflow analysis in the operating room has come a long way, current systems are still limited to research. In the quest for a robust, universal setup, hardly any attention has been given to the dimension of audio, despite its numerous advantages such as low cost, independence from location and line of sight, and little required processing power.

Methods: We present an approach for audio-based event detection that relies solely on two microphones capturing the sound in the operating room. A new data set was created, with over 63 h of audio recorded and annotated at the University Hospital rechts der Isar. Sound files were labeled, preprocessed, augmented, and subsequently converted to log-mel spectrograms that served as visual input for event classification using pretrained convolutional neural networks.

Results: Comparing multiple architectures, we were able to show that even lightweight models, such as MobileNet, can already provide promising results. Data augmentation additionally improved the classification of 11 defined classes, including, inter alia, different types of coagulation, operating table movements, as well as an idle class. With the newly created audio data set, an overall accuracy of 90%, a precision of 91%, and an F1-score of 91% were achieved, demonstrating the feasibility of audio-based event recognition in the operating room.

Conclusion: With this first proof of concept, we demonstrated that audio events can serve as a meaningful source of information that goes beyond spoken language and can easily be integrated into future workflow recognition pipelines using computationally inexpensive architectures.
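The accuracy, precision, and F1 figures above are standard multi-class metrics. A self-contained sketch of how they can be computed from paired label lists follows; it macro-averages over classes, which is a common choice but may differ from the averaging the authors used.

```python
from collections import Counter

def classification_metrics(y_true, y_pred):
    """Overall accuracy plus macro-averaged precision and F1 score."""
    labels = sorted(set(y_true) | set(y_pred))
    pairs = Counter(zip(y_true, y_pred))  # confusion counts keyed by (true, pred)
    precisions, f1s = [], []
    for c in labels:
        tp = pairs[(c, c)]
        fp = sum(v for (t, p), v in pairs.items() if p == c and t != c)
        fn = sum(v for (t, p), v in pairs.items() if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        f1s.append(f1)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return accuracy, sum(precisions) / len(labels), sum(f1s) / len(labels)
```

With an 11-class problem that includes an idle class, per-class (macro) averaging matters: a model that only ever predicts "idle" can still score high plain accuracy while its macro precision and F1 collapse.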
Evaluation of augmented reality training for a navigation device used for CT-guided needle placement
T Stauffer, Q Lohmeyer, S Melamed, A Uhde, R Hostettler, S Wetzel, M Meboldt
International Journal of Computer Assisted Radiology and Surgery, pp. 2411-2419, published 2024-12-01. DOI: 10.1007/s11548-024-03112-3. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11607048/pdf/

Purpose: Numerous navigation devices for percutaneous, CT-guided interventions exist and, owing to their advantages, are increasingly integrated into the clinical workflow. However, effective training methods to ensure safe usage are still lacking. This study compares the potential of an augmented reality (AR) training application with conventional instructions for the Cube Navigation System (CNS), hypothesizing that AR enhances training and thereby leads to safer clinical usage.

Methods: An AR tablet app was developed to train users in performing punctures with the CNS. In a study, 34 medical students were divided into two groups: one trained with the AR app, while the other used conventional instructions. After training, each participant executed 6 punctures on a phantom (204 in total) following a standardized protocol to identify and measure two potential CNS procedural user errors: (1) missing the specified coordinates and (2) altering the needle trajectory during puncture. Training performance, based on training time and the occurrence of procedural errors, as well as User Experience Questionnaire (UEQ) scores, was compared between the groups.

Results: Training duration was similar between the groups. However, the AR-trained participants showed a 55.1% reduced frequency of the first procedural error (p > 0.05) and a 35.1% reduced extent of the second procedural error (p < 0.01) compared to the conventionally trained participants. UEQ scores favored the AR training in five of six categories (p < 0.05).

Conclusion: The AR app enhanced training performance and user experience over traditional methods. This suggests the potential of AR training for navigation devices like the CNS, potentially increasing their safety and ultimately improving outcomes in percutaneous needle placements.
HybGrip: a synergistic hybrid gripper for enhanced robotic surgical instrument grasping
Jorge Badilla-Solórzano, Sontje Ihler, Thomas Seel
International Journal of Computer Assisted Radiology and Surgery, pp. 2363-2370, published 2024-12-01. DOI: 10.1007/s11548-024-03245-5. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11607091/pdf/

Purpose: A fundamental task of a robotic scrub nurse is handling surgical instruments. Thus, a gripper capable of consistently grasping a wide variety of tools is essential. We introduce a novel gripper that combines granular jamming and pinching technologies to achieve a synergistic improvement in surgical instrument grasping.

Methods: A reliable hybrid gripper is constructed by integrating a pinching mechanism with a standard granular jamming gripper, achieving enhanced granular interlocking. For our experiments, the prototype is affixed to the end-effector of a collaborative robot. A novel grasping strategy is proposed and used to evaluate the robustness and performance of the prototype on 18 different surgical tools with diverse geometries.

Results: The integration of the pinching mechanism significantly enhances grasping performance compared with standard granular jamming grippers, with a success rate above 98%. Combining our gripper with an underlying grid, i.e., a complementary device placed beneath the instruments, further enhances robustness and performance.

Conclusion: Our prototype's performance in surgical instrument grasping stands on par with, if not surpasses, that of comparable contemporary studies, ensuring its competitiveness. Our gripper proves to be robust, cost-effective, and simple, requiring no instrument-specific grasping strategies. Future research will focus on addressing the sterilizability of the prototype and assessing the viability of the introduced grid for intra-operative use.
Non-rigid scene reconstruction of deformable soft tissue with monocular endoscopy in minimally invasive surgery
Enpeng Wang, Yueang Liu, Jiangchang Xu, Xiaojun Chen
International Journal of Computer Assisted Radiology and Surgery, pp. 2433-2443, published 2024-12-01. DOI: 10.1007/s11548-024-03149-4

Purpose: Image-guided surgery has demonstrated its ability to improve the precision and safety of minimally invasive surgery (MIS). Non-rigid scene reconstruction remains a challenge for image-guided systems due to uniform texture, smoke, instrument occlusion, and similar factors.

Methods: In this paper, we introduce an algorithm for 3D reconstruction of non-rigid surgical scenes. The proposed method comprises two main components. First, the front end performs the initial reconstruction of 3D information for deformable soft tissues using an embedded deformation graph (EDG) based on dual quaternions, enabling reconstruction without prior knowledge of the target. Second, the EDG is integrated with isometric non-rigid structure from motion (Iso-NRSFM) to facilitate centralized optimization of the observed map points and camera motion across different time instances in deformable scenes.

Results: For quantitative evaluation, we conducted comparative experiments on both synthetic and publicly available datasets against the state-of-the-art 3D reconstruction method DefSLAM. The results show that our method achieved a maximum reduction of 1.6 mm in average reconstruction error compared to DefSLAM across all datasets. Additionally, qualitative experiments were performed on video scene datasets involving surgical instrument occlusions.

Conclusion: Our method outperformed DefSLAM on both synthetic and public datasets, demonstrating its robustness and accuracy in the reconstruction of soft tissues in dynamic surgical scenes. This success highlights the potential clinical application of our method in providing surgeons with critical shape and depth information for MIS.
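An "average reconstruction error" like the one reported above is typically computed as the mean Euclidean distance between corresponding reconstructed and reference 3D points. A minimal sketch of that evaluation step, with arbitrary point values not taken from the paper:

```python
import math

def mean_reconstruction_error(points_est, points_ref):
    """Mean Euclidean distance between corresponding 3D points.

    points_est, points_ref: equal-length sequences of (x, y, z) tuples,
    assumed already aligned and in the same units (e.g., mm).
    """
    dists = [math.dist(p, q) for p, q in zip(points_est, points_ref)]
    return sum(dists) / len(dists)
```

Real evaluations first align the reconstruction to the reference (and may use point-to-surface rather than point-to-point distances), but the averaging itself is this simple.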
Deep learning to predict risk of lateral skull base cerebrospinal fluid leak or encephalocele
Steven D Curry, Kieran S Boochoon, Geoffrey C Casazza, Daniel L Surdell, Justin A Cramer
International Journal of Computer Assisted Radiology and Surgery, pp. 2453-2461, published 2024-12-01. DOI: 10.1007/s11548-024-03259-z

Purpose: Skull base features, including increased foramen ovale (FO) cross-sectional area, are associated with lateral skull base spontaneous cerebrospinal fluid (sCSF) leak and encephalocele. Manual measurement requires skill in interpreting imaging studies and is time-consuming. The goal of this study was to develop a fully automated deep learning method for FO segmentation and to determine its predictive value in identifying patients with sCSF leak or encephalocele.

Methods: In a retrospective cohort study at a tertiary care academic hospital, 34 adults with lateral skull base sCSF leak or encephalocele were compared with 815 control patients from 2013-2021. A convolutional neural network (CNN) was constructed for image segmentation of axial computed tomography (CT) studies. Predicted FO segmentations were compared to manual segmentations, and receiver operating characteristic (ROC) curves were constructed.

Results: 295 CTs were used for training and validation of the CNN. A separate dataset of 554 control CTs was matched 5:1 on age and sex with the sCSF leak/encephalocele group. The mean Dice score was 0.81. The sCSF leak/encephalocele group had a greater mean (SD) FO cross-sectional area than the control group: 29.0 (7.7) mm² versus 24.3 (7.6) mm² (P = .002, 95% confidence interval 0.02-0.08). The area under the ROC curve was 0.69.

Conclusion: CNNs can be used to segment the cross-sectional area of the FO accurately and efficiently. Used together with other predictors, this method could form part of a clinical tool to predict the risk of sCSF leak or encephalocele.
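The Dice score of 0.81 reported above measures overlap between the predicted and manual segmentations. A minimal sketch of the coefficient on flat binary masks (the mask values below are illustrative only):

```python
def dice_score(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks.

    masks: equal-length flat sequences of 0/1 (a segmentation unrolled
    into one dimension). Two empty masks are treated as a perfect match.
    """
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * intersection / total if total else 1.0
```

Dice weights the intersection twice relative to the combined mask sizes, so it rewards overlap more generously than intersection-over-union; 1.0 is a perfect match and 0.0 is no overlap.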
Analysis of the implementation of a circuit for intra-operative superposition and comparison of the surgical outcomes using ICBCT in maxillofacial surgery
Núria Adell-Gómez, Adaia Valls-Ontañón, Albert Malet-Contreras, Andrés García-Piñeiro, Marta Gómez-Chiari, Arnau Valls-Esteve, Lucas Krauel, Josep Rubio-Palau
International Journal of Computer Assisted Radiology and Surgery, pp. 2463-2470, published 2024-12-01. DOI: 10.1007/s11548-024-03196-x

Purpose: This paper describes a novel circuit for intraoperative analysis with intraoperative cone-beam CT (ICBCT) in maxillofacial surgery. The aim is to establish guidelines, define indications, and analyze the implementation of the circuit for intraoperative comparison of surgical outcomes against 3D virtual planning in maxillofacial surgery.

Methods: The study included 150 maxillofacial surgical procedures. Intraoperative actions involved fluoroscopy localization, intraoperative CBCT acquisition, segmentation, and superimposition, among other steps. The surgical time added by intraoperative superposition was measured, including the time required for ICBCT positioning and acquisition, image segmentation, and comparison of 3D surfaces from the surgical planning.

Results: Successful intraoperative comparison was achieved in all 150 cases, enabling surgeons to detect and address modifications before concluding the surgery. In total, 26 patients (17.33%) required intraoperative revisions, with 11 cases (7.33%) needing major surgical revisions. On average, the additional surgical time with this circuit implementation was 10.66 ± 3.03 min (n = 22).

Conclusion: Our results demonstrate the potential for performing intraoperative surgical revision, allowing immediate evaluation, enhancing surgical outcomes, and reducing the need for re-interventions.