Latest Articles from the International Journal of Computer Assisted Radiology and Surgery

Multi-modal dataset creation for federated learning with DICOM-structured reports.
IF 2.3 · Tier 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-03-01 Epub Date: 2025-02-03 DOI: 10.1007/s11548-025-03327-y
Malte Tölle, Lukas Burger, Halvar Kelm, Florian André, Peter Bannas, Gerhard Diller, Norbert Frey, Philipp Garthe, Stefan Groß, Anja Hennemuth, Lars Kaderali, Nina Krüger, Andreas Leha, Simon Martin, Alexander Meyer, Eike Nagel, Stefan Orwat, Clemens Scherer, Moritz Seiffert, Jan Moritz Seliger, Stefan Simm, Tim Friede, Tim Seidler, Sandy Engelhardt
Purpose: Federated training is often challenging on heterogeneous datasets due to divergent data storage options, inconsistent naming schemes, varied annotation procedures, and disparities in label quality. This is particularly evident in emerging multi-modal learning paradigms, where dataset harmonization, including a uniform data representation and filtering options, is of paramount importance.
Methods: DICOM structured reports enable the standardized linkage of arbitrary information beyond the imaging domain and can be used within Python deep learning pipelines with highdicom. Building on this, we developed an open platform for data integration with interactive filtering capabilities, thereby simplifying the creation of patient cohorts across several sites with consistent multi-modal data.
Results: In this study, we extend our prior work by showing its applicability to more, and more divergent, data types, and by streamlining datasets for federated training within an established consortium of eight university hospitals in Germany. We demonstrate its concurrent filtering ability by creating harmonized multi-modal datasets across all locations for predicting the outcome after minimally invasive heart valve replacement. The data include imaging and waveform data (i.e., computed tomography images, electrocardiography scans), annotations (i.e., calcification segmentations and pointsets), and metadata (i.e., prostheses and pacemaker dependency).
Conclusion: Structured reports bridge the traditional gap between imaging systems and information systems. Using the inherent DICOM reference system, arbitrary data types can be queried concurrently to create meaningful cohorts for multi-centric data analysis. The graphical interface as well as example structured report templates are available at https://github.com/Cardio-AI/fl-multi-modal-dataset-creation.
Pages: 485-495
Citations: 0
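The core mechanism the paper relies on is that a DICOM structured report links arbitrary data objects through SOP instance references. The sketch below is schematic and is not the highdicom API; the dictionary keys mirror DICOM attribute names and the report content is invented for illustration. It walks a nested SR-like content tree and collects every referenced instance UID, the basic operation behind cross-modal cohort queries:

```python
def collect_referenced_uids(node):
    """Recursively gather referenced SOP instance UIDs from an SR-like content tree."""
    uids = []
    if "ReferencedSOPInstanceUID" in node:
        uids.append(node["ReferencedSOPInstanceUID"])
    for child in node.get("ContentSequence", []):
        uids.extend(collect_referenced_uids(child))
    return uids

# A toy report: a root container referencing a CT image and, nested one level
# deeper, a calcification segmentation (all names and UIDs are hypothetical).
report = {
    "ConceptName": "Heart Valve Intervention Report",
    "ContentSequence": [
        {"ConceptName": "CT Image", "ReferencedSOPInstanceUID": "1.2.3.100"},
        {
            "ConceptName": "Annotations",
            "ContentSequence": [
                {"ConceptName": "Calcification Segmentation",
                 "ReferencedSOPInstanceUID": "1.2.3.200"},
            ],
        },
    ],
}

print(collect_referenced_uids(report))  # -> ['1.2.3.100', '1.2.3.200']
```

In a real pipeline the same traversal would run over `pydicom`/highdicom content items rather than plain dictionaries, and the collected UIDs would drive the retrieval of the linked series.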
Automated assessment of non-technical skills by heart-rate data.
IF 2.3 · Tier 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-03-01 Epub Date: 2024-11-04 DOI: 10.1007/s11548-024-03287-9
Arnaud Huaulmé, Alexandre Tronchot, Hervé Thomazeau, Pierre Jannin
Purpose: Observer-based scoring systems, or automatic methods based on feature or kinematic data analysis, are used to perform surgical skill assessment. These methods have several limitations: observer-based methods are subjective, and automatic ones mainly focus on technical skills, or use data strongly related to technical skills to assess non-technical skills. In this study, we explore the use of heart-rate data, which is not directly related to technical skills, to predict the values of an observer-based scoring system using random forest regressors.
Methods: Heart-rate data from 35 junior resident orthopedic surgeons were collected during the evaluation of a meniscectomy performed on a bench-top simulator. Each participant was evaluated by two assessors using the Arthroscopic Surgical Skill Evaluation Tool (ASSET) score. A preprocessing stage on the heart-rate data, composed of threshold filtering and detrending, was applied before extracting 41 features. A random forest regressor was then optimized via a randomized-search cross-validation strategy to predict each score component.
Results: The prediction of the partially non-technical components shows promising results, with the best result obtained for the safety component: a mean absolute error of 0.24, corresponding to a mean absolute percentage error of 5.76%. The analysis of feature importance allowed us to determine which features are most related to each ASSET component, and therefore to determine the underlying impact of the sympathetic and parasympathetic nervous systems.
Conclusion: In this preliminary work, a random forest regressor trained on features extracted from heart-rate data could be used for automatic skill assessment, especially for the partially non-technical components. Combined with more traditional data, such as kinematic data, it could help perform accurate automatic skill assessment.
Pages: 561-568
Citations: 0
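The preprocessing described in the abstract (threshold filtering plus detrending, followed by feature extraction) can be sketched in a few lines. The plausibility bounds, the linear detrending model, and the RMSSD feature are assumptions for illustration; the abstract does not specify the exact thresholds or the 41 features used:

```python
import math

def threshold_filter(hr, lo=40.0, hi=220.0):
    """Drop physiologically implausible heart-rate samples (bounds are illustrative)."""
    return [x for x in hr if lo <= x <= hi]

def detrend(hr):
    """Remove the best-fit linear trend (least squares) from a heart-rate series."""
    n = len(hr)
    t_mean = (n - 1) / 2.0
    x_mean = sum(hr) / n
    num = sum((i - t_mean) * (x - x_mean) for i, x in enumerate(hr))
    den = sum((i - t_mean) ** 2 for i in range(n))
    slope = num / den
    return [x - (x_mean + slope * (i - t_mean)) for i, x in enumerate(hr)]

def rmssd(hr):
    """Root mean square of successive differences, a standard heart-rate
    variability feature of the kind a regressor could be trained on."""
    diffs = [b - a for a, b in zip(hr, hr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

A regressor such as scikit-learn's `RandomForestRegressor`, tuned with `RandomizedSearchCV` as in the paper, would then consume a vector of such features per trial.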
Fronto-orbital advancement with patient-specific 3D-printed implants and robot-guided laser osteotomy: an in vitro accuracy assessment.
IF 2.3 · Tier 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-03-01 Epub Date: 2024-12-13 DOI: 10.1007/s11548-024-03298-6
Michaela Maintz, Nora Desan, Neha Sharma, Jörg Beinemann, Michel Beyer, Daniel Seiler, Philipp Honigmann, Jehuda Soleman, Raphael Guzman, Philippe C Cattin, Florian M Thieringer
Purpose: The use of computer-assisted virtual surgical planning (VSP) for craniosynostosis surgery is gaining increasing implementation in the clinics. However, accurately transferring the preoperative planning data to the operating room remains challenging. We introduced and investigated a fully digital workflow to perform fronto-orbital advancement (FOA) surgery using 3D-printed patient-specific implants (PSIs) and cold-ablation robot-guided laser osteotomy. This novel approach eliminates the need for traditional surgical templates while enhancing precision and customization, offering a more streamlined and efficient surgical process.
Methods: Computed tomography data of a patient with craniosynostosis were used to digitally reconstruct the skull and to perform VSP of the FOA. In total, six PSIs per skull were 3D-printed with a medical-grade bioresorbable composite using the Arburg Plastic Freeforming technology. The planned osteotomy paths and the screw holes, including their positions and axis angles, were digitally transferred to the cold-ablation robot-guided osteotome interface. The osteotomies were performed on 3D-printed patient skull models. The implants, osteotomy and final FOA results were scanned and compared to the VSP data.
Results: The osteotomy deviations for the skulls indicated an overall maximum distance of 1.7 mm, a median deviation of 0.44 mm, and a maximum root mean square (RMS) error of 0.67 mm. The deviation of the point-to-point surface comparison of the FOA with the VSP data resulted in a median accuracy of 1.27 mm. Accessing the orbital cavity with the laser remained challenging.
Conclusion: This in vitro study showcases a novel FOA technique by effectively combining robot-guided laser osteotomy with 3D-printed patient-specific implants, eliminating the need for surgical templates and achieving high accuracy in bone cutting and positioning. The workflow holds promise for reducing preoperative planning time and increasing surgical efficiency. Further studies on bone tissue are required to validate the safety and effectiveness of this approach, especially in addressing the challenges of pediatric craniofacial surgery.
Pages: 513-524
Citations: 0
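The accuracy figures reported above (maximum distance, median deviation, RMS error) all follow directly from a set of point-wise deviations. A minimal helper, assuming the deviations are already available as distances in millimetres:

```python
import math
import statistics

def deviation_summary(distances):
    """Summarize point-wise deviations the way the accuracy assessment reports
    them: maximum distance, median deviation, and root mean square (RMS) error."""
    rms = math.sqrt(sum(d * d for d in distances) / len(distances))
    return {"max": max(distances),
            "median": statistics.median(distances),
            "rms": rms}

# Example with made-up deviations (not data from the study):
print(deviation_summary([0.2, 0.4, 0.5, 1.7]))
```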
Stereo reconstruction from microscopic images for computer-assisted ophthalmic surgery.
IF 2.3 · Tier 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-03-01 Epub Date: 2024-06-04 DOI: 10.1007/s11548-024-03177-0
Rebekka Peter, Sofia Moreira, Eleonora Tagliabue, Matthias Hillenbrand, Rita G Nunes, Franziska Mathis-Ullrich
Purpose: This work presents a novel platform for stereo reconstruction in anterior segment ophthalmic surgery to enable enhanced scene understanding, especially depth perception, for advanced computer-assisted eye surgery by effectively addressing the lack of texture and corneal distortion artifacts in the surgical scene.
Methods: The proposed platform for stereo reconstruction uses a two-step approach: generating a sparse 3D point cloud from microscopic images, then deriving a dense 3D representation by fitting surfaces onto the point cloud while considering geometrical priors of the eye anatomy. We incorporate a pre-processing step to rectify distortion artifacts induced by the cornea's high refractive power, achieved by aligning a 3D phenotypical cornea geometry model to the images and computing a distortion map using ray tracing.
Results: The accuracy of 3D reconstruction is evaluated on stereo microscopic images of ex vivo porcine eyes, rigid phantom eyes, and synthetic photo-realistic images. The results demonstrate the potential of the proposed platform to enhance scene understanding via an accurate 3D representation of the eye and enable the estimation of instrument-to-layer distances in porcine eyes with a mean average error of 190 μm, comparable to the scale of surgeons' hand tremor.
Conclusion: This work marks a significant advancement in stereo reconstruction for ophthalmic surgery by addressing corneal distortions, a previously often overlooked aspect in such surgical scenarios. This could improve surgical outcomes by allowing for intra-operative computer assistance, e.g., in the form of virtual distance sensors.
Pages: 605-612
Citations: 0
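The distortion map mentioned above is computed by ray tracing through the cornea's refracting surface. The elementary step of such a tracer is vector-form refraction by Snell's law, sketched below; this is only the underlying physics, not the authors' cornea model or pipeline, and the index ratio in the test value is the commonly cited corneal index of roughly 1.376:

```python
import math

def refract(d, n, eta):
    """Refract a unit direction d at a surface with unit normal n (pointing
    against d), where eta = n1/n2 is the ratio of refractive indices.
    Returns the refracted direction, or None on total internal reflection."""
    cos_i = -sum(a * b for a, b in zip(d, n))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection
    coef = eta * cos_i - math.sqrt(k)
    return tuple(eta * di + coef * ni for di, ni in zip(d, n))

# A ray entering the cornea from air (eta = 1.0 / 1.376) bends toward the normal:
d_in = (math.sin(math.radians(30)), 0.0, -math.cos(math.radians(30)))
print(refract(d_in, (0.0, 0.0, 1.0), 1.0 / 1.376))
```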
Semi-automatic robotic puncture system based on deformable soft tissue point cloud registration.
IF 2.3 · Tier 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-03-01 Epub Date: 2024-10-26 DOI: 10.1007/s11548-024-03247-3
Bo Zhang, Kui Chen, Yuhang Yao, Bo Wu, Qiang Li, Zheming Zhang, Peihua Fan, Wei Wang, Manxia Lin, Masakatsu G Fujie
Purpose: Traditional surgical puncture robot systems based on computed tomography (CT) and infrared camera guidance have inherent disadvantages for puncture of deformable soft tissues such as the liver. Liver movement and deformation caused by breathing are difficult to accurately assess and compensate for with current technical solutions. We propose a semi-automatic robotic puncture system based on real-time ultrasound images to solve this problem.
Methods: Real-time ultrasound images and their spatial position information are obtained by the robot in this system. By recognizing the target tissue in these ultrasound images and applying a reconstruction algorithm, a real-time 3D ultrasound tissue point cloud is constructed. The point cloud of the target tissue in the CT image is obtained using the developed software. The two point clouds are then registered through a feature-point-based registration method. The puncture target is automatically positioned, after which the robot quickly moves the puncture guide mechanism to the puncture site and guides the puncture. The process takes only tens of seconds from the start of image acquisition to the completion of needle insertion. The patient's breathing can be temporarily paused by a ventilator, and the patient's breathing state does not need to match that during the CT scan.
Results: The average operation time of 24 phantom experiments was 64.5 s, and the average error between the needle tip and the target point after puncture was 0.8 mm. Two animal puncture surgeries were performed; the puncture errors of these two experiments were 1.76 mm and 1.81 mm, respectively.
Conclusion: The robot system can effectively carry out liver tissue puncture surgery, with a success rate of 100% in both the phantom and animal experiments. The puncture robot system offers high puncture accuracy, short operation time, and great clinical value.
Pages: 525-534
Citations: 0
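The feature-point registration at the heart of such a system estimates a rigid transform between matched point sets. As a simplified, self-contained illustration (2D instead of 3D, with correspondences assumed known — not the authors' algorithm), the closed-form Procrustes solution is:

```python
import math

def register_2d(src, dst):
    """Closed-form 2D rigid registration (rotation + translation) between paired
    feature points: returns (theta, (tx, ty)) with dst ≈ R(theta) · src + t."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        # center both point sets and accumulate the cross-covariance terms
        ax -= csx; ay -= csy
        bx -= cdx; by -= cdy
        num += ax * by - ay * bx
        den += ax * bx + ay * by
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    return theta, (cdx - (c * csx - s * csy), cdy - (s * csx + c * csy))
```

In 3D the same idea uses the Kabsch/SVD solution; deformable tissue additionally requires the non-rigid extensions the paper addresses.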
Virtual airways heatmaps to optimize point of entry location in lung biopsy planning systems.
IF 2.3 · Tier 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-03-01 Epub Date: 2024-11-29 DOI: 10.1007/s11548-024-03292-y
Debora Gil, Pere Lloret, Marta Diez-Ferrer, Carles Sanchez
Purpose: We present a virtual model to optimize the point of entry (POE) in lung biopsy planning systems. Our model computes the quality of a biopsy sample taken from a potential POE, taking into account the margin of error that arises from discrepancies between the orientation in the planning simulation and the actual orientation during the operation. Additionally, the study examines the impact of the characteristics of the lesion.
Methods: The quality of the biopsy is given by a heatmap projected onto the skeleton of a patient-specific model of the airways. The skeleton provides a 3D representation of the airway structure, while the heatmap intensity represents the potential amount of tissue that could be extracted from each POE. This amount of tissue is determined by the intersection of the lesion with a cone that represents the uncertainty area in the introduction of biopsy instruments. The cone, lesion, and skeleton are modelled as graphical objects that define a 3D scene of the intervention.
Results: We simulated different settings of the intervention scene from a single anatomy extracted from a CT scan and two lesions with regular and irregular shapes. The different scenarios were simulated by systematic rotation of each lesion placed at different distances from the airways. Analysis of the heatmaps for the different settings shows a strong impact of lesion orientation for the irregular shape and of distance for both shapes.
Conclusion: The proposed heatmaps help to visually assess the optimal POE and identify whether multiple optimal POEs exist in different zones of the bronchi. They also allow us to model the maximum allowable error in navigation systems and to study which variables have the greatest influence on the success of the operation. Additionally, they help determine at what point this influence could potentially jeopardize the operation.
Pages: 591-596
Citations: 0
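The heatmap intensity described above is the amount of lesion tissue inside an uncertainty cone anchored at a candidate POE. A minimal sketch of that geometric test, approximating the lesion by sample points rather than the paper's graphical-object intersection:

```python
import math

def inside_cone(p, apex, axis, half_angle, length):
    """Test whether point p lies inside a finite cone (apex, unit axis,
    half-angle in radians, length) — the uncertainty region around a
    planned needle orientation."""
    v = tuple(pi - ai for pi, ai in zip(p, apex))
    h = sum(vi * ui for vi, ui in zip(v, axis))  # height along the axis
    if h < 0.0 or h > length:
        return False
    radial = math.sqrt(max(0.0, sum(vi * vi for vi in v) - h * h))
    return radial <= h * math.tan(half_angle)

def overlap_fraction(lesion_points, apex, axis, half_angle, length):
    """Fraction of lesion sample points reachable within the uncertainty
    cone — the quantity a heatmap entry would encode for one POE."""
    hits = sum(inside_cone(p, apex, axis, half_angle, length)
               for p in lesion_points)
    return hits / len(lesion_points)
```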
Adaptive neighborhood triplet loss: enhanced segmentation of dermoscopy datasets by mining pixel information.
IF 2.3 · Tier 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-03-01 Epub Date: 2024-08-02 DOI: 10.1007/s11548-024-03241-9
Mohan Xu, Lena Wiese
Purpose: The integration of deep learning in image segmentation technology markedly improves the automation capabilities of medical diagnostic systems, reducing the dependence on the clinical expertise of medical professionals. However, the accuracy of image segmentation is still impacted by various interference factors encountered during image acquisition.
Methods: To address this challenge, this paper proposes a loss function designed to mine specific pixel information that changes dynamically during the training process. Based on the triplet concept, this dynamic change is leveraged to drive the predicted boundaries of images closer to the real boundaries.
Results: Extensive experiments on the PH2 and ISIC2017 dermoscopy datasets validate that our proposed loss function overcomes the limitations of traditional triplet loss methods in image segmentation applications. The loss function enhances the Jaccard indices of neural networks by 2.42% and 2.21% for PH2 and ISIC2017, respectively, and networks utilizing it generally surpass those that do not in segmentation performance.
Conclusion: This work proposes a loss function that deeply mines the information of specific pixels without incurring additional training costs, significantly improving the automation of neural networks in image segmentation tasks. The loss function adapts to dermoscopic images of varying quality and demonstrates higher effectiveness and robustness compared to other boundary loss functions, making it suitable for image segmentation tasks across various neural networks.
Pages: 453-463
Citations: 0
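For reference, the classic triplet loss that the paper builds on penalizes an anchor that is not at least a margin closer to its positive than to its negative; the adaptive neighborhood variant proposed in the paper is not reproduced here:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: zero once the anchor is at least `margin`
    closer to the positive than to the negative, linear otherwise."""
    return max(0.0, euclidean(anchor, positive)
                    - euclidean(anchor, negative) + margin)
```

In the segmentation setting, anchors, positives, and negatives are pixel embeddings sampled around the predicted boundary rather than whole-image embeddings.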
3d freehand ultrasound reconstruction by reference-based point cloud registration.
IF 2.3 · Tier 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-03-01 Epub Date: 2025-01-07 DOI: 10.1007/s11548-024-03280-2
Christoph Großbröhmer, Lasse Hansen, Jürgen Lichtenstein, Ludger Tüshaus, Mattias P Heinrich
Purpose: This study aims to address the challenging estimation of trajectories from freehand ultrasound examinations by means of registration of automatically generated surface points. Current approaches to inter-sweep point cloud registration can be improved by incorporating heatmap predictions, but practical challenges such as label sparsity or only partially overlapping coverage of target structures arise under realistic examination conditions.
Methods: We propose a pipeline comprising three stages: (1) utilizing a Free Point Transformer for coarse pre-registration, (2) introducing HeatReg for further refinement using support point clouds, and (3) employing instance optimization to enhance predicted displacements. Key techniques include expanding point sets with support points derived from prior knowledge and leveraging gradient keypoints. We evaluate our method on a large set of 42 forearm ultrasound sweeps with optical ground-truth tracking and investigate multiple ablations.
Results: The proposed pipeline effectively registers freehand intra-patient ultrasound sweeps. Combining the Free Point Transformer with support-point-enhanced HeatReg outperforms the FPT baseline by a mean directed surface distance of 0.96 mm (40%). Subsequent refinement using Adam instance optimization and DiVRoC further improves registration accuracy and trajectory estimation.
Conclusion: The proposed techniques enable and improve the application of point cloud registration as a basis for freehand ultrasound reconstruction. Our results demonstrate significant theoretical and practical advantages of heatmap incorporation and multi-stage model predictions.
Pages: 475-484
Citations: 0
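The evaluation metric quoted above, the mean directed surface distance, is the average nearest-neighbor distance from one point set to the other (asymmetric by construction). A brute-force sketch, adequate for small point sets:

```python
import math

def mean_directed_surface_distance(points_a, points_b):
    """Mean, over points in A, of the distance to the closest point in B.
    Note the asymmetry: swapping A and B generally changes the value."""
    def dist(p, q):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))
    return sum(min(dist(p, q) for q in points_b)
               for p in points_a) / len(points_a)
```

For realistic cloud sizes a k-d tree (e.g. `scipy.spatial.cKDTree`) replaces the O(|A|·|B|) inner loop.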
Artificial intelligence facilitates the potential of simulator training: An innovative laparoscopic surgical skill validation system using artificial intelligence technology.
IF 2.3 · Tier 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-03-01 Epub Date: 2024-08-19 DOI: 10.1007/s11548-024-03253-5
Atsuhisa Fukuta, Shogo Yamashita, Junnosuke Maniwa, Akihiko Tamaki, Takuya Kondo, Naonori Kawakubo, Kouji Nagata, Toshiharu Matsuura, Tatsuro Tajiri
Purpose: The development of innovative solutions, such as simulator training and artificial intelligence (AI)-powered tutoring systems, has significantly changed the environment in which surgical trainees receive the intraoperative instruction necessary for skill acquisition. In this study, we developed a new objective assessment system using AI for forceps manipulation in a surgical training simulator.
Methods: Laparoscopic exercises were recorded using an iPad®, which provided top and side views. Top-view videos were used for AI learning of the forceps trajectory. Side-view videos were used as supplementary information to assess the situation. We used an AI-based posture estimation method, DeepLabCut (DLC), to recognize and positionally measure the forceps in the operating field. Tracking accuracy was quantitatively evaluated by calculating the pixel differences between the annotation points and the points predicted by the AI model. Tracking stability at specified key points was verified to assess the AI model.
Results: To evaluate tracking accuracy quantitatively, we selected a random sample comprising 5% of the frames not used for AI training from the complete set of video frames. Comparing the AI-detected positions with the correct positions yielded an average pixel discrepancy of 9.2. The qualitative evaluation of tracking stability was good at the forceps hinge; however, forceps tip tracking was unstable during rotation.
Conclusion: The AI-based forceps tracking system can visualize and evaluate laparoscopic surgical skills. Improvements to the proposed system and AI self-learning are expected to enable it to distinguish the techniques of expert and novice surgeons accurately. This system is a useful tool for surgeon training and assessment.
Pages: 597-603
Citations: 0
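The tracking-accuracy measure described above, the average pixel discrepancy, is simply the mean Euclidean distance between predicted and annotated keypoint positions; a minimal sketch:

```python
import math

def mean_pixel_discrepancy(predicted, annotated):
    """Average Euclidean distance (in pixels) between predicted and
    annotated keypoint positions, one pair per evaluated frame."""
    dists = [math.sqrt((px - ax) ** 2 + (py - ay) ** 2)
             for (px, py), (ax, ay) in zip(predicted, annotated)]
    return sum(dists) / len(dists)
```

In a DeepLabCut workflow the predicted coordinates would come from the model's output table and the annotated ones from the labeled evaluation frames.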
Transferable situation recognition system for scenario-independent context-aware surgical assistance systems: a proof of concept.
IF 2.3 · Tier 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-03-01 Epub Date: 2024-11-27 DOI: 10.1007/s11548-024-03283-z
D Junger, C Kücherer, B Hirt, O Burgert
Purpose: Surgical interventions and the intraoperative environment can vary greatly. A system that reliably recognizes the situation in the operating room should therefore be flexibly applicable to different surgical settings. To achieve this, transferability should be a focus during system design and development. In this paper, we demonstrate the feasibility of a transferable, scenario-independent situation recognition system (SRS) through a definition and evaluation based on non-functional requirements.
Methods: Based on a high-level concept for a transferable SRS, a proof-of-concept implementation was demonstrated using scenarios. The architecture was evaluated with a focus on the non-functional requirements of compatibility, maintainability, and portability. Moreover, transferability aspects beyond the requirements, such as the effort to cover new scenarios, were discussed in a subsequent argumentative evaluation.
Results: The evaluation demonstrated the development of an SRS that can be applied to various scenarios. Furthermore, the investigation of the transferability to other settings highlighted the system's characteristics regarding configurability, interchangeability, and expandability. The components can be optimized step by step to realize a versatile and efficient situation recognition that can be easily adapted to different scenarios.
Conclusion: The prototype provides a framework for scenario-independent situation recognition, suggesting greater applicability and transferability to different surgical settings. For transfer into clinical routine, the system's modules need to be evolved, further transferability challenges addressed, and comprehensive scenarios integrated.
Pages: 579-590
Citations: 0