{"title":"Diagnosing Helicobacter pylori using autoencoders and limited annotations through anomalous staining patterns in IHC whole slide images.","authors":"Pau Cano, Eva Musulen, Debora Gil","doi":"10.1007/s11548-024-03313-w","DOIUrl":"https://doi.org/10.1007/s11548-024-03313-w","url":null,"abstract":"<p><strong>Purpose: </strong>This work addresses the detection of Helicobacter pylori (H. pylori) in histological images with immunohistochemical staining. This analysis is a time-demanding task, currently done by an expert pathologist that visually inspects the samples. Given the effort required to localize the pathogen in images, a limited number of annotations might be available in an initial setting. Our goal is to design an approach that, using a limited set of annotations, is capable of obtaining results good enough to be used as a support tool.</p><p><strong>Methods: </strong>We propose to use autoencoders to learn the latent patterns of healthy patches and formulate a specific measure of the reconstruction error of the image in HSV space. ROC analysis is used to set the optimal threshold of this measure and the percentage of positive patches in a sample that determines the presence of H. pylori.</p><p><strong>Results: </strong>Our method has been tested on an own database of 245 whole slide images (WSI) having 117 cases without H. pylori and different density of the bacteria in the remaining ones. The database has 1211 annotated patches, with only 163 positive patches. This dataset of positive annotations was used to train a baseline thresholding and an SVM using the features of a pre-trained RedNet-18 and ViT models. A 10-fold cross-validation shows that our method has better performance with 91% accuracy, 86% sensitivity, 96% specificity and 0.97 AUC in the diagnosis of H. 
pylori .</p><p><strong>Conclusion: </strong>Unlike classification approaches, our shallow autoencoder with threshold adaptation for the detection of anomalous staining is able to achieve competitive results with a limited set of annotated data. This initial approach is good enough to be used as a guide for fast annotation of infected patches.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142958543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
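The decision rule in the abstract above reduces to two thresholds: a per-patch cutoff on the reconstruction error and a cutoff on the fraction of positive patches per sample. A minimal sketch of that rule (function name and all numbers are invented; the trained autoencoder and the ROC-derived thresholds are assumed to exist already):

```python
import numpy as np

def diagnose_sample(patch_errors, patch_threshold, positive_fraction_cutoff):
    """Flag a WSI sample as H. pylori-positive when the share of anomalous
    patches exceeds a cutoff (both thresholds assumed to come from ROC
    analysis on a validation set, as described in the abstract)."""
    positive = np.asarray(patch_errors, float) > patch_threshold
    fraction = float(positive.mean())
    return bool(fraction >= positive_fraction_cutoff), fraction

# Toy per-patch reconstruction errors: mostly healthy, two anomalous patches.
errors = np.array([0.02, 0.03, 0.01, 0.25, 0.30, 0.02])
is_positive, frac = diagnose_sample(errors, patch_threshold=0.1,
                                    positive_fraction_cutoff=0.2)
```

In practice both cutoffs would be tuned jointly on annotated slides, trading sensitivity against specificity exactly as the ROC analysis in the paper does.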
Christoph Großbröhmer, Lasse Hansen, Jürgen Lichtenstein, Ludger Tüshaus, Mattias P Heinrich
{"title":"3d freehand ultrasound reconstruction by reference-based point cloud registration.","authors":"Christoph Großbröhmer, Lasse Hansen, Jürgen Lichtenstein, Ludger Tüshaus, Mattias P Heinrich","doi":"10.1007/s11548-024-03280-2","DOIUrl":"https://doi.org/10.1007/s11548-024-03280-2","url":null,"abstract":"<p><strong>Purpose: </strong>This study aims to address the challenging estimation of trajectories from freehand ultrasound examinations by means of registration of automatically generated surface points. Current approaches to inter-sweep point cloud registration can be improved by incorporating heatmap predictions, but practical challenges such as label-sparsity or only partially overlapping coverage of target structures arise when applying realistic examination conditions.</p><p><strong>Methods: </strong>We propose a pipeline comprising three stages: (1) Utilizing a Free Point Transformer for coarse pre-registration, (2) Introducing HeatReg for further refinement using support point clouds, and (3) Employing instance optimization to enhance predicted displacements. Key techniques include expanding point sets with support points derived from prior knowledge and leverage of gradient keypoints. We evaluate our method on a large set of 42 forearm ultrasound sweeps with optical ground-truth tracking and investigate multiple ablations.</p><p><strong>Results: </strong>The proposed pipeline effectively registers free-hand intra-patient ultrasound sweeps. Combining Free Point Transformer with support-point enhanced HeatReg outperforms the FPT baseline by a mean directed surface distance of 0.96 mm (40%). Subsequent refinement using Adam instance optimization and DiVRoC further improves registration accuracy and trajectory estimation.</p><p><strong>Conclusion: </strong>The proposed techniques enable and improve the application of point cloud registration as a basis for freehand ultrasound reconstruction. 
Our results demonstrate significant theoretical and practical advantages of heatmap incorporation and multi-stage model predictions.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142958530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
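Every stage of the pipeline above estimates or refines a rigid transform between point clouds. As a self-contained primitive (not the authors' FPT/HeatReg networks), the closed-form least-squares rigid fit for known correspondences, the Kabsch algorithm, can be sketched as:

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping corresponding Nx3
    point sets src -> dst (Kabsch algorithm via SVD)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Recover a known rotation about z plus a translation.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
moved = pts @ Rz.T + np.array([1.0, -2.0, 0.5])
R, t = kabsch(pts, moved)
err = np.abs(pts @ R.T + t - moved).max()
```

Real sweeps have no known correspondences and only partial overlap, which is exactly why the paper needs learned pre-registration and support points before any rigid fit becomes meaningful.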
Paul Kaftan, Mattias P Heinrich, Lasse Hansen, Volker Rasche, Hans A Kestler, Alexander Bigalke
{"title":"Sparse keypoint segmentation of lung fissures: efficient geometric deep learning for abstracting volumetric images.","authors":"Paul Kaftan, Mattias P Heinrich, Lasse Hansen, Volker Rasche, Hans A Kestler, Alexander Bigalke","doi":"10.1007/s11548-024-03310-z","DOIUrl":"https://doi.org/10.1007/s11548-024-03310-z","url":null,"abstract":"<p><strong>Purpose: </strong>Lung fissure segmentation on CT images often relies on 3D convolutional neural networks (CNNs). However, 3D-CNNs are inefficient for detecting thin structures like the fissures, which make up a tiny fraction of the entire image volume. We propose to make lung fissure segmentation more efficient by using geometric deep learning (GDL) on sparse point clouds.</p><p><strong>Methods: </strong>We abstract image data with sparse keypoint (KP) clouds. We train GDL models to segment the point cloud, comparing three major paradigms of models (PointNets, graph convolutional networks (GCNs), and PointTransformers). From the sparse point segmentations, 3D meshes of the objects are reconstructed to obtain a dense surface. The state-of-the-art Poisson surface reconstruction (PSR) makes up most of the time in our pipeline. Therefore, we propose an efficient point cloud to mesh autoencoder (PC-AE) that deforms a template mesh to fit a point cloud in a single forward pass. Our pipeline is evaluated extensively and compared to the 3D-CNN gold standard nnU-Net on diverse clinical and pathological data.</p><p><strong>Results: </strong>GCNs yield the best trade-off between inference time and accuracy, being <math><mrow><mn>21</mn> <mo>×</mo></mrow> </math> faster with only <math><mrow><mn>1.4</mn> <mo>×</mo></mrow> </math> increased error over the nnU-Net. 
Our PC-AE also achieves a favorable trade-off, being <math><mrow><mn>3</mn> <mo>×</mo></mrow> </math> faster at <math><mrow><mn>1.5</mn> <mo>×</mo></mrow> </math> the error compared to the PSR.</p><p><strong>Conclusion: </strong>We present a KP-based fissure segmentation pipeline that is more efficient than 3D-CNNs and can greatly speed up large-scale analyses. A novel PC-AE for efficient mesh reconstruction from sparse point clouds is introduced, showing promise not only for fissure segmentation. Source code is available on https://github.com/kaftanski/fissure-segmentation-IJCARS.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142958381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
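Abstracting a dense volume into a sparse keypoint cloud is the first step of the pipeline above. As a generic stand-in for the paper's keypoint extraction (the abstract does not specify the detector), farthest point sampling picks a small, well-spread subset of points:

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedy FPS: select k points that are mutually far apart.
    A generic way to sparsify a dense point set; shown only to
    illustrate keypoint-style abstraction, not the paper's detector."""
    pts = np.asarray(points, float)
    n = len(pts)
    chosen = [int(np.random.default_rng(seed).integers(n))]
    dist = np.linalg.norm(pts - pts[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())               # point farthest from the set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(pts - pts[nxt], axis=1))
    return np.array(chosen)

# 10x10 grid of 2D points; pick 4 well-spread representatives.
grid = np.stack(np.meshgrid(np.arange(10), np.arange(10)), -1).reshape(-1, 2)
idx = farthest_point_sampling(grid, 4)
```

The GDL models then operate only on these few coordinates, which is where the 21× speedup over voxel-dense 3D-CNN inference comes from.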
Moujan Saderi, Jaykumar H Patel, Calder D Sheagren, Judit Csőre, Trisha L Roy, Graham A Wright
{"title":"3D CT to 2D X-ray image registration for improved visualization of tibial vessels in endovascular procedures.","authors":"Moujan Saderi, Jaykumar H Patel, Calder D Sheagren, Judit Csőre, Trisha L Roy, Graham A Wright","doi":"10.1007/s11548-024-03302-z","DOIUrl":"https://doi.org/10.1007/s11548-024-03302-z","url":null,"abstract":"<p><strong>Purpose: </strong>During endovascular revascularization interventions for peripheral arterial disease, the standard modality of X-ray fluoroscopy (XRF) used for image guidance is limited in visualizing distal segments of infrapopliteal vessels. To enhance visualization of arteries, an image registration technique was developed to align pre-acquired computed tomography (CT) angiography images and to create fusion images highlighting arteries of interest.</p><p><strong>Methods: </strong>X-ray image metadata capturing the position of the X-ray gantry initializes a multiscale iterative optimization process, which uses a local-variance masked normalized cross-correlation loss to rigidly align a digitally reconstructed radiograph (DRR) of the CT dataset with the target X-ray, using the edges of the fibula and tibia as the basis for alignment. A precomputed library of DRRs is used to improve run-time, and the six-degree-of-freedom optimization problem of rigid registration is divided into three smaller sub-problems to improve convergence. The method was tested on a dataset of paired cone-beam CT (CBCT) and XRF images of ex vivo limbs, and registration accuracy at the midline of the artery was evaluated.</p><p><strong>Results: </strong>On a dataset of CBCTs from 4 different limbs and a total of 17 XRF images, successful registration was achieved in 13 cases, with the remainder suffering from input image quality issues. 
The method produced average misalignments of less than 1 mm in horizontal projection distance along the artery midline, with an average run-time of 16 s.</p><p><strong>Conclusion: </strong>The sub-mm spatial accuracy of artery overlays is sufficient for the clinical use case of identifying guidewire deviations from the path of the artery, for early detection of guidewire-induced perforations. The semiautomatic workflow and average run-time of the algorithm make it feasible for integration into clinical workflows.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142928132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
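The optimization described above scores DRR/X-ray alignment with a local-variance masked normalized cross-correlation. A simplified global version of such a masked NCC, with a crude local-variance mask (window size and quantile are arbitrary illustrative choices, not the authors' settings):

```python
import numpy as np

def masked_ncc(a, b, mask):
    """Normalized cross-correlation restricted to mask==True pixels.
    A simplified, global version of the local-variance-masked NCC loss."""
    a = np.asarray(a, float)[mask]
    b = np.asarray(b, float)[mask]
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

def variance_mask(img, win=3, q=0.5):
    """Keep pixels whose local variance exceeds the q-quantile, so that
    low-texture regions do not dominate the similarity score."""
    img = np.asarray(img, float)
    pad = win // 2
    p = np.pad(img, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(p, (win, win))
    var = windows.var(axis=(-1, -2))
    return var > np.quantile(var, q)

rng = np.random.default_rng(1)
img = rng.normal(size=(32, 32))
m = variance_mask(img)
score_self = masked_ncc(img, img, m)   # identical images -> +1
score_inv = masked_ncc(img, -img, m)   # inverted contrast -> -1
```

Masking by local variance keeps the bone edges (fibula/tibia) driving the score, mirroring the role of the mask in the paper's loss.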
Lovis Schwenderling, Laura Isabel Hanke, Undine Holst, Florentine Huettl, Fabian Joeres, Tobias Huber, Christian Hansen
{"title":"Toward structured abdominal examination training using augmented reality.","authors":"Lovis Schwenderling, Laura Isabel Hanke, Undine Holst, Florentine Huettl, Fabian Joeres, Tobias Huber, Christian Hansen","doi":"10.1007/s11548-024-03311-y","DOIUrl":"https://doi.org/10.1007/s11548-024-03311-y","url":null,"abstract":"<p><strong>Purpose: </strong>Structured abdominal examination is an essential part of the medical curriculum and surgical training, requiring a blend of theory and practice from trainees. Current training methods, however, often do not provide adequate engagement, fail to address individual learning needs or do not cover rare diseases.</p><p><strong>Methods: </strong>In this work, an application for structured Abdominal Examination Training using Augmented Reality (AETAR) is presented. Required theoretical knowledge is displayed step by step via virtual indicators directly on the associated body regions. Exercises facilitate building up the routine in performing the examination. AETAR was evaluated in an exploratory user study with medical students (n=12) and teaching surgeons (n=2).</p><p><strong>Results: </strong>Learning with AETAR was described as fun and beneficial. Usability (SUS=73) and rated suitability for teaching were promising. All students improved in a knowledge test and felt more confident with the abdominal examination. Shortcomings were identified in the area of interaction, especially in teaching examination-specific movements.</p><p><strong>Conclusion: </strong>AETAR represents a first approach to structured abdominal examination training using augmented reality. 
The application demonstrates the potential to improve educational outcomes for medical students and provides an important foundation for future research and development in digital medical education.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142928249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robotic navigation with deep reinforcement learning in transthoracic echocardiography.","authors":"Yuuki Shida, Souto Kumagai, Hiroyasu Iwata","doi":"10.1007/s11548-024-03275-z","DOIUrl":"10.1007/s11548-024-03275-z","url":null,"abstract":"<p><strong>Purpose: </strong>The search for heart components in robotic transthoracic echocardiography is a time-consuming process. This paper proposes an optimized robotic navigation system for heart components using deep reinforcement learning to achieve an efficient and effective search technique for heart components.</p><p><strong>Method: </strong>The proposed method introduces (i) an optimized search behavior generation algorithm that avoids multiple local solutions and searches for the optimal solution and (ii) an optimized path generation algorithm that minimizes the search path, thereby realizing short search times.</p><p><strong>Results: </strong>The mitral valve search with the proposed method reaches the optimal solution with a probability of 74.4%, the mitral valve confidence loss rate when the local solution stops is 16.3% on average, and the inspection time with the generated path is 48.6 s on average, which is 56.6% of the time cost of the conventional method.</p><p><strong>Conclusion: </strong>The results indicate that the proposed method improves the search efficiency, and the optimal location can be searched in many cases with the proposed method, and the loss rate of the confidence in the mitral valve was low even when a local solution rather than the optimal solution was reached. 
It is suggested that the proposed method enables accurate and quick robotic navigation to find heart components.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"191-202"},"PeriodicalIF":2.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11757869/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142300392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
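The search problem described above, avoiding commitment to a local optimum of a confidence landscape, can be illustrated with a generic multi-start hill climb. This is a toy 1D stand-in only; the paper's actual method is deep reinforcement learning, and all numbers here are invented:

```python
import numpy as np

def multistart_hill_climb(f, starts, step=0.05, iters=200):
    """Run a simple hill climb from several start points and keep the
    best endpoint -- a generic way to avoid committing to a single
    local optimum (illustrative only)."""
    best_x, best_v = None, -np.inf
    for x in starts:
        for _ in range(iters):
            x = max([x - step, x, x + step], key=f)  # greedy local move
        v = f(x)
        if v > best_v:
            best_x, best_v = x, v
    return best_x, best_v

# Multimodal "confidence": local peak near x=0.2, global peak near x=0.8.
f = lambda x: (0.5 * np.exp(-((x - 0.2) / 0.05) ** 2)
               + 1.0 * np.exp(-((x - 0.8) / 0.05) ** 2))
x, v = multistart_hill_climb(f, starts=[0.0, 0.5, 1.0])
```

A single start at 0.0 would stall on the weaker peak at 0.2; the multiple starts recover the global peak, which is the behavior the paper's search-generation algorithm aims for.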
Hamraz Javaheri, Omid Ghamarnejad, Ragnar Bade, Paul Lukowicz, Jakob Karolus, Gregor Alexander Stavrou
{"title":"Beyond the visible: preliminary evaluation of the first wearable augmented reality assistance system for pancreatic surgery.","authors":"Hamraz Javaheri, Omid Ghamarnejad, Ragnar Bade, Paul Lukowicz, Jakob Karolus, Gregor Alexander Stavrou","doi":"10.1007/s11548-024-03131-0","DOIUrl":"10.1007/s11548-024-03131-0","url":null,"abstract":"<p><strong>Purpose: </strong>The retroperitoneal nature of the pancreas, marked by minimal intraoperative organ shifts and deformations, makes augmented reality (AR)-based systems highly promising for pancreatic surgery. This study presents preliminary data from a prospective study aiming to develop the first wearable AR assistance system, ARAS, for pancreatic surgery and evaluating its usability, accuracy, and effectiveness in enhancing the perioperative outcomes of patients.</p><p><strong>Methods: </strong>We developed ARAS as a two-phase system for a wearable AR device to aid surgeons in planning and operation. This system was used to visualize and register patient-specific 3D anatomical models during the surgery. The location and precision of the registered 3D anatomy were evaluated by assessing the arterial pulse and employing Doppler and duplex ultrasonography. The usability, accuracy, and effectiveness of ARAS were assessed using a five-point Likert scale questionnaire.</p><p><strong>Results: </strong>Perioperative outcomes of five patients underwent various pancreatic resections with ARAS are presented. Surgeons rated ARAS as excellent for preoperative planning. All structures were accurately identified without any noteworthy errors. Only tumor identification decreased after the preparation phase, especially in patients who underwent pancreaticoduodenectomy because of the extensive mobilization of peripancreatic structures. No perioperative complications related to ARAS were observed.</p><p><strong>Conclusions: </strong>ARAS shows promise in enhancing surgical precision during pancreatic procedures. 
Its efficacy in preoperative planning and intraoperative vascular identification positions it as a valuable tool for pancreatic surgery and a potential educational resource for future surgical residents.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"117-129"},"PeriodicalIF":2.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11757645/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141288907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
William Ndzimbong, Nicolas Thome, Cyril Fourniol, Yvonne Keeza, Benoît Sauer, Jacques Marescaux, Daniel George, Alexandre Hostettler, Toby Collins
{"title":"Global registration of kidneys in 3D ultrasound and CT images.","authors":"William Ndzimbong, Nicolas Thome, Cyril Fourniol, Yvonne Keeza, Benoît Sauer, Jacques Marescaux, Daniel George, Alexandre Hostettler, Toby Collins","doi":"10.1007/s11548-024-03255-3","DOIUrl":"10.1007/s11548-024-03255-3","url":null,"abstract":"<p><strong>Purpose: </strong>Automatic registration between abdominal ultrasound (US) and computed tomography (CT) images is needed to enhance interventional guidance of renal procedures, but it remains an open research challenge. We propose a novel method that doesn't require an initial registration estimate (a global method) and also handles registration ambiguity caused by the organ's natural symmetry. Combined with a registration refinement algorithm, this method achieves robust and accurate kidney registration while avoiding manual initialization.</p><p><strong>Methods: </strong>We propose solving global registration in a three-step approach: (1) Automatic anatomical landmark localization, where 2 deep neural networks (DNNs) localize a set of landmarks in each modality. (2) Registration hypothesis generation, where potential registrations are computed from the landmarks with a deterministic variant of RANSAC. Due to the Kidney's strong bilateral symmetry, there are usually 2 compatible solutions. Finally, in Step (3), the correct solution is determined automatically, using a DNN classifier that resolves the geometric ambiguity. The registration may then be iteratively improved with a registration refinement method. Results are presented with state-of-the-art surface-based refinement-Bayesian coherent point drift (BCPD).</p><p><strong>Results: </strong>This automatic global registration approach gives better results than various competitive state-of-the-art methods, which, additionally, require organ segmentation. 
The results obtained on 59 pairs of 3D US/CT kidney images show that the proposed method, combined with BCPD refinement, achieves a target registration error (TRE) of an internal kidney landmark (the renal pelvis) of 5.78 mm and an average nearest neighbor surface distance (nndist) of 2.42 mm.</p><p><strong>Conclusion: </strong>This work presents the first approach for automatic kidney registration in US and CT images, which doesn't require an initial manual registration estimate to be known a priori. The results show a fully automatic registration approach with performances comparable to manual methods is feasible.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"65-75"},"PeriodicalIF":2.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142146830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
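The two evaluation metrics reported above, TRE at an internal landmark and mean nearest-neighbor surface distance, are straightforward to compute. A minimal sketch (whether the surface distance was directed or symmetric is not stated in the abstract; the directed version is shown):

```python
import numpy as np

def tre(landmark_fixed, landmark_registered):
    """Target registration error: Euclidean distance between a
    fixed-image landmark and the registered moving-image landmark."""
    a = np.asarray(landmark_fixed, float)
    b = np.asarray(landmark_registered, float)
    return float(np.linalg.norm(a - b))

def nn_surface_distance(a, b):
    """Mean nearest-neighbor distance from surface points a to b
    (directed; averaging both directions gives the common symmetric
    variant)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

p = tre([0, 0, 0], [3, 4, 0])  # 3-4-5 right triangle
s = nn_surface_distance([[0, 0, 0], [1, 0, 0]], [[0, 0, 1], [1, 0, 1]])
```

TRE probes accuracy at a clinically meaningful point (here the renal pelvis), while the surface distance summarizes boundary agreement over the whole organ.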
Ainkaran Santhirasekaram, Mathias Winkler, Andrea Rockall, Ben Glocker
{"title":"Robust prostate disease classification using transformers with discrete representations.","authors":"Ainkaran Santhirasekaram, Mathias Winkler, Andrea Rockall, Ben Glocker","doi":"10.1007/s11548-024-03153-8","DOIUrl":"10.1007/s11548-024-03153-8","url":null,"abstract":"<p><strong>Purpose: </strong>Automated prostate disease classification on multi-parametric MRI has recently shown promising results with the use of convolutional neural networks (CNNs). The vision transformer (ViT) is a convolutional free architecture which only exploits the self-attention mechanism and has surpassed CNNs in some natural imaging classification tasks. However, these models are not very robust to textural shifts in the input space. In MRI, we often have to deal with textural shift arising from varying acquisition protocols. Here, we focus on the ability of models to generalise well to new magnet strengths for MRI.</p><p><strong>Method: </strong>We propose a new framework to improve the robustness of vision transformer-based models for disease classification by constructing discrete representations of the data using vector quantisation. We sample a subset of the discrete representations to form the input into a transformer-based model. We use cross-attention in our transformer model to combine the discrete representations of T2-weighted and apparent diffusion coefficient (ADC) images.</p><p><strong>Results: </strong>We analyse the robustness of our model by training on a 1.5 T scanner and test on a 3 T scanner and vice versa. 
Our approach achieves SOTA performance for classification of lesions on prostate MRI and outperforms various other CNN and transformer-based models in terms of robustness to domain shift and perturbations in the input space.</p><p><strong>Conclusion: </strong>We develop a method to improve the robustness of transformer-based disease classification of prostate lesions on MRI using discrete representations of the T2-weighted and ADC images.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"11-20"},"PeriodicalIF":2.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11759462/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140916593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
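The discretisation step in the method above, vector quantisation of continuous features against a codebook, reduces to a nearest-neighbor lookup. A minimal sketch (the learned codebook and the downstream transformer are omitted; all values are invented):

```python
import numpy as np

def vector_quantize(features, codebook):
    """Map each feature vector to the index of its nearest codebook
    entry (Euclidean). This is the discretisation step only; learning
    the codebook (e.g. VQ-style training) is out of scope here."""
    f = np.asarray(features, float)   # (N, D) continuous features
    c = np.asarray(codebook, float)   # (K, D) codebook entries
    d2 = ((f[:, None, :] - c[None, :, :]) ** 2).sum(-1)  # (N, K)
    idx = d2.argmin(axis=1)
    return idx, c[idx]                # discrete indices, quantised vectors

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]])
idx, quant = vector_quantize([[0.1, -0.1], [0.9, 1.2]], codebook)
```

Because every input snaps to a fixed codebook entry, small textural perturbations (such as those from a different magnet strength) often map to the same discrete code, which is the intuition behind the robustness gain.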
Sven Kolb, Andrew Madden, Nicolai Kröger, Fidan Mehmeti, Franziska Jurosch, Lukas Bernhard, Wolfgang Kellerer, Dirk Wilhelm
{"title":"6G in medical robotics: development of network allocation strategies for a telerobotic examination system.","authors":"Sven Kolb, Andrew Madden, Nicolai Kröger, Fidan Mehmeti, Franziska Jurosch, Lukas Bernhard, Wolfgang Kellerer, Dirk Wilhelm","doi":"10.1007/s11548-024-03260-6","DOIUrl":"10.1007/s11548-024-03260-6","url":null,"abstract":"<p><strong>Purpose: </strong>Healthcare systems around the world are increasingly facing severe challenges due to problems such as staff shortage, changing demographics and the reliance on an often strongly human-dependent environment. One approach aiming to address these issues is the development of new telemedicine applications. The currently researched network standard 6G promises to deliver many new features which could be beneficial to leverage the full potential of emerging telemedical solutions and overcome the limitations of current network standards.</p><p><strong>Methods: </strong>We developed a telerobotic examination system with a distributed robot control infrastructure to investigate the benefits and challenges of distributed computing scenarios, such as fog computing, in medical applications. We investigate different software configurations for which we characterize the network traffic and computational loads and subsequently establish network allocation strategies for different types of modular application functions (MAFs).</p><p><strong>Results: </strong>The results indicate a high variability in the usage profiles of these MAFs, both in terms of computational load and networking behavior, which in turn allows the development of allocation strategies for different types of MAFs according to their requirements. 
Furthermore, the results provide a strong basis for further exploration of distributed computing scenarios in medical robotics.</p><p><strong>Conclusion: </strong>This work lays the foundation for the development of medical robotic applications using 6G network architectures and distributed computing scenarios, such as fog computing. In the future, we plan to investigate the capability to dynamically shift MAFs within the network based on current situational demand, which could help to further optimize the performance of network-based medical applications and play a role in addressing the increasingly critical challenges in healthcare.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"167-178"},"PeriodicalIF":2.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11759283/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142156592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
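The idea of allocating MAFs to network tiers according to their latency and compute profiles can be illustrated with a toy greedy allocator. All node and MAF figures below are invented; the paper derives real profiles from traffic and load measurements:

```python
def allocate_mafs(mafs, nodes):
    """Greedy placement: each MAF (tightest latency bound first) goes to
    the lowest-latency node that meets its bound and still has compute
    capacity. A toy model of the allocation-strategy idea only."""
    placement = {}
    for maf in sorted(mafs, key=lambda m: m["max_latency_ms"]):
        for node in sorted(nodes, key=lambda n: n["latency_ms"]):
            if (node["latency_ms"] <= maf["max_latency_ms"]
                    and node["free_cpu"] >= maf["cpu"]):
                node["free_cpu"] -= maf["cpu"]
                placement[maf["name"]] = node["name"]
                break
        else:
            placement[maf["name"]] = None  # no feasible node
    return placement

nodes = [{"name": "edge",  "latency_ms": 2,  "free_cpu": 2},
         {"name": "fog",   "latency_ms": 10, "free_cpu": 4},
         {"name": "cloud", "latency_ms": 50, "free_cpu": 16}]
mafs = [{"name": "robot_control", "max_latency_ms": 5,    "cpu": 2},
        {"name": "video_enc",     "max_latency_ms": 20,   "cpu": 3},
        {"name": "analytics",     "max_latency_ms": 1000, "cpu": 8}]
plan = allocate_mafs(mafs, nodes)
```

The latency-critical control loop lands at the edge, while throughput-heavy but delay-tolerant functions drift toward fog and cloud tiers, mirroring the requirement-driven strategies the abstract describes.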