{"title":"A multimodal digital twin for autonomous micro-drilling in scientific exploration.","authors":"Saul Alexis Heredia Perez, Tze Lun Lok, Enduo Zhao, Kanako Harada","doi":"10.1007/s11548-025-03465-3","DOIUrl":"10.1007/s11548-025-03465-3","url":null,"abstract":"<p><strong>Purpose: </strong>To support research on autonomous robotic micro-drilling for cranial window creation in mice, a multimodal digital twin (DT) is developed to generate realistic synthetic images and drilling sounds. The realism of the DT is evaluated using data from an eggshell drilling scenario, demonstrating its potential for training AI models with multimodal synthetic data.</p><p><strong>Methods: </strong>The asynchronous multi-body framework (AMBF) simulator for volumetric drilling with haptic feedback is combined with the Isaac Sim simulator for photorealistic rendering. A deep audio generator (DAG) model is presented and its realism is evaluated on real drilling sounds. A convolutional neural network (CNN) trained on synthetic images is used to assess visual realism by detecting drilling areas in real eggshell images. Finally, the accuracy of the DT is evaluated by experiments on a real eggshell.</p><p><strong>Results: </strong>The DAG model outperformed pitch modulation methods, achieving lower Frechet audio distance (FAD) and Frechet inception distance (FID) scores, demonstrating a closer resemblance to real drilling sounds. The CNN trained on synthetic images achieved a mean average precision (mAP) of 70.2 when tested on real drilling images. The DT had an alignment error of 0.22 ± 0.03 mm.</p><p><strong>Conclusion: </strong>A multimodal DT has been developed to simulate the creation of the cranial window on an eggshell model and its realism has been evaluated. The results indicate a high degree of realism in both the synthetic audio and images and submillimeter accuracy.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1987-1997"},"PeriodicalIF":2.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12518470/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144499153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interpretable deep fuzzy network-aided detection of central lymph node metastasis status in papillary thyroid carcinoma.","authors":"Wenxu Wang, Zhenyuan Ning, Jifan Zhang, Yu Zhang, Weizhen Wang","doi":"10.1007/s11548-025-03453-7","DOIUrl":"10.1007/s11548-025-03453-7","url":null,"abstract":"<p><strong>Purpose: </strong>The non-invasive assessment of central lymph node metastasis (CLNM) in patients with papillary thyroid carcinoma (PTC) plays a crucial role in assisting treatment decision and prognosis planning. This study aims to use an interpretable deep fuzzy network guided by expert knowledge to predict the CLNM status of patients with PTC from ultrasound images.</p><p><strong>Methods: </strong>A total of 1019 PTC patients were enrolled in this study, comprising 465 CLNM patients and 554 non-CLNM patients. Pathological diagnosis served as the gold standard to determine metastasis status. Clinical and morphological features of thyroid were collected as expert knowledge to guide the deep fuzzy network in predicting CLNM status. The network consisted of a region of interest (ROI) segmentation module, a knowledge-aware feature extraction module, and a fuzzy prediction module. The network was trained on 652 patients, validated on 163 patients and tested on 204 patients.</p><p><strong>Results: </strong>The model exhibited promising performance in predicting CLNM status, achieving the area under the receiver operating characteristic curve (AUC), accuracy, precision, sensitivity and specificity of 0.786 (95% CI 0.720-0.846), 0.745 (95% CI 0.681-0.799), 0.727 (95% CI 0.636-0.819), 0.696 (95% CI 0.594-0.789), and 0.786 (95% CI 0.712-0.864), respectively. In addition, the rules of the fuzzy system in the model are easy to understand and explain, and have good interpretability.</p><p><strong>Conclusion: </strong>The deep fuzzy network guided by expert knowledge predicted CLNM status of PTC patients with high accuracy and good interpretability, and may be considered as an effective tool to guide preoperative clinical decision-making.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2119-2129"},"PeriodicalIF":2.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144303545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep learning-based segmentation of acute pulmonary embolism in cardiac CT images.","authors":"Ehsan Amini, Georg Hille, Janine Hürtgen, Alexey Surov, Sylvia Saalfeld","doi":"10.1007/s11548-025-03503-0","DOIUrl":"https://doi.org/10.1007/s11548-025-03503-0","url":null,"abstract":"<p><strong>Purpose: </strong>Acute pulmonary embolism (APE) is a common pulmonary condition that, in severe cases, can progress to right ventricular hypertrophy and failure, making it a critical health concern surpassed in severity only by myocardial infarction and sudden death. CT pulmonary angiogram (CTPA) is a standard diagnostic tool for detecting APE. However, for treatment planning and prognosis of patient outcome, an accurate assessment of individual APEs is required.</p><p><strong>Methods: </strong>Within this study, we compiled and prepared a dataset of 200 CTPA image volumes of patients with APE. We then adapted two state-of-the-art neural networks; the nnU-Net and the transformer-based VT-UNet in order to provide fully automatic APE segmentations.</p><p><strong>Results: </strong>The nnU-Net demonstrated robust performance, achieving an average Dice similarity coefficient (DSC) of 88.25 ± 10.19% and an average 95th percentile Hausdorff distance (HD95) of 10.57 ± 34.56 mm across the validation sets in a five-fold cross-validation framework. In comparison, the VT-UNet was achieving on par accuracies with an average DSC of 87.90 ± 10.94% and a mean HD95 of 10.77 ± 34.19 mm.</p><p><strong>Conclusions: </strong>We applied two state-of-the-art networks for automatic APE segmentation to our compiled CTPA dataset and achieved superior experimental results compared to the current state of the art. In clinical routine, accurate APE segmentations can be used for enhanced patient prognosis and treatment planning.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145139470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Classification of esophageal cancer by using hyperspectral data.","authors":"Marianne Maktabi, Claudia Hain, Hannes Köhler, Benjamin Huber, René Thieme, Katrin Schierle, Boris Jansen-Winkeln, Ines Gockel","doi":"10.1007/s11548-025-03514-x","DOIUrl":"https://doi.org/10.1007/s11548-025-03514-x","url":null,"abstract":"<p><strong>Purpose: </strong>Esophageal cancer is widespread worldwide, with the highest rate in Asia. Early diagnosis plays a key role in increasing the survival rate. Early cancer detection as well as fast evaluation of tumor extent before and resection margins during/after surgery are important to improve patients' outcomes. Hyperspectral imaging (HSI), as a noninvasive and contactless novel intraoperative technique, has shown promising results in cancer detecting in combination with artificial intelligence.</p><p><strong>Methods: </strong>In this clinical study, the extent to which physiological parameters, such as water or hemoglobin content, differ in the esophagus, stomach, and cancer tissue, was examined. For this purpose, hyperspectral intraluminal recordings of affected tissue specimen were carried out. In addition, a classification of the three intraluminal tissue types (esophageal, stomach mucosa, and cancerous tissue) was performed by using two different convolutional neural networks.</p><p><strong>Results: </strong>Our analysis clearly demonstrated differences in hemoglobin concentration and water content between healthy and cancerous tissues, as well as among different tumor stages. As classification results, an averaged area under the curve score of 81 ± 3%, a sensitivity of 74 ± 8%, and a specificity of 89 ± 2% could be achieved across all tissue types using a hybrid convolutional neural network.</p><p><strong>Conclusion: </strong>HSI has relevant potential for supporting the detection of tumorous tissue in esophageal cancer. However, further analyses including more detailed histopathologic correlation as \"gold standard\" are needed. Data augmentation and future multicenter studies have to be carried out. These steps may help to improve and sharpen our current findings, especially for esophageal cancerous tissue.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145126505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Liver mask-guided SAM-enhanced dual-decoder network for landmark segmentation in AR-guided surgery.","authors":"Xukun Zhang, Sharib Ali, Yanlan Kang, Jingyi Zhu, Minghao Han, Le Wang, Xiaoying Wang, Lihua Zhang","doi":"10.1007/s11548-025-03516-9","DOIUrl":"https://doi.org/10.1007/s11548-025-03516-9","url":null,"abstract":"<p><strong>Purpose: </strong>In augmented reality (AR)-guided laparoscopic liver surgery, accurate segmentation of liver landmarks is crucial for precise 3D-2D registration. However, existing methods struggle with complex structures, limited data, and class imbalance. In this study, we propose a novel approach to improve landmark segmentation performance by leveraging liver mask prediction.</p><p><strong>Methods: </strong>We propose a dual-decoder model enhanced by a pre-trained segment anything model (SAM) encoder, where one decoder segments the liver and the other focuses on liver landmarks. The SAM encoder provides robust features for liver mask prediction, improving generalizability. A liver-guided consistency constraint establishes fine-grained spatial consistency between liver regions and landmarks, enhancing segmentation accuracy through detailed spatial modeling.</p><p><strong>Results: </strong>The proposed method achieved state-of-the-art performance in liver landmark segmentation on two public laparoscopic datasets. By addressing feature entanglement, the dual-decoder framework with SAM and consistency constraints significantly improved segmentation in complex surgical scenarios.</p><p><strong>Conclusion: </strong>The SAM-enhanced dual-decoder network, incorporating liver-guided consistency constraints, offers a promising solution for 2D landmark segmentation in AR-guided laparoscopic surgery. By mutually reinforcing liver mask and landmark segmentation, the method achieves improved accuracy and robustness for intraoperative applications.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145126548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lexomics, or why to extract relevant information from radiology reports through LLMs.","authors":"Teodoro Martín-Noguerol, Pilar López-Úbeda, Carolina Díaz-Angulo, Antonio Luna","doi":"10.1007/s11548-025-03521-y","DOIUrl":"https://doi.org/10.1007/s11548-025-03521-y","url":null,"abstract":"<p><strong>Purpose: </strong>The application of large language models (LLMs) to radiology reports aims to enhance the extraction of meaningful textual data, improving clinical decision-making and patient management. Similar to radiomics in image analysis, lexomics seeks to reveal hidden patterns in radiology reports to support diagnosis, classification, and structured reporting.</p><p><strong>Methods: </strong>LLMs and natural language processing (NLP) algorithms analyze radiology reports to extract relevant information, refine differential diagnoses, and integrate clinical data. These models process structured and unstructured text, identifying patterns and correlations that may otherwise go unnoticed. Applications include automated structured reporting, quality control, and enhanced communication of incidental and urgent findings.</p><p><strong>Results: </strong>LLMs have demonstrated the ability to assist radiologists in real-time, standardizing classifications, improving report clarity, and enhancing the integration of radiology reports into electronic health records (EHRs). They support radiologists by reducing redundancies, structuring free-text reports, and detecting clinically relevant insights. Unlike radiomics, lexomics requires minimal computational power, making it more accessible in clinical settings.</p><p><strong>Conclusion: </strong>Lexomics represents a significant advancement in AI-driven radiology, optimizing report utilization and communication. Future research should focus on addressing challenges such as data privacy, bias mitigation, and validation in diverse clinical scenarios to ensure ethical and effective implementation in radiological practice.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145114545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DCEM-TCRCN: an innovative approach to depression detection using wearable IoT devices and deep learning.","authors":"Xinfeng Xiao, Shijun Li, Wei Yu","doi":"10.1007/s11548-025-03479-x","DOIUrl":"https://doi.org/10.1007/s11548-025-03479-x","url":null,"abstract":"<p><strong>Purpose: </strong>Depression is a psychological disorder that has vital implications for society's health. So, it is important to develop a model that aids in effective and accurate depression diagnosis. This paper proposes a Dynamic Convolutional Encoder Model based on a Temporal Circular Residual Convolutional Network (DCEM-TCRCN), a novel approach for diagnosing depression using wearable Internet-of-Things sensors.</p><p><strong>Methods: </strong>DCEM integrates Mobile Inverted Bottleneck Convolution (MBConv) blocks with Dynamic Convolution (DConv) to maximize feature extraction and allow the system to react to input changes and effectively extract depression-correlated patterns. The TCRCN model improves the performance using circular dilated convolution to address long-range temporal relations and eliminate boundary effects. Temporal attention mechanisms deal with important patterns in the data, while weight normalization, GELU activation, and dropout assure stability, regularization, and convergence.</p><p><strong>Results: </strong>The proposed system applies physiological information acquired from wearable sensors, including heart rate variability and electrodermal activity. Preprocessing tasks like one-hot encoding and data normalization normalize inputs to enable successful feature extraction. Dual fully connected layers perform classifications using pooled learned representations to make accurate predictions regarding depression states.</p><p><strong>Conclusion: </strong>Experimental analysis on the Depression Dataset confirmed the improved performance of the DCEM-TCRCN model with an accuracy of 98.88%, precision of 97.76%, recall of 98.21%, and a Cohen-Kappa score of 97.99%. The findings confirm the efficacy, trustworthiness, and stability of the model, making it usable for real-time psychological health monitoring.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145082323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Egocentric video analysis for automated assessment of open surgical skills via deep learning.","authors":"Athanasios Gazis, Dimitrios Schizas, Stylianos Kykalos, Pantelis Karaiskos, Constantinos Loukas","doi":"10.1007/s11548-025-03518-7","DOIUrl":"https://doi.org/10.1007/s11548-025-03518-7","url":null,"abstract":"<p><strong>Purpose: </strong>While significant progress has been made in skill assessment for minimally invasive procedures, objective evaluation methods for open surgery remain limited. This paper presents a deep learning framework for assessing technical surgical skills using egocentric video data from open surgery training.</p><p><strong>Methods: </strong>Our dataset includes 201 videos and corresponding hand kinematics data from three fundamental training task-knot tying (KT), continuous suturing (CS), and interrupted suturing (IS)-performed by 20 participants. Each video was annotated by two experts using a modified OSATS scale (KT: five criteria, total score range: 5-25; CS/IS: seven criteria, total score range: 7-35). We evaluate three temporal architectures (LSTM, TCN, and Transformer), each using ResNet50 as the backbone for spatial feature extraction, and assess them under various training strategies: single-task learning, feature concatenation, pretraining, and multi-task learning with integrated kinematic data. Performance metrics included mean absolute error (MAE) and Spearman correlation coefficient ( <math><mi>ρ</mi></math> ), both with respect to total score prediction.</p><p><strong>Results: </strong>The Transformer-based models consistently outperformed LSTM and TCN across all tasks. The multi-task Transformer incorporating prediction of task completion time ( <math><msub><mtext>Transf-MT</mtext> <mtext>T+S</mtext></msub> </math> ) achieved the lowest MAE (KT: 1.92, CS: 2.81, and IS: 2.89) and <math><mi>ρ</mi></math> = 0.84- <math><mo>-</mo></math> 0.90. It also demonstrated promising capabilities for early skill assessment by predicting the total score from partial observations-particularly for simpler tasks. Additionally, we show that models trained on consensus expert ratings outperform those trained on individual annotations, highlighting the value of multi-rater ground truth.</p><p><strong>Conclusion: </strong>This research provides a foundation for objective, automated assessment of open surgical skills, with potential to improve the efficiency and standardization of surgical training.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145082352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Annotation-efficient deep learning detection and measurement of mediastinal lymph nodes in CT.","authors":"Alon Olesinski, Richard Lederman, Yusef Azraq, Jacob Sosna, Leo Joskowicz","doi":"10.1007/s11548-025-03513-y","DOIUrl":"https://doi.org/10.1007/s11548-025-03513-y","url":null,"abstract":"<p><strong>Purpose: </strong>Manual detection and measurement of structures in volumetric scans is routine in clinical practice but is time-consuming and subject to observer variability. Automatic deep learning-based solutions are effective but require a large dataset of manual annotations by experts. We present a novel annotation-efficient semi-supervised deep learning method for automatic detection, segmentation, and measurement of the short axis length (SAL) of mediastinal lymph nodes (LNs) in contrast-enhanced CT (ceCT) scans.</p><p><strong>Methods: </strong>Our semi-supervised method combines the precision of expert annotations with the quantity advantages of pseudolabeled data. It uses an ensemble of 3D nnU-Net models trained on a few expert-annotated scans to generate pseudolabels on a large dataset of unannotated scans. The pseudolabels are then filtered to remove false positive LNs by excluding LNs outside the mediastinum and LNs overlapping with other anatomical structures. Finally, a single 3D nnU-Net model is trained using the filtered pseudo-labels. Our method optimizes the ratio of annotated/non-annotated dataset sizes to achieve the desired performance, thus reducing manual annotation effort.</p><p><strong>Results: </strong>Experimental studies on three chest ceCT datasets with a total of 268 annotated scans (1817 LNs), of which 134 scans were used for testing and the remaining for ensemble training in batches of 17, 34, 67, and 134 scans, as well as 710 unannotated scans, show that the semi-supervised models' recall improvements were 11-24% (0.72-0.87) while maintaining comparable precision levels. The best model achieved mean SAL differences of 1.65 ± 0.92 mm for normal LNs and 4.25 ± 4.98 mm for enlarged LNs, both within the observer variability.</p><p><strong>Conclusion: </strong>Our semi-supervised method requires one-fourth to one-eighth less annotations to achieve a performance to supervised models trained on the same dataset for the automatic measurement of mediastinal LNs in chest ceCT. Using pseudolabels with anatomical filtering may be effective to overcome the challenges of the development of AI-based solutions in radiology.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145055981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A mobile robotic approach to autonomous surface scanning in legal medicine.","authors":"Sarah Grube, Sarah Latus, Martin Fischer, Vidas Raudonis, Axel Heinemann, Benjamin Ondruschka, Alexander Schlaefer","doi":"10.1007/s11548-025-03507-w","DOIUrl":"https://doi.org/10.1007/s11548-025-03507-w","url":null,"abstract":"<p><strong>Purpose: </strong>Comprehensive legal medicine documentation includes internal and external examination of the corpse. Typically, this documentation is conducted manually during conventional autopsy. Systematic digital documentation would be desirable, especially for external wound examination, which is becoming more relevant for legal medicine analysis. For this purpose, RGB surface scanning has been introduced. While manual full-surface scanning using a handheld camera is time-consuming and operator-dependent, floor or ceiling-mounted robotic systems require specialized rooms. Hence, we consider whether a mobile robotic system can be used for external documentation.</p><p><strong>Methods: </strong>We develop a mobile robotic system that enables full-body RGB-D surface scanning. Our work includes a detailed configuration space analysis to identify the environmental parameters that must be considered for a successful surface scan. We validate our findings through an experimental study in the lab and demonstrate the systems application in legal medicine.</p><p><strong>Results: </strong>Our configuration space analysis shows that a good trade-off between coverage and time is reached with three robot base positions, leading to a coverage of 94.96 %. Experiments validate the effectiveness of the system in accurately capturing body surface geometry with an average surface coverage of <math><mrow><mn>96.90</mn> <mo>±</mo> <mn>3.16</mn></mrow> </math> % and <math><mrow><mn>92.45</mn> <mo>±</mo> <mn>1.43</mn></mrow> </math> % for a body phantom and actual corpses, respectively.</p><p><strong>Conclusion: </strong>This work demonstrates the potential of a mobile robotic system to automate RGB-D surface scanning in legal medicine, complementing post-mortem CT scans for inner documentation. Our results indicate that the proposed system can contribute to more efficient, autonomous legal medicine documentation, reducing the need for manual intervention.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145056024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}