International Journal of Computer Assisted Radiology and Surgery: Latest Articles

Mathematical methods for assessing the accuracy of pre-planned and guided surgical osteotomies.
IF 2.3 | CAS Region 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-02-19 DOI: 10.1007/s11548-025-03324-1
George R Nahass, Nicolas Kaplan, Isabel Scharf, Devansh Saini, Naji Bou Zeid, Sobhi Kazmouz, Linping Zhao, Lee W T Alkureishi
Purpose: The fibula-free flap (FFF) is a valuable reconstructive technique in maxillofacial surgery; however, assessing osteotomy accuracy remains challenging. We devised two novel methodologies for comparing planned and postoperative osteotomies in FFF reconstructions that minimize user input yet generalize to other operations involving the analysis of osteotomies.

Methods: Our approaches use basic mathematics to derive both quantitative and qualitative insights into the relationship between the postoperative osteotomy and the planned model. We have coined our methods "analysis by a shared reference angle" and "Euler angle analysis."

Results: In addition to describing the algorithm and its clinical utility, we present a thorough validation of both methods. The algorithm is highly repeatable in an intraobserver repeatability test and reports both overall accuracy and the geometric specifics of the deviation from the planned reconstruction.

Conclusion: Our algorithm is a novel and robust method for assessing the osteotomy accuracy of FFF reconstructions. It does not depend on the overall position of the reconstruction, which is valuable given the multiple factors that may influence the outcome of FFF reconstructions. Additionally, while landmark selection relies on anatomical features, the approach is flexible enough to evaluate any operation involving osteotomies.

Citations: 0
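The entry names "Euler angle analysis" without giving formulas. As a minimal, hypothetical sketch (the function names and landmark convention are ours, not the authors'), the deviation between a planned and an achieved cut plane, each defined by three landmark points, can be reported as the Euler angles of the minimal rotation aligning their normals:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three landmark points."""
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

def euler_deviation(planned_pts, postop_pts):
    """Minimal rotation taking the planned cut-plane normal onto the
    postoperative one, reported as xyz Euler angles in degrees."""
    a = plane_normal(*planned_pts)
    b = plane_normal(*postop_pts)
    axis = np.cross(a, b)
    s, c = np.linalg.norm(axis), float(a @ b)
    if s < 1e-12:                       # planes already parallel
        return np.zeros(3)
    rot = Rotation.from_rotvec(axis / s * np.arctan2(s, c))
    return rot.as_euler("xyz", degrees=True)
```

A tilt of the achieved plane purely about the x-axis, for instance, shows up as a nonzero first angle with the other two near zero.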
Automatic future remnant segmentation in liver resection planning.
IF 2.3 | CAS Region 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-02-17 DOI: 10.1007/s11548-025-03331-2
Hicham Messaoudi, Marwan Abbas, Bogdan Badic, Douraied Ben Salem, Ahror Belaid, Pierre-Henri Conze
Purpose: Liver resection is a complex procedure requiring precise removal of tumors while preserving viable tissue. This study proposes a novel approach to automated liver resection planning that uses segmentations of the liver, vessels, and tumors from CT scans to predict the future liver remnant (FLR), aiming to improve pre-operative planning accuracy and patient outcomes.

Methods: This study evaluates deep convolutional and Transformer-based networks under various computational setups. Using different combinations of anatomical and pathological delineation masks, we assess the contribution of each structure. The method is first tested with ground-truth masks for feasibility and then validated with masks predicted by a deep learning model.

Results: The experimental results highlight the crucial importance of incorporating anatomical and pathological masks for accurate FLR delineation. Among the tested configurations, the best-performing model achieves an average Dice score of approximately 0.86, closely matching the inter-observer variability reported in the literature. It also achieves an average symmetric surface distance of 0.95 mm, demonstrating its precision in capturing the fine-grained structural details critical for pre-operative planning.

Conclusion: This study highlights the potential of fully automated FLR segmentation pipelines in liver pre-operative planning. Our approach holds promise for reducing the time and variability associated with manual delineation, and can support better decision-making in liver resection planning by providing accurate and consistent segmentation results. Future studies should explore its seamless integration into clinical workflows.

Citations: 0
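The Dice score of roughly 0.86 reported above is a standard volume-overlap measure. A minimal sketch of its computation on binary masks (2D here for brevity; the formula is identical for 3D volumes):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient between two binary masks:
    2 * |pred ∩ gt| / (|pred| + |gt|)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

# Example: two overlapping rectangles on a 10x10 grid
a = np.zeros((10, 10)); a[2:8, 2:8] = 1   # 36 voxels
b = np.zeros((10, 10)); b[4:8, 2:8] = 1   # 24 voxels
# Intersection is 4x6 = 24 voxels → Dice = 2*24/(36+24) = 0.8
```

A Dice of 1.0 means perfect overlap; 0.86 on liver-scale structures indicates near inter-observer agreement, as the entry notes.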
Breaking barriers: noninvasive AI model for BRAFV600E mutation identification.
IF 2.3 | CAS Region 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-02-15 DOI: 10.1007/s11548-024-03290-0
Fan Wu, Xiangfeng Lin, Yuying Chen, Mengqian Ge, Ting Pan, Jingjing Shi, Linlin Mao, Gang Pan, You Peng, Li Zhou, Haitao Zheng, Dingcun Luo, Yu Zhang
Objective: BRAFV600E is the most common mutation found in thyroid cancer and is particularly associated with papillary thyroid carcinoma (PTC). Currently, genetic mutation detection relies on invasive procedures. This study aimed to extract radiomic features and apply deep transfer learning (DTL) to ultrasound images to develop a noninvasive artificial intelligence model for identifying BRAFV600E mutations.

Materials and methods: Regions of interest (ROI) were manually annotated in the ultrasound images, and radiomic and DTL features were extracted and combined in a joint DTL-radiomics (DTLR) model. Fourteen DTL models were employed, and feature selection was performed using LASSO regression. Eight machine learning methods were used to construct predictive models. Model performance was evaluated primarily by area under the curve (AUC), accuracy, sensitivity, and specificity. Model interpretability was visualized using gradient-weighted class activation maps (Grad-CAM).

Results: Radiomics alone had limited ability to identify BRAFV600E mutations, but the optimal DTLR model, built on ResNet152, identified them effectively. In the validation set, the AUC, accuracy, sensitivity, and specificity were 0.833, 80.6%, 76.2%, and 81.7%, respectively. The AUC of the DTLR model was higher than that of the DTL and radiomics models. Visualization of the ResNet152-based DTLR model showed its ability to capture and learn ultrasound image features related to BRAFV600E mutations.

Conclusion: The ResNet152-based DTLR model demonstrated significant value in identifying BRAFV600E mutations in patients with PTC from ultrasound images. Grad-CAM has the potential to stratify BRAF mutations visually and objectively. The findings require further multicenter collaboration and additional data for validation.

Citations: 0
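The entry's feature selection step uses LASSO regression, whose L1 penalty drives the coefficients of uninformative features to exactly zero. A numpy-only coordinate-descent sketch (assuming roughly standardized features; this is a generic illustration, not the authors' pipeline):

```python
import numpy as np

def lasso_select(X, y, alpha=0.1, n_iter=200):
    """Coordinate-descent LASSO. Returns the coefficient vector;
    exact zeros mark features dropped by the L1 penalty."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual: remove every feature's contribution except j's
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r / n
            z = (X[:, j] ** 2).sum() / n
            # soft-thresholding update
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / z
    return w
```

On data where the target depends on only one of several candidate features, the irrelevant coefficients land at exactly 0.0, which is what makes LASSO usable as a selector rather than just a regularizer.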
Leveraging deep learning for nonlinear shape representation in anatomically parameterized statistical shape models.
IF 2.3 | CAS Region 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-02-14 DOI: 10.1007/s11548-025-03330-3
Behnaz Gheflati, Morteza Mirzaei, Sunil Rottoo, Hassan Rivaz
Purpose: Statistical shape models (SSMs) are widely used for morphological assessment of anatomical structures. A key limitation, however, is the need for a clear relationship between the model's shape coefficients and clinically relevant anatomical parameters. To address this, we propose a novel deep learning-based anatomically parameterized SSM (DL-ANATSSM) that introduces a nonlinear relationship between anatomical parameters and bone shape information.

Methods: A multilayer perceptron is trained on a synthetic femoral bone population to learn the nonlinear mapping between anatomical measurements and shape parameters, and is then fine-tuned on a real bone dataset. For baseline comparison, we evaluate DL-ANATSSM against a linear ANATSSM generated with least-squares regression.

Results: On a previously unseen femoral bone dataset, DL-ANATSSM predicted 3D bone shape from anatomical parameters more accurately than the linear baseline model. We also investigated the impact of fine-tuning, with results indicating improved model performance after this process.

Conclusion: DL-ANATSSM is therefore a more precise and interpretable SSM that is directly controlled by clinically relevant parameters. The method holds promise for applications in both morphometric analysis and patient-specific 3D model generation without preoperative images.

Citations: 0
Double-mix pseudo-label framework: enhancing semi-supervised segmentation on category-imbalanced CT volumes.
IF 2.3 | CAS Region 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-02-11 DOI: 10.1007/s11548-024-03281-1
Luyang Zhang, Yuichiro Hayashi, Masahiro Oda, Kensaku Mori
Purpose: Deep-learning-based supervised CT segmentation relies on fully and densely labeled data, and the labeling process is time-consuming. Our method aims to improve segmentation performance on CT volumes with limited annotated data by accounting for category-wise difficulty and distribution.

Methods: We propose a novel confidence-difficulty weight (CDifW) allocation method that considers confidence levels, balances training across categories, and influences both the loss function and the volume-mixing process for pseudo-label generation. We also introduce a novel Double-Mix Pseudo-label Framework (DMPF), which strategically selects categories for image blending based on the per-category voxel-count distribution and the segmentation-difficulty weight. DMPF is designed to enhance segmentation performance on categories that are challenging to segment.

Results: Our approach was tested on two widely used datasets: the Congenital Heart Disease (CHD) dataset and the Beyond-the-Cranial-Vault (BTCV) Abdomen dataset. Compared with state-of-the-art methods, it improved the Dice score for difficult-to-segment categories by 5.1% with 5% labeled data on CHD and by 7.0% with 40% labeled data on BTCV.

Conclusion: Our method improves segmentation of difficult categories in CT volumes through category-wise weighting and weight-based mixture augmentation. It was validated across multiple datasets and is significant for advancing semi-supervised segmentation tasks in health care. The code is available at https://github.com/MoriLabNU/Double-Mix .

Citations: 0
Development and validation of a surgical robot system for orbital decompression surgery.
IF 2.3 | CAS Region 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-02-11 DOI: 10.1007/s11548-025-03322-3
Yanping Lin, Shiqi Peng, Siqi Jiao, Yi Wang, Yinwei Li, Huifang Zhou
Purpose: Orbital decompression surgery, which expands the volume of the orbit by removing sections of the orbital walls with a drill and saw, is an important treatment option for thyroid-associated ophthalmopathy. It is often limited, however, by physical factors such as a narrow operating space and the instability of manually held surgical instruments, which constrain surgeons from accurately executing the surgical plan.

Methods: To overcome these limitations, we designed a surgical robot comprising a position-adjustment mechanism, a remote center of motion, and an end-effector with a rapid surgical-instrument assembly mechanism. To guide the robot in precisely executing the preoperative surgical plan, we built a surgical navigation system comprising preoperative planning and intraoperative optical navigation subsystems. The result is an integrated orbital surgical robot system in which the navigation system, the optical tracker, and the robot with its motion control system serve as the decision-making, perception, and execution layers, respectively.

Results: Precision measurement experiments showed that the absolute and repeated pose accuracies of the surgical robot satisfied the design requirements. Animal experiments verified that the precision of the system's osteotomy and bone-drilling operations meets the clinical technical indicators.

Conclusion: The developed robotic system for orbital decompression surgery could perform routine operations such as drilling and sawing on the orbital bone with assistance and supervision from surgeons. Its feasibility and reliability were comprehensively verified through accuracy measurements and animal experiments.

Citations: 0
Intraoperative adaptive eye model based on instrument-integrated OCT for robot-assisted vitreoretinal surgery.
IF 2.3 | CAS Region 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-02-08 DOI: 10.1007/s11548-025-03325-0
Marius Briel, Ludwig Haide, Maximilian Hess, Jan Schimmelpfennig, Philipp Matten, Rebekka Peter, Matthias Hillenbrand, Eleonora Tagliabue, Franziska Mathis-Ullrich
Purpose: Pars plana vitrectomy (PPV) is the most common surgical procedure performed by retinal specialists, highlighting the need for model-based assistance and automation in surgical treatment. An intraoperative retinal model provides precise anatomical information relative to the surgical instrument, enhancing surgical precision and safety.

Methods: This work focuses on intraoperative parametrization of retinal shape using 1D instrument-integrated optical coherence tomography (OCT) distance measurements combined with a surgical robot. Our approach accommodates variability in eye geometry by transitioning from an initial spherical model to an ellipsoidal representation, improving accuracy as more data are collected through sensor motion.

Results: Ellipsoid fitting outperforms sphere fitting for regular eye shapes, achieving a mean absolute error below 40 μm in simulation and below 200 μm on 3D-printed models and ex vivo porcine eyes. The model reliably transitions from a spherical to an ellipsoidal representation across all six tested eye shapes once specific criteria are satisfied.

Conclusion: The adaptive eye model developed in this work meets the accuracy requirements for clinical application in PPV within the central retina. Additionally, the global model extrapolates effectively beyond the scanned area to the retinal periphery. This capability enhances PPV procedures, particularly through virtual boundary assistance and improved surgical navigation, ultimately contributing to safer surgical outcomes.

Citations: 0
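The entry's initial spherical stage (before the ellipsoid refinement) can be posed as a plain linear least-squares problem on surface points recovered from the OCT distance measurements: a point p on a sphere with center c and radius r satisfies |p|² = 2c·p + (r² − |c|²), which is linear in c and the constant term. A generic sketch under that formulation (not the authors' implementation):

```python
import numpy as np

def fit_sphere(pts):
    """Algebraic least-squares sphere fit to an (N, 3) array of surface
    points. Returns (center, radius)."""
    A = np.hstack([2 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius
```

The ellipsoidal model the paper transitions to extends the same idea with the remaining quadric terms (x², y², z², and cross terms) in the design matrix, at the cost of needing points spread over more of the surface, which is why accuracy improves as the sensor moves.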
A deep learning-driven method for safe and effective ERCP cannulation.
IF 2.3 | CAS Region 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-02-07 DOI: 10.1007/s11548-025-03329-w
Yuying Liu, Xin Chen, Siyang Zuo
Purpose: In recent years, detection of the duodenal papilla and the surgical cannula has become a critical task in computer-assisted endoscopic retrograde cholangiopancreatography (ERCP) cannulation. The complex surgical anatomy, together with the small size of the duodenal papillary orifice and its high similarity to the background, poses significant challenges for effective computer-assisted cannulation. To address these challenges, we present a deep learning-driven graphical user interface (GUI) to assist ERCP cannulation.

Methods: Tailored to the ERCP scenario, we propose a deep learning method for duodenal papilla and surgical cannula detection that uses four swin transformer decoupled heads (4STDH). Four prediction heads detect objects of different sizes, a swin transformer module identifies attention regions to deepen the prediction capacity, and decoupling the classification and regression networks significantly improves accuracy and robustness through separate prediction. We also introduce a dataset on papilla and cannula (DPAC) consisting of 1840 annotated endoscopic images, which will be made publicly available. We integrated 4STDH and several state-of-the-art methods into the GUI and compared them.

Results: On the DPAC dataset, 4STDH outperforms state-of-the-art methods with an mAP of 93.2% and superior generalization performance. The GUI additionally provides real-time positions of the papilla and cannula, along with the in-plane distance and direction the cannula must travel to reach the cannulation position.

Conclusion: We validated the GUI's performance on human gastrointestinal endoscopic videos, showing deep learning's potential to enhance the safety and efficiency of clinical ERCP cannulation.

Citations: 0
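The reported mAP of 93.2% rests on intersection-over-union (IoU) matching between predicted and ground-truth boxes: a detection counts as correct only if its IoU with a ground-truth box clears a threshold. The core computation, sketched generically:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes, each given as (x1, y1, x2, y2)."""
    # intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(ix2 - ix1, 0.0) * max(iy2 - iy1, 0.0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

mAP then averages, over classes and recall levels, the precision of detections matched at the chosen IoU threshold; the small, background-like papilla the entry describes is exactly the kind of target that loses IoU quickly under localization error.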
German surgeons' perspective on the application of artificial intelligence in clinical decision-making.
IF 2.3 | CAS Region 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-02-05 DOI: 10.1007/s11548-025-03326-z
Jonas Henn, Tijs Vandemeulebroucke, Simon Hatterscheidt, Jonas Dohmen, Jörg C Kalff, Aimee van Wynsberghe, Hanno Matthaei
Purpose: Artificial intelligence (AI) is transforming clinical decision-making (CDM). Its application should be a conscious choice to avoid technological determinism, and surgeons' perspectives are needed to guide further implementation.

Methods: We conducted an online survey among German surgeons on digitalization and AI in CDM, specifically for acute abdominal pain (AAP). The survey included Likert items and scales.

Results: We analyzed 263 responses. Seventy-one percent of participants were male, with a median age of 49 years (IQR 41-57). Seventy-three percent held a senior role, with a median of 22 years of work experience (IQR 13-28). AI in CDM was seen as helpful for workload management (48%) but not for preventing unnecessary treatments (32%). Safety (95%), evidence (94%), and usability (96%) were prioritized over costs (43%) for implementation. Concerns included the loss of practical CDM skills (81%) and ethical issues such as transparency (52%), patient trust (45%), and physician integrity (44%). Traditional CDM for AAP was seen as experience-based (93%) and not standardized (31%), whereas AI was perceived to assist with urgency triage (60%) and resource management (59%). Generation Y participants showed more confidence in AI for CDM (P = 0.001), while participants working in primary care hospitals were less confident (P = 0.021).

Conclusion: Participants saw the potential of AI for organizational tasks but were hesitant about its use in CDM. Concerns about trust and performance need to be addressed through education and critical evaluation. In the future, AI may provide useful decision support, but it will not replace the human component.

Citations: 0
An intuitive guidewire control mechanism for robotic intervention.
IF 2.3 | CAS Region 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-02-01 Epub Date: 2024-10-07 DOI: 10.1007/s11548-024-03279-9
Rohit Dey, Yichen Guo, Yang Liu, Ajit Puri, Luis Savastano, Yihao Zheng
Purpose: Teleoperated interventional robotic systems (TIRs) are developed to reduce physicians' radiation exposure and physical stress and to enhance the accuracy and stability of device manipulation. Nevertheless, TIRs are not widely adopted, partly for lack of intuitive control interfaces. Current TIR interfaces such as joysticks, keyboards, and touchscreens differ significantly from traditional manual techniques, resulting in a long learning curve. To this end, this research introduces a novel control mechanism for intuitive operation and seamless adoption of TIRs.

Methods: An off-the-shelf medical torque device augmented with a micro-electromagnetic tracker serves as the control interface, preserving the tactile sensation and muscle memory integral to interventionalists' proficiency. Control inputs to drive the TIR are extracted by real-time motion mapping of the interface. To verify that the proposed mechanism can accurately operate the TIR, evaluation experiments were conducted using industrial-grade encoders.

Results: A mean tracking error of 0.32 ± 0.12 mm in the linear direction and 0.54 ± 0.07° in the angular direction was achieved. The tracking time lag averaged 125 ms, estimated using a Padé approximation. Ergonomically, the developed control interface is 3.5 mm larger in diameter and 4.5 g heavier than traditional torque devices.

Conclusion: With an uncanny resemblance to traditional torque devices and results comparable to state-of-the-art commercially available TIRs, this research provides an intuitive control interface for potentially wider clinical adoption of robot-assisted interventions.

Citations: 0 (pages 333-344)