Title: MINARO DRS: usability study of a robotic-assisted laminectomy
Authors: Manuel Vossel, Lukas Theisgen, Noah Wickel, Lovis Phlippen, Rastislav Pjontek, Sergey Drobinsky, Hans Clusmann, Klaus Radermacher, Christian Blume, Matías de la Fuente
DOI: 10.1007/s11548-024-03285-x
Journal: International Journal of Computer Assisted Radiology and Surgery, pp. 357-367, published 2025-02-01 (open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11807922/pdf/)

Purpose: Although the literature shows that robotic assistance can support the surgeon, robotic systems are not widely used in clinics. They often incorporate large robotic arms adopted from the manufacturing industry, which pose safety hazards when in contact with the patient or surgical staff. We approached this limitation with a modular dual-robot system consisting of an ultra-lightweight carrier robot for rough prepositioning and small, highly dynamic, application-specific, interchangeable tooling robots.

Methods: A formative usability study with N = 10 neurosurgeons was conducted using a prototype of a novel tooling robot for laminectomy to evaluate the system's usability. The participants performed three experiments with the robotic system: (1) prepositioning with the carrier robot, and milling into (2) a block phantom as well as (3) a spine model.

Results: All neurosurgeons were able to perform a simulated laminectomy on a spine phantom using the robotic system. On average, they rated the usability of this first prototype between good and excellent (SUS score above 75%). Eight of the ten participants preferred robotic-assisted milling over manual milling. For prepositioning, the developed haptic guidance showed significantly higher effectiveness and efficiency than visual navigation.

Conclusion: The proposed dual-robot system showed the potential to increase safety in the operating room because of the synergistic hands-on control and the ultra-lightweight design of the carrier robot. The modular design allows easy adaptation to various surgical procedures. However, improvements are needed in the ergonomics of the tooling robot and the complexity of the virtual fixtures. The cooperative dual-robot system can subsequently be tested in a cadaver laboratory and in vivo on animals.
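The usability figure quoted in the Results follows the standard System Usability Scale scoring rule: ten Likert items rated 1-5, odd-numbered items positively worded, even-numbered items negatively worded, with the summed contributions rescaled to 0-100. A minimal sketch of that rule (the scoring convention is standard; this is not the authors' code):

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    Likert responses in the range 1-5. Odd-numbered items (index 0, 2, ...)
    contribute (score - 1); even-numbered items contribute (5 - score)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly ten responses in the range 1-5")
    contrib = sum(r - 1 if i % 2 == 0 else 5 - r
                  for i, r in enumerate(responses))
    return contrib * 2.5  # rescale the 0-40 sum to 0-100

# Example: strong agreement with positive items, strong disagreement with negative ones.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # -> 100.0
```

A mean score above 68 is conventionally read as above-average usability, which puts the reported 75+ in the good-to-excellent band.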
Title: Classification of speech arrests and speech impairments during awake craniotomy: a multi-databases analysis
Authors: Ilias Maoudj, Atsushi Kuwano, Céline Panheleux, Yuichi Kubota, Takakazu Kawamata, Yoshihiro Muragaki, Ken Masamune, Romuald Seizeur, Guillaume Dardenne, Manabu Tamura
DOI: 10.1007/s11548-024-03301-0
Journal: International Journal of Computer Assisted Radiology and Surgery, pp. 217-224, published 2025-02-01

Purpose: Awake craniotomy presents a unique opportunity to map and preserve critical brain functions, particularly speech, during tumor resection. The ability to accurately assess linguistic functions in real time not only enhances surgical precision but also contributes significantly to improving postoperative outcomes. Today, however, this evaluation is subjective, as it relies solely on a clinician's observations. This paper explores the use of a deep-learning-based model for the objective assessment of speech arrest and speech impairments during awake craniotomy.

Methods: We extracted 1883 three-second audio clips containing the patient's response following direct electrical stimulation from 23 awake craniotomies recorded in two operating rooms of the Tokyo Women's Medical University Hospital (Japan) and two awake craniotomies recorded at the University Hospital of Brest (France). A Wav2Vec2-based model was trained and used to detect speech arrests and speech impairments. Experiments were performed with different dataset settings and preprocessing techniques, and the model's performance was evaluated using the F1-score.

Results: The F1-score was 84.12% when the model was trained and tested on Japanese data only. In a cross-language setting, the F1-score was 74.68% when the model was trained on Japanese data and tested on French data.

Conclusions: The results are encouraging even in a cross-language setting, but further evaluation is required. The integration of preprocessing techniques, in particular noise reduction, improved the results significantly.
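The F1-score used to evaluate the detector is the harmonic mean of precision and recall, computed from true-positive, false-positive, and false-negative counts. A minimal illustration with invented counts (not the study's confusion matrix):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from raw counts."""
    precision = tp / (tp + fp)  # fraction of detections that are correct
    recall = tp / (tp + fn)     # fraction of true events that are detected
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for a detector with balanced errors.
print(round(100 * f1_score(tp=80, fp=15, fn=15), 2))  # -> 84.21
```

Because F1 ignores true negatives, it is a sensible choice here: most stimulation responses are normal speech, so plain accuracy would be inflated by the easy majority class.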
Title: Computed tomography radiomics in predicting patient satisfaction after robotic-assisted total knee arthroplasty
Authors: Run Tian, Xudong Duan, Fangze Xing, Yiwei Zhao, ChengYan Liu, Heng Li, Ning Kong, Ruomu Cao, Huanshuai Guan, Yiyang Li, Xinghua Li, Jiewen Zhang, Kunzheng Wang, Pei Yang, Chunsheng Wang
DOI: 10.1007/s11548-024-03192-1
Journal: International Journal of Computer Assisted Radiology and Surgery, pp. 237-248, published 2025-02-01

Purpose: After robotic-assisted total knee arthroplasty (RA-TKA), some patients still experience joint discomfort. We aimed to establish an effective machine learning model that integrates radiomic features extracted from computed tomography (CT) scans with relevant clinical information to predict patient satisfaction three months after RA-TKA.

Materials and methods: After careful selection, data from 142 patients were randomly divided into a training set (n = 99) and a test set (n = 43), approximately a 7:3 ratio. A total of 1329 radiomic features were extracted from the regions of interest delineated in the CT scans. The features were standardized using normalization algorithms, and a least absolute shrinkage and selection operator (LASSO) regression model was employed to select radiomic features with ICC > 0.75 and P < 0.05, generating the Rad-score as a feature marker. Univariate and multivariate logistic regression were then used to screen clinical information (age, body mass index, operation time, gender, surgical side, comorbidities, preoperative KSS score, preoperative range of motion (ROM), preoperative and postoperative HKA angle, preoperative and postoperative VAS score) as potential predictive factors. A satisfaction-scale score ≥ 20 indicated patient satisfaction. Finally, three prediction models were established, based on radiomic features, clinical features, and their fusion. Model performance was evaluated using receiver operating characteristic (ROC) curves and decision curve analysis.

Results: In the training set, the area under the curve (AUC) of the clinical model was 0.793 (95% CI 0.681-0.906), that of the radiomic model 0.854 (95% CI 0.743-0.964), and that of the combined radiomic-clinical model 0.899 (95% CI 0.804-0.995). In the test set, the AUC of the clinical model was 0.908 (95% CI 0.814-1.000), that of the radiomic model 0.709 (95% CI 0.541-0.878), and that of the combined radiomic-clinical model 0.928 (95% CI 0.842-1.000). The AUC of the radiomic-clinical model was significantly higher than that of the other two models, and the decision curve analysis indicated its clinical application value.

Conclusion: We developed a radiomics-based nomogram model using CT imaging to predict the satisfaction of RA-TKA patients at three months postoperatively. This model integrated clinical and radiomic features and demonstrated good predictive performance and excellent clinical application potential.
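The AUC values reported above have a useful rank interpretation: the AUC equals the probability that a randomly chosen satisfied patient receives a higher model output than a randomly chosen unsatisfied one (the normalised Mann-Whitney U statistic). A small sketch with invented scores, not study data:

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC: the fraction of (positive, negative) pairs in which
    the positive case is scored higher, counting ties as one half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

satisfied = [0.9, 0.8, 0.75, 0.6]   # hypothetical model outputs
unsatisfied = [0.7, 0.4, 0.3]
print(auc(satisfied, unsatisfied))  # 11 of 12 pairs correctly ordered -> ~0.917
```

This pairwise view also explains why AUC is threshold-free: it depends only on the ordering of the scores, not on where the satisfaction cut-off is placed.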
Title: G-SET-DCL: a guided sequential episodic training with dual contrastive learning approach for colon segmentation
Authors: Samir Farag Harb, Asem Ali, Mohamed Yousuf, Salwa Elshazly, Aly Farag
DOI: 10.1007/s11548-024-03319-4
Journal: International Journal of Computer Assisted Radiology and Surgery, pp. 279-287, published 2025-02-01

Purpose: This article introduces a novel deep learning approach that substantially improves the accuracy of colon segmentation even with limited data annotation, enhancing the overall effectiveness of the CT colonography pipeline in clinical settings.

Methods: The proposed approach integrates 3D contextual information via guided sequential episodic training, in which a query CT slice is segmented by exploiting its previous labeled CT slice (the support). Segmentation starts by detecting the rectum using a Markov random field-based algorithm. Supervised sequential episodic training is then applied to the remaining slices, while contrastive learning is employed to enhance feature discriminability and thereby improve segmentation accuracy.

Results: The proposed method, evaluated on 98 abdominal scans of prepped patients, achieved a Dice coefficient of 97.3% and a polyp information preservation accuracy of 98.28%. Statistical analysis, including 95% confidence intervals, underscores the method's robustness and reliability. Clinically, this high level of accuracy is vital for preserving critical polyp details, which are essential for accurate automatic diagnostic evaluation. The method also performs reliably with limited annotated data: it achieved a Dice coefficient of 97.15% when trained on fewer annotated CT scans (10) than the testing dataset contained (88).

Conclusions: The proposed sequential segmentation approach achieves promising results in colon segmentation. A key strength of the method is its ability to generalize effectively even with limited annotated datasets, a common challenge in medical imaging.
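The Dice coefficient quoted in the Results is the standard volumetric overlap measure 2|A∩B| / (|A| + |B|) between a predicted and a ground-truth binary mask. A brief NumPy sketch of the metric (the metric is standard; this is not the authors' evaluation code):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|). Two empty masks are scored as 1.0."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.array([[0, 1, 1],
                 [0, 1, 0]])
truth = np.array([[0, 1, 1],
                  [1, 1, 0]])
print(dice(pred, truth))  # 2*3 / (3+4) ≈ 0.857
```

At the reported 97%+ level, Dice is nearly saturated, which is why the authors additionally report polyp information preservation, a measure of whether small but diagnostically critical structures survive the segmentation.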
Title: Multi-institutional development and testing of attention-enhanced deep learning segmentation of thyroid nodules on ultrasound
Authors: Joseph L Cozzi, Hui Li, Jordan D Fuhrman, Li Lan, Jelani Williams, Brendan Finnerty, Thomas J Fahey, Abhinay Tumati, Joshua Genender, Xavier M Keutgen, Maryellen L Giger
DOI: 10.1007/s11548-024-03294-w
Journal: International Journal of Computer Assisted Radiology and Surgery, pp. 259-267, published 2025-02-01

Purpose: Thyroid nodules are common, and ultrasound-based risk stratification using the ACR's TIRADS classification is a key step in predicting nodule pathology. Determining thyroid nodule contours is necessary for the calculation of TIRADS scores and can also be used in the development of machine learning nodule diagnosis systems. This paper presents the development, validation, and multi-institutional independent testing of a machine learning system for the automatic segmentation of thyroid nodules on ultrasound.

Methods: The datasets, containing a total of 1595 thyroid ultrasound images from 520 patients with thyroid nodules, were retrospectively collected under IRB approval from University of Chicago Medicine (UCM) and Weill Cornell Medical Center (WCMC). Nodules were manually contoured by a team of UCM and WCMC physicians to establish ground truth. An AttU-Net, a U-Net architecture with additional attention weighting functions, was trained for the segmentations. The algorithm was validated through fivefold cross-validation by nodule and was tested on two independent test sets, one from UCM and one from WCMC. The Dice similarity coefficient (DSC) and percent Hausdorff distance (%HD), the Hausdorff distance reported as a percentage of the nodule's effective diameter, served as the performance metrics.

Results: On multi-institutional independent testing, the AttU-Net yielded average DSCs (std. deviation) of 0.915 (0.04) and 0.922 (0.03) and %HDs (std. deviation) of 12.9% (4.6) and 13.4% (6.3) on the UCM and WCMC test sets, respectively. Similarity testing showed that the algorithm's performance on the two institutional test sets was equivalent up to margins of ΔDSC ≤ 0.013 and Δ%HD ≤ 1.73%.

Conclusions: This work presents a robust automatic thyroid nodule segmentation algorithm that could be implemented in risk stratification systems. Future work is merited to incorporate this segmentation method within an automatic thyroid classification system.
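The %HD metric above normalises the Hausdorff distance (the worst-case contour-to-contour deviation) by the nodule's effective diameter, making errors comparable across nodule sizes. A sketch under the assumption that "effective diameter" means the diameter of a circle with the nodule's area (the abstract does not spell out the definition):

```python
import numpy as np

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two 2D point sets of shape
    (N, 2) and (M, 2): the largest nearest-neighbour gap in either direction."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def percent_hd(contour_pred, contour_true, nodule_area):
    """Hausdorff distance as a percentage of the effective diameter,
    here taken as the diameter of a circle with the nodule's area."""
    eff_diameter = 2.0 * np.sqrt(nodule_area / np.pi)
    return 100.0 * hausdorff(contour_pred, contour_true) / eff_diameter
```

Unlike Dice, which averages over the whole region, %HD is driven by the single worst contour point, so the two metrics together capture both overall overlap and local boundary failures.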
Title: Synchronising a stereoscopic surgical video stream using specular reflection
Authors: Kilian Chandelon, Adrien Bartoli
DOI: 10.1007/s11548-024-03232-w
Journal: International Journal of Computer Assisted Radiology and Surgery, pp. 289-299, published 2025-02-01

Purpose: A stereoscopic surgical video stream consists of left-right image pairs provided by a stereo endoscope. While the surgical display shows these image pairs synchronised, most capture cards cause de-synchronisation, meaning that the paired left and right images may no longer correspond once used in downstream tasks such as stereo depth computation. The stereo synchronisation problem is to recover the corresponding left-right images. It is particularly challenging in the surgical setting, owing to the moist tissues, rapid camera motion, quasi-staticity and the real-time processing requirement. Existing methods exploit image cues from the diffuse reflection component and are defeated by these challenges.

Methods: We propose to exploit the specular reflection instead. Specifically, we propose a powerful left-right comparison score (LRCS) using the specular highlights commonly occurring on moist tissues. We detect the highlights using a neural network, characterise them with invariant descriptors, match them, and use the number of matches to form the proposed LRCS. We evaluate against 147 existing LRCS in 44 challenging robotic partial nephrectomy and robotic-assisted hepatic resection video sequences with simulated and real de-synchronisation.

Results: The proposed LRCS outperforms the alternatives, with average and maximum offsets of 0.055 and 1 frames and 94.1 ± 3.6% successfully synchronised frames. In contrast, the best existing LRCS achieves average and maximum offsets of 0.3 and 3 frames and 81.2 ± 6.4% successfully synchronised frames.

Conclusion: The use of specular reflection brings a tremendous boost to the real-time surgical stereo synchronisation problem.
Title: Leveraging domain knowledge for synthetic ultrasound image generation: a novel approach to rare disease AI detection
Authors: M Mendez, F Castillo, L Probyn, S Kras, P N Tyrrell
DOI: 10.1007/s11548-024-03309-6
Journal: International Journal of Computer Assisted Radiology and Surgery, pp. 415-431, published 2025-02-01

Purpose: This study explores the use of deep generative models to create synthetic ultrasound images for the detection of hemarthrosis in hemophilia patients. Addressing the challenge of sparse datasets in rare disease diagnostics, the study aims to enhance AI model robustness and accuracy by integrating domain knowledge into the synthetic image generation process.

Methods: The study employed two ultrasound datasets: a base dataset (Db) of knee recess distension images from non-hemophiliac patients and a target dataset (Dt) of hemarthrosis images from hemophiliac patients. The synthetic generation framework included a content generator (Gc) trained on Db and a context generator (Gs) to adapt these images to match Dt's context. This approach generated a synthetic target dataset (Ds), primed for AI training in rare disease research. The assessment of synthetic image generation involved expert evaluations, statistical analysis, and the use of domain-invariant perceptual distance and Fréchet inception distance for quality measurement.

Results: Expert evaluation revealed that images produced by the synthetic generation framework were comparable to real ones, with no significant difference in overall quality or anatomical accuracy. Additionally, the use of synthetic data in training convolutional neural networks demonstrated robustness in detecting hemarthrosis, especially with limited sample sizes.

Conclusion: This study presents a novel approach for generating synthetic ultrasound images for rare disease detection, such as hemarthrosis in hemophiliac knees. By leveraging deep generative models and integrating domain knowledge, the proposed framework addresses the limitations of sparse datasets and enhances AI model training and robustness. The synthetic images produced are of high quality and contribute significantly to AI-driven diagnostics in rare diseases, highlighting the potential of synthetic data in medical imaging.
Title: Unsupervised stain augmentation enhanced glomerular instance segmentation on pathology images
Authors: Fan Yang, Qiming He, Yanxia Wang, Siqi Zeng, Yingming Xu, Jing Ye, Yonghong He, Tian Guan, Zhe Wang, Jing Li
DOI: 10.1007/s11548-024-03154-7
Journal: International Journal of Computer Assisted Radiology and Surgery, pp. 225-236, published 2025-02-01

Purpose: In pathology images, different stains highlight different glomerular structures, so a supervised deep-learning-based glomerular instance segmentation model trained on a single stain performs poorly on other stains. However, it is difficult to obtain a training set with multiple stains because labeling pathology images is very time-consuming and tedious. In this paper, we therefore propose an unsupervised stain-augmentation-based method for segmentation of glomerular instances.

Methods: We realized conversion between different staining methods such as PAS, MT and PASM using contrastive unpaired translation (CUT), thereby improving the staining diversity of the training set. Moreover, we replaced the backbone of Mask R-CNN with a Swin Transformer to further improve the efficiency of feature extraction and thus achieve better performance on the instance segmentation task.

Results: To validate the method, we constructed a dataset from 216 WSIs of the three stains. In-depth experiments verified that the stain-augmentation-based instance segmentation method outperforms existing methods across all metrics for the PAS, PASM, and MT stains. Ablation experiments further demonstrate the effectiveness of the proposed modules.

Conclusion: This study demonstrated the potential of unsupervised stain augmentation to improve glomerular segmentation in pathology analysis. Future research could extend this approach to other complex segmentation tasks to further explore the potential of stain augmentation techniques in different domains of pathology image analysis.
Title: Preoperative and intraoperative laparoscopic liver surface registration using deep graph matching of representative overlapping points
Authors: Yue Dai, Xiangyue Yang, Junchen Hao, Huoling Luo, Guohui Mei, Fucang Jia
DOI: 10.1007/s11548-024-03312-x
Journal: International Journal of Computer Assisted Radiology and Surgery, pp. 269-278, published 2025-02-01

Purpose: In laparoscopic liver surgery, registering preoperative CT-extracted 3D models with intraoperative laparoscopic video reconstructions of the liver surface can help surgeons predict critical liver anatomy. However, the registration process is challenged by non-rigid deformation of the organ due to intraoperative pneumoperitoneum pressure, partial visibility of the liver surface, and surface reconstruction noise.

Methods: First, we learn point-by-point descriptors and encode location information to alleviate the limitations of descriptors in location perception. In addition, we introduce a GeoTransformer to enhance geometry perception and cope with inconspicuous liver surface features. Finally, we construct a deep graph matching module that optimizes the descriptors and learns overlap masks to robustly estimate the transformation parameters from representative overlapping points.

Results: Evaluation against comparative methods on both simulated and real datasets shows that our method achieves state-of-the-art results, with the lowest surface registration error (SRE) of 4.12 mm and the highest inlier ratio (IR) of 53.31% and match score (MS) of 28.17%.

Conclusion: A highly accurate and robust initial registration obtained from partial information can be achieved while meeting the speed requirement. Non-rigid registration can further enhance the accuracy of the registration process on this basis.
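Once representative overlapping point correspondences are established, the rigid transformation between the two surfaces is typically estimated by a least-squares fit; a generic Kabsch/SVD sketch of that final step (an assumption about the estimator, not the authors' implementation):

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping matched points
    src -> dst, both of shape (N, 3), via the Kabsch/SVD method."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction so R is a proper rotation (no reflection).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s
```

A surface registration error like the SRE above can then be computed as the mean distance between the transformed source points and their nearest target surface points; learned overlap masks matter because points outside the overlap would otherwise bias this fit.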
Title: Optimization of percutaneous intervention robotic system for skin insertion force
Authors: Benfang Duan, Biao Jia, Cheng Wang, Shijia Chen, Jun Xu, Gao-Jun Teng
DOI: 10.1007/s11548-024-03274-0
Journal: International Journal of Computer Assisted Radiology and Surgery, pp. 345-355, published 2025-02-01

Purpose: Percutaneous puncture is a common interventional procedure, and its effectiveness is influenced by the insertion force of the needle. To optimize outcomes, we focus on reducing the peak force of the needle in the skin, aiming to apply this method to other tissue layers as well.

Methods: We developed a clinical puncture system and set and measured various variables. We analyzed their effects, introduced admittance control, set thresholds, and adjusted parameters. Finally, we validated these methods to confirm their effectiveness.

Results: Our system meets the application requirements. We assessed the impact of the variables on peak force and validated the effectiveness of the new method. Results show a reduction of about 50% in peak force compared with the maximum-force condition and about 13% compared with the minimum-force condition. Finally, we summarized the factors to consider when applying the method.

Conclusion: To suppress peak force, initial puncture variables should be set according to the observed trends in their impact, and the factors of the new method should then be introduced using these initial settings. When selecting these factors, the characteristics of the new method must also be considered. This process helps to better optimize peak puncture force.
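Admittance control, as referenced in the Methods, maps a measured interaction force to a commanded motion so the tool yields when resistance builds up. A generic single-axis sketch with a dead-zone threshold; the structure, parameter names, and values are illustrative and not the authors' controller or tuning:

```python
def admittance_velocity(f_measured, f_threshold=2.0, damping=50.0, v_max=0.01):
    """One axis of a simple admittance law: command a velocity (m/s)
    proportional to the measured force (N) exceeding a threshold,
    scaled by a virtual damping (N*s/m) and capped at v_max."""
    excess = f_measured - f_threshold
    if excess <= 0.0:
        return 0.0  # inside the dead zone: hold position
    return min(excess / damping, v_max)

# Below threshold the needle holds; above it, the axis yields smoothly.
print(admittance_velocity(1.0))  # -> 0.0
print(admittance_velocity(3.0))  # capped at v_max -> 0.01
```

Letting the insertion axis yield in proportion to the sensed force is one plausible mechanism for the peak-force suppression reported here, since the compliance limits how sharply force can spike at the moment of skin breakthrough.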