Latest articles from the International Journal of Computer Assisted Radiology and Surgery

High-quality semi-supervised anomaly detection with generative adversarial networks.
IF 2.3, CAS Tier 3, Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2024-11-01 Epub Date: 2023-11-09 DOI: 10.1007/s11548-023-03031-9
Yuki Sato, Junya Sato, Noriyuki Tomiyama, Shoji Kido
{"title":"High-quality semi-supervised anomaly detection with generative adversarial networks.","authors":"Yuki Sato, Junya Sato, Noriyuki Tomiyama, Shoji Kido","doi":"10.1007/s11548-023-03031-9","DOIUrl":"10.1007/s11548-023-03031-9","url":null,"abstract":"<p><strong>Purpose: </strong>The visualization of an anomaly area is easier in anomaly detection methods that use generative models rather than classification models. However, achieving both anomaly detection accuracy and a clear visualization of anomalous areas is challenging. This study aimed to establish a method that combines both detection accuracy and clear visualization of anomalous areas using a generative adversarial network (GAN).</p><p><strong>Methods: </strong>In this study, StyleGAN2 with adaptive discriminator augmentation (StyleGAN2-ADA), which can generate high-resolution and high-quality images with limited number of datasets, was used as the image generation model, and pixel-to-style-to-pixel (pSp) encoder was used to convert images into intermediate latent variables. We combined existing methods for training and proposed a method for calculating anomaly scores using intermediate latent variables. The proposed method, which combines these two methods, is called high-quality anomaly GAN (HQ-AnoGAN).</p><p><strong>Results: </strong>The experimental results obtained using three datasets demonstrated that HQ-AnoGAN has equal or better detection accuracy than the existing methods. The results of the visualization of abnormal areas using the generated images showed that HQ-AnoGAN could generate more natural images than the existing methods and was qualitatively more accurate in the visualization of abnormal areas.</p><p><strong>Conclusion: </strong>In this study, HQ-AnoGAN comprising StyleGAN2-ADA and pSp encoder was proposed with an optimal anomaly score calculation method. The experimental results show that HQ-AnoGAN can achieve both high abnormality detection accuracy and clear visualization of abnormal areas; thus, HQ-AnoGAN demonstrates significant potential for application in medical imaging diagnosis cases where an explanation of diagnosis is required.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71523347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
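The abstract above describes computing anomaly scores from the intermediate latent variables produced by the pSp encoder. The paper's exact formula is not reproduced here; the minimal Python sketch below only illustrates the general GAN-inversion scoring idea, and the `encoder` and `generator` callables, the weighting `lam`, and the score definition are illustrative assumptions.

```python
import torch

def anomaly_score(x, encoder, generator, lam=0.1):
    """Generic GAN-inversion anomaly score (sketch, not the paper's exact formula).

    x: input image batch, shape (B, C, H, W)
    encoder: maps images to intermediate latent codes w (e.g., a pSp-style encoder)
    generator: maps latent codes back to images (e.g., a StyleGAN2-ADA synthesis network)
    """
    with torch.no_grad():
        w = encoder(x)            # image -> intermediate latent code
        x_hat = generator(w)      # reconstruction of a "normal-looking" image
        w_hat = encoder(x_hat)    # re-encode the reconstruction

    # Residual term: pixel-wise reconstruction error highlights anomalous regions.
    residual = (x - x_hat).abs().flatten(1).mean(dim=1)
    # Latent term: distance between the latent codes of the input and its reconstruction.
    latent = (w - w_hat).flatten(1).pow(2).mean(dim=1)

    score = residual + lam * latent          # higher score -> more anomalous
    heatmap = (x - x_hat).abs().mean(dim=1)  # per-pixel map for visualizing anomalous areas
    return score, heatmap
```

The per-pixel residual map is the quantity that would be overlaid on the input image to visualize the anomalous area.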
Deep learning-based automatic pipeline for 3D needle localization on intra-procedural 3D MRI.
IF 2.3, CAS Tier 3, Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2024-11-01 Epub Date: 2024-03-23 DOI: 10.1007/s11548-024-03077-3
Wenqi Zhou, Xinzhou Li, Fatemeh Zabihollahy, David S Lu, Holden H Wu
{"title":"Deep learning-based automatic pipeline for 3D needle localization on intra-procedural 3D MRI.","authors":"Wenqi Zhou, Xinzhou Li, Fatemeh Zabihollahy, David S Lu, Holden H Wu","doi":"10.1007/s11548-024-03077-3","DOIUrl":"10.1007/s11548-024-03077-3","url":null,"abstract":"<p><strong>Purpose: </strong>Accurate and rapid needle localization on 3D magnetic resonance imaging (MRI) is critical for MRI-guided percutaneous interventions. The current workflow requires manual needle localization on 3D MRI, which is time-consuming and cumbersome. Automatic methods using 2D deep learning networks for needle segmentation require manual image plane localization, while 3D networks are challenged by the need for sufficient training datasets. This work aimed to develop an automatic deep learning-based pipeline for accurate and rapid 3D needle localization on in vivo intra-procedural 3D MRI using a limited training dataset.</p><p><strong>Methods: </strong>The proposed automatic pipeline adopted Shifted Window (Swin) Transformers and employed a coarse-to-fine segmentation strategy: (1) initial 3D needle feature segmentation with 3D Swin UNEt TRansfomer (UNETR); (2) generation of a 2D reformatted image containing the needle feature; (3) fine 2D needle feature segmentation with 2D Swin Transformer and calculation of 3D needle tip position and axis orientation. Pre-training and data augmentation were performed to improve network training. The pipeline was evaluated via cross-validation with 49 in vivo intra-procedural 3D MR images from preclinical pig experiments. The needle tip and axis localization errors were compared with human intra-reader variation using the Wilcoxon signed rank test, with p < 0.05 considered significant.</p><p><strong>Results: </strong>The average end-to-end computational time for the pipeline was 6 s per 3D volume. The median Dice scores of the 3D Swin UNETR and 2D Swin Transformer in the pipeline were 0.80 and 0.93, respectively. The median 3D needle tip and axis localization errors were 1.48 mm (1.09 pixels) and 0.98°, respectively. Needle tip localization errors were significantly smaller than human intra-reader variation (median 1.70 mm; p < 0.01).</p><p><strong>Conclusion: </strong>The proposed automatic pipeline achieved rapid pixel-level 3D needle localization on intra-procedural 3D MRI without requiring a large 3D training dataset and has the potential to assist MRI-guided percutaneous interventions.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11541278/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140195078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
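The pipeline's final step derives the 3D needle tip position and axis orientation from the fine segmentation. The sketch below shows one common way to do this (principal-axis fitting on the segmented voxels); the function name, spacing convention, and the choice of which end of the axis is the tip are assumptions, not the authors' implementation.

```python
import numpy as np

def needle_tip_and_axis(mask, spacing=(1.0, 1.0, 1.0)):
    """Estimate needle tip and axis from a binary 3D segmentation (illustrative sketch).

    mask: (Z, Y, X) 0/1 array containing the segmented needle feature
    spacing: voxel size in mm along (z, y, x)
    Returns (tip_mm, unit_axis_vector).
    """
    coords = np.argwhere(mask > 0) * np.asarray(spacing)  # voxel indices -> mm
    if len(coords) == 0:
        raise ValueError("Empty needle segmentation")

    center = coords.mean(axis=0)
    # Principal direction of the segmented voxel cloud approximates the needle axis.
    _, _, vt = np.linalg.svd(coords - center, full_matrices=False)
    axis = vt[0] / np.linalg.norm(vt[0])

    # Tip: the segmented point that projects farthest along the axis
    # (assumes the tip lies at the +axis end; the opposite end would be the shaft entry).
    proj = (coords - center) @ axis
    tip = coords[np.argmax(proj)]
    return tip, axis
```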
Challenges in multi-centric generalization: phase and step recognition in Roux-en-Y gastric bypass surgery.
IF 2.3, CAS Tier 3, Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2024-11-01 Epub Date: 2024-05-18 DOI: 10.1007/s11548-024-03166-3
Joël L Lavanchy, Sanat Ramesh, Diego Dall'Alba, Cristians Gonzalez, Paolo Fiorini, Beat P Müller-Stich, Philipp C Nett, Jacques Marescaux, Didier Mutter, Nicolas Padoy
{"title":"Challenges in multi-centric generalization: phase and step recognition in Roux-en-Y gastric bypass surgery.","authors":"Joël L Lavanchy, Sanat Ramesh, Diego Dall'Alba, Cristians Gonzalez, Paolo Fiorini, Beat P Müller-Stich, Philipp C Nett, Jacques Marescaux, Didier Mutter, Nicolas Padoy","doi":"10.1007/s11548-024-03166-3","DOIUrl":"10.1007/s11548-024-03166-3","url":null,"abstract":"<p><strong>Purpose: </strong>Most studies on surgical activity recognition utilizing artificial intelligence (AI) have focused mainly on recognizing one type of activity from small and mono-centric surgical video datasets. It remains speculative whether those models would generalize to other centers.</p><p><strong>Methods: </strong>In this work, we introduce a large multi-centric multi-activity dataset consisting of 140 surgical videos (MultiBypass140) of laparoscopic Roux-en-Y gastric bypass (LRYGB) surgeries performed at two medical centers, i.e., the University Hospital of Strasbourg, France (StrasBypass70) and Inselspital, Bern University Hospital, Switzerland (BernBypass70). The dataset has been fully annotated with phases and steps by two board-certified surgeons. Furthermore, we assess the generalizability and benchmark different deep learning models for the task of phase and step recognition in 7 experimental studies: (1) Training and evaluation on BernBypass70; (2) Training and evaluation on StrasBypass70; (3) Training and evaluation on the joint MultiBypass140 dataset; (4) Training on BernBypass70, evaluation on StrasBypass70; (5) Training on StrasBypass70, evaluation on BernBypass70; Training on MultiBypass140, (6) evaluation on BernBypass70 and (7) evaluation on StrasBypass70.</p><p><strong>Results: </strong>The model's performance is markedly influenced by the training data. The worst results were obtained in experiments (4) and (5) confirming the limited generalization capabilities of models trained on mono-centric data. The use of multi-centric training data, experiments (6) and (7), improves the generalization capabilities of the models, bringing them beyond the level of independent mono-centric training and validation (experiments (1) and (2)).</p><p><strong>Conclusion: </strong>MultiBypass140 shows considerable variation in surgical technique and workflow of LRYGB procedures between centers. Therefore, generalization experiments demonstrate a remarkable difference in model performance. These results highlight the importance of multi-centric datasets for AI model generalization to account for variance in surgical technique and workflows. The dataset and code are publicly available at https://github.com/CAMMA-public/MultiBypass140.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11541311/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140959178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep learning-based osteochondritis dissecans detection in ultrasound images with humeral capitellum localization.
IF 2.3, CAS Tier 3, Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2024-11-01 Epub Date: 2024-01-17 DOI: 10.1007/s11548-023-03040-8
Kenta Sasaki, Daisuke Fujita, Kenta Takatsuji, Yoshihiro Kotoura, Masataka Minami, Yusuke Kobayashi, Tsuyoshi Sukenari, Yoshikazu Kida, Kenji Takahashi, Syoji Kobashi
{"title":"Deep learning-based osteochondritis dissecans detection in ultrasound images with humeral capitellum localization.","authors":"Kenta Sasaki, Daisuke Fujita, Kenta Takatsuji, Yoshihiro Kotoura, Masataka Minami, Yusuke Kobayashi, Tsuyoshi Sukenari, Yoshikazu Kida, Kenji Takahashi, Syoji Kobashi","doi":"10.1007/s11548-023-03040-8","DOIUrl":"10.1007/s11548-023-03040-8","url":null,"abstract":"<p><strong>Purpose: </strong>Osteochondritis dissecans (OCD) of the humeral capitellum is a common cause of elbow disorders, particularly among young throwing athletes. Conservative treatment is the preferred treatment for managing OCD, and early intervention significantly influences the possibility of complete disease resolution. The purpose of this study is to develop a deep learning-based classification model in ultrasound images for computer-aided diagnosis.</p><p><strong>Methods: </strong>This paper proposes a deep learning-based OCD classification method in ultrasound images. The proposed method first detects the humeral capitellum detection using YOLO and then estimates the OCD probability of the detected region probability using VGG16. We hypothesis that the performance will be improved by eliminating unnecessary regions. To validate the performance of the proposed method, it was applied to 158 subjects (OCD: 67, Normal: 91) using five-fold-cross-validation.</p><p><strong>Results: </strong>The study demonstrated that the humeral capitellum detection achieved a mean average precision (mAP) of over 0.95, while OCD probability estimation achieved an average accuracy of 0.890, precision of 0.888, recall of 0.927, F1 score of 0.894, and an area under the curve (AUC) of 0.962. On the other hand, when the classification model was constructed for the entire image, accuracy, precision, recall, F1 score, and AUC were 0.806, 0.806, 0.932, 0.843, and 0.928, respectively. The findings suggest the high-performance potential of the proposed model for OCD classification in ultrasonic images.</p><p><strong>Conclusion: </strong>This paper introduces a deep learning-based OCD classification method. The experimental results emphasize the effectiveness of focusing on the humeral capitellum for OCD classification in ultrasound images. Future work should involve evaluating the effectiveness of employing the proposed method by physicians during medical check-ups for OCD.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11541362/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139486838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
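The method is a two-stage detect-then-classify pipeline: YOLO localizes the humeral capitellum and VGG16 classifies only that crop. A hedged Python sketch of such a pipeline is shown below; the `detector` interface (returning one pixel box), the crop size, and the classifier head are placeholders rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

def classify_ocd(image, detector, classifier, crop_size=224):
    """Two-stage sketch: detect the humeral capitellum, then classify only that crop.

    image: (1, 3, H, W) tensor (an ultrasound frame replicated to 3 channels)
    detector: callable returning one (x1, y1, x2, y2) box in pixels (e.g., a trained
              YOLO model; its exact interface is an assumption here)
    classifier: a 2-class CNN, e.g., VGG16 with its final layer replaced
    """
    x1, y1, x2, y2 = detector(image)                  # stage 1: localize the region of interest
    crop = image[:, :, y1:y2, x1:x2]                  # discard the rest of the frame
    crop = F.interpolate(crop, size=(crop_size, crop_size),
                         mode="bilinear", align_corners=False)
    logits = classifier(crop)                         # stage 2: OCD vs. normal on the crop only
    return torch.softmax(logits, dim=1)[0, 1].item()  # estimated OCD probability

# Classifier head set up for two classes (OCD / normal); weights would come from training.
model = vgg16(weights=None)
model.classifier[6] = torch.nn.Linear(4096, 2)
```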
Correction to: Background removal for debiasing computer-aided cytological diagnosis.
IF 2.3, CAS Tier 3, Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2024-11-01 DOI: 10.1007/s11548-024-03236-6
Keita Takeda, Tomoya Sakai, Eiji Mitate
{"title":"Correction to: Background removal for debiasing computer-aided cytological diagnosis.","authors":"Keita Takeda, Tomoya Sakai, Eiji Mitate","doi":"10.1007/s11548-024-03236-6","DOIUrl":"10.1007/s11548-024-03236-6","url":null,"abstract":"","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11541286/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141914557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Objective performance indicators versus GEARS: an opportunity for more accurate assessment of surgical skill.
IF 2.3, CAS Tier 3, Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2024-11-01 Epub Date: 2024-09-25 DOI: 10.1007/s11548-024-03248-2
Marzieh Ershad Langroodi, Xi Liu, Mark R Tousignant, Anthony M Jarc
{"title":"Objective performance indicators versus GEARS: an opportunity for more accurate assessment of surgical skill.","authors":"Marzieh Ershad Langroodi, Xi Liu, Mark R Tousignant, Anthony M Jarc","doi":"10.1007/s11548-024-03248-2","DOIUrl":"10.1007/s11548-024-03248-2","url":null,"abstract":"<p><strong>Purpose: </strong>Surgical skill evaluation that relies on subjective scoring of surgical videos can be time-consuming and inconsistent across raters. We demonstrate differentiated opportunities for objective evaluation to improve surgeon training and performance.</p><p><strong>Methods: </strong>Subjective evaluation was performed using the Global evaluative assessment of robotic skills (GEARS) from both expert and crowd raters; whereas, objective evaluation used objective performance indicators (OPIs) derived from da Vinci surgical systems. Classifiers were trained for each evaluation method to distinguish between surgical expertise levels. This study includes one clinical task from a case series of robotic-assisted sleeve gastrectomy procedures performed by a single surgeon, and two training tasks performed by novice and expert surgeons, i.e., surgeons with no experience in robotic-assisted surgery (RAS) and those with more than 500 RAS procedures.</p><p><strong>Results: </strong>When comparing expert and novice skill levels, OPI-based classifier showed significantly higher accuracy than GEARS-based classifier on the more complex dissection task (OPI 0.93 ± 0.08 vs. GEARS 0.67 ± 0.18; 95% CI, 0.16-0.37; p = 0.02), but no significant difference was shown on the simpler suturing task. For the single-surgeon case series, both classifiers performed well when differentiating between early and late group cases with smaller group sizes and larger intervals between groups (OPI 0.9 ± 0.08; GEARS 0.87 ± 0.12; 95% CI, 0.02-0.04; p = 0.67). When increasing the group size to include more cases, thereby having smaller intervals between groups, OPIs demonstrated significantly higher accuracy (OPI 0.97 ± 0.06; GEARS 0.76 ± 0.07; 95% CI, 0.12-0.28; p = 0.004) in differentiating between the early/late cases.</p><p><strong>Conclusions: </strong>Objective methods for skill evaluation in RAS outperform subjective methods when (1) differentiating expertise in a technically challenging training task, and (2) identifying more granular differences along early versus late phases of a surgeon learning curve within a clinical task. Objective methods offer an opportunity for more accessible and scalable skill evaluation in RAS.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142332054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
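The study trains one classifier per evaluation method (OPI feature vectors vs. GEARS sub-scores) to separate expertise levels, then compares their accuracy. The scikit-learn sketch below illustrates that comparison pattern on placeholder data; the random-forest model, feature dimensions, and labels are assumptions, since the abstract does not specify the classifier used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrices: one row per task execution.
# opi_features: kinematic/event metrics exported from the surgical system;
# gears_scores: the six GEARS sub-scores assigned by raters.
rng = np.random.default_rng(0)
opi_features = rng.normal(size=(60, 20))               # placeholder: 60 executions x 20 OPIs
gears_scores = rng.integers(1, 6, size=(60, 6)).astype(float)
labels = np.array([0] * 30 + [1] * 30)                 # 0 = novice, 1 = expert (placeholder)

# Same classifier and validation protocol for both feature sets, so only the inputs differ.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
opi_acc = cross_val_score(clf, opi_features, labels, cv=5).mean()
gears_acc = cross_val_score(clf, gears_scores, labels, cv=5).mean()
print(f"OPI-based accuracy:   {opi_acc:.2f}")
print(f"GEARS-based accuracy: {gears_acc:.2f}")
```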
Requirement analysis for an AI-based AR assistance system for surgical tools in the operating room: stakeholder requirements and technical perspectives.
IF 2.3, CAS Tier 3, Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2024-11-01 Epub Date: 2024-06-07 DOI: 10.1007/s11548-024-03193-0
E Cramer, A B Kucharski, J Kreimeier, S Andreß, S Li, C Walk, F Merkl, J Högl, P Wucherer, P Stefan, R von Eisenhart-Rothe, P Enste, D Roth
{"title":"Requirement analysis for an AI-based AR assistance system for surgical tools in the operating room: stakeholder requirements and technical perspectives.","authors":"E Cramer, A B Kucharski, J Kreimeier, S Andreß, S Li, C Walk, F Merkl, J Högl, P Wucherer, P Stefan, R von Eisenhart-Rothe, P Enste, D Roth","doi":"10.1007/s11548-024-03193-0","DOIUrl":"10.1007/s11548-024-03193-0","url":null,"abstract":"<p><strong>Purpose: </strong>We aim to investigate the integration of augmented reality (AR) within the context of increasingly complex surgical procedures and instrument handling toward the transition to smart operating rooms (OR). In contrast to cumbersome paper-based surgical instrument manuals still used in the OR, we wish to provide surgical staff with an AR head-mounted display that provides in-situ visualization and guidance throughout the assembly process of surgical instruments. Our requirement analysis supports the development and provides guidelines for its transfer into surgical practice.</p><p><strong>Methods: </strong>A three-phase user-centered design approach was applied with online interviews, an observational study, and a workshop with two focus groups with scrub nurses, circulating nurses, surgeons, manufacturers, clinic IT staff, and members of the sterilization department. The requirement analysis was based on key criteria for usability. The data were analyzed via structured content analysis.</p><p><strong>Results: </strong>We identified twelve main problems with the current use of paper manuals. Major issues included sterile users' inability to directly handle non-sterile manuals, missing details, and excessive text information, potentially delaying procedure performance. Major requirements for AR-driven guidance fall into the categories of design, practicability, control, and integration into the current workflow. Additionally, further recommendations for technical development could be obtained.</p><p><strong>Conclusion: </strong>In conclusion, our insights have outlined a comprehensive spectrum of requirements that are essential for the successful implementation of an AI- and AR-driven guidance for assembling surgical instruments. The consistently appreciative evaluation by stakeholders underscores the profound potential of AR and AI technology as valuable assistance and guidance.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11541324/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141285346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multimodal human-computer interaction in interventional radiology and surgery: a systematic literature review.
IF 2.3, CAS Tier 3, Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2024-10-28 DOI: 10.1007/s11548-024-03263-3
Josefine Schreiter, Florian Heinrich, Benjamin Hatscher, Danny Schott, Christian Hansen
{"title":"Multimodal human-computer interaction in interventional radiology and surgery: a systematic literature review.","authors":"Josefine Schreiter, Florian Heinrich, Benjamin Hatscher, Danny Schott, Christian Hansen","doi":"10.1007/s11548-024-03263-3","DOIUrl":"https://doi.org/10.1007/s11548-024-03263-3","url":null,"abstract":"<p><strong>Purpose: </strong>As technology advances, more research dedicated to medical interactive systems emphasizes the integration of touchless and multimodal interaction (MMI). Particularly in surgical and interventional settings, this approach is advantageous because it maintains sterility and promotes a natural interaction. Past reviews have focused on investigating MMI in terms of technology and interaction with robots. However, none has put particular emphasis on analyzing these kind of interactions for surgical and interventional scenarios.</p><p><strong>Methods: </strong>Two databases were included in the query to search for relevant publications within the past 10 years. After identification, two screening steps followed which included eligibility criteria. A forward/backward search was added to identify more relevant publications. The analysis incorporated the clustering of references in terms of addressed medical field, input and output modalities, and challenges regarding the development and evaluation.</p><p><strong>Results: </strong>A sample of 31 references was obtained (16 journal articles, 15 conference papers). MMI was predominantly developed for laparoscopy and radiology and interaction with image viewers. The majority implemented two input modalities, with voice-hand interaction being the most common combination-voice for discrete and hand for continuous navigation tasks. The application of gaze, body, and facial control is minimal, primarily because of ergonomic concerns. Feedback was included in 81% publications, of which visual cues were most often applied.</p><p><strong>Conclusion: </strong>This work systematically reviews MMI for surgical and interventional scenarios over the past decade. In future research endeavors, we propose an enhanced focus on conducting in-depth analyses of the considered use cases and the application of standardized evaluation methods. Moreover, insights from various sectors, including but not limited to the gaming sector, should be exploited.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142523640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Semi-automatic robotic puncture system based on deformable soft tissue point cloud registration.
IF 2.3, CAS Tier 3, Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2024-10-26 DOI: 10.1007/s11548-024-03247-3
Bo Zhang, Kui Chen, Yuhang Yao, Bo Wu, Qiang Li, Zheming Zhang, Peihua Fan, Wei Wang, Manxia Lin, Masakatsu G Fujie
{"title":"Semi-automatic robotic puncture system based on deformable soft tissue point cloud registration.","authors":"Bo Zhang, Kui Chen, Yuhang Yao, Bo Wu, Qiang Li, Zheming Zhang, Peihua Fan, Wei Wang, Manxia Lin, Masakatsu G Fujie","doi":"10.1007/s11548-024-03247-3","DOIUrl":"https://doi.org/10.1007/s11548-024-03247-3","url":null,"abstract":"<p><strong>Purpose: </strong>Traditional surgical puncture robot systems based on computed tomography (CT) and infrared camera guidance have natural disadvantages for puncture of deformable soft tissues such as the liver. Liver movement and deformation caused by breathing are difficult to accurately assess and compensate by current technical solutions. We propose a semi-automatic robotic puncture system based on real-time ultrasound images to solve this problem.</p><p><strong>Method: </strong>Real-time ultrasound images and their spatial position information can be obtained by robot in this system. By recognizing target tissue in these ultrasound images and using reconstruction algorithm, 3D real-time ultrasound tissue point cloud can be constructed. Point cloud of the target tissue in the CT image can be obtained by using developed software. Through the point cloud registration method based on feature points, two point clouds above are registered. The puncture target will be automatically positioned, then robot quickly carries the puncture guide mechanism to the puncture site and guides the puncture. It takes about just tens of seconds from the start of image acquisition to completion of needle insertion. Patient can be controlled by a ventilator to temporarily stop breathing, and patient's breathing state does not need to be the same as taking CT scan.</p><p><strong>Results: </strong>The average operation time of 24 phantom experiments is 64.5 s, and the average error between the needle tip and the target point after puncture is 0.8 mm. Two animal puncture surgeries were performed, and the results indicated that the puncture errors of these two experiments are 1.76 mm and 1.81 mm, respectively.</p><p><strong>Conclusion: </strong>Robot system can effectively carry out and implement liver tissue puncture surgery, and the success rate of phantom experiments and experiments is 100%. It also shows that the puncture robot system has high puncture accuracy, short operation time, and great clinical value.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142512650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
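The system registers a CT-derived point cloud of the target tissue to the real-time ultrasound point cloud using feature points. As a simplified stand-in for that step, the sketch below computes a least-squares rigid alignment (Kabsch) from already-matched feature points; the paper's method additionally has to cope with soft-tissue deformation, which this rigid sketch does not model.

```python
import numpy as np

def rigid_registration(source_pts, target_pts):
    """Least-squares rigid alignment between corresponding feature points (Kabsch).

    source_pts, target_pts: (N, 3) arrays of matched feature points (e.g., CT vs. ultrasound).
    Returns (R, t) such that R @ source_pts[i] + t ≈ target_pts[i].
    """
    src_c = source_pts.mean(axis=0)
    tgt_c = target_pts.mean(axis=0)
    H = (source_pts - src_c).T @ (target_pts - tgt_c)   # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t
```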
Real-time ultrasound AR 3D visualization toward better topological structure perception for hepatobiliary surgery.
IF 2.3, CAS Tier 3, Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2024-10-14 DOI: 10.1007/s11548-024-03273-1
Yuqi Ji, Tianqi Huang, Yutong Wu, Ruiyang Li, Pengfei Wang, Jiahong Dong, Hongen Liao
{"title":"Real-time ultrasound AR 3D visualization toward better topological structure perception for hepatobiliary surgery.","authors":"Yuqi Ji, Tianqi Huang, Yutong Wu, Ruiyang Li, Pengfei Wang, Jiahong Dong, Honegen Liao","doi":"10.1007/s11548-024-03273-1","DOIUrl":"https://doi.org/10.1007/s11548-024-03273-1","url":null,"abstract":"<p><strong>Purpose: </strong>Ultrasound serves as a crucial intraoperative imaging tool for hepatobiliary surgeons, enabling the identification of complex anatomical structures like blood vessels, bile ducts, and lesions. However, the reliance on manual mental reconstruction of 3D topologies from 2D ultrasound images presents significant challenges, leading to a pressing need for tools to assist surgeons with real-time identification of 3D topological anatomy.</p><p><strong>Methods: </strong>We propose a real-time ultrasound AR 3D visualization method for intraoperative 2D ultrasound imaging. Our system leverages backward alpha blending to integrate multi-planar ultrasound data effectively. To ensure continuity between 2D ultrasound planes, we employ spatial smoothing techniques to interpolate the widely spaced ultrasound planes. A dynamic 3D transfer function is also developed to enhance spatial representation through color differentiation.</p><p><strong>Results: </strong>Comparative experiments involving our AR visualization of 3D ultrasound, alongside AR visualization of 2D ultrasound and 2D visualization of 3D ultrasound, demonstrated that the proposed method significantly reduced operational time(110.25 ± 27.83 s compared to 292 ± 146.63 s and 365.25 ± 131.62 s), improved depth perception and comprehension of complex topologies, contributing to reduced pressure and increased personal satisfaction among users.</p><p><strong>Conclusion: </strong>Quantitative experimental results and feedback from both novice and experienced physicians highlight our system's exceptional ability to enhance the understanding of complex topological anatomy. This improvement is crucial for accurate ultrasound diagnosis and informed surgical decision-making, underscoring the system's clinical applicability.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142480228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
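The visualization integrates multi-planar ultrasound via backward alpha blending. The sketch below shows only the back-to-front compositing core applied to a stack of planes; the per-pixel opacities, plane ordering, and the paper's dynamic 3D transfer function and spatial smoothing are abstracted into the `alphas` input, which is an assumption.

```python
import numpy as np

def composite_back_to_front(slices, alphas):
    """Back-to-front alpha compositing of a stack of 2D slices (illustrative sketch).

    slices: (N, H, W) grayscale ultrasound planes ordered from nearest (index 0) to farthest
    alphas: (N, H, W) per-pixel opacity in [0, 1], e.g., produced by a transfer function
    Returns the composited (H, W) image.
    """
    out = np.zeros_like(slices[0], dtype=float)
    # Iterate from the farthest plane toward the viewer, blending each plane over the result.
    for color, alpha in zip(slices[::-1], alphas[::-1]):
        out = alpha * color + (1.0 - alpha) * out
    return out
```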