Guest Editorial: Special Issue on AI Technologies and Applications in Medical Robots

Xiaozhi Qi, Zhongliang Jiang, Ying Hu, Jianwei Zhang
{"title":"Guest Editorial: Special Issue on Al Technologies and Applications in Medical Robots","authors":"Xiaozhi Qi,&nbsp;Zhongliang Jiang,&nbsp;Ying Hu,&nbsp;Jianwei Zhang","doi":"10.1049/cit2.70019","DOIUrl":null,"url":null,"abstract":"<p>The integration of artificial intelligence (AI) into medical robotics has emerged as a cornerstone of modern healthcare, driving transformative advancements in precision, adaptability and patient outcomes. Although computational tools have long supported diagnostic processes, their role is evolving beyond passive assistance to become active collaborators in therapeutic decision-making. In this paradigm, knowledge-driven deep learning systems are redefining possibilities—enabling robots to interpret complex data, adapt to dynamic clinical environments and execute tasks with human-like contextual awareness.</p><p>The purpose of this special issue is to showcase the latest developments in the application of AI technology in medical robots. The main content includes but is not limited to passive data adaptation, force feedback tracking, image processing and diagnosis, surgical navigation, exoskeleton systems etc. These studies cover various application scenarios of medical robots, with the ultimate goal of maximising AI autonomy.</p><p>We have received 31 paper submissions from around the world, and after a rigorous peer review process, we have finally selected 9 papers for publication. The selected collection of papers covers various fascinating research topics, all of which have achieved key breakthroughs in their respective fields. We believe that these accepted papers have guiding significance for their research fields and can help researchers enhance their understanding of current trends. Sincere thanks to the authors who chose our platform and all the staff who provided assistance for the publication of these papers.</p><p>In the article ‘Model adaptation via credible local context representation’, Tang et al. pointed out that conventional model transfer techniques require labelled source data, which makes them inapplicable in privacy-sensitive medical domains. To address these critical problems of source-free domain adaptation (SFDA), they proposed a credible local context representation (CLCR) method that significantly enhances model generalisation through geometric structure mining in feature space. This method innovatively constructs a two-stage learning framework: introducing a data-enhanced mutual information regularisation term in the pretraining stage of the source model to enhance the model's learning of sample discriminative features; design a deep space fixed step walking strategy during the target domain adaptation phase, dynamically capture the local credible contextual features of each target sample and use them as pseudo-labels for semantic fusion. Experiments on the three benchmark datasets of Office-31, Office Home and VisDA show that CLCR achieves an average accuracy of 89.2% in 12 cross-domain tasks, which is 3.1% higher than the existing optimal SFDA method and even surpasses some domain adaptation methods that require the participation of source data. This work provides a new approach to address the privacy performance conflict in cross-institutional model transfer in healthcare, and its context discovery mechanism has universal significance for unsupervised representation learning.</p><p>In the article ‘A human-robot collaboration method for uncertain surface scanning’, Zhao et al. 
introduces a human–robot collaboration framework for uncertain surface scanning that synergises teleoperation with adaptive force control. The system enables operators to remotely guide scanning trajectories, whereas an admittance controller maintains constant contact force through real-time stiffness adjustment, achieving ± 1 N tracking precision on surfaces with unknown stiffness. Autonomous tool reorientation, triggered when angular deviation exceeds 5°, ensures perpendicular alignment through friction-compensated force perception. Experimental validation, using a mock ultrasound probe, demonstrated 63% workload reduction compared to pure teleoperation, successfully handling both spongy and spring-supported phantoms. The hybrid control architecture decouples human guidance from robotic compliance, permitting simultaneous XY-axis motion control and Z-axis force regulation without prior environmental modelling. This approach bridges human intuition with robotic precision, particularly valuable for medical scanning applications requiring safe tissue interaction.</p><p>In the research entitled ‘AESR3D: 3D Overcomplete Autoencoder for Trabecular CT Super Resolution’, Zhang et al. proposed AESR3D, a 3D overcomplete autoencoder framework, to address the limitations of osteoporosis diagnosis by enhancing low-resolution trabecular CT scans. Current reliance on bone mineral density (BMD) overlooks microstructural deterioration critical for biomechanical strength. AESR3D combines a hybrid CNN-transformer architecture with dual-task regularisation—simultaneously optimising super-resolution reconstruction and low-resolution restoration—to prevent overfitting while recovering structural details. The model achieves state-of-the-art performance (SSIM: 0.996) and demonstrates strong correlation with high-resolution ground truth in trabecular metrics (ICC = 0.917). By integrating unsupervised <i>K</i>-means segmentation, it enables precise visualisation of bone microarchitecture without labelled data. Outperforming existing medical/natural image SR methods, AESR3D bridges micro-CT research and clinical CT applications, offering a noninvasive tool for enhanced osteoporosis assessment and advancing diagnostic accuracy in bone quality evaluation.</p><p>In the paper ‘Segmentation versus Detection: Development and Evaluation of Deep Learning Models for PIRADS Lesions Localisation on Bi-Parametric Prostate MRI’, Min et al. address the critical challenge of automated prostate cancer detection in bi-parametric MRI (bp-MRI) by rigorously comparing segmentation (nnUNet) and object detection (nnDetection) deep learning approaches. Prostate cancer, a leading cause of male mortality, demands precise early diagnosis, yet MRI interpretation remains radiologist-dependent and time-intensive. The authors introduce novel lesion-level sensitivity and precision metrics, overcoming limitations of traditional voxel-wise evaluations, and propose ensemble methods to synergise the strengths of both models. Results demonstrate nnDetection's superior lesion-level sensitivity (80.78% vs. 60.40% for PIRADS ≥ 3 lesions at 3 false positives), whereas nnUNet excels in voxel-level accuracy (DSC 0.46 vs. 0.35). Ensemble techniques further enhance performance, achieving 82.24% lesion-level sensitivity, underscoring their potential to balance detection robustness and spatial precision. 
Validated on external datasets, the framework highlights the clinical viability of combining segmentation and detection paradigms, particularly for MRI-guided biopsies requiring high sensitivity. This work advances computer-aided diagnosis by bridging methodological gaps and providing metrics aligned with clinical priorities, offering a scalable pathway towards improved prostate cancer management through AI-driven lesion localisation.</p><p>In the paper ‘Needle Detection and Localisation for Robot-assisted Subretinal Injection using Deep Learning’, Zhou et al. address the critical challenge of precise needle detection and localisation in robot-assisted subretinal injection, a high-stakes ophthalmic procedure requiring micrometre-level accuracy. Leveraging microscope-integrated optical coherence tomography (MI-OCT), the authors propose a robust framework combining ROI cropping and deep learning to overcome limitations in manual needle tracking caused by tissue deformation and specular noise. Five convolutional neural network architectures were evaluated, with the top-performing model (Network II) achieving 100% detection success on ex vivo porcine eyes and localising needle segments with an Intersection-over-Union of 0.55. By analysing bounding box edges, the method demonstrated sub-10 μm accuracy in depth estimation, crucial for navigating the delicate retinal layers. The integration of neighbouring OCT scans enhanced spatial context awareness, outperforming geometric feature-based approaches. This work advances intraoperative imaging-guided robotics by enabling real-time, deformation-resistant needle tracking, potentially reducing surgical risks in gene therapy delivery and subretinal haemorrhage treatment. The validated framework bridges a critical gap in ophthalmic robotics, offering a pathway towards safer, more precise robotic interventions in retinal surgery.</p><p>In the paper ‘A method for automatic feature points extraction of pelvic surface based on PointMLP_RegNet’, Kou et al. note that the precise extraction of anatomical landmarks from complex pelvic structures is critical for enhancing 3D/3D registration accuracy in robot-assisted fracture reduction. Addressing challenges in manual and conventional automated methods, this study introduces PointMLP_RegNet, a deep learning framework adapted from PointMLP by replacing its classification layer with a regression module to predict spatial coordinates of 10 pelvic landmarks. Trained on a clinical dataset of 40 patient-derived CT-reconstructed point clouds augmented via downsampling, translation, rotation and noise injection, the model demonstrated robust performance through leave-one-out cross-validation. Results revealed sub-5 mm accuracy across all landmarks, with 80% achieving errors below 4 mm, surpassing PointNet++ and PointNet in precision (reducing mean error by 20%–30%) while maintaining superior computational efficiency (0.688 M parameters). By automating feature extraction, the method minimises human variability, streamlines intraoperative registration and improves surgical planning reliability. This innovation bridges technical gaps in pelvic fracture robotics, offering a scalable solution for clinical adoption and underscoring the transformative potential of tailored deep learning architectures in orthopaedic navigation systems.</p><p>In the paper ‘Rehabilitation Exoskeleton System with Bidirectional Virtual Reality Feedback Training Strategy’, Gao et al. 
introduced a VR-integrated exoskeleton system for stroke rehabilitation, combining immersive 3D environments with real-time bidirectional feedback to enhance neural retraining. The system employs a novel muscle activation model merging linear and nonlinear contraction dynamics, addressing limitations of traditional Hill-based models, whereas a WOA-GRNN algorithm achieves precise muscle strength prediction (RMSE: 0.0173, MAPE: 1.25%). Experiments with healthy participants demonstrated synchronised exoskeleton-VR motion mapping and involuntary muscle responses to virtual stimuli, validating neural pathway engagement. Notably, 75% of subjects exhibited subconscious arm movements during VR-induced phantom limb activation, suggesting enhanced proprioceptive integration. This bidirectional feedback framework advances personalised rehabilitation by objectively quantifying recovery through sEMG-driven metrics while maintaining patient engagement through adaptive virtual tasks.</p><p>In the paper ‘A Demonstration Trajectory Segmentation Approach for Wheelchair-mounted Robotic Arms’, Chi et al. proposed a novel trajectory segmentation approach for wheelchair-mounted assistive robots, aiming to enhance their ability to learn and reproduce complex tasks in unstructured environments. The proposed GTW-BP-AR-HMM method integrates the generalised time warping (GTW) algorithm with a beta process autoregressive hidden Markov model (BP-AR-HMM) to address challenges in aligning and segmenting variable-length demonstration trajectories. By first aligning multiple task demonstrations temporally using GTW, the framework mitigates inconsistencies in trajectory lengths, a critical limitation of traditional BP-AR-HMM. Subsequent segmentation identifies motion primitives, enabling the creation of reusable task libraries. Validation on a 6-DOF robotic arm demonstrated high accuracy in segmenting tasks such as holding a water glass and eating, with segmentation points closely matching manual annotations. This approach reduces reliance on expert input, simplifying the demonstration process for nonspecialists while improving the robot's adaptability to user-specific needs. The work underscores the potential of combining temporal alignment and probabilistic modelling to advance assistive robotics in healthcare and home settings.</p><p>In the paper ‘Processing Water-Medium Spinal Endoscopic Images Based on Dual Transmittance’, Hu and Zhang proposed a novel dual-transmittance fusion method to enhance water-medium spinal endoscopic images degraded by suspended contaminants during minimally invasive procedures. By adapting an underwater imaging model to spinal endoscopy, the authors estimate transmittance through boundary constraints and local contrast analysis, addressing light scattering and absorption caused by turbid surgical environments. The fusion of these transmittance maps, optimised via guided filtering, minimises artefacts while preserving structural integrity. Ambient light estimation using a “Shades of Grey” algorithm further ensures balanced colour correction. Experimental validation against classical methods—including WGIF, AGCWD and MSRCR—demonstrates superior performance in entropy, contrast and structural similarity metrics, effectively restoring tissue textures without overexposure or distortion. This physics-informed approach bridges computational efficiency with clinical utility, offering real-time image clarity for precise intraoperative navigation. 
The method's robustness across diverse degradation scenarios, from blood contamination to tool shadows, positions it as a pivotal advancement in enhancing visualisation for complex spinal surgeries, promising improved surgical accuracy and safety.</p>","PeriodicalId":46211,"journal":{"name":"CAAI Transactions on Intelligence Technology","volume":"10 3","pages":"635-637"},"PeriodicalIF":8.4000,"publicationDate":"2025-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cit2.70019","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"CAAI Transactions on Intelligence Technology","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/cit2.70019","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

The integration of artificial intelligence (AI) into medical robotics has emerged as a cornerstone of modern healthcare, driving transformative advancements in precision, adaptability and patient outcomes. Although computational tools have long supported diagnostic processes, their role is evolving beyond passive assistance to become active collaborators in therapeutic decision-making. In this paradigm, knowledge-driven deep learning systems are redefining possibilities—enabling robots to interpret complex data, adapt to dynamic clinical environments and execute tasks with human-like contextual awareness.

The purpose of this special issue is to showcase the latest developments in the application of AI technology in medical robots. Topics include, but are not limited to, passive data adaptation, force-feedback tracking, image processing and diagnosis, surgical navigation and exoskeleton systems. These studies cover a wide range of medical robot application scenarios, with the ultimate goal of maximising AI autonomy.

We received 31 submissions from around the world and, after a rigorous peer-review process, selected nine papers for publication. The selected papers cover a range of fascinating topics, each achieving key breakthroughs in its field. We believe these papers offer guidance for their respective research areas and will help researchers better understand current trends. Our sincere thanks go to the authors who chose our platform and to all the staff who assisted in publishing these papers.

In the article ‘Model adaptation via credible local context representation’, Tang et al. pointed out that conventional model transfer techniques require labelled source data, which makes them inapplicable in privacy-sensitive medical domains. To address this core problem of source-free domain adaptation (SFDA), they proposed a credible local context representation (CLCR) method that significantly enhances model generalisation through geometric structure mining in feature space. The method constructs a two-stage learning framework: in the source-model pretraining stage, it introduces a data-enhanced mutual information regularisation term that strengthens the model's learning of discriminative sample features; in the target-domain adaptation stage, it applies a fixed-step walking strategy in the deep feature space that dynamically captures the credible local context of each target sample and uses it as a pseudo-label for semantic fusion. Experiments on the three benchmark datasets Office-31, Office-Home and VisDA show that CLCR achieves an average accuracy of 89.2% across 12 cross-domain tasks, 3.1% higher than the best existing SFDA method, and it even surpasses some domain adaptation methods that require access to source data. This work offers a new way to resolve the conflict between privacy and performance in cross-institutional model transfer in healthcare, and its context discovery mechanism is of general significance for unsupervised representation learning.
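To make the pseudo-labelling idea concrete, here is a minimal Python sketch of deriving labels from credible local context in feature space: each target sample adopts the averaged prediction of its nearest neighbours, and only high-confidence assignments are kept. The feature shapes, the neighbourhood size `k` and the 0.8 confidence threshold are illustrative assumptions, not the authors' exact algorithm.

```python
# Illustrative sketch of local-context pseudo-labelling (not the CLCR algorithm itself).
import numpy as np

def local_context_pseudo_labels(features, probs, k=5):
    """Assign each target sample the averaged prediction of its k nearest
    neighbours in feature space, keeping only high-confidence labels."""
    # Pairwise Euclidean distances between all target features.
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude the sample itself
    idx = np.argsort(d, axis=1)[:, :k]        # k nearest neighbours per sample
    context = probs[idx].mean(axis=1)         # fuse neighbour predictions
    pseudo = context.argmax(axis=1)
    confident = context.max(axis=1) > 0.8     # keep only "credible" context
    return pseudo, confident

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 64))            # toy target-domain features
probs = rng.dirichlet(np.ones(10), size=200)  # toy source-model soft predictions
labels, mask = local_context_pseudo_labels(feats, probs)
print(labels[mask][:10])
```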

In the article ‘A human-robot collaboration method for uncertain surface scanning’, Zhao et al. introduce a human–robot collaboration framework for uncertain surface scanning that combines teleoperation with adaptive force control. The system enables operators to remotely guide scanning trajectories while an admittance controller maintains a constant contact force through real-time stiffness adjustment, achieving ±1 N tracking precision on surfaces of unknown stiffness. Autonomous tool reorientation, triggered when the angular deviation exceeds 5°, ensures perpendicular alignment through friction-compensated force perception. Experimental validation using a mock ultrasound probe demonstrated a 63% workload reduction compared with pure teleoperation, successfully handling both spongy and spring-supported phantoms. The hybrid control architecture decouples human guidance from robotic compliance, permitting simultaneous XY-axis motion control and Z-axis force regulation without prior environmental modelling. This approach bridges human intuition and robotic precision, and is particularly valuable for medical scanning applications requiring safe tissue interaction.
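The force-regulation idea can be sketched with a one-dimensional admittance law: the controller turns the force error into a motion command, so the probe settles where the measured contact force equals the target, without knowing the surface stiffness in advance. All gains, the spring-contact model and the time step below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of Z-axis admittance control holding a constant contact force
# on a spring-like surface of unknown stiffness (all parameters are assumed).
def simulate_admittance(k_env=800.0, f_des=5.0, m=1.0, b=60.0, dt=0.002, steps=3000):
    z, v = 0.0, 0.0           # probe position (m) and velocity (m/s)
    surface = 0.01            # unknown surface height (m)
    f_meas = 0.0
    for _ in range(steps):
        depth = max(0.0, z - surface)
        f_meas = k_env * depth                  # spring contact model
        # Admittance law: m*dv/dt + b*v = f_des - f_meas (advance until forces match)
        a = (f_des - f_meas - b * v) / m
        v += a * dt
        z += v * dt
    return f_meas

print(f"steady-state contact force ~ {simulate_admittance():.2f} N")  # ~5.00 N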

In the research entitled ‘AESR3D: 3D Overcomplete Autoencoder for Trabecular CT Super Resolution’, Zhang et al. proposed AESR3D, a 3D overcomplete autoencoder framework, to address the limitations of osteoporosis diagnosis by enhancing low-resolution trabecular CT scans. Current reliance on bone mineral density (BMD) overlooks microstructural deterioration critical for biomechanical strength. AESR3D combines a hybrid CNN-transformer architecture with dual-task regularisation—simultaneously optimising super-resolution reconstruction and low-resolution restoration—to prevent overfitting while recovering structural details. The model achieves state-of-the-art performance (SSIM: 0.996) and demonstrates strong correlation with high-resolution ground truth in trabecular metrics (ICC = 0.917). By integrating unsupervised K-means segmentation, it enables precise visualisation of bone microarchitecture without labelled data. Outperforming existing medical/natural image SR methods, AESR3D bridges micro-CT research and clinical CT applications, offering a noninvasive tool for enhanced osteoporosis assessment and advancing diagnostic accuracy in bone quality evaluation.
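The dual-task regularisation can be written as a single objective that combines both reconstruction terms. The sketch below is a minimal rendering of that idea; the MSE losses, the weighting `lam` and the toy volumes are assumptions, not the authors' exact formulation.

```python
# Illustrative dual-task objective: super-resolution against the HR target
# plus a regularising restoration of the LR input itself.
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def dual_task_loss(sr_output, hr_target, restored_lr, lr_input, lam=0.5):
    """loss = MSE(SR(x_lr), x_hr) + lam * MSE(Restore(x_lr), x_lr)"""
    return mse(sr_output, hr_target) + lam * mse(restored_lr, lr_input)

rng = np.random.default_rng(1)
hr = rng.random((32, 32, 32))        # toy 3-D high-resolution volume
lr = hr[::2, ::2, ::2]               # toy low-resolution counterpart
# stand-ins for the two network outputs:
print(dual_task_loss(hr + 0.01, hr, lr + 0.01, lr))
```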

In the paper ‘Segmentation versus Detection: Development and Evaluation of Deep Learning Models for PIRADS Lesions Localisation on Bi-Parametric Prostate MRI’, Min et al. address the critical challenge of automated prostate cancer detection in bi-parametric MRI (bp-MRI) by rigorously comparing segmentation (nnUNet) and object detection (nnDetection) deep learning approaches. Prostate cancer, a leading cause of male mortality, demands precise early diagnosis, yet MRI interpretation remains radiologist-dependent and time-intensive. The authors introduce novel lesion-level sensitivity and precision metrics, overcoming limitations of traditional voxel-wise evaluations, and propose ensemble methods to synergise the strengths of both models. Results demonstrate nnDetection's superior lesion-level sensitivity (80.78% vs. 60.40% for PIRADS ≥ 3 lesions at 3 false positives), whereas nnUNet excels in voxel-level accuracy (DSC 0.46 vs. 0.35). Ensemble techniques further enhance performance, achieving 82.24% lesion-level sensitivity, underscoring their potential to balance detection robustness and spatial precision. Validated on external datasets, the framework highlights the clinical viability of combining segmentation and detection paradigms, particularly for MRI-guided biopsies requiring high sensitivity. This work advances computer-aided diagnosis by bridging methodological gaps and providing metrics aligned with clinical priorities, offering a scalable pathway towards improved prostate cancer management through AI-driven lesion localisation.
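A lesion-level metric of this kind can be sketched by labelling connected components and counting ground-truth lesions hit by a prediction (sensitivity) and predictions that hit a lesion (precision). The 10% overlap criterion below is an illustrative assumption; the paper's exact matching rule may differ.

```python
# Illustrative lesion-level sensitivity/precision from connected components.
import numpy as np
from scipy import ndimage

def lesion_level_metrics(pred_mask, gt_mask, min_overlap=0.1):
    gt_lab, n_gt = ndimage.label(gt_mask)
    pr_lab, n_pr = ndimage.label(pred_mask)
    hit_gt = sum(
        1 for i in range(1, n_gt + 1)
        if (pred_mask & (gt_lab == i)).sum() >= min_overlap * (gt_lab == i).sum()
    )
    hit_pr = sum(
        1 for j in range(1, n_pr + 1)
        if (gt_mask & (pr_lab == j)).sum() >= min_overlap * (pr_lab == j).sum()
    )
    sens = hit_gt / n_gt if n_gt else 1.0
    prec = hit_pr / n_pr if n_pr else 1.0
    return sens, prec

gt = np.zeros((64, 64), bool); gt[10:15, 10:15] = True; gt[40:44, 40:44] = True
pr = np.zeros((64, 64), bool); pr[11:16, 11:16] = True; pr[50:54, 5:9] = True
print(lesion_level_metrics(pr, gt))   # one lesion found, one false positive
```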

In the paper ‘Needle Detection and Localisation for Robot-assisted Subretinal Injection using Deep Learning’, Zhou et al. address the critical challenge of precise needle detection and localisation in robot-assisted subretinal injection, a high-stakes ophthalmic procedure requiring micrometre-level accuracy. Leveraging microscope-integrated optical coherence tomography (MI-OCT), the authors propose a robust framework combining ROI cropping and deep learning to overcome limitations in manual needle tracking caused by tissue deformation and specular noise. Five convolutional neural network architectures were evaluated, with the top-performing model (Network II) achieving 100% detection success on ex vivo porcine eyes and localising needle segments with an Intersection-over-Union of 0.55. By analysing bounding box edges, the method demonstrated sub-10 μm accuracy in depth estimation, crucial for navigating the delicate retinal layers. The integration of neighbouring OCT scans enhanced spatial context awareness, outperforming geometric feature-based approaches. This work advances intraoperative imaging-guided robotics by enabling real-time, deformation-resistant needle tracking, potentially reducing surgical risks in gene therapy delivery and subretinal haemorrhage treatment. The validated framework bridges a critical gap in ophthalmic robotics, offering a pathway towards safer, more precise robotic interventions in retinal surgery.
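The depth-from-bounding-box idea reduces to simple arithmetic on the detected box's edge: scale the pixel offset between the box edge and a reference retinal surface by the scanner's axial pixel spacing. The 3.5 µm/pixel spacing, the row indices and the function name below are illustrative assumptions, not values from the paper.

```python
# Illustrative conversion from a detection box edge to physical needle depth.
def needle_tip_depth_um(box_bottom_row, retina_surface_row, axial_um_per_px=3.5):
    """Depth of the needle tip below the retinal surface, in micrometres."""
    return (box_bottom_row - retina_surface_row) * axial_um_per_px

print(needle_tip_depth_um(box_bottom_row=412, retina_surface_row=400))  # 42.0 um
```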

In the paper ‘A method for automatic feature points extraction of pelvic surface based on PointMLP_RegNet’, Kou et al. note that the precise extraction of anatomical landmarks from complex pelvic structures is critical for enhancing 3D/3D registration accuracy in robot-assisted fracture reduction. Addressing challenges in manual and conventional automated methods, this study introduces PointMLP_RegNet, a deep learning framework adapted from PointMLP by replacing its classification layer with a regression module to predict spatial coordinates of 10 pelvic landmarks. Trained on a clinical dataset of 40 patient-derived CT-reconstructed point clouds augmented via downsampling, translation, rotation and noise injection, the model demonstrated robust performance through leave-one-out cross-validation. Results revealed sub-5 mm accuracy across all landmarks, with 80% achieving errors below 4 mm, surpassing PointNet++ and PointNet in precision (reducing mean error by 20%–30%) while maintaining superior computational efficiency (0.688 M parameters). By automating feature extraction, the method minimises human variability, streamlines intraoperative registration and improves surgical planning reliability. This innovation bridges technical gaps in pelvic fracture robotics, offering a scalable solution for clinical adoption and underscoring the transformative potential of tailored deep learning architectures in orthopaedic navigation systems.
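Architecturally, the adaptation amounts to replacing a point-cloud classifier's output layer with a head that regresses 10 × 3 landmark coordinates. The PyTorch sketch below uses a toy PointNet-style encoder as a stand-in for PointMLP; all layer sizes are assumptions.

```python
# Illustrative coordinate-regression head on a toy point-cloud backbone.
import torch
import torch.nn as nn

class LandmarkRegressor(nn.Module):
    def __init__(self, n_landmarks=10, feat_dim=256):
        super().__init__()
        self.n_landmarks = n_landmarks
        # toy per-point encoder with a symmetric max-pool (PointNet-style stand-in)
        self.encoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                     nn.Linear(64, feat_dim), nn.ReLU())
        # regression head: global feature -> n_landmarks x 3 coordinates
        self.head = nn.Linear(feat_dim, n_landmarks * 3)

    def forward(self, pts):                      # pts: (B, N, 3)
        feat = self.encoder(pts).max(dim=1).values
        return self.head(feat).view(-1, self.n_landmarks, 3)

model = LandmarkRegressor()
coords = model(torch.randn(2, 1024, 3))          # two clouds of 1024 points
print(coords.shape)                              # torch.Size([2, 10, 3])
```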

In the paper ‘Rehabilitation Exoskeleton System with Bidirectional Virtual Reality Feedback Training Strategy’, Gao et al. introduced a VR-integrated exoskeleton system for stroke rehabilitation, combining immersive 3D environments with real-time bidirectional feedback to enhance neural retraining. The system employs a novel muscle activation model merging linear and nonlinear contraction dynamics, addressing limitations of traditional Hill-based models, while a generalised regression neural network tuned by the whale optimisation algorithm (WOA-GRNN) achieves precise muscle strength prediction (RMSE: 0.0173, MAPE: 1.25%). Experiments with healthy participants demonstrated synchronised exoskeleton-VR motion mapping and involuntary muscle responses to virtual stimuli, validating neural pathway engagement. Notably, 75% of subjects exhibited subconscious arm movements during VR-induced phantom limb activation, suggesting enhanced proprioceptive integration. This bidirectional feedback framework advances personalised rehabilitation by objectively quantifying recovery through sEMG-driven metrics while maintaining patient engagement through adaptive virtual tasks.
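A GRNN is essentially kernel-weighted regression, so the prediction step can be sketched in a few lines; in the paper the spread parameter would be tuned by the WOA, which is omitted here. The toy data and spread value below are illustrative assumptions.

```python
# Illustrative GRNN prediction (Nadaraya-Watson kernel regression form).
import numpy as np

def grnn_predict(x_train, y_train, x_query, spread=0.3):
    """Kernel-weighted average of training targets; `spread` is the GRNN
    smoothing parameter (tuned by WOA in the paper, fixed here)."""
    d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * spread ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(2)
X = rng.random((100, 4))                      # toy sEMG feature vectors
y = X @ np.array([0.4, 0.3, 0.2, 0.1])        # toy "muscle strength" target
print(grnn_predict(X, y, X[:3], spread=0.2))  # close to y[:3]
```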

In the paper ‘A Demonstration Trajectory Segmentation Approach for Wheelchair-mounted Robotic Arms’, Chi et al. proposed a novel trajectory segmentation approach for wheelchair-mounted assistive robots, aiming to enhance their ability to learn and reproduce complex tasks in unstructured environments. The proposed GTW-BP-AR-HMM method integrates the generalised time warping (GTW) algorithm with a beta process autoregressive hidden Markov model (BP-AR-HMM) to address challenges in aligning and segmenting variable-length demonstration trajectories. By first aligning multiple task demonstrations temporally using GTW, the framework mitigates inconsistencies in trajectory lengths, a critical limitation of traditional BP-AR-HMM. Subsequent segmentation identifies motion primitives, enabling the creation of reusable task libraries. Validation on a 6-DOF robotic arm demonstrated high accuracy in segmenting tasks such as holding a water glass and eating, with segmentation points closely matching manual annotations. This approach reduces reliance on expert input, simplifying the demonstration process for nonspecialists while improving the robot's adaptability to user-specific needs. The work underscores the potential of combining temporal alignment and probabilistic modelling to advance assistive robotics in healthcare and home settings.
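The alignment step can be illustrated with classic dynamic time warping (DTW) between two variable-length demonstrations; GTW generalises this to many sequences jointly, and the BP-AR-HMM segmentation that follows is beyond this sketch. The toy sine trajectories are assumptions.

```python
# Illustrative DTW alignment of two demonstrations recorded at different speeds.
import numpy as np

def dtw_path(a, b):
    """Return the DTW alignment path between 1-D trajectories a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = abs(a[i-1] - b[j-1]) + min(D[i-1, j], D[i, j-1], D[i-1, j-1])
    # backtrack from the end to recover matched index pairs
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        i, j = min([(i-1, j), (i, j-1), (i-1, j-1)], key=lambda p: D[p])
    return path[::-1]

t1 = np.sin(np.linspace(0, np.pi, 50))        # two demos of the same motion
t2 = np.sin(np.linspace(0, np.pi, 80))        # at different speeds
print(len(dtw_path(t1, t2)))                  # number of aligned index pairs
```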

In the paper ‘Processing Water-Medium Spinal Endoscopic Images Based on Dual Transmittance’, Hu and Zhang proposed a novel dual-transmittance fusion method to enhance water-medium spinal endoscopic images degraded by suspended contaminants during minimally invasive procedures. By adapting an underwater imaging model to spinal endoscopy, the authors estimate transmittance through boundary constraints and local contrast analysis, addressing light scattering and absorption caused by turbid surgical environments. The fusion of these transmittance maps, optimised via guided filtering, minimises artefacts while preserving structural integrity. Ambient light estimation using a “Shades of Grey” algorithm further ensures balanced colour correction. Experimental validation against classical methods—including WGIF, AGCWD and MSRCR—demonstrates superior performance in entropy, contrast and structural similarity metrics, effectively restoring tissue textures without overexposure or distortion. This physics-informed approach bridges computational efficiency with clinical utility, offering real-time image clarity for precise intraoperative navigation. The method's robustness across diverse degradation scenarios, from blood contamination to tool shadows, positions it as a pivotal advancement in enhancing visualisation for complex spinal surgeries, promising improved surgical accuracy and safety.
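The restoration step inverts a scattering imaging model, I = J·t + A(1 − t), using a fused transmittance map. The sketch below reduces the paper's boundary-constraint and local-contrast estimators and its guided-filter fusion to crude stand-ins, so everything except the model inversion is an illustrative assumption.

```python
# Illustrative scattering-model restoration with two fused transmittance maps.
import numpy as np

def restore(img, ambient, t, t_min=0.1):
    """Invert the imaging model I = J*t + A*(1 - t)."""
    return (img - ambient) / np.maximum(t, t_min) + ambient

rng = np.random.default_rng(3)
img = rng.random((64, 64))                    # toy degraded frame
A = img.mean()                                # crude ambient-light estimate
t_boundary = np.clip(1.1 - img, 0.1, 1.0)     # stand-in: boundary-constraint map
t_contrast = np.clip(img.std() / (np.abs(img - A) + 1e-3), 0.1, 1.0)  # stand-in
w = 0.5                                       # guided-filter fusion reduced to a blend
t_fused = w * t_boundary + (1 - w) * t_contrast
out = restore(img, A, t_fused)
print(out.shape, float(out.mean()))
```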
