IEEE Robotics and Automation Letters: Latest Articles

Bundle Adjustment With Backtracking Line Search on Manifold
IF 5.3 · CAS Q2 · Computer Science
IEEE Robotics and Automation Letters · Pub Date: 2025-09-04 · DOI: 10.1109/LRA.2025.3606801
Lipu Zhou
Abstract: Bundle adjustment (BA) is a fundamental problem in visual 3D reconstruction. The Levenberg-Marquardt (LM) algorithm, a trust region method, is widely regarded as the gold standard for solving BA problems. In each LM iteration, the current solution is updated by an increment vector derived from solving a linear system with a damping factor to regularize the step size. However, directly applying this increment may fail to reduce the reprojection cost. To address this problem, the LM algorithm employs a trial-and-error strategy: it repeatedly solves the linear system with an increasing damping factor until the cost decreases, which leads to invalid iterations. Since solving the linear system is typically the most time-consuming step, and a large damping factor limits the step size in subsequent iterations, this strategy wastes computational resources and slows down convergence. Yet this issue has received little attention in prior research on BA. Line search offers an alternative technique to control the step size; however, its application to BA remains underexplored. This letter presents a simple yet effective solution to overcome this limitation of the LM algorithm. We introduce on-manifold backtracking line search into the LM algorithm to accelerate convergence, adopting the Armijo condition to ensure a sufficient decrease in reprojection cost. We show that the Armijo condition on manifold can be efficiently computed within the LM framework. By fusing line search and the LM algorithm to control the step size, our method effectively reduces the number of invalid iterations and improves convergence speed. Extensive empirical evaluations on both unstructured internet image collections and sequential image streams show that our algorithm converges significantly faster than state-of-the-art BA algorithms.
Vol. 10, No. 10, pp. 10998-11005
Citations: 0
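The Armijo backtracking loop the abstract describes can be sketched as follows. This is a minimal Euclidean illustration, not the paper's on-manifold implementation; the toy least-squares cost, the constant `c1`, and the shrink factor `beta` are illustrative assumptions:

```python
import numpy as np

def armijo_backtracking(cost, grad, x, step, c1=1e-4, beta=0.5, max_iter=20):
    """Shrink the step until the Armijo sufficient-decrease condition holds:
    cost(x + t*step) <= cost(x) + c1 * t * grad(x)^T step."""
    f0 = cost(x)
    slope = grad(x) @ step          # directional derivative along the step
    t = 1.0
    for _ in range(max_iter):
        if cost(x + t * step) <= f0 + c1 * t * slope:
            break                   # sufficient decrease reached
        t *= beta                   # backtrack: only a cost re-evaluation
    return t

# Toy least-squares cost f(x) = ||A x - b||^2 standing in for reprojection error
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
cost = lambda x: float(np.sum((A @ x - b) ** 2))
grad = lambda x: 2.0 * A.T @ (A @ x - b)

x = np.array([3.0, 3.0])
step = -grad(x)                     # a descent direction (an LM step in the paper)
t = armijo_backtracking(cost, grad, x, step)
```

The contrast with the LM trial-and-error strategy is that each rejected trial here costs only a function evaluation; no linear system is re-solved with a larger damping factor.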
LIVOX-CAM: Adaptive Coarse-to-Fine Visual-Assisted LiDAR Odometry for Solid-State LiDAR
IF 5.3 · CAS Q2 · Computer Science
IEEE Robotics and Automation Letters · Pub Date: 2025-09-04 · DOI: 10.1109/LRA.2025.3606803
Xiaolong Cheng; Keke Geng; Zhichao Liu; Tianxiao Ma; Ye Sun
Abstract: The application of solid-state LiDAR is expanding across diverse scenarios. However, most existing methods rely on IMU data fusion to achieve stable performance. This letter presents LIVOX-CAM, a visual-assisted LiDAR odometry based on KISS-ICP, specifically tailored for small field-of-view (FoV) solid-state LiDAR. The system adopts a two-stage architecture comprising a front-end for data pre-processing and a back-end for coarse-to-fine iterative pose optimization, and it significantly broadens its application scenarios by incorporating a spatial adaptive module and visual assistance. Extensive experiments on public and private datasets show that, even without IMU input, the proposed method achieves robust and accurate performance in challenging scenes, including autonomous driving, degraded scenarios, unstructured environments, and aerial mapping, exhibiting strong competitiveness against state-of-the-art approaches.
Vol. 10, No. 10, pp. 10982-10989
Citations: 0
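The abstract does not detail the coarse-to-fine back-end; a generic translation-only, 2-D sketch of the general idea (repeated ICP passes with a shrinking correspondence-distance gate, with all thresholds and point sets hypothetical) looks like:

```python
import numpy as np

def coarse_to_fine_icp(source, target, thresholds=(2.0, 0.5, 0.1), iters=10):
    """Translation-only point-to-point ICP, run coarse-to-fine by
    shrinking the correspondence-distance gate at each stage."""
    t = np.zeros(2)
    for thr in thresholds:
        for _ in range(iters):
            moved = source + t
            # nearest-neighbour association, gated by the current threshold
            d = np.linalg.norm(moved[:, None, :] - target[None, :, :], axis=2)
            nn = d.argmin(axis=1)
            keep = d[np.arange(len(source)), nn] < thr
            if not keep.any():
                continue
            t += (target[nn[keep]] - moved[keep]).mean(axis=0)
    return t

source = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
target = source + np.array([0.3, -0.2])   # ground-truth offset
t_est = coarse_to_fine_icp(source, target)
```

The coarse stages tolerate a poor initial guess; the fine stages reject distant (likely wrong) correspondences for a precise final alignment.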
Robust Moving Horizon Estimation for Autonomous Agricultural Vehicles With GNSS Outliers Using a Robust Loss Function
IF 5.3 · CAS Q2 · Computer Science
IEEE Robotics and Automation Letters · Pub Date: 2025-09-04 · DOI: 10.1109/LRA.2025.3606377
Nestor N. Deniz; Guido M. Sanchez; Fernando A. Auat Cheein; Leonardo L. Giovanini
Abstract: We propose a Moving Horizon Estimator (MHE) for autonomous agricultural vehicles to handle GNSS outliers, a common issue in farming. To improve robustness, we replace the standard $\mathrm{L_2}$ stage cost with a loss function based on the square of the derivative of the General Adaptive Robust Loss (GARL). The GARL framework, controlled by parameters $\alpha \in [1, 2)$ and $c > 0$, balances between quadratic and outlier-resistant behavior. By using the derivative, we avoid singularities at $\alpha = 0$ and $\alpha = 2$, simplifying tuning and ensuring stable optimization within MHE. This approach retains the flexibility of GARL while narrowing the design space to a singularity-free regime. We prove robust stability under standard assumptions. Simulations show our method outperforms $\mathrm{L_2}$-based MHE and state-of-the-art methods in rejecting GNSS outliers. Field experiments validate its practical effectiveness.
Vol. 10, No. 10, pp. 10815-10821
Citations: 0
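A sketch of the stage cost the abstract describes, using the published form of Barron's general adaptive robust loss (the letter's exact formulation may differ; the `alpha` and `c` defaults here are illustrative):

```python
def garl_derivative(x, alpha=1.0, c=1.0):
    """Derivative of the general adaptive robust loss w.r.t. the residual x,
    well-defined on alpha in [1, 2); the alpha = 2 endpoint is excluded."""
    return (x / c**2) * ((x / c) ** 2 / abs(alpha - 2) + 1.0) ** (alpha / 2 - 1.0)

def stage_cost(residual, alpha=1.0, c=1.0):
    # Square of the GARL derivative: near-quadratic for small residuals,
    # bounded for large ones, so GNSS outliers are down-weighted.
    return garl_derivative(residual, alpha, c) ** 2
```

With `alpha = 1` and `c = 1` this reduces to `(x / sqrt(x**2 + 1)) ** 2`, which saturates at 1 for large residuals, whereas an L2 stage cost grows without bound.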
PRISM: Pointcloud Reintegrated Inference via Segmentation and Cross-Attention for Manipulation
IF 5.3 · CAS Q2 · Computer Science
IEEE Robotics and Automation Letters · Pub Date: 2025-09-04 · DOI: 10.1109/LRA.2025.3606379
Daqi Huang; Zhehao Cai; Yuzhi Hao; Zechen Li; Chee-Meng Chew
Abstract: Robust imitation learning for robot manipulation requires comprehensive 3D perception, yet many existing methods struggle in cluttered environments. Fixed-camera-view approaches are vulnerable to perspective changes, and 3D point cloud techniques often limit themselves to keyframe predictions, reducing their efficacy in dynamic, contact-intensive tasks. To address these challenges, we propose PRISM, an end-to-end framework that learns directly from raw point cloud observations and robot states, eliminating the need for pre-trained models or external datasets. PRISM comprises three main components: a segmentation embedding unit that partitions the raw point cloud into distinct object clusters and encodes local geometric details; a cross-attention component that merges these visual features with processed robot joint states to highlight relevant targets; and a diffusion module that translates the fused representation into smooth robot actions. With training on 100 demonstrations per task, PRISM surpasses both 2D and 3D baseline policies in accuracy and efficiency within our simulated environments, demonstrating strong robustness in complex, object-dense scenarios.
Vol. 10, No. 11, pp. 11110-11117
Citations: 0
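The cross-attention fusion component can be sketched generically (single head, NumPy; the scaled dot-product softmax form and the dimensions are standard assumptions, not taken from the letter):

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product cross-attention: robot-state queries
    attend over per-cluster point-cloud features (keys/values)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    w /= w.sum(axis=-1, keepdims=True)
    return w @ values, w

rng = np.random.default_rng(0)
state_q = rng.normal(size=(1, 8))    # encoded robot joint state (query)
clusters = rng.normal(size=(5, 8))   # one feature per segmented object cluster
fused, attn = cross_attention(state_q, clusters, clusters)
```

The attention weights form a distribution over object clusters, which is what lets the robot state "highlight relevant targets" among the segmented objects.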
POW4R: POint-Wise Full-Velocity Estimation Using 4D Radar-Camera Fusion Beyond Radial Limitations
IF 5.3 · CAS Q2 · Computer Science
IEEE Robotics and Automation Letters · Pub Date: 2025-09-04 · DOI: 10.1109/LRA.2025.3606356
Hyun-Yong Jeon; Minseong Choi; Yeongseok Lee; Sangyoon Oh; Seunghoon Yang; Keun Ha Choi; Kyung-Soo Kim
Abstract: This letter proposes an algorithm for full-velocity estimation by fusing radial velocity vectors obtained from 4D radar with optical flow vectors extracted from camera images. The algorithm consists of a preprocessing step and a full-velocity vector estimation step. In preprocessing, radar noise is removed and ego-vehicle velocity estimation is enhanced using a Hampel filter for improved robustness in dynamic environments. In the estimation stage, the algorithm computes full-velocity vectors from a formulation that incorporates multiple constraints. To evaluate the proposed method, an embedded system is implemented on a real vehicle, and datasets are collected under various scenarios. Experimental results show that the proposed algorithm significantly improves object velocity estimation performance (error rate: baseline 81% → proposed 31%).
Open Access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11150770
Vol. 10, No. 10, pp. 10934-10941
Citations: 0
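The abstract does not spell out the constraint formulation; as a hypothetical 2-D illustration of the underlying idea (a Doppler radar measures only the radial velocity component, and one camera-derived constraint closes the system), a least-squares solve looks like:

```python
import numpy as np

def full_velocity_2d(radial_dir, radial_speed, flow_dir, flow_speed):
    """Recover a full 2-D velocity v from two scalar constraints:
    radial_dir . v = radial_speed   (radar Doppler measurement)
    flow_dir . v = flow_speed       (optical-flow-derived constraint)."""
    A = np.vstack([radial_dir, flow_dir])
    b = np.array([radial_speed, flow_speed])
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v

v_true = np.array([3.0, 4.0])
radial = np.array([1.0, 0.0])        # unit vector from the radar to the target
flow = np.array([0.0, 1.0])          # image-derived tangential direction
v_est = full_velocity_2d(radial, radial @ v_true, flow, flow @ v_true)
```

Radar alone pins down only the projection of `v` onto `radial`; any tangential motion is invisible to it, which is the "radial limitation" the camera constraint removes.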
OVITA: Open-Vocabulary Interpretable Trajectory Adaptations
IF 5.3 · CAS Q2 · Computer Science
IEEE Robotics and Automation Letters · Pub Date: 2025-09-04 · DOI: 10.1109/LRA.2025.3606309
Anurag Maurya; Tashmoy Ghosh; Anh Nguyen; Ravi Prakash
Abstract: Adapting trajectories to dynamic situations and user preferences is crucial for robot operation in unstructured environments with non-expert users. Natural language enables users to express these adjustments in an interactive manner. We introduce OVITA, an interpretable, open-vocabulary, language-driven framework for adapting robot trajectories in dynamic and novel situations based on human instructions. OVITA leverages multiple pre-trained Large Language Models (LLMs) to integrate user commands into trajectories generated by motion planners or learned through demonstrations. OVITA employs code, generated by an LLM, as the adaptation policy, enabling users to adjust individual waypoints and thus providing flexible control. Another LLM acts as a code explainer, removing the need for expert users and enabling intuitive interaction. The efficacy and significance of the proposed OVITA framework are demonstrated through extensive simulations and real-world experiments with diverse tasks involving spatiotemporal variations on heterogeneous robotic platforms, including a KUKA IIWA robot manipulator, a Clearpath Jackal ground robot, and a CrazyFlie drone.
Vol. 10, No. 11, pp. 11054-11061
Citations: 0
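As a toy illustration of "code as an adaptation policy": the snippet below is a hypothetical example of the kind of waypoint-editing code an LLM might emit for an instruction like "lift the middle of the path by 0.1 m" (OVITA's actual generated code is not shown in the abstract):

```python
def lift_waypoints(trajectory, z_offset, start, end):
    """Raise the z-coordinate of waypoints with index in [start, end)."""
    return [
        (x, y, z + z_offset) if start <= i < end else (x, y, z)
        for i, (x, y, z) in enumerate(trajectory)
    ]

traj = [(0.0, 0.0, 0.5), (0.5, 0.0, 0.5), (1.0, 0.0, 0.5)]
adapted = lift_waypoints(traj, 0.1, start=1, end=2)
```

Because the policy is plain code over individual waypoints, a second LLM can explain it line by line to a non-expert user, which is the interpretability claim of the framework.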
Experience-based Optimal Motion Planning Algorithm for Solving Difficult Planning Problems Using a Limited Dataset
IF 5.3 · CAS Q2 · Computer Science
IEEE Robotics and Automation Letters · Pub Date: 2025-09-04 · DOI: 10.1109/LRA.2025.3606360
Ryota Takamido; Jun Ota
Abstract: This study addresses the challenge of generating high-quality motion plans within a short computation time using only a limited dataset. The informed experience-driven random trees connect star (IERTC*) algorithm flexibly explores the search trees by morphing micro paths generated from a single experience while reducing the path cost through a rewiring process and an informed sampling process. Unlike recent learning-based or generative methods that rely on model training or probabilistic priors, IERTC* employs a non-parametric retrieve-and-repair strategy to generalize prior experiences without pretraining or large datasets. This design facilitates broad exploration beyond the original experience, robust adaptation to unseen environments, high flexibility in cluttered environments, and efficient deployment without offline training. Experimental results from a general motion benchmark revealed that IERTC* significantly improved the planning success rate in cluttered environments compared to a state-of-the-art optimal motion planning algorithm (an average improvement of 49.3%) while achieving a comparable reduction in solution cost (56.3% relative to a benchmark algorithm) using just one hundred experiences. Furthermore, the results demonstrated outstanding planning performance even when only one experience was available (a 43.8% improvement in success rate and a 57.8% reduction in solution cost).
Open Access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11150721
Vol. 10, No. 11, pp. 11102-11109
Citations: 0
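The "morphing micro paths from a single experience" idea can be illustrated with a minimal retrieve-and-repair step: an affine remap of a stored path onto a new start/goal pair. Collision repair, rewiring, and informed sampling are omitted, and nonzero per-dimension displacement in the stored experience is assumed:

```python
import numpy as np

def morph_path(experience, start, goal):
    """Affinely map a stored experience path onto new start and goal
    endpoints (assumes the experience moves in every dimension)."""
    e = np.asarray(experience, dtype=float)
    scale = (np.asarray(goal, dtype=float) - start) / (e[-1] - e[0])
    return np.asarray(start, dtype=float) + (e - e[0]) * scale

stored = [[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 4.0]]
new_path = morph_path(stored, start=[1.0, 1.0], goal=[7.0, 9.0])
```

The remapped path preserves the shape of the experience while exactly matching the new endpoints, giving the planner a warm start it can then repair against the current obstacle set.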
Reactive Aerobatic Flight via Reinforcement Learning
IF 5.3 · CAS Q2 · Computer Science
IEEE Robotics and Automation Letters · Pub Date: 2025-09-04 · DOI: 10.1109/LRA.2025.3606383
Zhichao Han; Xijie Huang; Zhuxiu Xu; Jiarui Zhang; Yuze Wu; Mingyang Wang; Tianyue Wu; Fei Gao
Abstract: Quadrotors have demonstrated remarkable versatility, yet their full aerobatic potential remains largely untapped due to inherent underactuation and the complexity of aggressive maneuvers. Traditional approaches, which separate trajectory optimization from tracking control, suffer from tracking inaccuracies, computational latency, and sensitivity to initial conditions, limiting their effectiveness in dynamic, high-agility scenarios. Inspired by recent advances in data-driven methods, we propose a reinforcement learning-based framework that directly maps drone states and aerobatic targets to control commands, eliminating modular separation and enabling end-to-end policy optimization for extreme aerobatic maneuvers. To ensure efficient and stable training, we introduce an automated curriculum learning strategy that dynamically adjusts aerobatic task difficulty. Enabled by domain randomization for robust zero-shot sim-to-real transfer, our approach is validated in demanding real-world experiments, including an autonomous drone continuously performing inverted flight while reactively navigating a moving gate.
Vol. 10, No. 10, pp. 11014-11021
Citations: 0
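The automated curriculum idea can be sketched as a success-rate-driven difficulty controller (the target rate and step size below are hypothetical; the letter's actual scheduling rule is not given in the abstract):

```python
def update_difficulty(difficulty, success_rate, target=0.7, step=0.05):
    """Raise the aerobatic task difficulty when the policy succeeds often
    enough, lower it otherwise, clamped to [0, 1]."""
    if success_rate >= target:
        return min(1.0, difficulty + step)
    return max(0.0, difficulty - step)

d = 0.5
d = update_difficulty(d, success_rate=0.9)   # policy doing well -> harder task
```

Keeping the task near the edge of the policy's current competence is what makes training on extreme maneuvers both efficient and stable.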
JingWei: A Waterfowl-Inspired Flapping-Wing Robot With Multimodal Aerial-Aquatic Mobility
IF 5.3 · CAS Q2 · Computer Science
IEEE Robotics and Automation Letters · Pub Date: 2025-09-04 · DOI: 10.1109/LRA.2025.3606355
Chaofeng Wu; Yiming Xiao; Jiaxin Zhao; Feng Cui; Xiaosheng Wu; Wu Liu
Abstract: Aerial-aquatic amphibious robots can enhance adaptability to complex environments and have become a research hotspot in recent years. Waterfowl in nature exhibit remarkable multimodal cross-domain locomotion capabilities. Inspired by the aerial flight and aquatic swimming behavior of waterfowl, we propose a biomimetic flapping-wing aerial-aquatic robot called JingWei. JingWei achieves flight attitude control through the integration of flapping wings and stroke plane adjustment, enabling vertical takeoff, hovering, and six-degrees-of-freedom (DoF) free flight. Its biomimetic flippers with asymmetric-stiffness flexure hinges, fabricated using the Smart Composite Microstructures (SCM) method, are paired with a lightweight paddling mechanism, allowing efficient movement on the water surface. The amphibious robot has a width of 36 cm and a weight of 39.4 g. JingWei is capable of sustained flight for 5.4 minutes or swimming for approximately 1.5 hours, with mode transitions between swimming and flying completed in under 0.8 seconds. Through experiments on flight, swimming, and mode transitions, we validated the robot's multimodal locomotion capabilities, providing new insights into the system design of biomimetic aerial-aquatic robots.
Vol. 10, No. 10, pp. 11046-11053
Citations: 0
One-Shot Demonstration for Slicing and Cutting Everyday Food Items
IF 5.3 · CAS Q2 · Computer Science
IEEE Robotics and Automation Letters · Pub Date: 2025-09-04 · DOI: 10.1109/LRA.2025.3606310
Yi Liu; Andreas Verleysen; Francis wyffels
Abstract: Cutting everyday food items presents a significant challenge in robotics due to the multiple types of knife skills involved and the unpredictable mechanical behaviour of materials during manipulation. To address this, we propose a one-shot demonstration-based framework that imitates both the position and force trajectories of knife skills using dynamic movement primitives (DMPs). Our approach combines (1) a compensation method to replicate human-like force trajectories and (2) skill-specific constraints enabling online trajectory re-planning during cutting. We designed three knife-skill demonstrations for the robot and tested them on 14 unknown food items. Experiments evaluate the effectiveness of the proposed force compensation and re-planning methods, and the results demonstrate that our framework can successfully imitate various knife skills and cut previously unknown food items with high precision.
Open Access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11150765
Vol. 10, No. 10, pp. 10854-10861
Citations: 0
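The DMP backbone of such a framework can be sketched in its standard discrete form (1-D, Euler integration; the gains, phase handling, and the forcing term learned from the demonstration are simplified assumptions):

```python
import numpy as np

def dmp_rollout(y0, goal, forcing, tau=1.0, alpha=25.0, beta=6.25,
                dt=0.001, steps=1000):
    """Roll out a 1-D discrete dynamic movement primitive:
        tau * dv = alpha * (beta * (goal - y) - v) + f(s)
    With forcing f = 0 this is a critically damped pull toward the goal;
    a learned f shapes the position (or force) profile of a knife skill."""
    y, v = float(y0), 0.0
    traj = []
    for i in range(steps):
        f = forcing(i / steps)                       # phase in [0, 1)
        dv = (alpha * (beta * (goal - y) - v) + f) / tau
        v += dv * dt
        y += v * dt
        traj.append(y)
    return np.array(traj)

# With zero forcing the primitive converges smoothly toward the goal
traj = dmp_rollout(y0=0.0, goal=1.0, forcing=lambda s: 0.0)
```

Encoding both position and force profiles as forcing terms over a shared phase variable is what allows a single human demonstration to be replayed, and re-planned online, on food items with different mechanical resistance.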