IEEE Robotics and Automation Letters: Latest Publications

Using Mobile AR for Rapid Feasibility Analysis for Deployment of Robots: A Usability Study With Non-Expert Users
IF 4.6, CAS Tier 2, Computer Science
IEEE Robotics and Automation Letters Pub Date : 2025-04-14 DOI: 10.1109/LRA.2025.3560888
Krzysztof Zielinski;Slawomir Tadeja;Bruce Blumberg;Mikkel Baun Kjærgaard
{"title":"Using Mobile AR for Rapid Feasibility Analysis for Deployment of Robots: A Usability Study With Non-Expert Users","authors":"Krzysztof Zielinski;Slawomir Tadeja;Bruce Blumberg;Mikkel Baun Kjærgaard","doi":"10.1109/LRA.2025.3560888","DOIUrl":"https://doi.org/10.1109/LRA.2025.3560888","url":null,"abstract":"Automating a production line with robotic arms is a complex, demanding task that requires not only substantial resources but also a deep understanding of the automated processes and available technologies and tools. Expert integrators must consider factors such as placement, payload, and robot reach requirements to determine the feasibility of automation. Ideally, such considerations are based on a detailed digital simulation developed before any hardware is deployed. However, this process is often time-consuming and challenging. To simplify these processes, we introduce a much simpler method for the feasibility analysis of robotic arms' reachability, designed for non-experts. We implement this method through a mobile, sensing-based prototype tool. The two-step experimental evaluation included the expert user study results, which helped us identify the difficulty levels of various deployment scenarios and refine the initial prototype. The results of the subsequent quantitative study with 22 non-expert participants utilizing both scenarios indicate that users could complete both simple and complex feasibility analyses in under ten minutes, exhibiting similar cognitive loads and high engagement. Overall, the results suggest that the tool was well-received and rated as highly usable, thereby showing a new path for changing the ease of feasibility analysis for automation.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5489-5496"},"PeriodicalIF":4.6,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143888433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Design and Control of a High-Performance Hopping Robot
IF 4.6, CAS Tier 2, Computer Science
IEEE Robotics and Automation Letters Pub Date : 2025-04-14 DOI: 10.1109/LRA.2025.3560884
Samuel Burns;Matthew Woodward
{"title":"Design and Control of a High-Performance Hopping Robot","authors":"Samuel Burns;Matthew Woodward","doi":"10.1109/LRA.2025.3560884","DOIUrl":"https://doi.org/10.1109/LRA.2025.3560884","url":null,"abstract":"Jumping and hopping locomotion are efficient means of traversing unstructured rugged terrain with the former being the focus of roboticists; a focus that has recently been changing. This focus has led to significant performance and understanding in jumping robots but with limited practical applications as they require significant time between jumps to store energy, thus relegating jumping to a secondary role in locomotion. Hopping locomotion, however, can preserve and transfer energy to subsequent hops without long energy storage periods. However, incorporating the performance observed in jumping systems into their hopping counterparts is an ongoing challenge. To date, hopping robots typically operate around 1 m with a maximum of 1.63 m whereas jumping robots have reached heights of 30 m. This is due to the added design and control complexity inherent in developing a system able to input and store the necessary energy while withstanding the forces involved and managing the system's state. Here we report hopping robot design principles for efficient, robust, high-specific energy, and high-energy input actuation through analytical, simulation, and experimental results. The resulting robot (MultiMo-MHR) can hop over 4 meters or <inline-formula><tex-math>$sim$</tex-math></inline-formula>2.4x the current state-of-the-art.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5641-5648"},"PeriodicalIF":4.6,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143883388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
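As a back-of-the-envelope companion to the specific-energy argument in the hopping-robot abstract above, the sketch below converts stored energy and mass into a ballistic apex height (h ≈ ηE/mg), ignoring drag and leg losses. The mass, energy, and efficiency values are illustrative assumptions, not figures reported in the paper.

```python
G = 9.81  # gravitational acceleration, m/s^2

def hop_height(stored_energy_j: float, mass_kg: float, efficiency: float = 1.0) -> float:
    """Ballistic apex height for a point-mass hopper.

    Converts the fraction of stored energy delivered as vertical kinetic
    energy into apex height: h = eta * E / (m * g). Ignores aerodynamic
    drag and losses inside the leg mechanism.
    """
    return efficiency * stored_energy_j / (mass_kg * G)

# Illustrative numbers only (not from the paper): a 1.0 kg hopper that
# delivers 45 J per hop at 90% efficiency clears roughly 4.1 m, in the
# range the MultiMo-MHR abstract reports (>4 m).
if __name__ == "__main__":
    print(f"apex ~ {hop_height(45.0, 1.0, 0.9):.2f} m")
```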
Dual Agent Learning Based Aerial Trajectory Tracking
IF 4.6, CAS Tier 2, Computer Science
IEEE Robotics and Automation Letters Pub Date : 2025-04-14 DOI: 10.1109/LRA.2025.3560841
Shaswat Garg;Houman Masnavi;Baris Fidan;Farrokh Janabi-Sharifi
{"title":"Dual Agent Learning Based Aerial Trajectory Tracking","authors":"Shaswat Garg;Houman Masnavi;Baris Fidan;Farrokh Janabi-Sharifi","doi":"10.1109/LRA.2025.3560841","DOIUrl":"https://doi.org/10.1109/LRA.2025.3560841","url":null,"abstract":"This paper presents a novel reinforcement learning framework for trajectory tracking of autonomous aerial vehicles in cluttered environments using a dual-agent architecture. Traditional optimization methods for trajectory tracking face significant computational challenges and lack robustness in dynamic environments. Our approach employs deep reinforcement learning (RL) to overcome these limitations, leveraging 3D pointcloud data to perceive the environment without relying on memory-intensive obstacle representations like occupancy grids. The proposed system features two RL agents: one for predicting AAV velocities to follow a reference trajectory and another for managing collision avoidance in the presence of obstacles. This architecture ensures real-time performance and adaptability to uncertainties. We demonstrate the efficacy of our approach through simulated and real-world experiments, highlighting improvements over state-of-the-art RL and optimization-based methods. Additionally, a curriculum learning paradigm is employed to scale the algorithms to more complex environments, ensuring robust trajectory tracking and obstacle avoidance in both static and dynamic scenarios.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5609-5616"},"PeriodicalIF":4.6,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143883455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
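The dual-agent architecture described above can be pictured as two learned policies whose outputs are combined into a single velocity command. The Python sketch below is only an illustration of that composition under the assumption of additive blending; the class, policy interfaces, and speed limit are hypothetical and are not the authors' implementation.

```python
import numpy as np

class DualAgentController:
    """Illustrative composition of two learned policies: a tracking agent
    follows the reference trajectory and an avoidance agent corrects the
    command near obstacles. Both policies are placeholders here (the paper
    trains them with deep RL)."""

    def __init__(self, tracking_policy, avoidance_policy, max_speed=2.0):
        self.tracking_policy = tracking_policy    # maps tracking error -> velocity
        self.avoidance_policy = avoidance_policy  # maps point-cloud features -> correction
        self.max_speed = max_speed

    def command(self, ref_point, state, pointcloud_feat):
        error = ref_point - state[:3]                      # position error to reference
        v_track = self.tracking_policy(error)              # agent 1: follow the trajectory
        v_avoid = self.avoidance_policy(pointcloud_feat)   # agent 2: steer away from obstacles
        v = v_track + v_avoid
        speed = np.linalg.norm(v)
        if speed > self.max_speed:                         # keep the combined command feasible
            v *= self.max_speed / speed
        return v

# Illustrative usage with placeholder policies (a simple P-controller and a
# fixed repulsion vector stand in for the learned agents):
ctrl = DualAgentController(
    tracking_policy=lambda e: 1.5 * e,
    avoidance_policy=lambda feat: np.array([0.0, -0.3, 0.0]),
)
print(ctrl.command(np.array([1.0, 2.0, 3.0]), np.zeros(6), None))
```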
FoAR: Force-Aware Reactive Policy for Contact-Rich Robotic Manipulation
IF 4.6, CAS Tier 2, Computer Science
IEEE Robotics and Automation Letters Pub Date : 2025-04-14 DOI: 10.1109/LRA.2025.3560871
Zihao He;Hongjie Fang;Jingjing Chen;Hao-Shu Fang;Cewu Lu
{"title":"FoAR: Force-Aware Reactive Policy for Contact-Rich Robotic Manipulation","authors":"Zihao He;Hongjie Fang;Jingjing Chen;Hao-Shu Fang;Cewu Lu","doi":"10.1109/LRA.2025.3560871","DOIUrl":"https://doi.org/10.1109/LRA.2025.3560871","url":null,"abstract":"Contact-rich tasks present significant challenges for robotic manipulation policies due to the complex dynamics of contact and the need for precise control. Vision-based policies often struggle with the skill required for such tasks, as they typically lack critical contact feedback modalities like force/torque information. To address this issue, we propose FoAR, a force-aware reactive policy that combines high-frequency force/torque sensing with visual inputs to enhance the performance in contact-rich manipulation. Built upon the RISE policy, FoAR incorporates a multimodal feature fusion mechanism guided by a future contact predictor, enabling dynamic adjustment of force/torque data usage between non-contact and contact phases. Its reactive control strategy also allows FoAR to accomplish contact-rich tasks accurately through simple position control. Experimental results demonstrate that FoAR significantly outperforms all baselines across various challenging contact-rich tasks while maintaining robust performance under unexpected dynamic disturbances.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5625-5632"},"PeriodicalIF":4.6,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143883374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
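To make the gating idea in the FoAR abstract concrete, here is a minimal PyTorch sketch in which a predicted contact probability scales the force/torque embedding before fusion with the visual embedding. The module names, dimensions, and network sizes are assumptions made for illustration; this is not the authors' RISE-based implementation.

```python
import torch
import torch.nn as nn

class ContactGatedFusion(nn.Module):
    """Minimal sketch of contact-probability-gated fusion: a predictor scores
    how likely the upcoming phase is contact-rich, and that score scales the
    force/torque embedding before it is fused with the visual embedding."""

    def __init__(self, vis_dim=256, ft_dim=64, fused_dim=256):
        super().__init__()
        self.ft_encoder = nn.Sequential(nn.Linear(6, ft_dim), nn.ReLU())
        self.contact_predictor = nn.Sequential(   # predicts p(contact in the near future)
            nn.Linear(vis_dim + ft_dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
        self.fuse = nn.Linear(vis_dim + ft_dim, fused_dim)

    def forward(self, vis_feat, wrench):
        # vis_feat: (B, vis_dim) visual embedding; wrench: (B, 6) force/torque reading
        ft_feat = self.ft_encoder(wrench)
        p_contact = self.contact_predictor(torch.cat([vis_feat, ft_feat], dim=-1))
        gated_ft = p_contact * ft_feat            # down-weight force/torque during free motion
        return self.fuse(torch.cat([vis_feat, gated_ft], dim=-1)), p_contact
```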
DVRP-MHSI: Dynamic Visualization Research Platform for Multimodal Human-Swarm Interaction
IF 4.6, CAS Tier 2, Computer Science
IEEE Robotics and Automation Letters Pub Date : 2025-04-14 DOI: 10.1109/LRA.2025.3560825
Pengming Zhu;Zhiwen Zeng;Weijia Yao;Wei Dai;Huimin Lu;Zongtan Zhou
{"title":"DVRP-MHSI: Dynamic Visualization Research Platform for Multimodal Human-Swarm Interaction","authors":"Pengming Zhu;Zhiwen Zeng;Weijia Yao;Wei Dai;Huimin Lu;Zongtan Zhou","doi":"10.1109/LRA.2025.3560825","DOIUrl":"https://doi.org/10.1109/LRA.2025.3560825","url":null,"abstract":"In recent years, there has been a significant amount of research on algorithms and control methods for distributed collaborative robots. However, the emergence of collective behavior in a swarm is still difficult to predict and control. Nevertheless, human interaction with the swarm helps render the swarm more predictable and controllable, as human operators can utilize intuition or knowledge that is not always available to the swarm. Therefore, this letter designs the Dynamic Visualization Research Platform for Multimodal Human-Swarm Interaction (DVRP-MHSI), which is an innovative open system that can perform real-time dynamic visualization and is specifically designed to accommodate a multitude of interaction modalities (such as brain-computer, eye-tracking, electromyographic, and touch-based interfaces), thereby expediting progress in human-swarm interaction research. Specifically, the platform consists of custom-made low-cost omnidirectional wheeled mobile robots, multitouch screens and two workstations. In particular, the mutitouch screens can recognize human gestures and the shapes of objects placed on them, and they can also dynamically render diverse scenes. One of the workstations processes communication information within robots and the other one implements human-robot interaction methods. The development of DVRP-MHSI frees researchers from hardware or software details and allows them to focus on versatile swarm algorithms and human-swarm interaction methods without being limited to predefined and static scenarios, tasks, and interfaces. The effectiveness and potential of the platform for human-swarm interaction studies are validated by several demonstrative experiments.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5665-5672"},"PeriodicalIF":4.6,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10964715","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143883301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
REALM: Real-Time Estimates of Assistance for Learned Models in Human-Robot Interaction
IF 4.6, CAS Tier 2, Computer Science
IEEE Robotics and Automation Letters Pub Date : 2025-04-14 DOI: 10.1109/LRA.2025.3560862
Michael Hagenow;Julie A. Shah
{"title":"REALM: Real-Time Estimates of Assistance for Learned Models in Human-Robot Interaction","authors":"Michael Hagenow;Julie A. Shah","doi":"10.1109/LRA.2025.3560862","DOIUrl":"https://doi.org/10.1109/LRA.2025.3560862","url":null,"abstract":"There are a variety of mechanisms (i.e., input types) for real-time human interaction that can facilitate effective human-robot teaming. For example, previous works have shown how teleoperation, corrective, and discrete (i.e., preference over a small number of choices) input can enable robots to complete complex tasks. However, few previous works have looked at combining different methods, and in particular, opportunities for a robot to estimate and elicit the most effective form of assistance given its understanding of a task. In this letter, we propose a method for estimating the value of different human assistance mechanisms based on the action uncertainty of a robot policy. Our key idea is to construct mathematical expressions for the expected post-interaction differential entropy (i.e., uncertainty) of a stochastic robot policy to compare the expected value of different interactions. As each type of human input imposes a different requirement for human involvement, we demonstrate how differential entropy estimates can be combined with a likelihood penalization approach to effectively balance feedback informational needs with the level of required input. We demonstrate evidence of how our approach interfaces with emergent learning models (e.g., a diffusion model) to produce accurate assistance value estimates through both simulation and a robot user study. Our user study results indicate that the proposed approach can enable task completion with minimal human feedback for uncertain robot behaviors.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5473-5480"},"PeriodicalIF":4.6,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143888232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
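The entropy-based comparison described in the REALM abstract can be illustrated with the differential entropy of a Gaussian policy, H = (1/2) ln((2πe)^k det Σ). The sketch below scores hypothetical input types by expected entropy reduction minus an effort penalty; all covariances and cost values are made-up numbers for illustration, not the paper's model.

```python
import numpy as np

def gaussian_diff_entropy(cov: np.ndarray) -> float:
    """Differential entropy of a k-dimensional Gaussian:
    H = 0.5 * ln((2*pi*e)^k * det(cov))."""
    k = cov.shape[0]
    return 0.5 * np.log(((2 * np.pi * np.e) ** k) * np.linalg.det(cov))

def assistance_value(cov_now, cov_after, effort_cost):
    """Expected entropy reduction from one interaction, penalized by the
    human effort that input type requires (cost scale is an assumption)."""
    return (gaussian_diff_entropy(cov_now) - gaussian_diff_entropy(cov_after)) - effort_cost

# Illustrative comparison (covariances and costs are invented numbers):
cov_now = np.diag([0.20, 0.20, 0.05])          # current policy action uncertainty
options = {
    "teleoperation": (np.diag([0.01, 0.01, 0.01]), 1.5),  # very informative but demanding
    "correction":    (np.diag([0.05, 0.05, 0.03]), 0.6),
    "discrete":      (np.diag([0.12, 0.12, 0.05]), 0.2),  # cheap but coarse
}
for name, (cov_after, cost) in options.items():
    print(name, round(assistance_value(cov_now, cov_after, cost), 3))
```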
FS$^{2}$D: Fully Sparse Few-Shot 3D Object Detection
IF 4.6, CAS Tier 2, Computer Science
IEEE Robotics and Automation Letters Pub Date : 2025-04-14 DOI: 10.1109/LRA.2025.3560868
Chunzheng Li;Gaihua Wang;Zeng Liang;Qian Long;Zhengshu Zhou;Xuran Pan
{"title":"FS$^{2}$D: Fully Sparse Few-Shot 3D Object Detection","authors":"Chunzheng Li;Gaihua Wang;Zeng Liang;Qian Long;Zhengshu Zhou;Xuran Pan","doi":"10.1109/LRA.2025.3560868","DOIUrl":"https://doi.org/10.1109/LRA.2025.3560868","url":null,"abstract":"Corner cases are a focal issue in current autonomous driving systems, with a significant portion attributed to few-shot detection. Due to the sparse distribution of point cloud data and the real-time requirements of autonomous driving, traditional few-shot detection methods face challenges in direct application to the 3D domain, making it more difficult for outdoor scene 3D detectors to handle corner cases. In this study, we employ fully sparse feature matching and aggregation operations, utilizing meta-learning methods to enhance performance on few-shot categories without increasing network inference parameters. Furthermore, our few-shot research is based on the inherent characteristics of publicly available data without introducing additional categories, allowing for fair comparisons with existing methods. Extensive experiments were conducted on the widely used nuScenes dataset to validate the effectiveness of our method. We demonstrate superior performance compared to the baseline method, especially in handling few-shot categories.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5847-5854"},"PeriodicalIF":4.6,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143902689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
LiVeDet: Lightweight Density-Guided Adaptive Transformer for Online On-Device Vessel Detection
IF 4.6, CAS Tier 2, Computer Science
IEEE Robotics and Automation Letters Pub Date : 2025-04-10 DOI: 10.1109/LRA.2025.3559834
Zijie Zhang;Changhong Fu;Yongkang Cao;Mengyuan Li;Haobo Zuo
{"title":"LiVeDet: Lightweight Density-Guided Adaptive Transformer for Online On-Device Vessel Detection","authors":"Zijie Zhang;Changhong Fu;Yongkang Cao;Mengyuan Li;Haobo Zuo","doi":"10.1109/LRA.2025.3559834","DOIUrl":"https://doi.org/10.1109/LRA.2025.3559834","url":null,"abstract":"Vision-based online vessel detection boosts the automation of waterways monitoring, transportation management and navigation safety. However, a significant gap exists in on-device deployment between general high-performance PCs/servers and embedded AI processors. Existing state-of-the-art (SOTA) online vessel detectors lack sufficient accuracy and are prone to high latency on the edge AI camera, especially in scenarios with dense vessels and diverse distributions. To solve the above issues, a novel lightweight framework with density-guided adaptive Transformer (LiVeDet) is proposed for the edge AI camera to achieve online on-device vessel detection. Specifically, a new instance-aware representation extractor is designed to suppress cluttered background noise and capture instance-aware content information. Additionally, an innovative vessel distribution estimator is developed to direct superior feature representation learning by focusing on local regions with varying vessel density. Besides, a novel dynamic region embedding is presented to integrate hierarchical features represented by multi-scale vessels. A new benchmark comprising 100 high-definition, high-framerate video sequences from vessel-intensive scenarios is established to evaluate the efficacy of vessel detectors under challenging conditions prevalent in dynamic waterways. Extensive evaluations on this challenging benchmark demonstrate the robustness and efficiency of LiVeDet, achieving 32.9 FPS on the edge AI camera. Furthermore, real-world applications confirm the practicality of the proposed method.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5513-5520"},"PeriodicalIF":4.6,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143888385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
RGB-Based Category-Level Object Pose Estimation via Depth Recovery and Adaptive Refinement
IF 4.6, CAS Tier 2, Computer Science
IEEE Robotics and Automation Letters Pub Date : 2025-04-10 DOI: 10.1109/LRA.2025.3559841
Hui Yang;Wei Sun;Jian Liu;Jin Zheng;Zhiwen Zeng;Ajmal Mian
{"title":"RGB-Based Category-Level Object Pose Estimation via Depth Recovery and Adaptive Refinement","authors":"Hui Yang;Wei Sun;Jian Liu;Jin Zheng;Zhiwen Zeng;Ajmal Mian","doi":"10.1109/LRA.2025.3559841","DOIUrl":"https://doi.org/10.1109/LRA.2025.3559841","url":null,"abstract":"Category-level pose estimation methods have received widespread attention as they can be generalized to intra-class unseen objects. Although RGB-D-based category-level methods have made significant progress, reliance on depth image limits practical application. RGB-based methods offer a more practical and cost-effective solution. However, current RGB-based methods struggle with object geometry perception, leading to inaccurate pose estimation. We propose depth recovery and adaptive refinement for category-level object pose estimation from a single RGB image. We leverage DINOv2 to reconstruct the coarse scene-level depth from the input RGB image and propose an adaptive refinement network based on an encoder-decoder architecture to dynamically improve the predicted coarse depth and reduce its gap from the ground truth. We introduce a 2D–3D consistency loss to ensure correspondence between the point cloud obtained from depth projection and the objects in the 2D image. This consistency supervision enables the model to maintain alignment between the depth image and the point cloud. Finally, we extract features from the refined point cloud and feed them into two confidence-aware rotation regression branches and a translation and size prediction residual branch for end-to-end training. Decoupling the rotation matrix provides a more direct representation, which facilitates parameter optimization and gradient propagation. Extensive experiments on the REAL275 and CAMERA25 datasets demonstrate the superior performance of our method. Real-world estimation and robotic grasping experiments demonstrate our model robustness to occlusion, clutter environments, and low-textured objects. Our code and robotic grasping video are available at DA-Pose.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5377-5384"},"PeriodicalIF":4.6,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143856250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
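A 2D-3D consistency term of the kind mentioned in the abstract above can be sketched as a reprojection penalty: the recovered point cloud is projected with the camera intrinsics and pulled toward the object's 2D mask. The following PyTorch fragment is only an illustrative formulation under that assumption; the paper's exact loss may differ.

```python
import torch

def project_points(points_cam: torch.Tensor, K: torch.Tensor) -> torch.Tensor:
    """Pinhole projection of camera-frame points (N, 3) with intrinsics K (3, 3).
    Assumes points lie in front of the camera (positive depth)."""
    uvw = points_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6)

def consistency_loss(points_cam, K, mask_pixels):
    """Illustrative 2D-3D consistency term: each projected point is pulled
    toward its nearest pixel of the object's 2D mask (N_mask, 2)."""
    uv = project_points(points_cam, K)            # (N, 2) projected pixel coordinates
    d = torch.cdist(uv, mask_pixels.float())      # (N, N_mask) pixel-space distances
    return d.min(dim=1).values.mean()             # average nearest-mask distance
```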
Learning Cross-Modal Visuomotor Policies for Autonomous Drone Navigation
IF 4.6, CAS Tier 2, Computer Science
IEEE Robotics and Automation Letters Pub Date : 2025-04-10 DOI: 10.1109/LRA.2025.3559824
Yuhang Zhang;Jiaping Xiao;Mir Feroskhan
{"title":"Learning Cross-Modal Visuomotor Policies for Autonomous Drone Navigation","authors":"Yuhang Zhang;Jiaping Xiao;Mir Feroskhan","doi":"10.1109/LRA.2025.3559824","DOIUrl":"https://doi.org/10.1109/LRA.2025.3559824","url":null,"abstract":"Developing effective vision-based navigation algorithms adapting to various scenarios is a significant challenge for autonomous drone systems, with vast potential in diverse real-world applications. This paper proposes a novel visuomotor policy learning framework for monocular autonomous navigation, combining cross-modal contrastive learning with deep reinforcement learning (DRL) to train a visuomotor policy. Our approach first leverages contrastive learning to extract consistent, task-focused visual representations from high-dimensional RGB images as depth images, and then directly maps these representations to action commands with DRL. This framework enables RGB images to capture structural and spatial information similar to depth images, which remains largely invariant under changes in lighting and texture, thereby maintaining robustness across various environments. We evaluate our approach through simulated and physical experiments, showing that our visuomotor policy outperforms baseline methods in both effectiveness and resilience to unseen visual disturbances. Our findings suggest that the key to enhancing transferability in monocular RGB-based navigation lies in achieving consistent, well-aligned visual representations across scenarios, which is an aspect often lacking in traditional end-to-end approaches.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5425-5432"},"PeriodicalIF":4.6,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143860761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
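The cross-modal contrastive step described above (aligning RGB representations with depth representations of the same frame) is commonly implemented with an InfoNCE objective. The sketch below shows a standard symmetric InfoNCE under that assumption; the abstract does not specify the authors' exact loss.

```python
import torch
import torch.nn.functional as F

def cross_modal_info_nce(rgb_emb, depth_emb, temperature=0.1):
    """Standard InfoNCE between paired RGB and depth embeddings (B, D):
    each RGB embedding should be closest to the depth embedding of the
    same frame and far from the other frames in the batch."""
    rgb = F.normalize(rgb_emb, dim=-1)
    dep = F.normalize(depth_emb, dim=-1)
    logits = rgb @ dep.T / temperature            # (B, B) scaled cosine similarities
    targets = torch.arange(rgb.shape[0], device=rgb.device)
    # Symmetric loss: align RGB -> depth and depth -> RGB
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets))
```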