IEEE Robotics and Automation Letters — Latest Articles

Flying in Highly Dynamic Environments With End-to-End Learning Approach
IF 4.6, Category 2 (Computer Science)
IEEE Robotics and Automation Letters Pub Date : 2025-03-03 DOI: 10.1109/LRA.2025.3547306
Xiyu Fan;Minghao Lu;Bowen Xu;Peng Lu
Abstract: Obstacle avoidance for autonomous aerial vehicles such as quadrotors is a popular research topic. Most existing research focuses only on static environments, and obstacle avoidance in environments with multiple dynamic obstacles remains challenging. This letter proposes a novel deep reinforcement learning-based approach for quadrotors to navigate through highly dynamic environments. We propose a lidar data encoder to extract obstacle information from the lidar's massive point cloud data. Multiple frames of historical scans are compressed into a two-dimensional obstacle map while preserving the required obstacle features. An end-to-end deep neural network is trained to extract the kinematics of dynamic and static obstacles from the obstacle map and to generate acceleration commands that control the quadrotor to avoid them. Our approach combines perception and navigation in a single neural network, which can transition from a navigating state to a hovering state without mode switching. We also present simulations and real-world experiments to show the effectiveness of our approach while navigating in highly dynamic cluttered environments.
Volume 10, Issue 4, pp. 3851-3858.
Citations: 0
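The multi-frame scan compression the abstract describes can be illustrated with a minimal sketch. The egocentric 2D grid, the decaying per-frame intensity (so motion cues of dynamic obstacles survive the compression), and all parameter values are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def scans_to_obstacle_map(scans, grid_size=64, max_range=8.0):
    """Compress a history of 2D lidar point clouds (each an (N, 2) array of
    egocentric x/y points, oldest first) into a single 2D obstacle map.
    Each frame writes a decaying intensity: the newest frame is brightest,
    so a moving obstacle leaves a fading trail that encodes its motion."""
    grid = np.zeros((grid_size, grid_size), dtype=np.float32)
    n = len(scans)
    for age, pts in enumerate(reversed(scans)):   # age 0 = newest frame
        intensity = 1.0 - age / n                 # older frames fade out
        # map metric coordinates [-max_range, max_range] to grid indices
        idx = ((pts + max_range) / (2 * max_range) * grid_size).astype(int)
        idx = idx[(idx >= 0).all(axis=1) & (idx < grid_size).all(axis=1)]
        grid[idx[:, 1], idx[:, 0]] = np.maximum(grid[idx[:, 1], idx[:, 0]],
                                                intensity)
    return grid
```

A network consuming this map sees static obstacles at full intensity and dynamic obstacles as bright-to-dim streaks.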
Multimodal Task Attention Residual Reinforcement Learning: Advancing Robotic Assembly in Unstructured Environment
IF 4.6, Category 2 (Computer Science)
IEEE Robotics and Automation Letters Pub Date : 2025-03-03 DOI: 10.1109/LRA.2025.3547647
Ze Lin;Chuang Wang;Sihan Wu;Longhan Xie
Abstract: Robotic assembly in dynamic and unstructured environments poses challenges for recent methods due to background noise and wide-ranging errors. Learning directly from the environment relies on complex models and extensive training iterations to adapt. Representation-selection approaches, which depend on expert knowledge, can reduce training costs but suffer from poor robustness and high manual costs, limiting scalability. In response, this letter proposes a system that integrates task attention into residual reinforcement learning to address these challenges. By effectively segmenting task-relevant information from the background to leverage task attention, our approach mitigates the impact of environmental variability. Additionally, compared with existing baselines, our task attention mechanism, based on instance segmentation and prompt-guided selection, does not require additional offline training or local fine-tuning. Experimental evaluations conducted in both simulated and real environments demonstrate the superiority of our method over various baselines. Specifically, our system achieves high efficiency and effectiveness in learning and executing assembly tasks in dynamic and unstructured environments.
Volume 10, Issue 4, pp. 3900-3907.
Citations: 0
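Residual reinforcement learning, the backbone of the system above, composes a fixed base controller with a small learned correction, so the policy only has to explore the residual rather than the full action space. A generic sketch (the P-controller, residual scale, and clipping are illustrative assumptions, not the paper's design):

```python
import numpy as np

def residual_action(base_controller, policy, obs, scale=0.1):
    """Residual RL action: hand-designed base command plus a small learned
    corrective term. The residual is scaled down so early (untrained)
    policies cannot overpower the stabilizing base controller."""
    u_base = base_controller(obs)
    u_res = scale * policy(obs)          # learned correction, kept small
    return np.clip(u_base + u_res, -1.0, 1.0)

# toy example: P-controller toward a target, plus an (untrained, zero) residual
base = lambda obs: 0.5 * (obs["target"] - obs["pos"])
policy = lambda obs: np.zeros_like(obs["pos"])
obs = {"pos": np.array([0.0, 0.0]), "target": np.array([1.0, 1.0])}
u = residual_action(base, policy, obs)   # → [0.5, 0.5]
```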
CMP: Cooperative Motion Prediction With Multi-Agent Communication
IF 4.6, Category 2 (Computer Science)
IEEE Robotics and Automation Letters Pub Date : 2025-03-03 DOI: 10.1109/LRA.2025.3546862
Zehao Wang;Yuping Wang;Zhuoyuan Wu;Hengbo Ma;Zhaowei Li;Hang Qiu;Jiachen Li
Abstract: The confluence of the advancement of Autonomous Vehicles (AVs) and the maturity of Vehicle-to-Everything (V2X) communication has enabled cooperative connected and automated vehicles (CAVs). Building on cooperative perception, this letter explores the feasibility and effectiveness of cooperative motion prediction. Our method, CMP, takes LiDAR signals as model input to enhance tracking and prediction capabilities. Unlike previous work that focuses separately on either cooperative perception or motion prediction, our framework is, to the best of our knowledge, the first to address the unified problem in which CAVs share information in both the perception and prediction modules. Incorporated into our design is the unique capability to tolerate realistic V2X transmission delays while dealing with bulky perception representations. We also propose a prediction aggregation module, which unifies the predictions obtained by different CAVs and generates the final prediction. Through extensive experiments and ablation studies on the OPV2V and V2V4Real datasets, we demonstrate the effectiveness of our method in cooperative perception, tracking, and motion prediction. In particular, CMP reduces the average prediction error by 12.3% compared with the strongest baseline. Our work marks a significant step forward in the cooperative capabilities of CAVs, showcasing enhanced performance in complex scenarios.
Volume 10, Issue 4, pp. 3876-3883.
Citations: 0
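The prediction aggregation step — fusing several CAVs' forecasts of the same target agent into one final trajectory — can be sketched generically. Confidence-weighted averaging is one plausible scheme for such a module, not necessarily what CMP implements:

```python
import numpy as np

def aggregate_predictions(trajs, confidences):
    """Fuse per-CAV trajectory predictions for the same target agent by
    confidence-weighted averaging.

    trajs:       array-like of shape (n_cavs, T, 2), one (T, 2) x/y
                 trajectory per cooperating vehicle.
    confidences: length-n_cavs nonnegative weights (e.g. detection scores).
    Returns the fused (T, 2) trajectory."""
    w = np.asarray(confidences, dtype=float)
    w = w / w.sum()                                 # normalize weights
    return np.tensordot(w, np.asarray(trajs), axes=1)
```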
Swift Pursuer: A Topology-Accelerated and Robust Approach for Pursuing an Evader in Obstacle Environments With State Measurement Uncertainty
IF 4.6, Category 2 (Computer Science)
IEEE Robotics and Automation Letters Pub Date : 2025-02-28 DOI: 10.1109/LRA.2025.3546858
Kai Rao;Huaicheng Yan;Zhihao Huang;Penghui Yang;Yunkai Lv;Meng Wang
Abstract: This letter presents a topology-accelerated and robust pursuit framework for obstacle environments under state measurement uncertainty. Our framework consists of three primary components: the selection of virtual target points using a topological heuristic to encourage path diversity; the computation of safe pursuit regions based on the Voronoi cell (VC); and the solution of an adaptive robust path controller based on a Control Barrier Function (CBF) to guarantee safety under state measurement uncertainty. The topological heuristic broadly captures the topological structure of the environment and guides the selection of target points for each pursuer. The chance-constrained obstacle-aware Voronoi cell (CCOVC) for each pursuer is then constructed by calculating separating hyperplanes and buffer terms. Finally, we formulate chance CBF and chance Control Lyapunov Function (CLF) constraints based on the CCOVC, using convex approximation to determine their upper bounds, and find the adaptive robust path controller by solving a Quadratically Constrained Quadratic Program (QCQP). Benchmark simulation and experimental results demonstrate the efficiency and robustness of the proposed framework.
Volume 10, Issue 4, pp. 3972-3979.
Citations: 0
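The paper's controller handles chance constraints and a full QCQP, but the underlying CBF idea can be shown in a much-reduced deterministic form. For a single-integrator robot with one circular obstacle (both simplifying assumptions, not the paper's setup), the CBF safety-filter QP with a single constraint has a closed-form solution:

```python
import numpy as np

def cbf_filter(x, u_nom, x_obs, r, alpha=1.0):
    """Minimal CBF safety filter for a single-integrator robot x' = u.
    Barrier h(x) = ||x - x_obs||^2 - r^2; safety requires
        dh/dt = 2(x - x_obs)·u >= -alpha * h(x).
    The QP  min ||u - u_nom||^2  s.t.  a·u >= b  (with a = 2(x - x_obs),
    b = -alpha*h) has the closed-form solution below for one constraint."""
    d = x - x_obs
    h = d @ d - r**2
    a, b = 2 * d, -alpha * h
    slack = b - a @ u_nom
    if slack <= 0:                       # nominal input already safe
        return u_nom
    return u_nom + slack / (a @ a) * a   # minimal correction onto constraint
```

With several constraints (as in the CCOVC construction) the closed form no longer applies and a QP/QCQP solver is needed, which is where the paper's formulation takes over.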
KINND: A Keyframe Insertion Framework via Neural Network Decision-Making for VSLAM
IF 4.6, Category 2 (Computer Science)
IEEE Robotics and Automation Letters Pub Date : 2025-02-28 DOI: 10.1109/LRA.2025.3546795
Yanchao Dong;Peitong Li;Lulu Zhang;Xin Zhou;Bin He;Jie Tang
Abstract: Keyframe insertion is critical for the performance and robustness of SLAM systems. However, traditional heuristic-based methods often lead to suboptimal keyframe selection, compromising the accuracy of localization and mapping. To address this, we propose KINND, a lightweight neural-network-based framework for real-time keyframe insertion. The framework introduces a novel foundational paradigm for learning-based keyframe insertion, encompassing both the model architecture and the training methodology. A neural network model is designed using a hierarchical weighted self-attention mechanism to encode real-time SLAM state information into high-dimensional representations, producing keyframe insertion decisions. To overcome the absence of ground truth for keyframe insertion, a composite loss function is developed by integrating pose error and system state information, providing a metric for this task. Additionally, a novel training mode enhances the model's real-time decision-making capabilities. Experimental results on public and private datasets demonstrate that KINND operates in real time without requiring a GPU and, with a single training session on a public dataset, achieves superior generalization performance on other datasets.
Volume 10, Issue 4, pp. 3908-3915.
Citations: 0
PEnG: Pose-Enhanced Geo-Localisation
IF 4.6, Category 2 (Computer Science)
IEEE Robotics and Automation Letters Pub Date : 2025-02-27 DOI: 10.1109/LRA.2025.3546513
Tavis Shore;Oscar Mendez;Simon Hadfield
Abstract: Cross-view geo-localisation is typically performed at a coarse granularity, because densely sampled satellite image patches overlap heavily, which would make disambiguating patches very challenging. However, by opting for sparsely sampled patches, prior work has placed an artificial upper bound on the achievable localisation accuracy: even a perfect oracle system cannot achieve accuracy greater than the average separation of the tiles. To overcome this limitation, we propose combining cross-view geo-localisation and relative pose estimation to increase precision to a level practical for real-world application. We develop PEnG, a two-stage system which first predicts the most likely edges of a city-scale graph representation upon which a query image lies, then performs relative pose estimation within these edges to determine a precise position. PEnG is the first technique to utilise both viewpoints available within cross-view geo-localisation datasets, which we refer to as Multi-View Geo-Localisation (MVGL). This enhances accuracy to a sub-metre level, with some examples achieving centimetre-level precision. Our proposed ensemble achieves state-of-the-art accuracy, with relative Top-5 m retrieval improvements of 213% over previous works, decreasing the median Euclidean distance error by 96.90%, from the previous best of 734 m down to 22.77 m, when evaluating with 90° horizontal FOV images.
Volume 10, Issue 4, pp. 3835-3842.
Citations: 0
Pair-VPR: Place-Aware Pre-Training and Contrastive Pair Classification for Visual Place Recognition With Vision Transformers
IF 4.6, Category 2 (Computer Science)
IEEE Robotics and Automation Letters Pub Date : 2025-02-27 DOI: 10.1109/LRA.2025.3546512
Stephen Hausler;Peyman Moghadam
Abstract: In this work we propose a novel joint training method for Visual Place Recognition (VPR) which simultaneously learns a global descriptor and a pair classifier for re-ranking. The pair classifier predicts whether a given pair of images comes from the same place. The network comprises only Vision Transformer components for both the encoder and the pair classifier, and both components are trained using their respective class tokens. In existing VPR methods, the network is typically initialized with pre-trained weights from a generic image dataset such as ImageNet. In this work we propose an alternative pre-training strategy that uses Siamese Masked Image Modeling as the pre-training task, together with a place-aware image sampling procedure over a collection of large VPR datasets, to learn visual features tuned specifically for VPR. By re-using the Masked Image Modeling encoder and decoder weights in the second stage of training, Pair-VPR achieves state-of-the-art VPR performance across five benchmark datasets with a ViT-B encoder, with further improvements in localization recall from larger encoders.
Volume 10, Issue 4, pp. 4013-4020. (Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10906598)
Citations: 0
A Robust Myoelectric Gesture Recognition Method for Enhancing the Reliability of Human-Robot Interaction
IF 4.6, Category 2 (Computer Science)
IEEE Robotics and Automation Letters Pub Date : 2025-02-26 DOI: 10.1109/LRA.2025.3546095
Long Wang;Zhangyi Chen;Shanjun Zhou;Yilin Yu;Xiaoling Li
Abstract: Myoelectric gesture recognition based on wearable armbands provides a natural and portable solution for human-robot interaction (HRI). However, various interferences during practical interaction can severely degrade the recognition model's performance, reducing interaction reliability. This study therefore proposes a method called Distribution Shift Online Detection and Unsupervised Domain Adaptation (DSOD-UDA), which addresses two key questions in the interactive process: when does the model's performance decline, and how should the decline be handled? The method uses a discriminator with a sliding window to monitor real-time changes in the feature space of the myoelectric signals and determine whether a distribution shift has occurred. Once a shift is detected, the recognition model is updated online to ensure adaptability to the current distribution. Offline validation experiments were conducted on a public dataset that includes various interference factors. Ten participants then conducted online experiments, simulating practical interference factors by performing a designated task during interaction and using the recognized gestures to control a robot in an object-transfer task. The results demonstrate that, compared with baseline methods, the proposed method significantly enhances gesture recognition performance and exhibits superior robustness to various interference factors.
Volume 10, Issue 4, pp. 3731-3738.
Citations: 0
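Sliding-window distribution-shift detection of the kind described can be sketched with a two-sample Kolmogorov–Smirnov statistic comparing a reference window against the current window of a 1-D feature. The DSOD-UDA discriminator itself is learned, so this is only an illustrative stand-in:

```python
import numpy as np

def detect_shift(reference, window, threshold=0.5):
    """Flag a distribution shift when the empirical CDFs of a reference
    window and the current sliding window differ by more than `threshold`
    (a two-sample Kolmogorov-Smirnov statistic on a 1-D feature)."""
    grid = np.sort(np.concatenate([reference, window]))
    # empirical CDF F(t) = #{x <= t} / n, evaluated on the pooled grid
    cdf_ref = np.searchsorted(np.sort(reference), grid, side="right") / len(reference)
    cdf_win = np.searchsorted(np.sort(window), grid, side="right") / len(window)
    return bool(np.max(np.abs(cdf_ref - cdf_win)) > threshold)
```

In an online system, a detection like this would trigger the unsupervised model update on the most recent window.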
NavRL: Learning Safe Flight in Dynamic Environments
IF 4.6, Category 2 (Computer Science)
IEEE Robotics and Automation Letters Pub Date : 2025-02-26 DOI: 10.1109/LRA.2025.3546069
Zhefan Xu;Xinming Han;Haoyu Shen;Hanyu Jin;Kenji Shimada
Abstract: Safe flight in dynamic environments requires unmanned aerial vehicles (UAVs) to make effective decisions when navigating cluttered spaces with moving obstacles. Traditional approaches often decompose decision-making into hierarchical modules for prediction and planning. Although these handcrafted systems can perform well in specific settings, they may fail if environmental conditions change and often require careful parameter tuning. Additionally, their solutions can be suboptimal due to inaccurate mathematical model assumptions and simplifications aimed at computational efficiency. To overcome these limitations, this letter introduces NavRL, a deep reinforcement learning-based navigation framework built on the Proximal Policy Optimization (PPO) algorithm. NavRL utilizes carefully designed state and action representations, allowing the learned policy to make safe decisions in the presence of both static and dynamic obstacles, with zero-shot transfer from simulation to real-world flight. Furthermore, the proposed method adopts a simple but effective safety shield for the trained policy, inspired by the concept of velocity obstacles, to mitigate potential failures associated with the black-box nature of neural networks. To accelerate convergence, we implement the training pipeline using NVIDIA Isaac Sim, enabling parallel training with thousands of quadcopters. Simulation and physical experiments show that our method ensures safe navigation in dynamic environments and results in the fewest collisions compared to benchmarks.
Volume 10, Issue 4, pp. 3668-3675.
Citations: 0
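A velocity-obstacle safety shield of the kind mentioned can be sketched as a geometric collision-course check plus a fallback selection over candidate velocities. NavRL's actual shield operates on the trained policy's outputs; the candidate set, radius, and selection rule below are illustrative assumptions:

```python
import numpy as np

def in_velocity_obstacle(p_rel, v_robot, v_obs, radius):
    """True if the candidate robot velocity lies inside the velocity
    obstacle: the relative-velocity ray would pass within `radius` of the
    obstacle, i.e. the agents are on a collision course."""
    v = v_robot - v_obs                  # relative velocity
    vv = v @ v
    if vv < 1e-12:
        return bool(p_rel @ p_rel < radius**2)
    t = (p_rel @ v) / vv                 # time of closest approach
    if t < 0:
        return False                     # moving apart
    closest = p_rel - t * v
    return bool(closest @ closest < radius**2)

def safety_shield(p_rel, v_cmd, v_obs, radius, candidates):
    """Pass the policy's command through unchanged when safe; otherwise
    fall back to the safe candidate velocity closest to the command."""
    if not in_velocity_obstacle(p_rel, v_cmd, v_obs, radius):
        return v_cmd
    safe = [c for c in candidates
            if not in_velocity_obstacle(p_rel, c, v_obs, radius)]
    if not safe:
        return np.zeros_like(v_cmd)      # no safe option: stop
    return min(safe, key=lambda c: np.linalg.norm(c - v_cmd))
```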
Quantifying the Sim2Real Gap: Model-Based Verification and Validation in Autonomous Ground Systems
IF 4.6, Category 2 (Computer Science)
IEEE Robotics and Automation Letters Pub Date : 2025-02-26 DOI: 10.1109/LRA.2025.3546126
Ammar Waheed;Madhu Areti;Luke Gallantree;Zohaib Hasnain
Abstract: Quantifying the Sim2Real gap is crucial for validating autonomous ground systems, enabling robust algorithm testing in simulation before real-world deployment and thereby reducing cost and time. This study introduces the Vinnicombe (ν-gap) metric as a quantitative tool for assessing this gap, using a non-holonomic skid-steer differential-drive robot. The ν-gap metric compares two dynamical control systems and returns a value between 0 and 1, where 0 indicates identical systems and 1 indicates significantly different systems. A linear time-invariant dynamic model, optimized through a genetic algorithm, was employed to ensure accurate representation of system behavior across varying conditions. Unlike task-specific metrics focused on localized errors, the ν-gap metric provides a holistic assessment by capturing system-wide differences. The metric quantified significant differences, with a maximum of 0.64 between real-world and simulated trials, highlighting discrepancies in vehicle-environment interactions. Terrain-induced changes in real-world comparisons were quantified with values up to 0.27, reflecting increased compliance and friction on rubber-like surfaces versus concrete. Internal system changes were also identified, with ν-gap values between 0.25 and 0.32, demonstrating sensitivity to changes in vehicle dynamics. These findings highlight the ν-gap metric's utility in enhancing simulation fidelity and reducing reliance on resource-intensive real-world testing.
Volume 10, Issue 4, pp. 3819-3826.
Citations: 0
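For intuition: for SISO plants the ν-gap is, up to a winding-number condition, the supremum over frequency of a chordal distance between the two frequency responses, which is what bounds it to [0, 1]. A minimal numerical sketch (the first-order example plants are illustrative, not the robot models from the paper):

```python
import numpy as np

def chordal_distance(p1, p2):
    """Pointwise chordal distance between two SISO frequency responses:
    kappa = |P1 - P2| / sqrt((1 + |P1|^2)(1 + |P2|^2)), always in [0, 1]."""
    return np.abs(p1 - p2) / np.sqrt((1 + np.abs(p1)**2) * (1 + np.abs(p2)**2))

def nu_gap_siso(P1, P2, omegas):
    """Approximate the Vinnicombe nu-gap for SISO plants as the supremum of
    the chordal distance over a frequency grid. The full definition also
    requires a winding-number condition, not checked here."""
    s = 1j * np.asarray(omegas)
    return float(np.max(chordal_distance(P1(s), P2(s))))

# identical first-order plants give gap 0; a gain mismatch gives a gap in (0, 1)
P = lambda s: 1.0 / (s + 1.0)
Q = lambda s: 2.0 / (s + 1.0)
omegas = np.logspace(-2, 2, 400)
```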