IEEE Robotics and Automation Letters: Latest Articles

TSO-BoW: Accurate Long-Term Loop Closure Detection With Constant Query Time via Online Bag of Words and Trajectory Segmentation
IF 4.6 | CAS Q2 | Computer Science
IEEE Robotics and Automation Letters | Pub Date: 2025-03-12 | DOI: 10.1109/LRA.2025.3550799 | Vol. 10, No. 5, pp. 4388-4395
Shufang Zhang; Jiazheng Wu; Kaiyi Wang; Sanpeng Deng
Abstract: This letter presents TSO-BoW, a lightweight trajectory-segmentation-based Bag-of-Words algorithm for loop closure detection that uses intermittent online training on collected segments. In the online training phase, segments of collected data form sub-trajectories whose features are used for online training, ultimately creating corresponding sub-databases for querying. In the querying phase, we use a multi-level querying approach: candidate sub-databases are first selected by geometric distance using prior pose information; a lower-bound criterion then filters out some sub-databases; finally, PnP-RANSAC performs geometric verification and precise relative pose estimation. By combining a segmented Bag-of-Words with lower-bound elimination, our algorithm mitigates the pose-drift issue of loop detection algorithms that select candidates from prior poses. It maintains constant query time and memory cost without compromising query performance in long-term Simultaneous Localization and Mapping (SLAM). Evaluations on large-scale public datasets demonstrate the algorithm's excellent computational and memory efficiency, query time efficiency, and superior query performance in long-term SLAM systems.
Citations: 0
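As a rough illustration of the multi-level query pipeline this abstract describes, the following Python sketch selects candidate sub-databases by geometric distance to a prior pose and prunes them with a lower-bound score criterion. The data layout, similarity measure, and thresholds are invented here, and the final PnP-RANSAC verification stage is only indicated by a comment; this is not the authors' implementation.

# Hypothetical sketch of a TSO-BoW-style multi-level query; the data layout
# and thresholds are assumptions, not the authors' code.
import numpy as np

class SubDatabase:
    def __init__(self, centroid, descriptors):
        self.centroid = np.asarray(centroid)   # mean pose of the sub-trajectory (x, y)
        self.descriptors = descriptors         # BoW vectors, one per keyframe

def bow_similarity(q, d):
    # cosine similarity between BoW vectors
    return float(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d) + 1e-12))

def query(sub_dbs, query_vec, prior_pose, radius=50.0, lower_bound=0.3):
    # Level 1: keep sub-databases whose centroid lies near the (drifting) prior pose.
    candidates = [db for db in sub_dbs
                  if np.linalg.norm(db.centroid - prior_pose) < radius]
    best = None
    for db in candidates:
        scores = [bow_similarity(query_vec, d) for d in db.descriptors]
        # Level 2: a lower-bound criterion prunes unpromising sub-databases early.
        if max(scores, default=0.0) < lower_bound:
            continue
        top = max(scores)
        if best is None or top > best[1]:
            best = (db, top)
    # Level 3 (not shown): PnP-RANSAC geometric verification and relative pose.
    return best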
CSCPR: Cross-Source-Context Indoor RGB-D Place Recognition
IF 4.6 | CAS Q2 | Computer Science
IEEE Robotics and Automation Letters | Pub Date: 2025-03-12 | DOI: 10.1109/LRA.2025.3551080 | Vol. 10, No. 5, pp. 4628-4635
Jing Liang; Zhuo Deng; Zheming Zhou; Min Sun; Omid Ghasemalizadeh; Cheng-Hao Kuo; Arnie Sen; Dinesh Manocha
Abstract: We extend our previous work, PoCo (Liang et al., 2024), and present a new algorithm, Cross-Source-Context Place Recognition (CSCPR), for RGB-D indoor place recognition. CSCPR integrates global retrieval and reranking into an end-to-end model and consistently uses Context-of-Clusters (CoCs) (Ma et al., 2023) for feature processing. Unlike prior approaches that focus primarily on the RGB domain for place-recognition reranking, CSCPR is designed to handle RGB-D data. We apply CoCs to handle cross-source and cross-scale RGB-D point clouds and introduce two novel modules for reranking: the Self-Context Cluster (SCC), which enhances feature representation, and the Cross-Source Context Cluster (CSCC), which matches query-database pairs based on local features. We also release two new datasets, ScanNetIPR and ARKitIPR. Our experiments demonstrate that CSCPR significantly outperforms state-of-the-art models, by at least 29.27% in Recall@1 on the ScanNet-PR dataset and 43.24% on the new datasets.
Citations: 0
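A toy sketch of the general reranking idea (not the paper's SCC/CSCC modules): local features of the query scan cross-attend to each retrieved candidate's local features, and a learned head scores the pair. All dimensions and the scoring head are assumptions.

# Toy local-feature reranker; dimensions and scoring are assumptions.
import torch
import torch.nn as nn

class CrossAttnReranker(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, query_feats, cand_feats):
        # query_feats: (1, Nq, dim) local features of the query RGB-D scan
        # cand_feats:  (1, Nc, dim) local features of one database candidate
        fused, _ = self.attn(query_feats, cand_feats, cand_feats)
        return self.score(fused.mean(dim=1)).squeeze()  # scalar match score

reranker = CrossAttnReranker()
q = torch.randn(1, 128, 256)
scores = [float(reranker(q, torch.randn(1, 128, 256))) for _ in range(5)]
best = max(range(5), key=lambda i: scores[i])  # rerank the global-retrieval shortlist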
Performance-Driven Constrained Optimal Auto-Tuner for MPC
IF 4.6 | CAS Q2 | Computer Science
IEEE Robotics and Automation Letters | Pub Date: 2025-03-12 | DOI: 10.1109/LRA.2025.3550838 | Vol. 10, No. 5, pp. 4698-4705
Albert Gassol Puigjaner; Manish Prajapat; Andrea Carron; Andreas Krause; Melanie N. Zeilinger
Abstract: A key challenge in tuning Model Predictive Control (MPC) cost-function parameters is ensuring that system performance stays consistently above a certain threshold. To address this challenge, we propose a novel method, COAt-MPC, a Constrained Optimal Auto-Tuner for MPC. With every tuning iteration, COAt-MPC gathers performance data and learns by updating its posterior belief. It explores the tuning parameters' domain towards optimistic parameters in a goal-directed fashion, which is key to its sample efficiency. We theoretically analyze COAt-MPC, showing that it satisfies performance constraints with arbitrarily high probability at all times and provably converges to the optimal performance within finite time. Through comprehensive simulations and comparative analyses on a hardware platform, we demonstrate the effectiveness of COAt-MPC in comparison to classical Bayesian Optimization (BO) and other state-of-the-art methods. When applied to autonomous racing, our approach outperforms baselines in terms of constraint violations and cumulative regret over time.
Citations: 0
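The tuning loop the abstract describes can be pictured with a generic constrained, optimistic Bayesian-style sketch: keep a Gaussian-process posterior over performance, restrict evaluation to parameters whose pessimistic estimate clears the performance threshold, and pick optimistically within that set. The kernel, confidence width, and toy performance function below are assumptions, not COAt-MPC itself.

# Generic constrained optimistic tuning sketch; not the COAt-MPC algorithm.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def lap_time_performance(theta):              # stand-in for a real MPC rollout
    return -(theta - 0.6) ** 2 + 1.0

threshold, beta = 0.8, 2.0                    # performance constraint, confidence width
grid = np.linspace(0.0, 1.0, 200)[:, None]
X, y = [[0.5]], [lap_time_performance(0.5)]   # known-safe seed parameter

for _ in range(15):
    gp = GaussianProcessRegressor(RBF(0.2), alpha=1e-4).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    feasible = mu - beta * sd >= threshold    # high-probability constraint set
    if not feasible.any():
        break
    ucb = np.where(feasible, mu + beta * sd, -np.inf)
    theta = float(grid[np.argmax(ucb), 0])    # goal-directed optimistic pick
    X.append([theta]); y.append(lap_time_performance(theta))

print("best safe parameter:", X[int(np.argmax(y))][0])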
Robot Adversarial Attack on Keystroke Dynamics Based User Authentication System
IF 4.6 | CAS Q2 | Computer Science
IEEE Robotics and Automation Letters | Pub Date: 2025-03-12 | DOI: 10.1109/LRA.2025.3550727 | Vol. 10, No. 5, pp. 4850-4857
Rongyu Yu; Burak Kizilkaya; Zhen Meng; Emma Li; Philip Zhao
Abstract: Adversarial attacks on machine learning systems are an important area of study in cybersecurity. Keystroke dynamics (KD)-based user authentication systems use human typing behavior to distinguish between users. As robots become more capable of mimicking human behavior, they may increasingly threaten behavioral biometric systems by performing adversarial attacks. In this study, we propose a robot adversarial attack framework to evaluate the resilience of eight classifiers and detectors commonly used in the keystroke dynamics literature against robot attacks. We collected typing data from 27 participants across three types of passwords: a complex password (CP) ".tie5Roanl", a text-based password (TP) "kicsikutyatarka", and a numeric password (NP) "4121937761". The results show that 1) in white-box attack scenarios, the robot achieves up to 100% Accuracy (ACC) and over 95% Equal Error Rate (EER); and 2) grey-box attack scenarios also demonstrate significant vulnerabilities, highlighting the need for robust defense strategies to secure keystroke dynamics-based authentication systems against robotic adversarial attacks.
Citations: 0
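For context on the reported metric: Equal Error Rate is conventionally the operating point where the false accept rate equals the false reject rate, so a high induced EER means the verifier can no longer separate genuine users from attacks. A minimal sketch of the standard EER sweep, with synthetic genuine and impostor scores:

# EER computation sketch; the score distributions are synthetic.
import numpy as np

rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 500)    # match scores for the true user
impostor = rng.normal(0.4, 0.1, 500)   # match scores for attack attempts

thresholds = np.linspace(0.0, 1.0, 1000)
far = np.array([(impostor >= t).mean() for t in thresholds])  # false accept rate
frr = np.array([(genuine < t).mean() for t in thresholds])    # false reject rate
i = int(np.argmin(np.abs(far - frr)))
print(f"EER ~ {(far[i] + frr[i]) / 2:.3f} at threshold {thresholds[i]:.3f}")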
Sparse Prototype Network for Explainable Pedestrian Behavior Prediction
IF 4.6 | CAS Q2 | Computer Science
IEEE Robotics and Automation Letters | Pub Date: 2025-03-12 | DOI: 10.1109/LRA.2025.3550728 | Vol. 10, No. 5, pp. 4196-4203
Yan Feng; Alexander Carballo; Kazuya Takeda
Abstract: Predicting pedestrian behavior is challenging yet crucial for applications such as autonomous driving and smart cities. Recent deep learning models achieve remarkable prediction accuracy, but they fail to explain their inner workings; one reason is their multi-modal inputs. To bridge this gap, we present the Sparse Prototype Network (SPN), an explainable method that simultaneously predicts a pedestrian's future action, trajectory, and pose. SPN leverages an intermediate prototype bottleneck layer to provide sample-based explanations for its predictions. The prototypes are modality-independent, meaning they can correspond to any input modality, so SPN extends to arbitrary combinations of modalities. Regularized by mono-semanticity and clustering constraints, the prototypes learn consistent and human-understandable features and achieve state-of-the-art performance on action, trajectory, and pose prediction on TITAN and PIE. Finally, we propose a metric named the Top-K Mono-semanticity Scale to quantitatively evaluate explainability. Qualitative results show a positive correlation between sparsity and explainability.
Citations: 0
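A minimal sketch of a prototype bottleneck layer of the kind the abstract describes: predictions are mediated by similarities to learned prototypes, so each output can be explained by its most-activated prototypes. Sizes and the sparsity term are assumptions, not SPN's actual design.

# Prototype-bottleneck sketch; sizes and regularizer are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeBottleneck(nn.Module):
    def __init__(self, feat_dim=128, n_prototypes=32, n_actions=4):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, feat_dim))
        self.head = nn.Linear(n_prototypes, n_actions)

    def forward(self, feats):
        # feats: (B, feat_dim) fused multi-modal features
        sims = F.cosine_similarity(feats[:, None, :], self.prototypes[None], dim=-1)
        sparsity = sims.abs().mean()   # L1-style term encouraging sparse activations
        return self.head(sims), sims, sparsity

model = PrototypeBottleneck()
logits, sims, reg = model(torch.randn(8, 128))
top_protos = sims[0].topk(3).indices   # sample-based explanation: nearest prototypes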
MAFF-Net: Enhancing 3D Object Detection With 4D Radar via Multi-Assist Feature Fusion
IF 4.6 | CAS Q2 | Computer Science
IEEE Robotics and Automation Letters | Pub Date: 2025-03-12 | DOI: 10.1109/LRA.2025.3550707 | Vol. 10, No. 5, pp. 4284-4291
Xin Bi; Caien Weng; Panpan Tong; Baojie Fan; Arno Eichberger
Abstract: Perception systems are crucial for the safe operation of autonomous vehicles, particularly for 3D object detection. While LiDAR-based methods are limited by adverse weather conditions, 4D radars offer promising all-weather capabilities. However, 4D radars introduce challenges such as extreme sparsity, noise, and limited geometric information in their point clouds. To address these issues, we propose MAFF-Net, a novel multi-assist feature-fusion network for 3D object detection with a single 4D radar. We introduce a sparsity pillar attention (SPA) module to mitigate the effects of sparsity while ensuring a sufficient receptive field. Additionally, we design a cluster query cross-attention (CQCA) module, which uses velocity-based clustered features as queries in the cross-attention fusion process; this helps the network enrich the feature representations of potential objects while reducing measurement errors caused by limited angular resolution and multipath effects. Furthermore, we develop a cylindrical denoising assist (CDA) module to reduce noise interference, improving the accuracy of 3D bounding-box predictions. Experiments on the VoD and TJ4DRadSet datasets demonstrate that MAFF-Net achieves state-of-the-art performance, outperforming 16-layer LiDAR systems and running at over 17.9 FPS, making it suitable for real-time detection in autonomous vehicles.
Citations: 0
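A rough sketch of the cluster-query cross-attention idea: radar points are grouped by Doppler velocity, pooled group features act as attention queries, and all point features serve as keys and values. The one-dimensional velocity grouping and all sizes below are invented for illustration, not the CQCA module itself.

# Velocity-clustered queries over point features; grouping rule is an assumption.
import torch
import torch.nn as nn

def velocity_clusters(points, v_gap=0.5):
    # points: (N, C) with radial velocity in column 3; group by gaps in sorted velocity
    order = points[:, 3].argsort().tolist()
    groups, current = [], [order[0]]
    for i in order[1:]:
        if points[i, 3] - points[current[-1], 3] > v_gap:
            groups.append(current)
            current = []
        current.append(i)
    groups.append(current)
    return groups

dim = 64
points = torch.randn(256, 4)               # x, y, z, Doppler velocity
feats = torch.randn(256, dim)              # per-point features from a backbone
queries = torch.stack([feats[g].mean(0) for g in velocity_clusters(points)])
attn = nn.MultiheadAttention(dim, 4, batch_first=True)
fused, _ = attn(queries[None], feats[None], feats[None])   # (1, n_clusters, dim)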
Learning Decentralized Multi-Robot PointGoal Navigation
IF 4.6 | CAS Q2 | Computer Science
IEEE Robotics and Automation Letters | Pub Date: 2025-03-12 | DOI: 10.1109/LRA.2025.3550798 | Vol. 10, No. 4, pp. 4117-4124
Takieddine Soualhi; Nathan Crombez; Yassine Ruichek; Alexandre Lombard; Stéphane Galland
Abstract: Integrating robots into real-world applications requires effective consideration of various agents, including other robots. Multi-agent reinforcement learning (MARL) is an established field that addresses multi-agent systems by leveraging reinforcement learning techniques. Despite its potential, the study of multi-robot systems, particularly vision-based ones, remains in its early stages. In this context, this article tackles the PointGoal navigation problem for multi-robot systems, extending the traditional single-agent focus to a multi-agent context. To this end, we introduce a training environment designed for vision-based multi-robot challenges. In addition, we propose a method based on the centralized training-decentralized execution paradigm within MARL to explore three PointGoal navigation scenarios: SpecificGoal, where each agent has a distinct target; CommonGoal, where all agents share the same target; and Ad-hoCoop, which requires agents to adapt to varying team sizes. Our results lay the groundwork for adopting MARL approaches for vision-based multi-robot tasks.
Citations: 0
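The centralized training-decentralized execution paradigm the authors build on can be sketched as follows: each robot's actor maps only its own observation to an action, while a centralized critic, used during training only, scores the joint observation-action. Network sizes are assumptions.

# CTDE skeleton; architecture sizes are assumptions.
import torch
import torch.nn as nn

n_agents, obs_dim, act_dim = 3, 32, 4

actors = nn.ModuleList(
    nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))
    for _ in range(n_agents))
critic = nn.Sequential(                    # centralized: sees joint obs + joint actions
    nn.Linear(n_agents * (obs_dim + act_dim), 128), nn.ReLU(), nn.Linear(128, 1))

obs = torch.randn(n_agents, obs_dim)
actions = torch.stack([actor(o) for actor, o in zip(actors, obs)])  # decentralized acting
value = critic(torch.cat([obs.flatten(), actions.flatten()]))       # training-time only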
Robot Behavior Personalization From Sparse User Feedback
IF 4.6 | CAS Q2 | Computer Science
IEEE Robotics and Automation Letters | Pub Date: 2025-03-12 | DOI: 10.1109/LRA.2025.3550833 | Vol. 10, No. 5, pp. 4580-4587
Maithili Patel; Sonia Chernova
Abstract: As service robots become more general-purpose, they will need to adapt to their users' preferences over a large set of all possible tasks they can perform, including preferences about which actions users want to delegate to robots rather than do themselves. Existing personalization approaches require task-specific data for each user. To handle the diversity of household tasks and users, and the nuances of user preferences across tasks, we propose to learn a task adaptation function independently, which can be used in tandem with any universal robot policy to personalize robot behavior. We create the Task Adaptation using Abstract Concepts (TAACo) framework. TAACo learns to predict the user's preferred manner of assistance for any given task by mediating its reasoning through a representation composed of abstract concepts built from user feedback. TAACo generalizes to an open set of household tasks from a small amount of user feedback and explains its inferences through intuitive concepts. We evaluate our model on a dataset we collected of 5 people's preferences and show that, with 40 samples of user feedback, TAACo outperforms GPT-4 by 16% and a rule-based system by 54% in prediction accuracy.
Citations: 0
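A loose sketch of concept-mediated preference prediction in the spirit described above: a task embedding is mapped to human-readable concept activations, and the preferred manner of assistance is predicted from those concepts alone, so the top concepts double as an explanation. The concept names and assistance modes here are invented, not TAACo's.

# Concept-mediated prediction sketch; concepts and modes are invented.
import torch
import torch.nn as nn

concepts = ["physically_tiring", "requires_care", "time_sensitive", "enjoyable"]
modes = ["do_fully", "remind_only", "leave_to_user"]

to_concepts = nn.Sequential(nn.Linear(64, len(concepts)), nn.Sigmoid())
to_preference = nn.Linear(len(concepts), len(modes))

task_embedding = torch.randn(1, 64)       # any task encoder could produce this
c = to_concepts(task_embedding)           # interpretable bottleneck
pred = to_preference(c).softmax(-1)
why = [concepts[i] for i in c[0].topk(2).indices]   # explanation via top concepts
print(modes[int(pred.argmax())], "because the task looks:", why)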
Autoregressive Action Sequence Learning for Robotic Manipulation
IF 4.6 | CAS Q2 | Computer Science
IEEE Robotics and Automation Letters | Pub Date: 2025-03-12 | DOI: 10.1109/LRA.2025.3550849 | Vol. 10, No. 5, pp. 4898-4905
Xinyu Zhang; Yuhan Liu; Haonan Chang; Liam Schramm; Abdeslam Boularias
Abstract: Designing a universal policy architecture that performs well across diverse robots and task configurations remains a key challenge. In this work, we address it by representing robot actions as sequential data and generating actions through autoregressive sequence modeling. Existing autoregressive architectures generate end-effector waypoints sequentially, like word tokens in language modeling, which limits them to low-frequency control tasks. Unlike language, robot actions are heterogeneous and often include high-frequency continuous values, such as joint positions, 2D pixel coordinates, and end-effector poses, which are not easily suited to language-based modeling. Based on this insight, we extend causal transformers' single-token prediction to predict a variable number of tokens in a single step through our Chunking Causal Transformer (CCT). This enables robust performance across tasks with various control frequencies, greater efficiency through fewer autoregression steps, and a hybrid action-sequence design that mixes different types of actions and uses a different chunk size for each action type. Based on CCT, we propose the Autoregressive Policy (ARP) architecture, which solves manipulation tasks by generating hybrid action sequences. We evaluate ARP across diverse robotic manipulation environments, including Push-T, ALOHA, and RLBench, and show that ARP, as a universal architecture, matches or outperforms the environment-specific state of the art on all tested benchmarks while being more efficient in computation and parameter size.
Citations: 0
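The chunked decoding idea can be sketched as a causal transformer whose last position proposes the next K tokens at once, so a trajectory of 24 tokens needs 3 forward passes instead of 24. The toy model below is a stand-in under assumed sizes, not the paper's CCT implementation.

# Chunked autoregressive decoding sketch; model and chunk size are assumptions.
import torch
import torch.nn as nn

vocab, dim, K = 256, 128, 8              # K action tokens emitted per step
embed = nn.Embedding(vocab, dim)
layer = nn.TransformerEncoderLayer(dim, 4, batch_first=True)
backbone = nn.TransformerEncoder(layer, 2)
head = nn.Linear(dim, K * vocab)         # last position proposes the next K tokens

seq = torch.randint(0, vocab, (1, 16))   # observed action-token prefix
for _ in range(3):                       # 3 forward passes emit 24 tokens
    L = seq.shape[1]
    causal = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
    h = backbone(embed(seq), mask=causal)
    nxt = head(h[:, -1]).view(1, K, vocab).argmax(-1)   # greedy-decode one chunk
    seq = torch.cat([seq, nxt], dim=1)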
AquaFuse: Waterbody Fusion for Physics-Guided View Synthesis of Underwater Scenes
IF 4.6 | CAS Q2 | Computer Science
IEEE Robotics and Automation Letters | Pub Date: 2025-03-12 | DOI: 10.1109/LRA.2025.3550816 | Vol. 10, No. 5, pp. 4316-4323
Md Abu Bakr Siddique; Jiayi Wu; Ioannis Rekleitis; Md Jahidul Islam
Abstract: In this letter, we introduce AquaFuse, a physics-based method for synthesizing waterbody properties in underwater imagery. We formulate a closed-form solution for waterbody fusion that facilitates realistic data augmentation and geometrically consistent underwater scene rendering. AquaFuse leverages the physical characteristics of light propagation underwater to transfer the waterbody of one scene onto the object contents of another. Unlike data-driven style-transfer methods, AquaFuse preserves the depth consistency and object geometry of the input scene. We validate this unique feature through comprehensive experiments over diverse sets of underwater scenes, finding that AquaFused images preserve over 94% depth consistency and 90-95% structural similarity of the input scenes. We also demonstrate accurate 3D view synthesis that preserves object geometry while adapting to the inherent waterbody fusion process. AquaFuse opens up a new research direction in data augmentation by geometry-preserving style transfer for underwater imaging and robot vision.
Citations: 0
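Physics-guided waterbody synthesis of this kind typically builds on the standard underwater image formation model, I = J exp(-beta_D z) + B_inf (1 - exp(-beta_B z)): an attenuated direct signal plus depth-dependent backscatter. A sketch that applies assumed (not estimated) target-waterbody coefficients to a restored scene; this illustrates the formation model, not AquaFuse's closed-form fusion:

# Waterbody rendering via the common underwater image formation model;
# all coefficients are invented, and real use requires per-scene estimation.
import numpy as np

def add_waterbody(J, depth, beta_d, beta_b, B_inf):
    # J: (H, W, 3) restored object radiance in [0, 1]; depth: (H, W) in meters
    t_d = np.exp(-beta_d[None, None] * depth[..., None])       # direct transmission
    t_b = 1.0 - np.exp(-beta_b[None, None] * depth[..., None]) # backscatter buildup
    return J * t_d + B_inf[None, None] * t_b

J = np.random.rand(120, 160, 3)                 # stand-in for a restored scene
depth = np.random.uniform(0.5, 8.0, (120, 160))
beta_d = np.array([0.40, 0.20, 0.10])           # red attenuates fastest underwater
beta_b = np.array([0.30, 0.15, 0.08])
B_inf = np.array([0.05, 0.25, 0.35])            # bluish-green veiling light
I = add_waterbody(J, depth, beta_d, beta_b, B_inf)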