IEEE Robotics and Automation Letters: Latest Publications

Cosserat Rods With Cross-Sectional Deformation for Soft Robot Modeling
IF 5.3 | CAS Tier 2 (Computer Science)
IEEE Robotics and Automation Letters | Pub Date: 2025-10-15 | DOI: 10.1109/LRA.2025.3621982
Samuel Tobin;Joshua Gaston;Vincent Aloi;Eric Barth;Caleb Rucker
{"title":"Cosserat Rods With Cross-Sectional Deformation for Soft Robot Modeling","authors":"Samuel Tobin;Joshua Gaston;Vincent Aloi;Eric Barth;Caleb Rucker","doi":"10.1109/LRA.2025.3621982","DOIUrl":"https://doi.org/10.1109/LRA.2025.3621982","url":null,"abstract":"Cosserat rod models are widely used to simulate, design, and control soft robots. The Cosserat framework accounts for bending, torsion, transverse shear, and elongation of a long, slender structure and correctly handles large rotations and deflections in 3D, while being far less computationally expensive than full 3D elasticity models using finite elements. However, the Cosserat model is not always appropriate for soft robotic structures since it assumes the cross sections never change size or shape. In this letter, we extend the standard Cosserat rod model to include cross-sectional deformation while retaining much of its simplicity. We add to the Cosserat model additional degrees of freedom that parameterize stretch and shear in the cross-sectional plane and their rates of change along the rod length. We then formulate several possible constitutive laws on the state variables (one linear and one non-linear) and compare them to the standard Cosserat energy expressions to gain insight. We further show how fluidic actuation and tendon actuation can be incorporated into the model, and we compare the extended Cosserat models to 3D nonlinear finite-element simulations with good agreement. Finally, we demonstrate use of this model in a robotics context to control the path-following gait of a peristaltic worm-inspired soft robot.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 12","pages":"12309-12316"},"PeriodicalIF":5.3,"publicationDate":"2025-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145341063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
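For readers unfamiliar with the formalism, the classical Cosserat rod kinematics that this paper extends reduce to two ODEs along arclength s: dp/ds = R v and dR/ds = R [u]x, where v and u are the linear and angular strains. The sketch below integrates them numerically under the simplifying assumption of constant strains; it is background only, not the authors' cross-sectional-deformation model.

```python
# Minimal sketch of *standard* Cosserat rod kinematics (background, not the
# paper's extended model). Constant strains v, u are an illustrative assumption.
import numpy as np

def skew(w):
    """Skew-symmetric matrix so that skew(w) @ x == np.cross(w, x)."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def integrate_rod(v, u, length=1.0, n=200):
    """Euler-integrate dp/ds = R v, dR/ds = R [u]x along arclength."""
    ds = length / n
    p, R = np.zeros(3), np.eye(3)
    backbone = [p.copy()]
    for _ in range(n):
        p = p + R @ v * ds
        R = R @ (np.eye(3) + skew(u) * ds)   # first-order rotation update
        U, _, Vt = np.linalg.svd(R)          # re-orthonormalize R
        R = U @ Vt
        backbone.append(p.copy())
    return np.array(backbone)

# Inextensible, unsheared reference strains plus constant curvature about x:
shape = integrate_rod(v=np.array([0.0, 0.0, 1.0]), u=np.array([1.5, 0.0, 0.0]))
print(shape[-1])  # tip position of the bent rod
```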
DSEC-Aware: Post-Collision Safety Control of Mobile Manipulators via Directional Energy Constraints
IF 5.3 | CAS Tier 2 (Computer Science)
IEEE Robotics and Automation Letters | Pub Date: 2025-10-15 | DOI: 10.1109/LRA.2025.3621933
Jinhua Ye;Yechen Fan;Linxin Hong;Haibin Wu;Gengfeng Zheng
{"title":"DSEC-Aware: Post-Collision Safety Control of Mobile Manipulators via Directional Energy Constraints","authors":"Jinhua Ye;Yechen Fan;Linxin Hong;Haibin Wu;Gengfeng Zheng","doi":"10.1109/LRA.2025.3621933","DOIUrl":"https://doi.org/10.1109/LRA.2025.3621933","url":null,"abstract":"In this letter, we introduce a Directionally-aware Dynamic Energy Constraint (DSEC-Aware) framework to enhance energy-based safety control in post-collision human-robot collaboration (HRC). The method employs a selective energy dissipation mechanism, applying variable damping only along critical collision directions to effectively reduce impact forces while preserving user motion intentions in non-critical directions. It further adjusts energy boundaries dynamically according to human-robot proximity and introduces a multidimensional Danger Index (DI) model that incorporates physical parameters such as effective mass, contact stiffness, and human tolerance limits for accurate risk evaluation. Experimental results demonstrate that, compared with state-of-the-art (SOTA) methods, the proposed strategy reduces collision forces by approximately 58.04% and consistently maintains a low and stable collision risk, thereby significantly improving both the safety and practicality of HRC.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 12","pages":"12325-12332"},"PeriodicalIF":5.3,"publicationDate":"2025-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145339722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
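The core idea of dissipating energy only along the collision-critical direction can be illustrated with a toy damping law. Everything in this sketch (the gain d_crit, a known collision normal n_hat) is an assumption for illustration; the paper's DSEC-Aware controller additionally adapts energy bounds with proximity and a Danger Index.

```python
# Hedged sketch of selective, direction-aware energy dissipation: damp only
# the velocity component along the collision normal, leave the rest free.
import numpy as np

def directional_damping_force(v_ee, n_hat, d_crit):
    """Dissipative force acting only along the critical direction n_hat."""
    n_hat = n_hat / np.linalg.norm(n_hat)
    v_along = np.dot(v_ee, n_hat) * n_hat   # component along collision normal
    return -d_crit * v_along                # zero in non-critical directions

v_ee = np.array([0.3, -0.1, 0.2])           # end-effector velocity [m/s]
n_hat = np.array([0.0, 0.0, 1.0])           # detected collision direction
print(directional_damping_force(v_ee, n_hat, d_crit=80.0))
```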
FreeMask3D: Zero-Shot Point Cloud Instance Segmentation Without 3D Training
IF 5.3 | CAS Tier 2 (Computer Science)
IEEE Robotics and Automation Letters | Pub Date: 2025-10-15 | DOI: 10.1109/LRA.2025.3621977
Mingquan Zhou;Xiaodong Wu;Chen He;Ruiping Wang;Xilin Chen
{"title":"FreeMask3D: Zero-Shot Point Cloud Instance Segmentation Without 3D Training","authors":"Mingquan Zhou;Xiaodong Wu;Chen He;Ruiping Wang;Xilin Chen","doi":"10.1109/LRA.2025.3621977","DOIUrl":"https://doi.org/10.1109/LRA.2025.3621977","url":null,"abstract":"Point cloud instance segmentation is crucial for 3D scene understanding in robotics. However, existing methods heavily rely on learning-based approaches that require large amounts of annotated 3D data, resulting in high annotation costs. Therefore, developing cost-effective and data-efficient solutions is essential. To this end, we propose FreeMask3D, a novel approach that achieves 3D point cloud instance segmentation without requiring any 3D annotation or additional training. Our method consists of two main steps: instance localization and instance recognition. For instance localization, we leverage pre-trained 2D instance segmentation models to perform instance segmentation on corresponding RGB-D images. These results are then mapped to 3D space and fused across frames to generate the final 3D instance masks. For instance recognition, the OpenSem module infers the category of each instance by leveraging the generalization capabilities of cross-modal large models, such as CLIP, to enable open-vocabulary semantic recognition. Experiments and ablation studies on four challenging benchmarks—ScanNetv2, ScanNet200, S3DIS, and Replica—demonstrate that FreeMask3D achieves competitive or superior performance compared to state-of-the-art methods, despite without 3D supervision. Qualitative results highlight its open-vocabulary capabilities based on color, affordance, or uncommon phrase description.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 12","pages":"12301-12308"},"PeriodicalIF":5.3,"publicationDate":"2025-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145339721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
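The 2D-to-3D mapping step in the abstract amounts to back-projecting masked pixels through a pinhole camera model using a depth map. A minimal sketch, assuming standard intrinsics (fx, fy, cx, cy) and a single frame (the paper additionally fuses masks across frames):

```python
# Lift a 2D instance mask into camera-frame 3D points with depth + intrinsics.
import numpy as np

def mask_to_points(mask, depth, fx, fy, cx, cy):
    """Back-project pixels where mask is True into (N, 3) 3D points."""
    v, u = np.nonzero(mask)          # pixel rows (v) and columns (u)
    z = depth[v, u]
    valid = z > 0                    # discard missing depth
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

depth = np.full((480, 640), 2.0)          # toy 2 m depth image
mask = np.zeros((480, 640), dtype=bool)
mask[200:280, 300:340] = True             # a fake 2D instance mask
pts = mask_to_points(mask, depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(pts.shape)
```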
MGPose: Wide-Baseline Relative Camera Pose Estimation Using Matching-Guided Dual Channel-Attention
IF 5.3 | CAS Tier 2 (Computer Science)
IEEE Robotics and Automation Letters | Pub Date: 2025-10-15 | DOI: 10.1109/LRA.2025.3621968
Wangping Wu;Chuhua Huang;Yongxing Shen;Xin Huang
{"title":"MGPose: Wide-Baseline Relative Camera Pose Estimation Using Matching-Guided Dual Channel-Attention","authors":"Wangping Wu;Chuhua Huang;Yongxing Shen;Xin Huang","doi":"10.1109/LRA.2025.3621968","DOIUrl":"https://doi.org/10.1109/LRA.2025.3621968","url":null,"abstract":"Relative camera pose estimation is a fundamental task in computer vision and robotics. In wide-baseline scenarios with limited visual overlap, traditional methods often perform poorly. Existing deep learning approaches are also hindered by irrelevant features and insufficient modeling of the relative motion between image pairs, making accurate pose estimation particularly challenging. In this letter, we propose MGPose, a camera relative pose estimation method using a matching-guided dual-channel attention mechanism. For wide-baseline image pairs, MGPose effectively reduces interference from uncorrelated features through a feature matching strategy, utilizes camera motion prior knowledge to capture the relative motion characteristics of matched points, and employs a bidirectional channel cross-attention mechanism along with a channel self-attention mechanism to fully capture the interactions between different channels of matched points, enabling efficient feature fusion for the image pairs. Extensive experiments on Matterport3D and ScanNet show that MGPose outperforms or matches state-of-the-art methods in camera relative pose estimation.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 12","pages":"12293-12300"},"PeriodicalIF":5.3,"publicationDate":"2025-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145339597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
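A single-head, numpy-only sketch of channel-wise cross-attention between two point-feature sets gives a rough sense of the mechanism named in the abstract. The dimensions, the single head, and the pairing of the two feature maps are assumptions for illustration, not MGPose's architecture.

```python
# Channel attention: attend over channels (rows) rather than spatial positions.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_cross_attention(fa, fb):
    """fa, fb: (C, N) features, C channels over N matched points.
    Each channel of fa aggregates the channels of fb."""
    scale = np.sqrt(fa.shape[1])
    attn = softmax(fa @ fb.T / scale, axis=-1)   # (C, C) channel affinities
    return attn @ fb                             # fused (C, N) features

fa = np.random.randn(64, 512)   # features of matched points in image A
fb = np.random.randn(64, 512)   # features of matched points in image B
print(channel_cross_attention(fa, fb).shape)     # (64, 512)
```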
Globally-Stable and Robust Image-Based Visual Servoing for Positioning With Respect to a Cylinder
IF 5.3 | CAS Tier 2 (Computer Science)
IEEE Robotics and Automation Letters | Pub Date: 2025-10-09 | DOI: 10.1109/LRA.2025.3619710
Alessandro Colotti;François Chaumette
{"title":"Globally-Stable and Robust Image-Based Visual Servoing for Positioning With Respect to a Cylinder","authors":"Alessandro Colotti;François Chaumette","doi":"10.1109/LRA.2025.3619710","DOIUrl":"https://doi.org/10.1109/LRA.2025.3619710","url":null,"abstract":"This letter proposes a new image-based visual servoing controller for positioning a camera with respect to a cylindrical object. Traditional image-based approaches often rely on estimating planar parameters from the cylinder’s projected edges, making them sensitive to noise and modeling errors. In this work, we introduce a novel controller that uses pure image features while directly tied to the cylinder’s 3D pose, which depends solely on the cylinder radius. Crucially, this controller offers formal global stability irrespective of the radius estimate. Simulations and real experiments with a robotic arm confirm the controller improved convergence and robustness under practical conditions.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 11","pages":"12071-12078"},"PeriodicalIF":5.3,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145315399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
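For context, controllers in this family build on the classical IBVS velocity law v = -lambda * L+ * e, where L is the interaction matrix of the chosen features and e the feature error. A minimal sketch of that generic law follows; the paper's contribution is the specific cylinder feature set and its global stability proof, not this formula.

```python
# Classical image-based visual servoing law: v = -lambda * pinv(L) @ e.
import numpy as np

def ibvs_velocity(L, e, gain=0.5):
    """Camera twist from interaction matrix L (k x 6) and feature error e (k,)."""
    return -gain * np.linalg.pinv(L) @ e

L = np.random.randn(4, 6)      # toy interaction matrix for 4 features
e = np.random.randn(4)         # current minus desired feature values
print(ibvs_velocity(L, e))     # 6-vector: (v_x, v_y, v_z, w_x, w_y, w_z)
```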
SSF-PAN: Semantic Scene Flow-Based Perception for Autonomous Navigation in Traffic Scenarios
IF 5.3 | CAS Tier 2 (Computer Science)
IEEE Robotics and Automation Letters | Pub Date: 2025-10-09 | DOI: 10.1109/LRA.2025.3619749
Yinqi Chen;Meiying Zhang;Qi Hao;Guang Zhou
{"title":"SSF-PAN: Semantic Scene Flow-Based Perception for Autonomous Navigation in Traffic Scenarios","authors":"Yinqi Chen;Meiying Zhang;Qi Hao;Guang Zhou","doi":"10.1109/LRA.2025.3619749","DOIUrl":"https://doi.org/10.1109/LRA.2025.3619749","url":null,"abstract":"Vehicle detection and localization in complex traffic scenarios pose significant challenges due to the interference of moving objects. Traditional methods often rely on outlier exclusions or semantic segmentations, which suffer from low computational efficiency and accuracy. The proposed SSF-PAN can achieve the functionalities of LiDAR point cloud based object detection/localization and SLAM (Simultaneous Localization and Mapping) with high computational efficiency and accuracy, enabling map-free navigation frameworks. The novelty of this work is threefold: 1) developing a neural network which can achieve segmentation among static and dynamic objects within the scene flows with different motion features, that is, semantic scene flow (SSF); 2) developing an iterative framework which can further optimize the quality of input scene flows and output segmentation results; 3) developing a scene flow-based navigation platform which can test the performance of the SSF perception system in the simulation environment. The proposed SSF-PAN method is validated using the SUScape-CARLA and the KITTI (Geiger et al., 2013) datasets, as well as on the CARLA simulator. Experimental results demonstrate that the proposed approach outperforms traditional methods in terms of scene flow computation accuracy, moving object detection accuracy, computational efficiency, and autonomous navigation effectiveness.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 11","pages":"12173-12180"},"PeriodicalIF":5.3,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145315416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
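One plausible reading of flow-based static/dynamic separation is to threshold the residual between the measured scene flow and the flow a static point would exhibit under ego-motion. The sketch below illustrates that reading with toy data; the paper's SSF network learns this segmentation rather than thresholding it, and the ego-motion (ego_R, ego_t) and threshold here are assumptions.

```python
# Toy static/dynamic split of LiDAR points by residual scene-flow magnitude.
import numpy as np

def split_by_flow(points, flow, ego_R, ego_t, thresh=0.1):
    """points: (N, 3); flow: (N, 3) estimated scene flow between two sweeps.
    Subtract the flow a static point would have under ego-motion (R, t)."""
    ego_flow = points @ ego_R.T + ego_t - points   # rigid-motion-induced flow
    residual = np.linalg.norm(flow - ego_flow, axis=1)
    dynamic = residual > thresh                    # large residual => moving
    return points[~dynamic], points[dynamic]

pts = np.random.rand(1000, 3) * 20
flow = np.zeros_like(pts)
flow[:50] += [1.0, 0.0, 0.0]                       # 50 moving points
static, dynamic = split_by_flow(pts, flow, np.eye(3), np.zeros(3))
print(len(static), len(dynamic))                   # 950 50
```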
Robust Unsupervised Domain Adaptation for 3D Point Cloud Segmentation Under Source Adversarial Attacks
IF 5.3 | CAS Tier 2 (Computer Science)
IEEE Robotics and Automation Letters | Pub Date: 2025-10-09 | DOI: 10.1109/LRA.2025.3619750
Haosheng Li;Junjie Chen;Yuecong Xu;Kemi Ding
{"title":"Robust Unsupervised Domain Adaptation for 3D Point Cloud Segmentation Under Source Adversarial Attacks","authors":"Haosheng Li;Junjie Chen;Yuecong Xu;Kemi Ding","doi":"10.1109/LRA.2025.3619750","DOIUrl":"https://doi.org/10.1109/LRA.2025.3619750","url":null,"abstract":"Unsupervised domain adaptation (UDA) frameworks have shown good generalization capabilities for 3D point cloud semantic segmentation models on clean data. However, existing works overlook adversarial robustness when the source domain itself is compromised. To comprehensively explore the robustness of the UDA frameworks, we first design a stealthy adversarial point cloud generation attack that can significantly contaminate datasets with only minor perturbations to the point cloud surface. Based on that, we propose a novel dataset, AdvSynLiDAR, comprising synthesized contaminated LiDAR point clouds. With the generated corrupted data, we further develop the Adversarial Adaptation Framework as the countermeasure. Specifically, by extending the key point sensitive loss towards the Robust Long-Tailed loss and utilizing a decoder branch, our approach enables the model to focus on long-tailed classes during the pre-training phase and leverages high-confidence decoded point cloud information to restore point cloud structures during the adaptation phase. We evaluated our method on the AdvSynLiDAR dataset, where the results demonstrate that our method can mitigate performance degradation under source adversarial perturbations for UDA in the 3D point cloud segmentation application.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 12","pages":"12317-12324"},"PeriodicalIF":5.3,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145339720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
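As background on what a "minor perturbation to the point cloud surface" can look like, the sketch below bounds each point's displacement along an (assumed given) surface normal, which keeps local shape nearly intact. This is a generic illustration, not the paper's stealthy attack or the AdvSynLiDAR generation pipeline.

```python
# Bounded surface-aligned point perturbation: shift each point by at most
# eps meters along its normal.
import numpy as np

def perturb_along_normals(points, normals, eps=0.02):
    """Return points displaced by a random amount in [-eps, eps] along normals."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    offsets = np.random.uniform(-eps, eps, size=(len(points), 1)) * n
    return points + offsets

pts = np.random.rand(2048, 3)
nrm = np.random.randn(2048, 3)       # stand-in normals for illustration
adv = perturb_along_normals(pts, nrm)
print(np.abs(adv - pts).max())       # bounded by eps
```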
TCB-VIO: Tightly-Coupled Focal-Plane Binary-Enhanced Visual Inertial Odometry
IF 5.3 | CAS Tier 2 (Computer Science)
IEEE Robotics and Automation Letters | Pub Date: 2025-10-09 | DOI: 10.1109/LRA.2025.3619774
Matthew Lisondra;Junseo Kim;Glenn Takashi Shimoda;Kourosh Zareinia;Sajad Saeedi
{"title":"TCB-VIO: Tightly-Coupled Focal-Plane Binary-Enhanced Visual Inertial Odometry","authors":"Matthew Lisondra;Junseo Kim;Glenn Takashi Shimoda;Kourosh Zareinia;Sajad Saeedi","doi":"10.1109/LRA.2025.3619774","DOIUrl":"https://doi.org/10.1109/LRA.2025.3619774","url":null,"abstract":"Vision algorithms can be executed directly on the image sensor when implemented on the next-generation sensors known as focal-plane sensor-processor arrays (FPSP)s, where every pixel has a processor. FPSPs greatly improve latency, reducing the problems associated with the bottleneck of data transfer from a vision sensor to a processor. FPSPs accelerate vision-based algorithms such as visual-inertial odometry (VIO). However, VIO frameworks suffer from spatial drift due to the vision-based pose estimation, whilst temporal drift arises from the inertial measurements. FPSPs circumvent the spatial drift by operating at a high frame rate to match the high-frequency output of the inertial measurements. In this letter, we present TCB-VIO, a tightly-coupled 6 degrees-of-freedom VIO by a Multi-State Constraint Kalman Filter (MSCKF), operating at a high frame-rate of 250 FPS and from IMU measurements obtained at 400 Hz. TCB-VIO outperforms state-of-the-art methods: ROVIO, VINS-Mono, and ORB-SLAM3.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 12","pages":"12341-12348"},"PeriodicalIF":5.3,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145341066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
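Between camera updates, any MSCKF-style filter propagates a nominal IMU state with textbook kinematics. A bias-free Euler-step sketch of that propagation is below; TCB-VIO's covariance propagation and its FPSP-based 250 FPS visual updates are well beyond this illustration.

```python
# Nominal IMU state propagation: one Euler step of position, velocity, attitude.
import numpy as np

def propagate_imu(p, v, R, acc, gyro, dt, g=np.array([0.0, 0.0, -9.81])):
    """acc: body-frame specific force; gyro: body-frame angular rate.
    Biases and noise are omitted for simplicity."""
    a_world = R @ acc + g                      # rotate specific force, add gravity
    p = p + v * dt + 0.5 * a_world * dt**2
    v = v + a_world * dt
    w = gyro * dt
    angle = np.linalg.norm(w)
    if angle > 1e-12:                          # Rodrigues' rotation increment
        k = w / angle
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        dR = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K
        R = R @ dR
    return p, v, R

p, v, R = np.zeros(3), np.zeros(3), np.eye(3)
p, v, R = propagate_imu(p, v, R, acc=np.array([0.0, 0.0, 9.81]),
                        gyro=np.array([0.0, 0.0, 0.1]), dt=1 / 400)  # 400 Hz IMU
print(p, v)   # a stationary IMU stays at rest (specific force cancels gravity)
```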
ShapeICP: Iterative Category-Level Object Pose and Shape Estimation From Depth
IF 5.3 | CAS Tier 2 (Computer Science)
IEEE Robotics and Automation Letters | Pub Date: 2025-10-09 | DOI: 10.1109/LRA.2025.3619808
Yihao Zhang;Harpreet S. Sawhney;John J. Leonard
{"title":"ShapeICP: Iterative Category-Level Object Pose and Shape Estimation From Depth","authors":"Yihao Zhang;Harpreet S. Sawhney;John J. Leonard","doi":"10.1109/LRA.2025.3619808","DOIUrl":"https://doi.org/10.1109/LRA.2025.3619808","url":null,"abstract":"Category-level object pose and shape estimation from a single depth image has recently drawn research attention due to its potential utility for tasks such as robotics manipulation. The task is particularly challenging because the three unknowns, object pose, object shape, and model-to-measurement correspondences, are compounded together, but only a single view of depth measurements is provided. Most of the prior work heavily relies on data-driven approaches to obtain solutions to at least one of the unknowns, and typically two, risking generalization failures if not designed and trained carefully. The shape representations used in the prior work also mainly focus on point clouds and signed distance fields (SDFs). In stark contrast to the prior work, we approach the problem using an iterative estimation method that does not require learning from pose-annotated data. Moreover, we construct and adopt a novel mesh-based object active shape model (ASM), which additionally maintains vertex connectivity compared to the commonly used point-based object ASM. Our algorithm, ShapeICP, is based on the iterative closest point (ICP) algorithm but is equipped with additional features for the category-level pose and shape estimation task. Although not using pose-annotated data, ShapeICP surpasses many data-driven approaches that rely on pose data for training, opening up a new solution space for researchers to consider.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 11","pages":"12149-12156"},"PeriodicalIF":5.3,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145315370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
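For reference, one iteration of the classical point-to-point ICP that ShapeICP generalizes: match each source point to its nearest neighbor, then solve the rigid transform in closed form with the Kabsch/Umeyama SVD. ShapeICP's joint pose-and-shape estimation over a mesh active shape model is substantially richer than this sketch.

```python
# Minimal point-to-point ICP iteration (classical algorithm, not ShapeICP).
import numpy as np

def icp_step(src, dst):
    """Match src to nearest dst points, then solve the rigid transform via SVD."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]              # nearest-neighbor correspondences
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)         # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                            # proper rotation (det = +1)
    t = mu_d - R @ mu_s
    return src @ R.T + t, R, t

dst = np.random.rand(100, 3)
src = dst + np.array([0.05, 0.0, 0.0])            # shifted copy of the target
for _ in range(10):
    src, R, t = icp_step(src, dst)
print(np.abs(src - dst).max())                    # should shrink toward 0
```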
Continual Learning for Traversability Prediction With Uncertainty-Aware Adaptation
IF 5.3 | CAS Tier 2 (Computer Science)
IEEE Robotics and Automation Letters | Pub Date: 2025-10-09 | DOI: 10.1109/LRA.2025.3619687
Hojin Lee;Yunho Lee;Daniel A Duecker;Cheolhyeon Kwon
{"title":"Continual Learning for Traversability Prediction With Uncertainty-Aware Adaptation","authors":"Hojin Lee;Yunho Lee;Daniel A Duecker;Cheolhyeon Kwon","doi":"10.1109/LRA.2025.3619687","DOIUrl":"https://doi.org/10.1109/LRA.2025.3619687","url":null,"abstract":"Traversability prediction is a critical component of autonomous navigation in unstructured environments, where complex and uncertain robot-terrain interactions pose significant challenges such as traction loss and dynamic instability. Despite recent progress in learning-based traversability prediction, these methods often fail to adapt to novel terrains. Even when adaptation is achieved, retaining experience from previously trained environments remains a challenge, a problem known as catastrophic forgetting. To address this challenge, we propose a continual learning framework for traversability prediction that incrementally adapts to new terrains using a generative experience recall model. A key virtue of the proposed framework is two folds: i) retain prior experience without storing past data; and ii) incorporate the uncertainty of the generated samples from the recall model, enabling uncertainty-aware adaptation. Real-world experiments with a skid-steering robot validate the effectiveness of the proposed framework, demonstrating its ability to adapt across a series of diverse environments while mitigating catastrophic forgetting.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 11","pages":"12109-12116"},"PeriodicalIF":5.3,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145315397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
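A schematic of uncertainty-weighted generative replay, one way to realize "recall without storing past data": mix generated pseudo-samples of past terrains into each training batch and down-weight uncertain generations in the loss. The generator interface, the exp(-sigma) weighting, and all shapes are illustrative assumptions, not the paper's framework.

```python
# Build a training batch from new-terrain data plus recalled pseudo-data,
# attaching a confidence weight to each sample.
import numpy as np

def replay_batch(new_x, new_y, generator, batch=64, replay_frac=0.5):
    """Mix real new-terrain samples with generated ones; weight generated
    samples by w = exp(-uncertainty) so uncertain recalls count less."""
    n_replay = int(batch * replay_frac)
    gen_x, gen_y, gen_sigma = generator(n_replay)     # recalled experience
    idx = np.random.choice(len(new_x), batch - n_replay)
    x = np.concatenate([new_x[idx], gen_x])
    y = np.concatenate([new_y[idx], gen_y])
    w = np.concatenate([np.ones(batch - n_replay),    # trust real data fully
                        np.exp(-gen_sigma)])          # hedge on generations
    return x, y, w

def toy_generator(n):  # stand-in for the learned recall model
    return np.random.rand(n, 8), np.random.rand(n), np.random.rand(n)

x, y, w = replay_batch(np.random.rand(500, 8), np.random.rand(500), toy_generator)
print(x.shape, w.min(), w.max())
```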