Latest Articles from IEEE Robotics and Automation Letters

Stereo-Based 3D Human Pose Estimation for Underwater Robots Without 3D Supervision
IF 4.6, CAS Region 2, Computer Science
IEEE Robotics and Automation Letters Pub Date: 2025-04-02 DOI: 10.1109/LRA.2025.3557235
Ying-Kun Wu;Junaed Sattar
{"title":"Stereo-Based 3D Human Pose Estimation for Underwater Robots Without 3D Supervision","authors":"Ying-Kun Wu;Junaed Sattar","doi":"10.1109/LRA.2025.3557235","DOIUrl":"https://doi.org/10.1109/LRA.2025.3557235","url":null,"abstract":"In this paper, we propose a novel deep learning-based 3D underwater human pose estimator capable of providing metric 3D poses of scuba divers from stereo image pairs. While existing research has made significant advancements in 3D human pose estimation, most methods rely on 3D ground truth for training, which is challenging to acquire in dynamic environments where traditional motion capture systems are impractical to deploy. To overcome this, our approach leverages epipolar geometry to derive 3D information from 2D estimations. Our method estimates semantic keypoints while capturing their corresponding disparity from binocular perspectives, thus avoiding challenges in calibrating for multi-view setups or scale-ambiguity in monocular settings. Additionally, to reduce the sensitivity of our method to 2D annotation accuracy, we propose an auto-refinement pipeline to automatically correct biases introduced by human labeling. Experiments demonstrate that our approach significantly improves performance compared to previous state-of-the-art methods in different environments, including but not limited to underwater scenarios, while being efficient enough to run on limited-capacity edge devices.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"5002-5009"},"PeriodicalIF":4.6,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143824578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Interpreting and Improving Optimal Control Problems With Directional Corrections
IF 4.6, CAS Region 2, Computer Science
IEEE Robotics and Automation Letters Pub Date: 2025-04-02 DOI: 10.1109/LRA.2025.3557226
Trevor Barron;Xiaojing Zhang
{"title":"Interpreting and Improving Optimal Control Problems With Directional Corrections","authors":"Trevor Barron;Xiaojing Zhang","doi":"10.1109/LRA.2025.3557226","DOIUrl":"https://doi.org/10.1109/LRA.2025.3557226","url":null,"abstract":"Many robotics tasks, such as path planning or trajectory optimization, are formulated as optimal control problems (OCPs). The key to obtaining high performance lies in the design of the OCP's objective function. In practice, the objective function consists of a set of individual components that must be carefully modeled and traded off such that the OCP has the desired solution. It is often challenging to balance multiple components to achieve the desired solution and to understand, when the solution is undesired, the impact of individual cost components. In this paper, we present a framework addressing these challenges based on the concept of <italic>directional corrections</i>. Specifically, given the solution to an OCP that is deemed undesirable, and access to an expert providing the direction of change that would increase the desirability of the solution, our method analyzes the individual cost components for their “consistency” with the provided directional correction. This information can be used to improve the OCP formulation, e.g., by increasing the weight of consistent cost components, or reducing the weight of – or even redesigning – inconsistent cost components. We also show that our framework can automatically tune parameters of the OCP to achieve consistency with a set of corrections.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"4986-4993"},"PeriodicalIF":4.6,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143817835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SfmOcc: Vision-Based 3D Semantic Occupancy Prediction in Urban Environments
IF 4.6, CAS Region 2, Computer Science
IEEE Robotics and Automation Letters Pub Date: 2025-04-02 DOI: 10.1109/LRA.2025.3557227
Rodrigo Marcuzzi;Lucas Nunes;Elias Marks;Louis Wiesmann;Thomas Läbe;Jens Behley;Cyrill Stachniss
{"title":"SfmOcc: Vision-Based 3D Semantic Occupancy Prediction in Urban Environments","authors":"Rodrigo Marcuzzi;Lucas Nunes;Elias Marks;Louis Wiesmann;Thomas Läbe;Jens Behley;Cyrill Stachniss","doi":"10.1109/LRA.2025.3557227","DOIUrl":"https://doi.org/10.1109/LRA.2025.3557227","url":null,"abstract":"Semantic scene understanding is crucial for autonomous systems and 3D semantic occupancy prediction is a key task since it provides geometric and possibly semantic information of the vehicle's surroundings. Most existing vision-based approaches to occupancy estimation rely on 3D voxel labels or segmented LiDAR point clouds for supervision. This limits their application to the availability of a 3D LiDAR sensor or the costly labeling of the voxels. While other approaches rely only on images for training, they usually supervise only with a few consecutive images and optimize for proxy tasks like volume reconstruction or depth prediction. In this paper, we propose a novel method for semantic occupancy prediction using only vision data also for supervision. We leverage all the available training images of a sequence and use bundle adjustment to align the images and estimate camera poses from which we then obtain depth images. We compute semantic maps from a pre-trained open-vocabulary image model and generate occupancy pseudo labels to explicitly optimize for the 3D semantic occupancy prediction task. Without any manual or LiDAR-based labels, our approach predicts full 3D occupancy voxel grids and achieves state-of-the-art results for 3D occupancy prediction among methods trained without labels.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"5074-5081"},"PeriodicalIF":4.6,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143856243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Online Context Learning for Socially Compliant Navigation
IF 4.6, CAS Region 2, Computer Science
IEEE Robotics and Automation Letters Pub Date: 2025-04-02 DOI: 10.1109/LRA.2025.3557309
Iaroslav Okunevich;Alexandre Lombard;Tomas Krajnik;Yassine Ruichek;Zhi Yan
{"title":"Online Context Learning for Socially Compliant Navigation","authors":"Iaroslav Okunevich;Alexandre Lombard;Tomas Krajnik;Yassine Ruichek;Zhi Yan","doi":"10.1109/LRA.2025.3557309","DOIUrl":"https://doi.org/10.1109/LRA.2025.3557309","url":null,"abstract":"Robot social navigation needs to adapt to different human factors and environmental contexts. However, since these factors and contexts are difficult to predict and cannot be exhaustively enumerated, traditional learning-based methods have difficulty in ensuring the social attributes of robots in long-term and cross-environment deployments. This letter introduces an online context learning method that aims to empower robots to adapt to new social environments online. The proposed method adopts a two-layer structure. The bottom layer is built using a deep reinforcement learning-based method to ensure the output of basic robot navigation commands. The upper layer is implemented using an online robot learning-based method to socialize the control commands suggested by the bottom layer. Experiments using a community-wide simulator show that our method outperforms the state-of-the-art ones. Experimental results in the most challenging scenarios show that our method improves the performance of the state-of-the-art by 8%.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"5042-5049"},"PeriodicalIF":4.6,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143824510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
On-line Shape Estimation for Hysteresis Compensation in Tendon-Sheath Mechanisms Using Endoscopic Camera
IF 4.6, CAS Region 2, Computer Science
IEEE Robotics and Automation Letters Pub Date: 2025-04-02 DOI: 10.1109/LRA.2025.3557306
Junho Hong;Daehie Hong;Chanwoo Kim;SeongHyun Won
{"title":"On-line Shape Estimation for Hysteresis Compensation in Tendon-Sheath Mechanisms Using Endoscopic Camera","authors":"Junho Hong;Daehie Hong;Chanwoo Kim;SeongHyun Won","doi":"10.1109/LRA.2025.3557306","DOIUrl":"https://doi.org/10.1109/LRA.2025.3557306","url":null,"abstract":"The tendon-sheath mechanism (TSM) has significantly advanced both robotic systems and minimally invasive surgery (MIS) by enabling flexible and precise movement through narrow and tortuous paths. However, the inherent flexibility of TSM introduces nonlinear behaviors which depend on its geometrical shape and applied forces, making accurate control challenging. Furthermore, the shape dependency becomes critical in endoscopic robots, where the geometrical shape varies and is not directly visible, limiting the applicability of existing distal sensorless compensation methods. To address the geometry identification problem of TSM, this paper proposes an approach that utilizes real-time visual input from an endoscopic camera for on-line calibration of the TSM's physical model. By introducing the concept of the ‘Equivalent Circle,’ complex shapes of TSMs are simplified, enabling the estimation of their equivalent geometry without direct observation or measurement. Simulation results validate the equivalent circle model, demonstrating minimal deadband percentage errors despite larger discrepancies in equivalent radii across varied configurations. On-line calibration experiments achieved a percent error of 1.38% (±2.92%) for accumulated curve angles and 2.32% (±3.08%) for equivalent radii, demonstrating the method's reliability in shape estimation across varying conditions. In prediction and feedforward experiments, leveraging the equivalent circle to compensate for deadband in arbitrarily shaped TSMs resulted in a maximum trajectory error of 0.25 mm and an RMSE of 0.09 mm. This approach advances distal sensorless control, improving the operational accuracy and feasibility of endoscopic surgical robots under varying geometrical and force conditions.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5201-5208"},"PeriodicalIF":4.6,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143845571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
RaI-SLAM: Radar-Inertial SLAM for Autonomous Vehicles
IF 4.6, CAS Region 2, Computer Science
IEEE Robotics and Automation Letters Pub Date: 2025-04-02 DOI: 10.1109/LRA.2025.3557296
Daniel Casado Herraez;Matthias Zeller;Dong Wang;Jens Behley;Michael Heidingsfeld;Cyrill Stachniss
{"title":"RaI-SLAM: Radar-Inertial SLAM for Autonomous Vehicles","authors":"Daniel Casado Herraez;Matthias Zeller;Dong Wang;Jens Behley;Michael Heidingsfeld;Cyrill Stachniss","doi":"10.1109/LRA.2025.3557296","DOIUrl":"https://doi.org/10.1109/LRA.2025.3557296","url":null,"abstract":"Simultaneous localization and mapping are essential components for the operation of autonomous vehicles in unknown environments. While localization focuses on estimating the vehicle's pose, mapping captures the surrounding environment to enhance future localization and decision-making. Localization is commonly achieved using external GNSS systems combined with inertial measurement units, LiDARs, and/or cameras. Automotive radars offer an attractive onboard sensing alternative due to their robustness to adverse weather and low lighting conditions, compactness, affordability, and widespread integration into consumer vehicles. However, they output comparably sparse and noisy point clouds that are challenging for pose estimation, easily leading to noisy trajectory estimates. We propose a modular approach that performs radar-inertial SLAM by fully leveraging the characteristics of automotive consumer-vehicle radar sensors. Our system achieves smooth and accurate onboard simultaneous localization and mapping by combining automotive radars with an IMU and exploiting the additional velocity and radar cross-section information provided by radar sensors, without relying on GNSS data. Specifically, radar scan-matching and IMU measurements are first incorporated into a local pose graph for odometry estimation. We then correct the accumulated drift through a global pose graph backend that optimizes detected loop closures. Contrary to existing radar SLAM methods, our graph-based approach is divided into distinct submodules and all components are designed specifically to exploit the characteristics of automotive radar sensors for scan matching and loop closure detection, leading to enhanced system performance. Our method achieves state-of-the-art accuracy on public autonomous driving data.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5257-5264"},"PeriodicalIF":4.6,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143848900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Learning Implicit Social Navigation Behavior Using Deep Inverse Reinforcement Learning
IF 4.6, CAS Region 2, Computer Science
IEEE Robotics and Automation Letters Pub Date: 2025-04-02 DOI: 10.1109/LRA.2025.3557299
Tribhi Kathuria;Ke Liu;Junwoo Jang;X. Jessie Yang;Maani Ghaffari
{"title":"Learning Implicit Social Navigation Behavior Using Deep Inverse Reinforcement Learning","authors":"Tribhi Kathuria;Ke Liu;Junwoo Jang;X. Jessie Yang;Maani Ghaffari","doi":"10.1109/LRA.2025.3557299","DOIUrl":"https://doi.org/10.1109/LRA.2025.3557299","url":null,"abstract":"This paper reports on learning a reward map for social navigation in dynamic environments where the robot can reason about its path at any time, given agent trajectories and scene geometry. Humans navigating in dense and dynamic indoor environments often work with several implied social rules. A rule-based approach fails to model all possible interactions between humans, robots, and scenes. We propose a novel Smooth Maximum Entropy Deep Inverse Reinforcement Learning (S-MEDIRL) algorithm that can extrapolate beyond expert demos to better encode scene navigability from few-shot demonstrations. The agent learns to predict the cost maps based on trajectory data as well as scene geometry. The trajectory sampled from the learned cost map is then executed using a local crowd navigation controller. We present results in a photo-realistic simulation environment, with a robot and a human navigating a narrow crossing scenario. The robot implicitly learns to exhibit social behaviors such as yielding to oncoming traffic and avoiding deadlocks. We compare the proposed approach to the popular model-based crowd navigation algorithm ORCA and a rule-based agent that exhibits yielding.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"5146-5153"},"PeriodicalIF":4.6,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143856242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Effective Data-Driven Joint Friction Modeling and Compensation With Physical Consistency
IF 4.6, CAS Region 2, Computer Science
IEEE Robotics and Automation Letters Pub Date: 2025-04-02 DOI: 10.1109/LRA.2025.3557308
Rui Dai;Luca Rossini;Arturo Laurenzi;Andrea Patrizi;Nikos Tsagarakis
{"title":"Effective Data-Driven Joint Friction Modeling and Compensation With Physical Consistency","authors":"Rui Dai;Luca Rossini;Arturo Laurenzi;Andrea Patrizi;Nikos Tsagarakis","doi":"10.1109/LRA.2025.3557308","DOIUrl":"https://doi.org/10.1109/LRA.2025.3557308","url":null,"abstract":"The complex nonlinear nature of friction in real-world applications renders traditional physical models inadequate for accurately capturing its characteristics. While numerous learning-based approaches have addressed this challenge, they often lack interpretability and fail to uphold the physical guarantees essential for reliable modeling. Additionally, existing structured data-driven methods, despite their efficacy in handling nonlinear systems, seldom account for the specific traits of friction or ensure passivity. To overcome these limitations, we introduce a structured Gaussian Process (GP) model that adheres to the physical consistency of joint friction torque, enabling data-driven modeling in function space that accurately captures Coulomb and viscous friction characteristics while further guaranteeing passivity. We experimentally validate our approach by deploying the friction model on a two-degree-of-freedom (2-DoF) leg prototype. Our approach exhibits robust performance in the presence of non-passive and high-noise data. Experimental results demonstrate that our joint friction model achieves enhanced data efficiency, superior friction compensation performance, and improved trajectory tracking dynamics compared to other friction models.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5321-5328"},"PeriodicalIF":4.6,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143856246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Efficient Plane Segmentation in Depth Image Based on Adaptive Patch-Wise Region Growing
IF 4.6, CAS Region 2, Computer Science
IEEE Robotics and Automation Letters Pub Date: 2025-03-31 DOI: 10.1109/LRA.2025.3555862
Lantao Zhang;Haochen Niu;Peilin Liu;Fei Wen;Rendong Ying
{"title":"Efficient Plane Segmentation in Depth Image Based on Adaptive Patch-Wise Region Growing","authors":"Lantao Zhang;Haochen Niu;Peilin Liu;Fei Wen;Rendong Ying","doi":"10.1109/LRA.2025.3555862","DOIUrl":"https://doi.org/10.1109/LRA.2025.3555862","url":null,"abstract":"Plane segmentation algorithms are widely used in robotics, serving key roles in scenarios such as indoor localization, scene understanding, and robotic manipulation. These applications typically require real-time, precise, and robust plane segmentation processing, which presents a significant challenge. Existing methods based on pixel-wise or fix-sized patch-wise operation are redundant, as planar regions in real-world scenes are of diverse sizes. In this paper, we introduce a highly efficient method for plane segmentation, namely Adaptive Patch-wise Region Growing (APRG). APRG begins with data sampling to construct a data pyramid. To avoid redundant planer fitting in large planar regions, we introduce an adaptive patch-wise plane fitting algorithm with the pyramid accessed in a top-down manner. The largest possible planar patches are obtained in this process. Subsequently we introduce a region growing algorithm specially designed for our patch representation. Overall, APRG achieves more than 600 FPS at a 640x480 resolution on a mid-range CPU without using parallel acceleration techniques, which outperforms the state-of-the-art method by a factor of 1.46. Besides, in addition to its speedup in run-time, APRG significantly improves the segmentation quality, especially on real-world data.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"5249-5256"},"PeriodicalIF":4.6,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143848799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
VDG: Vision-Only Dynamic Gaussian for Driving Simulation
IF 4.6, CAS Region 2, Computer Science
IEEE Robotics and Automation Letters Pub Date: 2025-03-31 DOI: 10.1109/LRA.2025.3555938
Hao Li;Jingfeng Li;Dingwen Zhang;Chenming Wu;Jieqi Shi;Chen Zhao;Haocheng Feng;Errui Ding;Jingdong Wang;Junwei Han
{"title":"VDG: Vision-Only Dynamic Gaussian for Driving Simulation","authors":"Hao Li;Jingfeng Li;Dingwen Zhang;Chenming Wu;Jieqi Shi;Chen Zhao;Haocheng Feng;Errui Ding;Jingdong Wang;Junwei Han","doi":"10.1109/LRA.2025.3555938","DOIUrl":"https://doi.org/10.1109/LRA.2025.3555938","url":null,"abstract":"Recent advances in dynamic Gaussian splatting have significantly improved scene reconstruction and novel-view synthesis. However, existing methods often rely on pre-computed camera poses and Gaussian initialization using Structure from Motion (SfM) or other costly sensors, limiting their scalability. In this letter, we propose Vision-only Dynamic Gaussian (VDG), a novel method that, for the first time, integrates self-supervised visual odometry (VO) into a pose-free dynamic Gaussian splatting framework. Given the reason that estimated poses are not accurate enough to perform self-decomposition for dynamic scenes, we specifically design motion supervision, enabling precise static-dynamic decomposition and modeling of dynamic objects via dynamic Gaussians. Extensive experiments on urban driving datasets, including KITTI and Waymo, show that VDG consistently outperforms state-of-the-art dynamic view synthesis methods in both reconstruction accuracy and pose prediction with only image input.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 5","pages":"5138-5145"},"PeriodicalIF":4.6,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143856200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0