Latest Articles in IEEE Robotics and Automation Letters

StreamCMT: Prior-Guided Multimodal Temporal Fusion for Sparse 3D Object Detection
IF 5.3 | Zone 2 | Computer Science
IEEE Robotics and Automation Letters | Pub Date: 2026-03-01 | Epub Date: 2026-03-09 | DOI: 10.1109/LRA.2026.3671538
Yanliang Huang;Yuansheng Liu
{"title":"StreamCMT: Prior-Guided Multimodal Temporal Fusion for Sparse 3D Object Detection","authors":"Yanliang Huang;Yuansheng Liu","doi":"10.1109/LRA.2026.3671538","DOIUrl":"https://doi.org/10.1109/LRA.2026.3671538","url":null,"abstract":"Multimodal 3D detection is critical for autonomous driving reliability. While most existing methods boost accuracy via elaborate networks, they neglect inference speed which is essential for real-world deployment. Although existing decoder-based sparse query detection methods offer advantages in real-time performance, they suffer from limitations in convergence speed and cross-modal feature integration. To address these challenges of slow convergence and inadequate feature fusion, this letter proposes a Prior-Guided Position Embedding Module based on the Cross Modal Transformer (CMT) framework. The module reconstructs 3D sampling point distribution through spatial geometric priors, effectively improving model accuracy and accelerating convergence without incurring additional computational overhead. Concurrently, to enhance motion awareness, we integrate a Temporal Fusion Module that leverages historical frame information to optimize current detection performance. Experimental results demonstrate that StreamCMT achieves a detection accuracy of 72.5% NDS and 69.6% mAP on the nuScenes test set. On the validation set, compared to the baseline model, it improves NDS and mAP by 1.0% and 1.1% respectively, while increasing inference speed from 12.0 to 14.4 FPS. The model maintains a lightweight architecture while achieving an effective trade-off between detection accuracy and inference efficiency for autonomous driving perception systems.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 5","pages":"5358-5365"},"PeriodicalIF":5.3,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147557491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Information-Based Supervised Learning of In-Proximity Effects for 3D Distance Estimation and Collision Avoidance
IF 5.3 | Zone 2 | Computer Science
IEEE Robotics and Automation Letters | Pub Date: 2026-03-01 | Epub Date: 2026-02-16 | DOI: 10.1109/LRA.2026.3665068
Jacob M. Anderson;Kam K. Leang
{"title":"Information-Based Supervised Learning of In-Proximity Effects for 3D Distance Estimation and Collision Avoidance","authors":"Jacob M. Anderson;Kam K. Leang","doi":"10.1109/LRA.2026.3665068","DOIUrl":"https://doi.org/10.1109/LRA.2026.3665068","url":null,"abstract":"In-proximity effects (IPE) in 3D, specifically in-ground, in-ceiling, and in-wall effects, experienced by a rotary-wing aerial robot as it flies near obstacles are leveraged for obstacle distance estimation and collision-free motion control. Onboard motor commands and inertial measurement unit (IMU) signals are processed to enable the robot to essentially “feel” the presence of nearby obstacles through aerodynamic interactions. The physics of IPE, along with Shannon information, are used to tailor the input space and train a deep neural network (DNN) to estimate the distance to ground, ceiling, and wall features. Simulation and physical experimental results demonstrate reliable and robust obstacle detection and collision avoidance with a median distance estimation accuracy of 93.35%, 89.22%, and 90.67% for ground, ceiling, and wall, respectively. This new form of “sensing” is useful in environments with fog, smoke, dust, rain, or snow, where traditional proximity sensors and vision-based systems struggle to detect obstacles and determine distance.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 5","pages":"5398-5405"},"PeriodicalIF":5.3,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147557492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Orthogonal Ray Projection: A Tangent-Space Visual Measurement Model for Robust Visual-Inertial Odometry
IF 5.3 | Zone 2 | Computer Science
IEEE Robotics and Automation Letters | Pub Date: 2026-03-01 | Epub Date: 2026-03-13 | DOI: 10.1109/LRA.2026.3673929
Bing Han;Tuan Li;Yuezu Lv;Weisong Wen;Zhipeng Wang;Chuang Shi
{"title":"Orthogonal Ray Projection: A Tangent-Space Visual Measurement Model for Robust Visual-Inertial Odometry","authors":"Bing Han;Tuan Li;Yuezu Lv;Weisong Wen;Zhipeng Wang;Chuang Shi","doi":"10.1109/LRA.2026.3673929","DOIUrl":"https://doi.org/10.1109/LRA.2026.3673929","url":null,"abstract":"The reprojection error in Visual-Inertial Odometry (VIO) suffers from high nonlinearity due to perspective division, which degrades estimator consistency and robustness, particularly under large depth uncertainty. To address this, we propose a novel visual measurement model, the Orthogonal Ray Projection Error (ORPE), which is formulated in the tangent space of the observation ray. By minimizing the orthogonal distance between the estimated landmark and the measurement ray, ORPE decouples the measurement error from the scalar depth, rendering the residual function linear with respect to the feature position. We derive the exact analytical Jacobians and an uncertainty propagation model, integrating ORPE into both the MSCKF-based OpenVINS and the optimization-based ORB-SLAM3 frameworks. Simulations confirm that ORPE achieves geometric linearity for features, while significantly reducing the system nonlinearity with respect to camera pose. Extensive real-world experiments demonstrate that the proposed method significantly improves trajectory accuracy and estimator consistency in challenging weak-parallax scenarios, while maintaining computational efficiency comparable to standard approaches.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 5","pages":"5406-5413"},"PeriodicalIF":5.3,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147557407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fuzzy Fusion Control Strategy With Efficient Deep Deterministic Policy Gradient for Robotic Peg-in-Hole Assembly
IF 5.3 | Zone 2 | Computer Science
IEEE Robotics and Automation Letters | Pub Date: 2026-03-01 | Epub Date: 2026-03-13 | DOI: 10.1109/LRA.2026.3673950
Guang Li;Junfeng Wang;Longfei Lu
{"title":"Fuzzy Fusion Control Strategy With Efficient Deep Deterministic Policy Gradient for Robotic Peg-in-Hole Assembly","authors":"Guang Li;Junfeng Wang;Longfei Lu","doi":"10.1109/LRA.2026.3673950","DOIUrl":"https://doi.org/10.1109/LRA.2026.3673950","url":null,"abstract":"The robotic peg-in-hole assembly task remains challenging. Traditional force control methods struggle with complex parameter identification and contact state analysis, while deep reinforcement learning(DRL) suffers from low efficiency and poor adaptability. To address these shortcomings and to capitalize on the strengths of both, this paper presents a fuzzy fusion control strategy and improved Deep Deterministic Policy Gradient(DDPG) method to achieve efficient exploration. The proposed framework incorporates a segmented reward function and domain randomization techniques to enhance adaptability. A fuzzy mechanism integrates an admittance controller with DDPG, embedding expert knowledge to improve learning efficiency. Additionally, a state priority experience replay mechanism is introduced to mitigate sample priority estimation errors and accelerate exploration. Simulation results under diverse configurations demonstrate the superiority of the proposed method, achieving over 96% success rate with a 43.5% reduction in maximum contact force compared to the standard DDPG baseline. Experimental validations confirm the feasibility of the learned policy in real-world settings, successfully accomplishing the assembly task that the baseline failed to complete.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 5","pages":"5422-5429"},"PeriodicalIF":5.3,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147557488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Spatial Short-Term Fourier Transform Based Single-Channel Single-Fiber 3D Shape Sensing
IF 5.3 | Zone 2 | Computer Science
IEEE Robotics and Automation Letters | Pub Date: 2026-03-01 | Epub Date: 2026-03-13 | DOI: 10.1109/LRA.2026.3674026
Guochong Qiu;Danqian Cao;Yanjin Zhao;Wei Wang;Hongbin Liu
{"title":"Spatial Short-Term Fourier Transform Based Single-Channel Single-Fiber 3D Shape Sensing","authors":"Guochong Qiu;Danqian Cao;Yanjin Zhao;Wei Wang;Hongbin Liu","doi":"10.1109/LRA.2026.3674026","DOIUrl":"https://doi.org/10.1109/LRA.2026.3674026","url":null,"abstract":"Accurate three-dimensional (3D) shape sensing is vital for continuum robots in minimally invasive surgery. Conventional optical fiber methods depend on multi-fiber or multicore configurations, increasing integration complexity and associated costs. Single-fiber approaches support miniaturization, but struggle to decouple 3D bending and twist. We propose a single-channel single-fiber framework based on the spatial short-term Fourier transform (SSTFT) for real-time 3D reconstruction. A helically wrapped fiber encodes multiple deformation modes into periodic strain patterns, which localized Fourier domain analysis converts into curvature, direction, and twist parameters. These parameters feed a piecewise constant curvature and torsion model to efficiently reconstruct the backbone. Experiments on a 1.45 m sensor achieve average shape errors of 2.15 % (bending), 5.32 % (3D helix), and 7.90 % (twist). Compared to multi-fiber Frenet–Serret methods, our approach improves accuracy and robustness while reducing system complexity, demonstrating a promising low-cost, miniaturized shape sensing approach with potential applications in surgical navigation.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 5","pages":"5438-5445"},"PeriodicalIF":5.3,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147557489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Event-Grounding Graph: Unified Spatio-Temporal Scene Graph From Robotic Observations
IF 5.3 | Zone 2 | Computer Science
IEEE Robotics and Automation Letters | Pub Date: 2026-03-01 | Epub Date: 2026-03-02 | DOI: 10.1109/LRA.2026.3669042
Phuoc Nguyen;Francesco Verdoja;Ville Kyrki
{"title":"Event-Grounding Graph: Unified Spatio-Temporal Scene Graph From Robotic Observations","authors":"Phuoc Nguyen;Francesco Verdoja;Ville Kyrki","doi":"10.1109/LRA.2026.3669042","DOIUrl":"https://doi.org/10.1109/LRA.2026.3669042","url":null,"abstract":"A fundamental aspect for building intelligent autonomous robots that can assist humans in their daily lives is the construction of rich environmental representations. While advances in semantic scene representations have enriched robotic scene understanding, current approaches lack a connection between spatial features and dynamic events; <italic>e.g.</i>, connecting <italic>the blue mug</i> to the event <italic>washing a mug</i>. In this work, we introduce event-grounding graph (EGG), a framework grounding event interactions to spatial features of a scene. This representation allows robots to perceive, reason, and respond to complex spatio-temporal queries. Experiments using real robotic data demonstrate event-grounding graph (EGG)’s capability to retrieve relevant information and respond accurately to human inquiries concerning the environment and events within.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 5","pages":"5286-5293"},"PeriodicalIF":5.3,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11417727","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147557494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PILaN: Generating Task-Individual Independent Customized Assistive Control on a Hip-Knee Powered Exoskeleton
IF 5.3 | Zone 2 | Computer Science
IEEE Robotics and Automation Letters | Pub Date: 2026-03-01 | Epub Date: 2026-02-19 | DOI: 10.1109/LRA.2026.3666397
Longwen Chen;Fangge Cui;Huimin Lu
{"title":"PILaN: Generating Task-Individual Independent Customized Assistive Control on a Hip-Knee Powered Exoskeleton","authors":"Longwen Chen;Fangge Cui;Huimin Lu","doi":"10.1109/LRA.2026.3666397","DOIUrl":"https://doi.org/10.1109/LRA.2026.3666397","url":null,"abstract":"Generating task-individual independent customized assistive control is a big challenge of wearable powered exoskeletons, which usually relies on accurate and robust motion intention perceptions (MIP) of the wearer’s limb. Traditional physics-based models and deep learning models, as two commonly used MIP methods, suffer from poor generalization and strong data dependence, respectively. To overcome these limitations, in this study, we propose a physics-informed neural network (PINN) model named PILaN, which integrates the Lagrange dynamics with a deep learning model, to realize lower limb motion intention perception-based (LLMIP-based) assistive control on a hip-knee powered exoskeleton (HKPE). The proposed PILaN trained by a self-collected small scale dataset is capable of conducting 2-degree of freedom (DoF) lower limb dynamics estimation (LLDE) with the current hip and knee joint states, and using LLDE results to obtain the next joint state according to the Lagrangian dynamics. The assistive control of applied HKPE depends on the estimated joint states from the PILaN. Eight participants are invited to perform designed motion sequences consisting of one or multiple actions with the assistance of the HKPE. Experimental results demonstrate that our proposed PILaN can successfully provide accurate and robust LLMIP for the HKPE assistive control under task-individual independent conditions.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 5","pages":"5382-5389"},"PeriodicalIF":5.3,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147557462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Correction to "Clustered Orienteering Problem With Subgroups"
IF 5.3 | Zone 2 | Computer Science
IEEE Robotics and Automation Letters | Pub Date: 2026-03-01 | Epub Date: 2026-04-14 | DOI: 10.1109/LRA.2026.3680559
Luciano E. Almeida;Cristiano Arbex Valle;Douglas G. Macharet
{"title":"Correction to “Clustered Orienteering Problem With Subgroups”","authors":"Luciano E. Almeida;Cristiano Arbex Valle;Douglas G. Macharet","doi":"10.1109/LRA.2026.3680559","DOIUrl":"https://doi.org/10.1109/LRA.2026.3680559","url":null,"abstract":"The authors’ affiliation in [1] was incorrectly modified in the final published version. The correct affiliation is Universidade Federal de Minas Gerais (UFMG), located in Belo Horizonte, Minas Gerais, Brazil.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 5","pages":"6455-6455"},"PeriodicalIF":5.3,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11481160","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147736984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Flex: End-to-End Text-Instructed Visual Navigation From Foundation Model Features
IF 5.3 | Zone 2 | Computer Science
IEEE Robotics and Automation Letters | Pub Date: 2026-03-01 | Epub Date: 2026-03-27 | DOI: 10.1109/LRA.2026.3678454
Makram Chahine;Alex Quach;Alaa Maalouf;Tsun-Hsuan Wang;Daniela Rus
{"title":"Flex: End-to-End Text-Instructed Visual Navigation From Foundation Model Features","authors":"Makram Chahine;Alex Quach;Alaa Maalouf;Tsun-Hsuan Wang;Daniela Rus","doi":"10.1109/LRA.2026.3678454","DOIUrl":"https://doi.org/10.1109/LRA.2026.3678454","url":null,"abstract":"End-to-end learning directly maps sensory inputs to actions, creating highly integrated and efficient policies for complex robotics tasks. However, such models often struggle to generalize beyond their training scenarios, limiting adaptability to new environments, tasks, and concepts. In this work, we investigate the minimal data requirements and architectural adaptations necessary to achieve robust closed-loop performance with vision-based control policies under unseen text instructions and visual distribution shifts. Our findings are synthesized in <italic>Flex</i>(<italic>F</i>ly-<italic>lex</i>ically), a framework that uses pre-trained Vision Language Models (VLMs) as frozen patch-wise feature extractors, generating spatially aware embeddings that integrate semantic and visual information. We demonstrate the effectiveness of this approach on a quadrotor fly-to-target task, where agents trained via behavior cloning on a small simulated dataset (with zero real-world images) successfully generalize to real-world scenes with diverse novel goals and command formulations.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 5","pages":"6480-6487"},"PeriodicalIF":5.3,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147696723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
RadarGaussianDet3D: Gaussian Representation-Based Real-Time 3D Object Detection With 4D Automotive Radars
IF 5.3 | Zone 2 | Computer Science
IEEE Robotics and Automation Letters | Pub Date: 2026-03-01 | Epub Date: 2026-03-13 | DOI: 10.1109/LRA.2026.3673988
Weiyi Xiong;Bing Zhu;Zewei Zheng
{"title":"RadarGaussianDet3D: Gaussian Representation-Based Real-Time 3D Object Detection With 4D Automotive Radars","authors":"Weiyi Xiong;Bing Zhu;Zewei Zheng","doi":"10.1109/LRA.2026.3673988","DOIUrl":"https://doi.org/10.1109/LRA.2026.3673988","url":null,"abstract":"4D automotive radars have gained increasing attention for autonomous driving due to their low cost, robustness, and inherent velocity measurement capability. However, existing 4D radar-based 3D detectors rely heavily on pillar encoders for BEV feature extraction, where each point contributes to only a single BEV grid, resulting in sparse feature maps and degraded representation quality. In addition, they also optimize bounding box attributes independently, leading to sub-optimal detection accuracy. Moreover, their inference speed, while sufficient for high-end GPUs, may fail to meet the real-time requirement on vehicle-mounted embedded devices. To overcome these limitations, an efficient and effective Gaussian-based 3D detector, namely RadarGaussianDet3D is introduced, leveraging Gaussian primitives and distributions as intermediate representations for radar points and bounding boxes. In RadarGaussianDet3D, a novel Point Gaussian Encoder (PGE) is designed to transform each point into a Gaussian primitive after feature aggregation and employs the 3D Gaussian Splatting (3DGS) technique for BEV rasterization, yielding denser feature maps. PGE exhibits exceptionally low latency, owing to the optimized algorithm for point feature aggregation and fast rendering of 3DGS. In addition, a new Box Gaussian Loss (BGL) is proposed, which converts bounding boxes into 3D Gaussian distributions and measures their distance to enable more comprehensive and consistent optimization. Extensive experiments on TJ4DRadSet and View-of-Delft demonstrate that RadarGaussianDet3D achieves high detection accuracy while delivering substantially faster inference, highlighting its potential for real-time deployment in autonomous driving.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 5","pages":"5709-5716"},"PeriodicalIF":5.3,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147557457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0