IEEE Robotics and Automation Letters: Latest Articles

AquaFuse: Waterbody Fusion for Physics-Guided View Synthesis of Underwater Scenes
IF 4.6 · CAS Tier 2 · Computer Science
IEEE Robotics and Automation Letters Pub Date : 2025-03-12 DOI: 10.1109/LRA.2025.3550816
Md Abu Bakr Siddique;Jiayi Wu;Ioannis Rekleitis;Md Jahidul Islam
In this letter, we introduce AquaFuse, a physics-based method for synthesizing waterbody properties in underwater imagery. We formulate a closed-form solution for waterbody fusion that facilitates realistic data augmentation and geometrically consistent underwater scene rendering. AquaFuse leverages the physical characteristics of light propagation underwater to transfer the waterbody of one scene onto the object contents of another. Unlike data-driven style transfer methods, AquaFuse preserves the depth consistency and object geometry of the input scene. We validate this unique feature through comprehensive experiments over diverse sets of underwater scenes. We find that the AquaFused images preserve over 94% depth consistency and 90–95% structural similarity of the input scenes. We also demonstrate that AquaFuse generates accurate 3D view synthesis by preserving object geometry while adapting to the inherent waterbody fusion process. AquaFuse opens up a new research direction in data augmentation by geometry-preserving style transfer for underwater imaging and robot vision.
Vol. 10, No. 5, pp. 4316–4323.
Citations: 0
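The depth-consistency and structural-similarity figures quoted above are standard image-comparison metrics. As an illustrative aside (this is generic metric code, not from the paper), a simplified single-window SSIM between two grayscale images can be sketched with NumPy; the usual formulation averages this quantity over local windows:

```python
import numpy as np

def global_ssim(x: np.ndarray, y: np.ndarray) -> float:
    """Global (single-window) SSIM between two grayscale images in [0, 1].

    A simplification of the standard windowed SSIM; illustrative only.
    """
    C1, C2 = 0.01 ** 2, 0.03 ** 2  # stabilizers for dynamic range 1.0
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / (
        (mx ** 2 + my ** 2 + C1) * (vx + vy + C2)
    )

# Identical images score 1.0 (up to float precision); altered copies score lower.
img = np.random.default_rng(0).random((64, 64))
print(global_ssim(img, img))
print(global_ssim(img, np.clip(img + 0.2, 0.0, 1.0)))
```

A "90–95% structural similarity" claim, in these terms, means the windowed SSIM between input and output averages 0.90–0.95.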
SamPose: Generalizable Model-Free 6D Object Pose Estimation via Single-View Prompt
IF 4.6 · CAS Tier 2 · Computer Science
IEEE Robotics and Automation Letters Pub Date : 2025-03-12 DOI: 10.1109/LRA.2025.3550796
Wubin Shi;Shaoyan Gai;Feipeng Da;Zeyu Cai
Object pose estimation in open-world scenarios is a critical challenge in robotics, virtual reality, and autonomous driving. In this letter, we introduce SamPose, a novel framework designed to achieve model-free 6DoF pose estimation of any target object in open-world environments using only a single-view prompt. SamPose consists mainly of an Open-world Object Detector (OOD) and a Coarse-to-Fine Pose Estimator (CFPE). The OOD uses a pre-trained EfficientSAM model to perform zero-shot segmentation matching, selecting the proposals most similar to new objects based on matching scores derived from semantic, geometric, and local descriptors. In the CFPE phase, a sparse keypoint matcher guided by DINO semantics first performs robust keypoint matching and computes an initial pose; then, after aligning the perspectives of the two views, a two-stage semi-dense keypoint matcher computes reliable point correspondences and ultimately determines the object's pose. Extensive experiments demonstrate SamPose's robustness and competitive performance.
Vol. 10, No. 5, pp. 4420–4427.
Citations: 0
Multi-Agent Generative Adversarial Interactive Self-Imitation Learning for AUV Formation Control and Obstacle Avoidance
IF 4.6 · CAS Tier 2 · Computer Science
IEEE Robotics and Automation Letters Pub Date : 2025-03-12 DOI: 10.1109/LRA.2025.3550743
Zheng Fang;Tianhao Chen;Tian Shen;Dong Jiang;Zheng Zhang;Guangliang Li
Multiple autonomous underwater vehicles (multi-AUVs) can cooperatively accomplish tasks that a single AUV cannot. Recently, multi-agent reinforcement learning has been introduced for multi-AUV control. However, designing efficient reward functions for the various tasks of multi-AUV control is difficult or even impractical. Multi-agent generative adversarial imitation learning (MAGAIL) allows multi-AUVs to learn from expert demonstrations instead of pre-defined reward functions, but it requires optimal demonstrations and cannot surpass the demonstrations it is given. This letter builds upon MAGAIL by proposing multi-agent generative adversarial interactive self-imitation learning (MAGAISIL), which lets AUVs learn policies by gradually replacing the provided sub-optimal demonstrations with self-generated good trajectories selected by a human trainer. Experimental results on three multi-AUV formation control and obstacle avoidance tasks, run on the Gazebo platform with our lab's AUV simulator, show that AUVs trained via MAGAISIL can surpass the provided sub-optimal expert demonstrations and reach performance close to or even better than MAGAIL with optimal demonstrations. Further results indicate that policies trained via MAGAISIL adapt to complex and varied tasks as well as MAGAIL trained on optimal demonstrations.
Vol. 10, No. 5, pp. 4356–4363.
Citations: 0
SCDA-Net: Structure Completion and Density Awareness Network for LiDAR-Based 3D Object Detection
IF 4.6 · CAS Tier 2 · Computer Science
IEEE Robotics and Automation Letters Pub Date : 2025-03-12 DOI: 10.1109/LRA.2025.3550801
Shuwen Wu;Jinfu Yang;Jiaqi Ma;Shaochen Zhang;Tianhao Hao;Mingai Li
As a fundamental task in application scenarios such as autonomous driving and mobile robotic systems, 3D object detection has received extensive attention from researchers in both academia and industry. However, due to the working principle of LiDAR and external factors such as occlusion, the collected point cloud of an object is usually sparse and incomplete, which degrades the performance of 3D object detectors. In this letter, a Structure Completion and Density Awareness Network (SCDA-Net) is proposed for 3D object detection from point clouds. Specifically, a structure completion module is designed to predict the dense shapes of complete point clouds by leveraging the sequence transduction ability of the transformer architecture. Furthermore, we propose a density-aware voxel RoI pooling strategy that introduces density features reflecting the state of the original objects into the refinement stage. By restoring the complete structure of objects and accounting for the true distribution of points in the raw point cloud, the proposed method achieves more accurate feature extraction and scene perception. Extensive experimental results on the KITTI and Waymo datasets demonstrate the effectiveness of SCDA-Net.
Vol. 10, No. 5, pp. 4268–4275.
Citations: 0
OMEGA: Open-Source and Multi-Mode Hopping Platform for Educational and Groundwork Aims
IF 4.6 · CAS Tier 2 · Computer Science
IEEE Robotics and Automation Letters Pub Date : 2025-03-11 DOI: 10.1109/LRA.2025.3549661
Xiangyu Chu;Fei Yan Wong;Chun Yin Fan;Hongbo Zhang;Yanlin Chen;K. W. Samuel Au
This letter presents OMEGA, a new open-source, multi-mode hopping platform. It consists of a rig and a middle-size robot equipped with an omnidirectional parallel 3-RSR leg, allowing for 1D, 2D, and 3D hopping modes; all modes can be easily interchanged via detachable mechanisms. A control framework based on a 3D SLIP model is developed to operate all modes. To our knowledge, few middle-size monopod robots can locomote in the field, making OMEGA a complementary addition to existing legged platforms. This versatile solution uses accessible manufacturing technologies such as 3D printing and water-jet cutting, together with detachable mechanisms, enabling operators to explore legged dynamic motion with a single robot across different modes instead of requiring multiple robots for different purposes. A simulator is developed for initial hopping-control learning. Extensive experiments in 1D/2D tethered and 3D untethered modes demonstrate the platform's mobility and versatility. The proposed platform has the potential to serve both educational and groundwork aims.
Vol. 10, No. 4, pp. 4005–4012.
Citations: 0
Visuotactile-Based Learning for Insertion With Compliant Hands
IF 4.6 · CAS Tier 2 · Computer Science
IEEE Robotics and Automation Letters Pub Date : 2025-03-10 DOI: 10.1109/LRA.2025.3549657
Osher Azulay;Dhruv Metha Ramesh;Nimrod Curtis;Avishai Sintov
Compared to rigid hands, underactuated compliant hands offer greater adaptability to object shapes, provide stable grasps, and are often more cost-effective. However, they introduce uncertainties in hand-object interactions due to their inherent compliance and their lack of the precise finger proprioception found in rigid hands. These limitations become particularly significant in contact-rich tasks like insertion. To address these challenges, additional sensing modalities are required to enable robust insertion capabilities. This letter explores the essential sensing requirements for successful insertion tasks with compliant hands, focusing on the role of visuotactile perception (i.e., combined visual and tactile perception). We propose a simulation-based multimodal policy learning framework that leverages all-around tactile sensing and an extrinsic depth camera. A transformer-based policy, trained through a teacher-student distillation process, is successfully transferred to a real-world robotic system without further training. Our results emphasize the crucial role of tactile sensing, in conjunction with visual perception, for accurate object-socket pose estimation, successful sim-to-real transfer, and robust task execution.
Vol. 10, No. 4, pp. 4053–4060.
Citations: 0
A Bayesian Modeling Framework for Estimation and Ground Segmentation of Cluttered Staircases
IF 4.6 · CAS Tier 2 · Computer Science
IEEE Robotics and Automation Letters Pub Date : 2025-03-10 DOI: 10.1109/LRA.2025.3549662
Prasanna Sriganesh;Burhanuddin Shirose;Matthew Travers
Autonomous robot navigation in complex environments requires robust perception as well as high-level scene understanding, owing to perceptual challenges such as occlusions and the uncertainty introduced by robot movement. For example, a robot climbing a cluttered staircase can misinterpret clutter as a step, misrepresenting the state and compromising safety. This calls for robust state estimation methods capable of inferring the underlying structure of the environment even from incomplete sensor data. In this letter, we introduce a novel method for robust state estimation of staircases. To address the challenge of perceiving occluded staircases extending beyond the robot's field of view, our approach combines an infinite-width staircase representation with a finite endpoint state to capture the overall staircase structure. This representation is integrated into a Bayesian inference framework to fuse noisy measurements, enabling accurate estimation of staircase location even with partial observations and occlusions. Additionally, we present a segmentation algorithm that works in conjunction with the staircase estimation pipeline to accurately identify clutter-free regions on a staircase. Our method is extensively evaluated on real robots across diverse staircases, demonstrating significant improvements in estimation accuracy and segmentation performance over baseline approaches.
Vol. 10, No. 5, pp. 4164–4171.
Citations: 0
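The measurement fusion described in this abstract is, at its core, recursive Bayesian filtering. As a generic, hedged illustration (not the authors' pipeline, and with made-up numbers), a conjugate-Gaussian update for a single scalar staircase parameter, say step height, looks like this:

```python
def bayes_update(mean, var, z, meas_var):
    """Fuse one noisy measurement z (variance meas_var) into a Gaussian
    belief (mean, var): the standard conjugate-Gaussian / Kalman update."""
    k = var / (var + meas_var)          # gain: how much to trust z
    return mean + k * (z - mean), (1 - k) * var

# Belief about step height starts vague and tightens with each measurement.
mean, var = 0.18, 0.05 ** 2             # prior: 18 cm, sigma = 5 cm
for z in [0.165, 0.172, 0.170]:         # noisy measurements, sigma = 1 cm
    mean, var = bayes_update(mean, var, z, 0.01 ** 2)
print(round(mean, 3), var)              # posterior mean near the measurements
```

Each update both moves the mean toward the new measurement and shrinks the variance, which is why partial observations still yield a usable estimate.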
Bring the Heat: Rapid Trajectory Optimization With Pseudospectral Techniques and the Affine Geometric Heat Flow Equation
IF 4.6 · CAS Tier 2 · Computer Science
IEEE Robotics and Automation Letters Pub Date : 2025-03-10 DOI: 10.1109/LRA.2025.3547299
Challen Enninful Adu;César E. Ramos Chuquiure;Bohao Zhang;Ram Vasudevan
Generating optimal trajectories for high-dimensional robotic systems in a time-efficient manner while adhering to constraints is a challenging task. This letter introduces PHLAME, which applies pseudospectral collocation and spatial vector algebra to efficiently solve the Affine Geometric Heat Flow (AGHF) partial differential equation (PDE) for trajectory optimization. Computing a solution to the AGHF PDE scales efficiently because its solution is defined over a two-dimensional domain. To solve the AGHF PDE, one usually applies the Method of Lines (MOL), which discretizes one variable of the PDE, effectively converting it into a system of ordinary differential equations (ODEs) that can be solved with standard time-integration methods. Though powerful, this method requires a fine discretization to generate accurate solutions, and it still requires evaluating the AGHF PDE, which can be computationally expensive for high-dimensional systems. PHLAME overcomes this deficiency by using a pseudospectral method, which reduces the number of function evaluations required to obtain a high-accuracy solution. To further increase computational speed, this letter presents analytical expressions for the AGHF that can be computed efficiently using rigid-body dynamics algorithms. PHLAME is tested across various dynamical systems, with and without obstacles, and compared to a number of state-of-the-art techniques. PHLAME is able to generate trajectories for a 44-dimensional state-space system in approximately 5 seconds.
Vol. 10, No. 4, pp. 4148–4155.
Citations: 0
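The Method of Lines step mentioned in this abstract (discretize one variable of a PDE, leaving a system of ODEs for a time integrator) can be illustrated on the ordinary 1D heat equation, a much simpler relative of the AGHF. This is textbook code, not PHLAME:

```python
import numpy as np

def heat_mol(u0, dx, dt, steps, alpha=1.0):
    """Method of Lines for u_t = alpha * u_xx on a 1D grid:
    central differences in x turn the PDE into the ODE system
    du_i/dt = alpha * (u[i-1] - 2*u[i] + u[i+1]) / dx**2,
    stepped here with forward Euler. Endpoints are held fixed (Dirichlet)."""
    u = u0.copy()
    for _ in range(steps):
        lap = (u[:-2] - 2 * u[1:-1] + u[2:]) / dx ** 2
        u[1:-1] += dt * alpha * lap
    return u

x = np.linspace(0.0, 1.0, 51)
u0 = np.sin(np.pi * x)                  # this mode decays like exp(-pi^2 * t)
u = heat_mol(u0, dx=x[1] - x[0], dt=1e-4, steps=1000)  # integrate to t = 0.1
print(u.max())  # close to exp(-pi^2 * 0.1) ≈ 0.373
```

The letter's point is that this grid must be fine for accuracy, which is exactly the cost that a pseudospectral discretization (fewer, smarter collocation points) reduces.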
Group-Aware Robot Navigation in Crowds Using Spatio-Temporal Graph Attention Network With Deep Reinforcement Learning
IF 4.6 · CAS Tier 2 · Computer Science
IEEE Robotics and Automation Letters Pub Date : 2025-03-10 DOI: 10.1109/LRA.2025.3549663
Xiaojun Lu;Angela Faragasso;Yongdong Wang;Atsushi Yamashita;Hajime Asama
Robots are becoming essential in human environments, requiring them to behave in a socially compliant manner. Although previous learning-based methods have shown potential in social navigation, most have treated pedestrians as individuals, failing to account for group-level interactions. Additionally, these methods have modeled pairwise interactions only in the spatial domain, overlooking the temporal evolution of relations among agents. In this letter, the above limitations are addressed by proposing a novel spatio-temporal graph attention network that explicitly models group-level interactions in both the spatial and temporal domains. Specifically, a novel group-awareness mechanism is designed to model group-aware behaviors, and a new network is proposed to capture spatio-temporal features of relations among agents, while model-free deep reinforcement learning is leveraged to optimize the group-aware navigation policy. Test results show that our approach outperforms the baselines in all metrics in both simulation and real-world experiments. Furthermore, quantitative analysis of questionnaire responses verifies the benefits of our method in group awareness and social compliance.
Vol. 10, No. 4, pp. 4140–4147.
Citations: 0
Variable Stiffness Actuation via 3D-Printed Nonlinear Torsional Springs
IF 4.6 · CAS Tier 2 · Computer Science
IEEE Robotics and Automation Letters Pub Date : 2025-03-10 DOI: 10.1109/LRA.2025.3549658
Hannes Höppner;Annika Kirner;Joshua Göttlich;Linnéa Jakob;Alexander Dietrich;Christian Ott
Variable Stiffness Actuators (VSAs) are promising for advanced robotic systems, offering benefits such as improved energy efficiency, impact safety, stiffness adaptability, mechanical robustness, and dynamic versatility. However, traditional designs often rely on complex mechanical assemblies to achieve nonlinear torque–deflection characteristics, increasing system intricacy and introducing potential points of failure. This letter presents the design, implementation, and validation of a novel antagonistic VSA that drastically reduces mechanical complexity by utilizing 3D-printed progressive nonlinear torsional springs (3DNS). By directly 3D-printing the springs, we enable precise control over their nonlinear behavior through strategic variation of their geometry. Empirical testing and finite element simulations demonstrate that our springs exhibit low hysteresis, low variance across samples, and a strong correlation between simulated and measured behavior. Integrating these springs into an antagonistic setup demonstrates the feasibility of achieving VSAs with low damping, minimal hysteresis, and stiffness that aligns well with modeled predictions. Our findings suggest that this approach offers a cost-effective and accessible solution for developing high-performance VSAs.
Vol. 10, No. 5, pp. 4324–4331.
Citations: 0
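The need for progressive (nonlinear) springs in an antagonistic VSA follows from a basic property: with linear springs, co-contracting the two motors does not change joint stiffness, because each spring's slope is constant. A toy cubic torque law (chosen for illustration only; not the 3DNS characteristic from the letter) makes this concrete:

```python
def spring_torque(theta, k=1.0, beta=20.0):
    """Progressive spring, toy model: tau = k*theta + beta*theta**3."""
    return k * theta + beta * theta ** 3

def joint_stiffness(pretension, k=1.0, beta=20.0):
    """Two such springs in antagonism, each deflected by `pretension` at the
    joint equilibrium. Joint stiffness is the sum of the spring slopes
    d(tau)/d(theta) = k + 3*beta*theta**2 evaluated at the pretension,
    so it grows with co-contraction; with beta = 0 it would be constant."""
    return 2 * (k + 3 * beta * pretension ** 2)

print(joint_stiffness(0.0))   # baseline stiffness, no co-contraction
print(joint_stiffness(0.3))   # stiffer purely via pretension
```

This is why the printed springs must be progressive: the stiffness range of the actuator is set entirely by the curvature of the torque–deflection law.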