Conference on Robot Learning: Latest Publications

Task-Relevant Failure Detection for Trajectory Predictors in Autonomous Vehicles
Conference on Robot Learning | Pub Date: 2022-07-25 | DOI: 10.48550/arXiv.2207.12380
Alec Farid, Sushant Veer, B. Ivanovic, Karen Leung, M. Pavone
{"title":"Task-Relevant Failure Detection for Trajectory Predictors in Autonomous Vehicles","authors":"Alec Farid, Sushant Veer, B. Ivanovic, Karen Leung, M. Pavone","doi":"10.48550/arXiv.2207.12380","DOIUrl":"https://doi.org/10.48550/arXiv.2207.12380","url":null,"abstract":"In modern autonomy stacks, prediction modules are paramount to planning motions in the presence of other mobile agents. However, failures in prediction modules can mislead the downstream planner into making unsafe decisions. Indeed, the high uncertainty inherent to the task of trajectory forecasting ensures that such mispredictions occur frequently. Motivated by the need to improve safety of autonomous vehicles without compromising on their performance, we develop a probabilistic run-time monitor that detects when a\"harmful\"prediction failure occurs, i.e., a task-relevant failure detector. We achieve this by propagating trajectory prediction errors to the planning cost to reason about their impact on the AV. Furthermore, our detector comes equipped with performance measures on the false-positive and the false-negative rate and allows for data-free calibration. In our experiments we compared our detector with various others and found that our detector has the highest area under the receiver operator characteristic curve.","PeriodicalId":273870,"journal":{"name":"Conference on Robot Learning","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132885354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
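The paper's core mechanism, measuring a prediction error through its effect on the planning cost rather than in raw trajectory space, can be illustrated with a minimal Python sketch. The cost function, threshold tau, and trajectories below are toy placeholders, not the paper's actual formulation or calibration procedure:

    import numpy as np

    def planner_cost(ego_plan, agent_traj):
        # Toy stand-in for a planning cost: penalize proximity between
        # the ego plan and another agent's trajectory.
        dists = np.linalg.norm(ego_plan - agent_traj, axis=-1)
        return float(np.sum(np.exp(-dists)))

    def task_relevant_failure(ego_plan, predicted, observed, tau=0.5):
        # Flag a "harmful" misprediction when the prediction error, seen
        # through the planning cost, exceeds a threshold tau (arbitrary
        # here; the paper calibrates its detector without extra data).
        gap = abs(planner_cost(ego_plan, observed)
                  - planner_cost(ego_plan, predicted))
        return gap > tau

    # Toy usage: the predicted agent keeps its distance, but the actual
    # agent cuts toward the ego lane, so the misprediction matters.
    t = np.linspace(0.0, 1.0, 10)[:, None]
    ego = np.hstack([t, np.zeros_like(t)])
    predicted = np.hstack([t, np.full_like(t, 2.0)])
    observed = np.hstack([t, 2.0 - 1.8 * t])
    print(task_relevant_failure(ego, predicted, observed))  # True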
Semantic Abstraction: Open-World 3D Scene Understanding from 2D Vision-Language Models
Conference on Robot Learning | Pub Date: 2022-07-23 | DOI: 10.48550/arXiv.2207.11514
Huy Ha, Shuran Song
{"title":"Semantic Abstraction: Open-World 3D Scene Understanding from 2D Vision-Language Models","authors":"Huy Ha, Shuran Song","doi":"10.48550/arXiv.2207.11514","DOIUrl":"https://doi.org/10.48550/arXiv.2207.11514","url":null,"abstract":"We study open-world 3D scene understanding, a family of tasks that require agents to reason about their 3D environment with an open-set vocabulary and out-of-domain visual inputs - a critical skill for robots to operate in the unstructured 3D world. Towards this end, we propose Semantic Abstraction (SemAbs), a framework that equips 2D Vision-Language Models (VLMs) with new 3D spatial capabilities, while maintaining their zero-shot robustness. We achieve this abstraction using relevancy maps extracted from CLIP, and learn 3D spatial and geometric reasoning skills on top of those abstractions in a semantic-agnostic manner. We demonstrate the usefulness of SemAbs on two open-world 3D scene understanding tasks: 1) completing partially observed objects and 2) localizing hidden objects from language descriptions. Experiments show that SemAbs can generalize to novel vocabulary, materials/lighting, classes, and domains (i.e., real-world scans) from training on limited 3D synthetic data. Code and data is available at https://semantic-abstraction.cs.columbia.edu/","PeriodicalId":273870,"journal":{"name":"Conference on Robot Learning","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134455272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 43
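The abstraction step, handing a 2D VLM's relevancy map to a semantic-agnostic 3D module, can be sketched as below. One hedge up front: the paper extracts relevancy maps from CLIP with a gradient-based method, whereas this stand-in uses plain cosine similarity between patch and text embeddings; all shapes and names are hypothetical:

    import torch

    def relevancy_map(patch_feats, text_feat):
        # patch_feats: (H, W, D) image patch embeddings; text_feat: (D,).
        # Cosine similarity per patch, min-max normalized to [0, 1]. The
        # resulting heatmap is the only input the downstream 3D module
        # sees, which is what keeps that module semantic-agnostic.
        sims = torch.einsum("hwd,d->hw", patch_feats, text_feat)
        sims = sims / (patch_feats.norm(dim=-1) * text_feat.norm() + 1e-8)
        return (sims - sims.min()) / (sims.max() - sims.min() + 1e-8)

    # Toy usage with random tensors standing in for CLIP outputs.
    heat = relevancy_map(torch.randn(14, 14, 512), torch.randn(512))
    print(heat.shape)  # torch.Size([14, 14])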
Online Dynamics Learning for Predictive Control with an Application to Aerial Robots
Conference on Robot Learning | Pub Date: 2022-07-19 | DOI: 10.48550/arXiv.2207.09344
Tom Z. Jiahao, K. Y. Chee, M. A. Hsieh
{"title":"Online Dynamics Learning for Predictive Control with an Application to Aerial Robots","authors":"Tom Z. Jiahao, K. Y. Chee, M. A. Hsieh","doi":"10.48550/arXiv.2207.09344","DOIUrl":"https://doi.org/10.48550/arXiv.2207.09344","url":null,"abstract":"In this work, we consider the task of improving the accuracy of dynamic models for model predictive control (MPC) in an online setting. Although prediction models can be learned and applied to model-based controllers, these models are often learned offline. In this offline setting, training data is first collected and a prediction model is learned through an elaborated training procedure. However, since the model is learned offline, it does not adapt to disturbances or model errors observed during deployment. To improve the adaptiveness of the model and the controller, we propose an online dynamics learning framework that continually improves the accuracy of the dynamic model during deployment. We adopt knowledge-based neural ordinary differential equations (KNODE) as the dynamic models, and use techniques inspired by transfer learning to continually improve the model accuracy. We demonstrate the efficacy of our framework with a quadrotor, and verify the framework in both simulations and physical experiments. Results show that our approach can account for disturbances that are possibly time-varying, while maintaining good trajectory tracking performance.","PeriodicalId":273870,"journal":{"name":"Conference on Robot Learning","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131938169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
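A minimal sketch of the hybrid-dynamics idea behind KNODE, a known physics prior plus a learned residual that keeps training online, is given below. The double-integrator prior, network sizes, and update rule are illustrative assumptions; the paper's quadrotor model and transfer-learning-inspired procedure are more involved:

    import torch
    import torch.nn as nn

    class KnowledgeBasedDynamics(nn.Module):
        # State x = (position, velocity). The physics prior is a toy
        # double integrator; the residual network absorbs whatever the
        # prior misses (drag, wind, model error).
        def __init__(self, state_dim=6, ctrl_dim=3):
            super().__init__()
            self.residual = nn.Sequential(
                nn.Linear(state_dim + ctrl_dim, 64), nn.Tanh(),
                nn.Linear(64, state_dim))

        def forward(self, x, u):
            prior = torch.cat([x[..., 3:], u], dim=-1)  # (pos_dot, vel_dot)
            return prior + self.residual(torch.cat([x, u], dim=-1))

    def online_update(model, opt, batch, steps=5):
        # Refine the residual on recently observed transitions during
        # deployment, instead of freezing the model after offline training.
        x, u, x_dot = batch
        for _ in range(steps):
            loss = ((model(x, u) - x_dot) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()

    model = KnowledgeBasedDynamics()
    opt = torch.optim.Adam(model.residual.parameters(), lr=1e-3)
    batch = (torch.randn(32, 6), torch.randn(32, 3), torch.randn(32, 6))
    print(online_update(model, opt, batch))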
QuaDUE-CCM: Interpretable Distributional Reinforcement Learning using Uncertain Contraction Metrics for Precise Quadrotor Trajectory Tracking
Conference on Robot Learning | Pub Date: 2022-07-15 | DOI: 10.48550/arXiv.2207.07789
Yanran Wang, James O’Keeffe, Qiuchen Qian, David E. Boyle
{"title":"QuaDUE-CCM: Interpretable Distributional Reinforcement Learning using Uncertain Contraction Metrics for Precise Quadrotor Trajectory Tracking","authors":"Yanran Wang, James O’Keeffe, Qiuchen Qian, David E. Boyle","doi":"10.48550/arXiv.2207.07789","DOIUrl":"https://doi.org/10.48550/arXiv.2207.07789","url":null,"abstract":"Accuracy and stability are common requirements for Quadrotor trajectory tracking systems. Designing an accurate and stable tracking controller remains challenging, particularly in unknown and dynamic environments with complex aerodynamic disturbances. We propose a Quantile-approximation-based Distributional-reinforced Uncertainty Estimator (QuaDUE) to accurately identify the effects of aerodynamic disturbances, i.e., the uncertainties between the true and estimated Control Contraction Metrics (CCMs). Taking inspiration from contraction theory and integrating the QuaDUE for uncertainties, our novel CCM-based trajectory tracking framework tracks any feasible reference trajectory precisely whilst guaranteeing exponential convergence. More importantly, the convergence and training acceleration of the distributional RL are guaranteed and analyzed, respectively, from theoretical perspectives. We also demonstrate our system under unknown and diverse aerodynamic forces. Under large aerodynamic forces (>2m/s^2), compared with the classic data-driven approach, our QuaDUE-CCM achieves at least a 56.6% improvement in tracking error. Compared with QuaDRED-MPC, a distributional RL-based approach, QuaDUE-CCM achieves at least a 3 times improvement in contraction rate.","PeriodicalId":273870,"journal":{"name":"Conference on Robot Learning","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116393845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
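The quantile-approximation component can be sketched as standard quantile regression: a network outputs N quantiles of an uncertainty quantity (in the paper, the gap between true and estimated CCMs) and is trained with the pinball loss. The inputs, network size, and training data below are placeholders, not the paper's:

    import torch
    import torch.nn as nn

    N_QUANTILES = 8
    # Midpoint quantile fractions tau_i = (i + 0.5) / N, a common choice.
    taus = (torch.arange(N_QUANTILES, dtype=torch.float32) + 0.5) / N_QUANTILES

    net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(),
                        nn.Linear(64, N_QUANTILES))

    def pinball_loss(pred_quantiles, target):
        # pred_quantiles: (B, N); target: (B, 1) scalar uncertainty samples.
        # Asymmetric penalty makes quantile i converge to the tau_i-quantile.
        err = target - pred_quantiles
        return torch.mean(torch.max(taus * err, (taus - 1.0) * err))

    # Toy training step on random data standing in for observed uncertainties.
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    x, y = torch.randn(64, 4), torch.randn(64, 1)
    loss = pinball_loss(net(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    print(loss.item())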
i-Sim2Real: Reinforcement Learning of Robotic Policies in Tight Human-Robot Interaction Loops
Conference on Robot Learning | Pub Date: 2022-07-14 | DOI: 10.48550/arXiv.2207.06572
Saminda Abeyruwan, L. Graesser, David B. D'Ambrosio, Avi Singh, A. Shankar, A. Bewley, Deepali Jain, K. Choromanski, P. Sanketi
{"title":"i-Sim2Real: Reinforcement Learning of Robotic Policies in Tight Human-Robot Interaction Loops","authors":"Saminda Abeyruwan, L. Graesser, David B. D'Ambrosio, Avi Singh, A. Shankar, A. Bewley, Deepali Jain, K. Choromanski, P. Sanketi","doi":"10.48550/arXiv.2207.06572","DOIUrl":"https://doi.org/10.48550/arXiv.2207.06572","url":null,"abstract":"Sim-to-real transfer is a powerful paradigm for robotic reinforcement learning. The ability to train policies in simulation enables safe exploration and large-scale data collection quickly at low cost. However, prior works in sim-to-real transfer of robotic policies typically do not involve any human-robot interaction because accurately simulating human behavior is an open problem. In this work, our goal is to leverage the power of simulation to train robotic policies that are proficient at interacting with humans upon deployment. But there is a chicken and egg problem -- how to gather examples of a human interacting with a physical robot so as to model human behavior in simulation without already having a robot that is able to interact with a human? Our proposed method, Iterative-Sim-to-Real (i-S2R), attempts to address this. i-S2R bootstraps from a simple model of human behavior and alternates between training in simulation and deploying in the real world. In each iteration, both the human behavior model and the policy are refined. For all training we apply a new evolutionary search algorithm called Blackbox Gradient Sensing (BGS). We evaluate our method on a real world robotic table tennis setting, where the objective for the robot is to play cooperatively with a human player for as long as possible. Table tennis is a high-speed, dynamic task that requires the two players to react quickly to each other's moves, making for a challenging test bed for research on human-robot interaction. We present results on an industrial robotic arm that is able to cooperatively play table tennis with human players, achieving rallies of 22 successive hits on average and 150 at best. Further, for 80% of players, rally lengths are 70% to 175% longer compared to the sim-to-real plus fine-tuning (S2R+FT) baseline. For videos of our system in action, please see https://sites.google.com/view/is2r.","PeriodicalId":273870,"journal":{"name":"Conference on Robot Learning","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114809951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23
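The i-S2R alternation can be shown with a deliberately toy, self-contained loop. Every component below (policy, human model, sim training, deployment) is a scalar stand-in chosen only to make the control flow runnable; the real system trains table-tennis policies with Blackbox Gradient Sensing:

    import random

    def train_in_sim(policy, human_model):
        # Pretend "training": nudge the policy toward the modeled human.
        return 0.9 * policy + 0.1 * human_model

    def deploy_on_robot(policy, true_human=0.7, n_episodes=10):
        # Pretend "deployment": observe noisy samples of real human behavior.
        return [true_human + random.gauss(0.0, 0.05) for _ in range(n_episodes)]

    policy, real_data = 0.0, []
    human_model = 0.5                  # bootstrap from a simple prior model
    for it in range(3):                # alternate sim training / real rollouts
        policy = train_in_sim(policy, human_model)
        real_data += deploy_on_robot(policy)
        human_model = sum(real_data) / len(real_data)  # refine human model
        print(f"iter {it}: policy={policy:.3f}, human_model={human_model:.3f}")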
Inner Monologue: Embodied Reasoning through Planning with Language Models
Conference on Robot Learning | Pub Date: 2022-07-12 | DOI: 10.48550/arXiv.2207.05608
Wenlong Huang, F. Xia, Ted Xiao, Harris Chan, Jacky Liang, Peter R. Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, P. Sermanet, Noah Brown, Tomas Jackson, Linda Luu, S. Levine, Karol Hausman, Brian Ichter
{"title":"Inner Monologue: Embodied Reasoning through Planning with Language Models","authors":"Wenlong Huang, F. Xia, Ted Xiao, Harris Chan, Jacky Liang, Peter R. Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, P. Sermanet, Noah Brown, Tomas Jackson, Linda Luu, S. Levine, Karol Hausman, Brian Ichter","doi":"10.48550/arXiv.2207.05608","DOIUrl":"https://doi.org/10.48550/arXiv.2207.05608","url":null,"abstract":"Recent works have shown how the reasoning capabilities of Large Language Models (LLMs) can be applied to domains beyond natural language processing, such as planning and interaction for robots. These embodied problems require an agent to understand many semantic aspects of the world: the repertoire of skills available, how these skills influence the world, and how changes to the world map back to the language. LLMs planning in embodied environments need to consider not just what skills to do, but also how and when to do them - answers that change over time in response to the agent's own choices. In this work, we investigate to what extent LLMs used in such embodied contexts can reason over sources of feedback provided through natural language, without any additional training. We propose that by leveraging environment feedback, LLMs are able to form an inner monologue that allows them to more richly process and plan in robotic control scenarios. We investigate a variety of sources of feedback, such as success detection, scene description, and human interaction. We find that closed-loop language feedback significantly improves high-level instruction completion on three domains, including simulated and real table top rearrangement tasks and long-horizon mobile manipulation tasks in a kitchen environment in the real world.","PeriodicalId":273870,"journal":{"name":"Conference on Robot Learning","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114696425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 324
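The closed-loop mechanism, feeding textual feedback back into the language model's prompt, is easy to sketch. Both the LLM and the skill execution below are stubs returning fixed strings; the paper's feedback sources additionally include scene descriptions and human responses:

    def llm_next_step(prompt):
        # Stand-in for a language model proposing the next skill given
        # the growing monologue of actions and feedback.
        return "done" if "Success: True" in prompt else "wipe the table"

    def execute_skill(skill):
        # Stand-in for a low-level policy plus a success detector that
        # reports back in natural language.
        return "Success: True"

    prompt = "Task: wipe the table.\n"
    for _ in range(5):                 # closed loop: act, observe, re-plan
        action = llm_next_step(prompt)
        if action == "done":
            break
        feedback = execute_skill(action)
        prompt += f"Robot action: {action}\n{feedback}\n"  # grow the monologue
    print(prompt)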
Don't Start From Scratch: Leveraging Prior Data to Automate Robotic Reinforcement Learning
Conference on Robot Learning | Pub Date: 2022-07-11 | DOI: 10.48550/arXiv.2207.04703
Homer Walke, Jonathan Yang, Albert Yu, Aviral Kumar, Jedrzej Orbik, Avi Singh, S. Levine
{"title":"Don't Start From Scratch: Leveraging Prior Data to Automate Robotic Reinforcement Learning","authors":"Homer Walke, Jonathan Yang, Albert Yu, Aviral Kumar, Jedrzej Orbik, Avi Singh, S. Levine","doi":"10.48550/arXiv.2207.04703","DOIUrl":"https://doi.org/10.48550/arXiv.2207.04703","url":null,"abstract":"Reinforcement learning (RL) algorithms hold the promise of enabling autonomous skill acquisition for robotic systems. However, in practice, real-world robotic RL typically requires time consuming data collection and frequent human intervention to reset the environment. Moreover, robotic policies learned with RL often fail when deployed beyond the carefully controlled setting in which they were learned. In this work, we study how these challenges can all be tackled by effective utilization of diverse offline datasets collected from previously seen tasks. When faced with a new task, our system adapts previously learned skills to quickly learn to both perform the new task and return the environment to an initial state, effectively performing its own environment reset. Our empirical results demonstrate that incorporating prior data into robotic reinforcement learning enables autonomous learning, substantially improves sample-efficiency of learning, and enables better generalization. Project website: https://sites.google.com/view/ariel-berkeley/","PeriodicalId":273870,"journal":{"name":"Conference on Robot Learning","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114362841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
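The autonomy mechanism, a forward task policy alternating with a learned reset policy so that no human has to reset the scene between episodes, can be caricatured in a few lines. Both "policies" below are scalar stand-ins for skills that the paper's system initializes from diverse prior data:

    import random

    def rollout(policy_name, state):
        # Pretend rollout: the forward policy moves the state toward the
        # goal (1.0), the reset policy back toward the initial state (0.0).
        target = 1.0 if policy_name == "forward" else 0.0
        return state + 0.5 * (target - state) + random.gauss(0.0, 0.02)

    state = 0.0
    for episode in range(6):               # no human resets between episodes
        state = rollout("forward", state)  # attempt the task
        print(f"episode {episode}: reached {state:.2f}")
        state = rollout("reset", state)    # return env toward an initial state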
Taskography: Evaluating robot task planning over large 3D scene graphs
Conference on Robot Learning | Pub Date: 2022-07-11 | DOI: 10.48550/arXiv.2207.05006
Christopher Agia, Krishna Murthy Jatavallabhula, M. Khodeir, O. Mikšík, Vibhav Vineet, Mustafa Mukadam, L. Paull, F. Shkurti
{"title":"Taskography: Evaluating robot task planning over large 3D scene graphs","authors":"Christopher Agia, Krishna Murthy Jatavallabhula, M. Khodeir, O. Mikšík, Vibhav Vineet, Mustafa Mukadam, L. Paull, F. Shkurti","doi":"10.48550/arXiv.2207.05006","DOIUrl":"https://doi.org/10.48550/arXiv.2207.05006","url":null,"abstract":": 3D scene graphs (3DSGs) are an emerging description; unifying symbolic, topological, and metric scene representations. However, typical 3DSGs contain hundreds of objects and symbols even for small environments; rendering task planning on the full graph impractical. We construct T ASKOGRAPHY , the first large-scale robotic task planning benchmark over 3DSGs. While most benchmarking efforts in this area focus on vision-based planning , we systemati-cally study symbolic planning, to decouple planning performance from visual rep-resentation learning. We observe that, among existing methods, neither classical nor learning-based planners are capable of real-time planning over full 3DSGs. Enabling real-time planning demands progress on both (a) sparsifying 3DSGs for tractable planning and (b) designing planners that better exploit 3DSG hierarchies. Towards the former goal, we propose SCRUB , a task-conditioned 3DSG sparsification method; enabling classical planners to match and in some cases sur-pass state-of-the-art learning-based planners. Towards the latter goal, we propose SEEK , a procedure enabling learning-based planners to exploit 3DSG structure, reducing the number of replanning queries required by current best approaches by an order of magnitude. We will open-source all code and baselines to spur further research along the intersections of robot task planning, learning and 3DSGs.","PeriodicalId":273870,"journal":{"name":"Conference on Robot Learning","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130087914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 31
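The sparsification idea behind SCRUB, keeping only the goal-relevant portion of the scene-graph hierarchy so a classical planner sees a tractable problem, reduces to an ancestor-closure computation in this toy sketch. The graph below, and the rule that relevance equals "mentioned in the goal", are simplifications of the paper's task-conditioned method:

    # child -> parent links in a tiny object/room/floor hierarchy.
    parents = {
        "mug": "kitchen", "knife": "kitchen", "pillow": "bedroom",
        "kitchen": "floor1", "bedroom": "floor1", "floor1": "building",
    }

    def scrub(goal_objects):
        keep = set()
        for node in goal_objects:     # goal-relevant leaves...
            while node is not None:   # ...plus every ancestor up to the root
                keep.add(node)
                node = parents.get(node)
        return keep

    # Planning "fetch the mug" never needs the bedroom subtree.
    print(scrub({"mug"}))  # {'mug', 'kitchen', 'floor1', 'building'} (set order varies)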
LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action
Conference on Robot Learning | Pub Date: 2022-07-10 | DOI: 10.48550/arXiv.2207.04429
Dhruv Shah, B. Osinski, Brian Ichter, S. Levine
{"title":"LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action","authors":"Dhruv Shah, B. Osinski, Brian Ichter, S. Levine","doi":"10.48550/arXiv.2207.04429","DOIUrl":"https://doi.org/10.48550/arXiv.2207.04429","url":null,"abstract":"Goal-conditioned policies for robotic navigation can be trained on large, unannotated datasets, providing for good generalization to real-world settings. However, particularly in vision-based settings where specifying goals requires an image, this makes for an unnatural interface. Language provides a more convenient modality for communication with robots, but contemporary methods typically require expensive supervision, in the form of trajectories annotated with language descriptions. We present a system, LM-Nav, for robotic navigation that enjoys the benefits of training on unannotated large datasets of trajectories, while still providing a high-level interface to the user. Instead of utilizing a labeled instruction following dataset, we show that such a system can be constructed entirely out of pre-trained models for navigation (ViNG), image-language association (CLIP), and language modeling (GPT-3), without requiring any fine-tuning or language-annotated robot data. We instantiate LM-Nav on a real-world mobile robot and demonstrate long-horizon navigation through complex, outdoor environments from natural language instructions. For videos of our experiments, code release, and an interactive Colab notebook that runs in your browser, please check out our project page https://sites.google.com/view/lmnav","PeriodicalId":273870,"journal":{"name":"Conference on Robot Learning","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122961686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 140
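The three-model composition can be stubbed end to end: an LLM turns the instruction into landmark phrases, a VLM scores landmarks against images at nodes of a topological graph built from experience, and the grounded nodes define the route the navigation policy follows. Every function below is a placeholder (text descriptions stand in for node images), and the greedy per-landmark grounding simplifies the paper's search over full paths:

    def llm_extract_landmarks(instruction):
        # Stand-in for GPT-3 parsing the instruction into landmarks.
        return ["stop sign", "blue dumpster"]

    def vlm_score(node_image_desc, landmark):
        # Stand-in for CLIP image-text similarity (here: substring match).
        return 1.0 if landmark in node_image_desc else 0.1

    # Nodes of a topological graph built from unannotated experience.
    graph_nodes = {0: "parking lot", 1: "stop sign", 2: "blue dumpster"}

    landmarks = llm_extract_landmarks("go past the stop sign to the dumpster")
    route = [max(graph_nodes, key=lambda n: vlm_score(graph_nodes[n], lm))
             for lm in landmarks]
    print(route)  # [1, 2]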
NeuralGrasps: Learning Implicit Representations for Grasps of Multiple Robotic Hands
Conference on Robot Learning | Pub Date: 2022-07-06 | DOI: 10.48550/arXiv.2207.02959
Ninad Khargonkar, Neil Song, Zesheng Xu, B. Prabhakaran, Yu Xiang
{"title":"NeuralGrasps: Learning Implicit Representations for Grasps of Multiple Robotic Hands","authors":"Ninad Khargonkar, Neil Song, Zesheng Xu, B. Prabhakaran, Yu Xiang","doi":"10.48550/arXiv.2207.02959","DOIUrl":"https://doi.org/10.48550/arXiv.2207.02959","url":null,"abstract":"We introduce a neural implicit representation for grasps of objects from multiple robotic hands. Different grasps across multiple robotic hands are encoded into a shared latent space. Each latent vector is learned to decode to the 3D shape of an object and the 3D shape of a robotic hand in a grasping pose in terms of the signed distance functions of the two 3D shapes. In addition, the distance metric in the latent space is learned to preserve the similarity between grasps across different robotic hands, where the similarity of grasps is defined according to contact regions of the robotic hands. This property enables our method to transfer grasps between different grippers including a human hand, and grasp transfer has the potential to share grasping skills between robots and enable robots to learn grasping skills from humans. Furthermore, the encoded signed distance functions of objects and grasps in our implicit representation can be used for 6D object pose estimation with grasping contact optimization from partial point clouds, which enables robotic grasping in the real world.","PeriodicalId":273870,"journal":{"name":"Conference on Robot Learning","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115513007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
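The implicit representation, one latent code decoding to signed distance functions for both the object and the hand in its grasping pose, can be sketched as a small conditioned MLP. The architecture and dimensions below are placeholders, not the paper's:

    import torch
    import torch.nn as nn

    class GraspSDFDecoder(nn.Module):
        # Given a grasp latent z and 3D query points, predict two signed
        # distances per point: one to the object surface, one to the hand
        # surface in its grasping pose.
        def __init__(self, latent_dim=64):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(latent_dim + 3, 128), nn.ReLU(),
                nn.Linear(128, 2))        # (sdf_object, sdf_hand)

        def forward(self, z, points):
            # z: (latent_dim,); points: (N, 3) query locations.
            z = z.expand(points.shape[0], -1)  # tile z across the queries
            return self.mlp(torch.cat([z, points], dim=-1))

    decoder = GraspSDFDecoder()
    sdf = decoder(torch.randn(64), torch.randn(1000, 3))
    print(sdf.shape)  # torch.Size([1000, 2])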