Latest publications from the International Journal of Robotics Research

Locally active globally stable dynamical systems: Theory, learning, and experiments
IF 9.2 · CAS Tier 1 · Computer Science
International Journal of Robotics Research Pub Date : 2022-01-27 DOI: 10.1177/02783649211030952
Nadia Figueroa, A. Billard
{"title":"Locally active globally stable dynamical systems: Theory, learning, and experiments","authors":"Nadia Figueroa, A. Billard","doi":"10.1177/02783649211030952","DOIUrl":"https://doi.org/10.1177/02783649211030952","url":null,"abstract":"State-dependent dynamical systems (DSs) offer adaptivity, reactivity, and robustness to perturbations in motion planning and physical human–robot interaction tasks. Learning DS-based motion plans from non-linear reference trajectories is an active research area in robotics. Most approaches focus on learning DSs that can (i) accurately mimic the demonstrated motion, while (ii) ensuring convergence to the target, i.e., they are globally asymptotically (or exponentially) stable. When subject to perturbations, a compliant robot guided with a DS will continue following the next integral curves of the DS towards the target. If the task requires the robot to track a specific reference trajectory, this approach will fail. To alleviate this shortcoming, we propose the locally active globally stable DS (LAGS-DS), a novel DS formulation that provides both global convergence and stiffness-like symmetric attraction behaviors around a reference trajectory in regions of the state space where trajectory tracking is important. This allows for a unified approach towards motion and impedance encoding in a single DS-based motion model, i.e., stiffness is embedded in the DS. To learn LAGS-DS from demonstrations we propose a learning strategy based on Bayesian non-parametric Gaussian mixture models, Gaussian processes, and a sequence of constrained optimization problems that ensure estimation of stable DS parameters via Lyapunov theory. We experimentally validated LAGS-DS on writing tasks with a KUKA LWR 4+ arm and on navigation and co-manipulation tasks with iCub humanoid robots.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":9.2,"publicationDate":"2022-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45850650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
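For readers new to DS-based motion generation, the property the abstract leans on is global asymptotic stability of the motion model. The sketch below is a minimal, hypothetical illustration (not the LAGS-DS formulation itself): a linear DS x_dot = A (x - x*) with A + Aᵀ negative definite, for which V(x) = ||x - x*||² is a Lyapunov function, so every integral curve converges to the target from any perturbed state. The matrix, target, and integration settings are placeholder values.

```python
import numpy as np

def linear_ds(x, target, A):
    """Velocity field of a linear DS: x_dot = A (x - target)."""
    return A @ (x - target)

# Placeholder values for illustration only (not learned from demonstrations).
target = np.array([0.5, 0.3])
A = np.array([[-2.0, 0.5],
              [-0.5, -1.0]])

# Sufficient stability condition for V(x) = ||x - target||^2: A + A^T negative definite.
assert np.all(np.linalg.eigvalsh(A + A.T) < 0), "A + A^T must be negative definite"

# Forward-integrate from a perturbed start; the state converges to the target.
x = np.array([-1.0, 1.2])
dt = 0.01
for _ in range(2000):
    x = x + dt * linear_ds(x, target, A)
print("final state:", x, "target:", target)
```

LAGS-DS goes beyond such a plain globally stable DS by additionally shaping locally active, stiffness-like attraction around the reference trajectory, which a single linear field cannot express.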
Inducing structure in reward learning by learning features
IF 9.2 · CAS Tier 1 · Computer Science
International Journal of Robotics Research Pub Date : 2022-01-18 DOI: 10.1177/02783649221078031
Andreea Bobu, Marius Wiggert, C. Tomlin, A. Dragan
{"title":"Inducing structure in reward learning by learning features","authors":"Andreea Bobu, Marius Wiggert, C. Tomlin, A. Dragan","doi":"10.1177/02783649221078031","DOIUrl":"https://doi.org/10.1177/02783649221078031","url":null,"abstract":"Reward learning enables robots to learn adaptable behaviors from human input. Traditional methods model the reward as a linear function of hand-crafted features, but that requires specifying all the relevant features a priori, which is impossible for real-world tasks. To get around this issue, recent deep Inverse Reinforcement Learning (IRL) methods learn rewards directly from the raw state but this is challenging because the robot has to implicitly learn the features that are important and how to combine them, simultaneously. Instead, we propose a divide-and-conquer approach: focus human input specifically on learning the features separately, and only then learn how to combine them into a reward. We introduce a novel type of human input for teaching features and an algorithm that utilizes it to learn complex features from the raw state space. The robot can then learn how to combine them into a reward using demonstrations, corrections, or other reward learning frameworks. We demonstrate our method in settings where all features have to be learned from scratch, as well as where some of the features are known. By first focusing human input specifically on the feature(s), our method decreases sample complexity and improves generalization of the learned reward over a deep IRL baseline. We show this in experiments with a physical 7-DoF robot manipulator, and in a user study conducted in a simulated environment.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":9.2,"publicationDate":"2022-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45495193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
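As a rough illustration of the divide-and-conquer structure in the abstract above (not the authors' algorithm), the sketch below assumes the feature functions have already been learned (they are stubbed out here) and then fits the weights of a linear reward r(s) = θᵀφ(s) from labeled states by least squares; in practice the combination step could equally be driven by demonstrations or corrections. All function names and data are hypothetical.

```python
import numpy as np

# Hypothetical, hand-stubbed feature functions standing in for learned ones.
def feature_end_effector_height(state):
    return state[2]

def feature_distance_to_human(state):
    return np.linalg.norm(state[:2] - np.array([1.0, 0.0]))

def phi(state):
    """Stack individual features into one feature vector."""
    return np.array([feature_end_effector_height(state),
                     feature_distance_to_human(state)])

def reward(state, theta):
    """Reward modeled as a linear combination of (learned) features."""
    return theta @ phi(state)

# Fit theta from (state, reward label) pairs by least squares -- a stand-in for
# learning the combination from demonstrations or corrections.
states = np.random.rand(50, 3)
labels = np.array([0.7 * s[2] - 0.3 * np.linalg.norm(s[:2] - np.array([1.0, 0.0]))
                   for s in states])
Phi = np.stack([phi(s) for s in states])
theta, *_ = np.linalg.lstsq(Phi, labels, rcond=None)
print("recovered weights:", theta)
```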
Self-supervised learning for using overhead imagery as maps in outdoor range sensor localization.
IF 9.2 · CAS Tier 1 · Computer Science
International Journal of Robotics Research Pub Date : 2021-12-01 Epub Date: 2021-09-28 DOI: 10.1177/02783649211045736
Tim Y Tang, Daniele De Martini, Shangzhe Wu, Paul Newman
{"title":"Self-supervised learning for using overhead imagery as maps in outdoor range sensor localization.","authors":"Tim Y Tang, Daniele De Martini, Shangzhe Wu, Paul Newman","doi":"10.1177/02783649211045736","DOIUrl":"10.1177/02783649211045736","url":null,"abstract":"<p><p>Traditional approaches to outdoor vehicle localization assume a reliable, prior map is available, typically built using the same sensor suite as the on-board sensors used during localization. This work makes a different assumption. It assumes that an overhead image of the workspace is available and utilizes that as a map for use for range-based sensor localization by a vehicle. Here, range-based sensors are radars and lidars. Our motivation is simple, off-the-shelf, publicly available overhead imagery such as Google satellite images can be a ubiquitous, cheap, and powerful tool for vehicle localization when a usable prior sensor map is unavailable, inconvenient, or expensive. The challenge to be addressed is that overhead images are clearly not directly comparable to data from ground range sensors because of their starkly different modalities. We present a learned metric localization method that not only handles the modality difference, but is also cheap to train, learning in a self-supervised fashion without requiring metrically accurate ground truth. By evaluating across multiple real-world datasets, we demonstrate the robustness and versatility of our method for various sensor configurations in cross-modality localization, achieving localization errors on-par with a prior supervised approach while requiring no pixel-wise aligned ground truth for supervision at training. We pay particular attention to the use of millimeter-wave radar, which, owing to its complex interaction with the scene and its immunity to weather and lighting conditions, makes for a compelling and valuable use case.</p>","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":9.2,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8721700/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39904384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
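The central difficulty named in the abstract is comparing ground range-sensor data against overhead imagery despite the modality gap. The sketch below is a loose, hypothetical illustration of metric localization by cross-modal matching, not the paper's self-supervised architecture: both modalities are mapped into a shared descriptor space (here by fixed random projections standing in for learned encoders) and the candidate pose whose overhead-map patch best matches the live scan is returned. All shapes and data are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "encoders": fixed random projections standing in for learned
# networks that embed each modality into a shared descriptor space.
W_scan = rng.standard_normal((32, 360))        # embeds a 360-beam range scan
W_patch = rng.standard_normal((32, 64 * 64))   # embeds a 64x64 overhead-image patch

def embed_scan(scan):
    v = W_scan @ scan
    return v / np.linalg.norm(v)

def embed_patch(patch):
    v = W_patch @ patch.ravel()
    return v / np.linalg.norm(v)

def localize(scan, candidate_patches, candidate_poses):
    """Return the candidate pose whose overhead patch best matches the scan."""
    e_scan = embed_scan(scan)
    scores = [e_scan @ embed_patch(p) for p in candidate_patches]
    return candidate_poses[int(np.argmax(scores))]

# Toy usage with synthetic data.
scan = rng.random(360)
patches = [rng.random((64, 64)) for _ in range(5)]
poses = [(x, 0.0, 0.0) for x in range(5)]
print("best pose:", localize(scan, patches, poses))
```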
Learning to solve sequential physical reasoning problems from a scene image
IF 9.2 · CAS Tier 1 · Computer Science
International Journal of Robotics Research Pub Date : 2021-12-01 DOI: 10.1177/02783649211056967
Danny Driess, Jung-Su Ha, Marc Toussaint
{"title":"Learning to solve sequential physical reasoning problems from a scene image","authors":"Danny Driess, Jung-Su Ha, Marc Toussaint","doi":"10.1177/02783649211056967","DOIUrl":"https://doi.org/10.1177/02783649211056967","url":null,"abstract":"In this article, we propose deep visual reasoning, which is a convolutional recurrent neural network that predicts discrete action sequences from an initial scene image for sequential manipulation problems that arise, for example, in task and motion planning (TAMP). Typical TAMP problems are formalized by combining reasoning on a symbolic, discrete level (e.g., first-order logic) with continuous motion planning such as nonlinear trajectory optimization. The action sequences represent the discrete decisions on a symbolic level, which, in turn, parameterize a nonlinear trajectory optimization problem. Owing to the great combinatorial complexity of possible discrete action sequences, a large number of optimization/motion planning problems have to be solved to find a solution, which limits the scalability of these approaches. To circumvent this combinatorial complexity, we introduce deep visual reasoning: based on a segmented initial image of the scene, a neural network directly predicts promising discrete action sequences such that ideally only one motion planning problem has to be solved to find a solution to the overall TAMP problem. Our method generalizes to scenes with many and varying numbers of objects, although being trained on only two objects at a time. This is possible by encoding the objects of the scene and the goal in (segmented) images as input to the neural network, instead of a fixed feature vector. We show that the framework can not only handle kinematic problems such as pick-and-place (as typical in TAMP), but also tool-use scenarios for planar pushing under quasi-static dynamic models. Here, the image-based representation enables generalization to other shapes than during training. Results show runtime improvements of several orders of magnitudes by, in many cases, removing the need to search over the discrete action sequences.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":9.2,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49360154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
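The computational saving described above comes from letting a network rank discrete action sequences so that, ideally, only one nonlinear trajectory optimization has to be solved. Here is a schematic sketch of that outer loop, with the learned predictor and the trajectory optimizer stubbed out as hypothetical callables; only the control flow is meant to match the idea in the abstract.

```python
from typing import Callable, Optional, Sequence

def solve_tamp(scene_image,
               propose_sequences: Callable[[object, int], Sequence[list]],
               solve_motion: Callable[[object, list], Optional[object]],
               num_candidates: int = 5):
    """Try network-proposed discrete action sequences in ranked order and return
    the first one for which the trajectory optimizer finds a feasible motion."""
    for actions in propose_sequences(scene_image, num_candidates):
        trajectory = solve_motion(scene_image, actions)  # None if infeasible
        if trajectory is not None:
            return actions, trajectory
    raise RuntimeError("no proposed action sequence was feasible")

# Toy stand-ins for the learned predictor and the nonlinear trajectory optimizer.
def dummy_proposals(image, k):
    return [["pick(obj1)", "place(obj1, goal)"]][:k]

def dummy_optimizer(image, actions):
    return {"waypoints": len(actions)}   # pretend every sequence is feasible

print(solve_tamp(None, dummy_proposals, dummy_optimizer))
```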
Robotics: Science and Systems (RSS) 2020
IF 9.2 · CAS Tier 1 · Computer Science
International Journal of Robotics Research Pub Date : 2021-12-01 DOI: 10.1177/02783649211052346
T. Nanayakkara, T. Barfoot, T. Howard
{"title":"Robotics: Science and Systems (RSS) 2020","authors":"T. Nanayakkara, T. Barfoot, T. Howard","doi":"10.1177/02783649211052346","DOIUrl":"https://doi.org/10.1177/02783649211052346","url":null,"abstract":"","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":9.2,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41568869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Enabling impedance-based physical human–multi–robot collaboration: Experiments with four torque-controlled manipulators
IF 9.2 · CAS Tier 1 · Computer Science
International Journal of Robotics Research Pub Date : 2021-11-24 DOI: 10.1177/02783649211053650
Niels Dehio, Joshua Smith, D. L. Wigand, Pouya Mohammadi, M. Mistry, Jochen J. Steil
{"title":"Enabling impedance-based physical human–multi–robot collaboration: Experiments with four torque-controlled manipulators","authors":"Niels Dehio, Joshua Smith, D. L. Wigand, Pouya Mohammadi, M. Mistry, Jochen J. Steil","doi":"10.1177/02783649211053650","DOIUrl":"https://doi.org/10.1177/02783649211053650","url":null,"abstract":"Robotics research into multi-robot systems so far has concentrated on implementing intelligent swarm behavior and contact-less human interaction. Studies of haptic or physical human-robot interaction, by contrast, have primarily focused on the assistance offered by a single robot. Consequently, our understanding of the physical interaction and the implicit communication through contact forces between a human and a team of multiple collaborative robots is limited. We here introduce the term Physical Human Multi-Robot Collaboration (PHMRC) to describe this more complex situation, which we consider highly relevant in future service robotics. The scenario discussed in this article covers multiple manipulators in close proximity and coupled through physical contacts. We represent this set of robots as fingers of an up-scaled agile robot hand. This perspective enables us to employ model-based grasping theory to deal with multi-contact situations. Our torque-control approach integrates dexterous multi-manipulator grasping skills, optimization of contact forces, compensation of object dynamics, and advanced impedance regulation into a coherent compliant control scheme. For this to achieve, we contribute fundamental theoretical improvements. Finally, experiments with up to four collaborative KUKA LWR IV+ manipulators performed both in simulation and real world validate the model-based control approach. As a side effect, we notice that our multi-manipulator control framework applies identically to multi-legged systems, and we execute it also on the quadruped ANYmal subject to non-coplanar contacts and human interaction.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":9.2,"publicationDate":"2021-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49246810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
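The controller in the abstract builds multi-contact grasping and object-dynamics compensation on top of torque-level impedance regulation. For orientation only, here is a bare-bones single-arm Cartesian impedance law (not the paper's multi-manipulator scheme): joint torques render a spring-damper at the end-effector, with the Jacobian, gains, and gravity term filled in with placeholder values.

```python
import numpy as np

def cartesian_impedance_torque(x, dx, x_des, J, K, D, gravity_torque):
    """tau = J^T (K (x_des - x) - D dx) + g(q): render a translational
    spring-damper at the end-effector while compensating gravity."""
    f_des = K @ (x_des - x) - D @ dx      # desired Cartesian force
    return J.T @ f_des + gravity_torque

# Placeholder numbers for a 7-DoF arm with a 3D translational task.
J = np.random.rand(3, 7)                  # stand-in for the manipulator Jacobian
x, dx = np.array([0.40, 0.0, 0.50]), np.zeros(3)
x_des = np.array([0.45, 0.0, 0.50])
K = np.diag([400.0, 400.0, 400.0])        # stiffness [N/m]
D = np.diag([40.0, 40.0, 40.0])           # damping [Ns/m]
g = np.zeros(7)                           # stand-in for gravity-compensation torques

tau = cartesian_impedance_torque(x, dx, x_des, J, K, D, g)
print("joint torques:", tau)
```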
GRSTAPS: Graphically Recursive Simultaneous Task Allocation, Planning, and Scheduling
IF 9.2 · CAS Tier 1 · Computer Science
International Journal of Robotics Research Pub Date : 2021-11-09 DOI: 10.1177/02783649211052066
Andrew Messing, Glen Neville, S. Chernova, S. Hutchinson, H. Ravichandar
{"title":"GRSTAPS: Graphically Recursive Simultaneous Task Allocation, Planning, and Scheduling","authors":"Andrew Messing, Glen Neville, S. Chernova, S. Hutchinson, H. Ravichandar","doi":"10.1177/02783649211052066","DOIUrl":"https://doi.org/10.1177/02783649211052066","url":null,"abstract":"Effective deployment of multi-robot teams requires solving several interdependent problems at varying levels of abstraction. Specifically, heterogeneous multi-robot systems must answer four important questions: what (task planning), how (motion planning), who (task allocation), and when (scheduling). Although there are rich bodies of work dedicated to various combinations of these questions, a fully integrated treatment of all four questions lies beyond the scope of the current literature, which lacks even a formal description of the complete problem. In this article, we address this absence, first by formalizing this class of multi-robot problems under the banner Simultaneous Task Allocation and Planning with Spatiotemporal Constraints (STAP-STC), and then by proposing a solution that we call Graphically Recursive Simultaneous Task Allocation, Planning, and Scheduling (GRSTAPS). GRSTAPS interleaves task planning, task allocation, scheduling, and motion planning, performing a multi-layer search while effectively sharing information among system modules. In addition to providing a unified solution to STAP-STC problems, GRSTAPS includes individual innovations both in task planning and task allocation. At the task planning level, our interleaved approach allows the planner to abstract away which agents will perform a task using an approach that we refer to as agent-agnostic planning. At the task allocation level, we contribute a search-based algorithm that can simultaneously satisfy planning constraints and task requirements while optimizing the associated schedule. We demonstrate the efficacy of GRSTAPS using detailed ablative and comparative experiments in a simulated emergency-response domain. Results of these experiments conclusively demonstrate that GRSTAPS outperforms both ablative baselines and state-of-the-art temporal planners in terms of computation time, solution quality, and problem coverage.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":9.2,"publicationDate":"2021-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47403327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
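GRSTAPS answers what/how/who/when by interleaving four sub-problems in one multi-layer search. The sketch below is a deliberately naive, hypothetical rendering of that layering with all four sub-solvers stubbed out as callables; the actual system shares heuristic information among modules rather than nesting exhaustive loops.

```python
from typing import Callable, Iterable, Optional

def layered_search(task_plans: Iterable,
                   allocations_for: Callable,
                   schedule_for: Callable,
                   motions_for: Callable) -> Optional[dict]:
    """Layered search: task plan -> allocation -> schedule -> motion plans.
    Returns the first fully consistent solution, or None if none exists."""
    for plan in task_plans:                        # what: task planning
        for alloc in allocations_for(plan):        # who: task allocation
            schedule = schedule_for(plan, alloc)   # when: scheduling
            if schedule is None:
                continue
            motions = motions_for(plan, alloc, schedule)  # how: motion planning
            if motions is not None:
                return {"plan": plan, "allocation": alloc,
                        "schedule": schedule, "motions": motions}
    return None
```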
NTU VIRAL: A visual-inertial-ranging-lidar dataset, from an aerial vehicle viewpoint
IF 9.2 · CAS Tier 1 · Computer Science
International Journal of Robotics Research Pub Date : 2021-11-06 DOI: 10.1177/02783649211052312
Thien-Minh Nguyen, Shenghai Yuan, Muqing Cao, Yang Lyu, T. Nguyen, Lihua Xie
{"title":"NTU VIRAL: A visual-inertial-ranging-lidar dataset, from an aerial vehicle viewpoint","authors":"Thien-Minh Nguyen, Shenghai Yuan, Muqing Cao, Yang Lyu, T. Nguyen, Lihua Xie","doi":"10.1177/02783649211052312","DOIUrl":"https://doi.org/10.1177/02783649211052312","url":null,"abstract":"In recent years, autonomous robots have become ubiquitous in research and daily life. Among many factors, public datasets play an important role in the progress of this field, as they waive the tall order of initial investment in hardware and manpower. However, for research on autonomous aerial systems, there appears to be a relative lack of public datasets on par with those used for autonomous driving and ground robots. Thus, to fill in this gap, we conduct a data collection exercise on an aerial platform equipped with an extensive and unique set of sensors: two 3D lidars, two hardware-synchronized global-shutter cameras, multiple Inertial Measurement Units (IMUs), and especially, multiple Ultra-wideband (UWB) ranging units. The comprehensive sensor suite resembles that of an autonomous driving car, but features distinct and challenging characteristics of aerial operations. We record multiple datasets in several challenging indoor and outdoor conditions. Calibration results and ground truth from a high-accuracy laser tracker are also included in each package. All resources can be accessed via our webpage https://ntu-aris.github.io/ntu_viral_dataset/.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":9.2,"publicationDate":"2021-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47219948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 51
A large-scale dataset for indoor visual localization with high-precision ground truth
IF 9.2 · CAS Tier 1 · Computer Science
International Journal of Robotics Research Pub Date : 2021-10-26 DOI: 10.1177/02783649211052064
Yuchen Liu, Wei Gao, Zhanyi Hu
{"title":"A large-scale dataset for indoor visual localization with high-precision ground truth","authors":"Yuchen Liu, Wei Gao, Zhanyi Hu","doi":"10.1177/02783649211052064","DOIUrl":"https://doi.org/10.1177/02783649211052064","url":null,"abstract":"This article presents a challenging new dataset for indoor localization research. We have recorded the whole internal structure of Fengtai Wanda Plaza which is an area of over 15,800 m2 with a Navvis M6 device. The dataset contains 679 RGB-D panoramas and 2,664 query images collected by three different smartphones. In addition to the data, an aligned 3D point cloud is produced after the elimination of moving objects based on the building floorplan. Furthermore, a method is provided to generate corresponding high-resolution depth images for each panorama. By fixing the smartphones on the device using a specially designed bracket, six-degree-of-freedom camera poses can be calculated precisely. We believe it can give a new benchmark for indoor visual localization and the full dataset can be downloaded from http://vision.ia.ac.cn/Faculty/wgao/data_code/data_indoor_localizaiton/data_indoor_localization.htm","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":9.2,"publicationDate":"2021-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44600365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Systematic object-invariant in-hand manipulation via reconfigurable underactuation: Introducing the RUTH gripper
IF 9.2 · CAS Tier 1 · Computer Science
International Journal of Robotics Research Pub Date : 2021-10-22 DOI: 10.1177/02783649211048929
Qiujie Lu, Nicholas Baron, A. B. Clark, Nicolás Rojas
{"title":"Systematic object-invariant in-hand manipulation via reconfigurable underactuation: Introducing the RUTH gripper","authors":"Qiujie Lu, Nicholas Baron, A. B. Clark, Nicolás Rojas","doi":"10.1177/02783649211048929","DOIUrl":"https://doi.org/10.1177/02783649211048929","url":null,"abstract":"We introduce a reconfigurable underactuated robot hand able to perform systematic prehensile in-hand manipulations regardless of object size or shape. The hand utilizes a two-degree-of-freedom five-bar linkage as the palm of the gripper, with three three-phalanx underactuated fingers, jointly controlled by a single actuator, connected to the mobile revolute joints of the palm. Three actuators are used in the robot hand system in total, one for controlling the force exerted on objects by the fingers through an underactuated tendon system, and two for changing the configuration of the palm and, thus, the positioning of the fingers. This novel layout allows decoupling grasping and manipulation, facilitating the planning and execution of in-hand manipulation operations. The reconfigurable palm provides the hand with a large grasping versatility, and allows easy computation of a map between task space and joint space for manipulation based on distance-based linkage kinematics. The motion of objects of different sizes and shapes from one pose to another is then straightforward and systematic, provided the objects are kept grasped. This is guaranteed independently and passively by the underactuated fingers using a custom tendon routing method, which allows no tendon length variation when the relative finger base positions change with palm reconfigurations. We analyze the theoretical grasping workspace and grasping and manipulation capability of the hand, present algorithms for computing the manipulation map and in-hand manipulation planning, and evaluate all these experimentally. Numerical and empirical results of several manipulation trajectories with objects of different size and shape clearly demonstrate the viability of the proposed concept.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":9.2,"publicationDate":"2021-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42035977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
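The RUTH palm is a planar five-bar linkage whose two mobile revolute joints carry the fingers, so repositioning the finger bases reduces to five-bar kinematics. As a small geometric illustration (with made-up link lengths, not the RUTH dimensions), the sketch below computes the two finger-base joints and the coupler point from the two actuated angles via circle intersection.

```python
import numpy as np

def circle_intersection(c1, r1, c2, r2):
    """One intersection point of two circles (the 'upper' solution), or None."""
    d = np.linalg.norm(c2 - c1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return None
    a = (r1**2 - r2**2 + d**2) / (2 * d)
    h = np.sqrt(max(r1**2 - a**2, 0.0))
    mid = c1 + a * (c2 - c1) / d
    perp = np.array([-(c2 - c1)[1], (c2 - c1)[0]]) / d
    return mid + h * perp

def five_bar_forward(theta1, theta2, base_dist=0.06, l_prox=0.05, l_dist=0.07):
    """Positions of the two mobile joints (where the fingers attach) and the
    coupler point of a planar five-bar palm, given the two actuated angles."""
    a0 = np.array([0.0, 0.0])
    b0 = np.array([base_dist, 0.0])
    a1 = a0 + l_prox * np.array([np.cos(theta1), np.sin(theta1)])
    b1 = b0 + l_prox * np.array([np.cos(theta2), np.sin(theta2)])
    coupler = circle_intersection(a1, l_dist, b1, l_dist)
    return a1, b1, coupler

a1, b1, p = five_bar_forward(np.deg2rad(100), np.deg2rad(80))
print("finger-base joints:", a1, b1, "coupler point:", p)
```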