Latest Publications in the International Journal of Robotics Research

Sequential contact-based adaptive grasping for robotic hands
IF 9.2 | CAS Q1 | Computer Science
International Journal of Robotics Research Pub Date: 2022-04-01 DOI: 10.1177/02783649221081154
G. J. Pollayil, M. J. Pollayil, M. Catalano, A. Bicchi, G. Grioli
{"title":"Sequential contact-based adaptive grasping for robotic hands","authors":"G. J. Pollayil, M. J. Pollayil, M. Catalano, A. Bicchi, G. Grioli","doi":"10.1177/02783649221081154","DOIUrl":"https://doi.org/10.1177/02783649221081154","url":null,"abstract":"This paper proposes a novel type of grasping strategy that draws inspiration from the role of touch and the importance of wrist motions in human grasping. The proposed algorithm, which we call Sequential Contact-based Adaptive Grasping, can be used to reactively modify a given grasp plan according to contacts arising between the hand and the object. This technique, based on a systematic constraint categorization and an iterative task inversion procedure, is shown to lead to synchronized motions of the fingers and the wrist, as it can be observed in humans, and to increase grasp success rate by substantially mitigating the relevant problems of object slippage during hand closure and of uncertainties caused by the environment and by the perception system. After describing the grasping problem in its quasi-static aspects, the algorithm is derived and discussed with some simple simulations. The proposed method is general as it can be applied to different kinds of robotic hands. It refines a priori defined grasp plans and significantly reduces their accuracy requirements by relying only on a forward kinematic model and elementary contact information. The efficacy of our approach is confirmed by experimental results of tests performed on a collaborative robot manipulator equipped with a state-of-the-art underactuated soft hand.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":9.2,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46066998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Boreas: A multi-season autonomous driving dataset
IF 9.2 | CAS Q1 | Computer Science
International Journal of Robotics Research Pub Date: 2022-03-18 DOI: 10.1177/02783649231160195
Keenan Burnett, David J. Yoon, Yuchen Wu, A. Z. Li, Haowei Zhang, Shichen Lu, Jingxing Qian, Wei-Kang Tseng, A. Lambert, K. Leung, Angela P. Schoellig, T. Barfoot
{"title":"Boreas: A multi-season autonomous driving dataset","authors":"Keenan Burnett, David J. Yoon, Yuchen Wu, A. Z. Li, Haowei Zhang, Shichen Lu, Jingxing Qian, Wei-Kang Tseng, A. Lambert, K. Leung, Angela P. Schoellig, T. Barfoot","doi":"10.1177/02783649231160195","DOIUrl":"https://doi.org/10.1177/02783649231160195","url":null,"abstract":"The Boreas dataset was collected by driving a repeated route over the course of 1 year, resulting in stark seasonal variations and adverse weather conditions such as rain and falling snow. In total, the Boreas dataset includes over 350 km of driving data featuring a 128-channel Velodyne Alpha-Prime lidar, a 360° Navtech CIR304-H scanning radar, a 5MP FLIR Blackfly S camera, and centimetre-accurate post-processed ground truth poses. Our dataset will support live leaderboards for odometry, metric localization, and 3D object detection. The dataset and development kit are available at boreas.utias.utoronto.ca.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":9.2,"publicationDate":"2022-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48399630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 43
An efficient, modular controller for flapping flight composing model-based and model-free components
IF 9.2 | CAS Q1 | Computer Science
International Journal of Robotics Research Pub Date: 2022-03-15 DOI: 10.1177/02783649211063225
Avik De, Rebecca McGill, R. Wood
{"title":"An efficient, modular controller for flapping flight composing model-based and model-free components","authors":"Avik De, Rebecca McGill, R. Wood","doi":"10.1177/02783649211063225","DOIUrl":"https://doi.org/10.1177/02783649211063225","url":null,"abstract":"We present a controller that combines model-based methods with model-free data-driven methods hierarchically, utilizing the predictive power of template models with the strengths of model-free methods to account for model error, such as due to manufacturing variability in the RoboBee, a 100 mg flapping-wing micro aerial vehicle (FWMAV). Using a large suite of numerical trials, we show that the model-predictive high-level component of the proposed controller is more performant, easier to tune, and able to stabilize more dynamic tasks than a baseline reactive controller, while the data-driven inverse dynamics controller is able to better compensate for biases arising from manufacturing variability. At the same time, the formulated controller is very computationally efficient, with the MPC implemented at 5 KHz on a Simulink embedded target, via which we empirically demonstrate controlled hovering on a RoboBee.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":9.2,"publicationDate":"2022-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41639077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
BenchBot environments for active robotics (BEAR): Simulated data for active scene understanding research
IF 9.2 | CAS Q1 | Computer Science
International Journal of Robotics Research Pub Date: 2022-03-01 DOI: 10.1177/02783649211069404
David Hall, Ben Talbot, S. Bista, Haoyang Zhang, Rohan Smith, Feras Dayoub, Niko Sünderhauf
{"title":"BenchBot environments for active robotics (BEAR): Simulated data for active scene understanding research","authors":"David Hall, Ben Talbot, S. Bista, Haoyang Zhang, Rohan Smith, Feras Dayoub, Niko Sünderhauf","doi":"10.1177/02783649211069404","DOIUrl":"https://doi.org/10.1177/02783649211069404","url":null,"abstract":"We present a platform to foster research in active scene understanding, consisting of high-fidelity simulated environments and a simple yet powerful API that controls a mobile robot in simulation and reality. In contrast to static, pre-recorded datasets that focus on the perception aspect of scene understanding, agency is a top priority in our work. We provide three levels of robot agency, allowing users to control a robot at varying levels of difficulty and realism. While the most basic level provides pre-defined trajectories and ground-truth localisation, the more realistic levels allow us to evaluate integrated behaviours comprising perception, navigation, exploration and SLAM. In contrast to existing simulation environments, we focus on robust scene understanding research using our environment interface (BenchBot) that provides a simple API for seamless transition between the simulated environments and real robotic platforms. We believe this scaffolded design is an effective approach to bridge the gap between classical static datasets without any agency and the unique challenges of robotic evaluation in reality. Our BenchBot Environments for Active Robotics (BEAR) consist of 25 indoor environments under day and night lighting conditions, a total of 1443 objects to be identified and mapped, and ground-truth 3D bounding boxes for use in evaluation. BEAR website: https://qcr.github.io/dataset/benchbot-bear-data/.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":9.2,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47712061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
AURORA, a multi-sensor dataset for robotic ocean exploration
IF 9.2 | CAS Q1 | Computer Science
International Journal of Robotics Research Pub Date: 2022-02-07 DOI: 10.1177/02783649221078612
Marco Bernardi, Brett Hosking, C. Petrioli, B. Bett, Daniel Jones, V. Huvenne, Rachel Marlow, M. Furlong, S. McPhail, A. Munafò
{"title":"AURORA, a multi-sensor dataset for robotic ocean exploration","authors":"Marco Bernardi, Brett Hosking, C. Petrioli, B. Bett, Daniel Jones, V. Huvenne, Rachel Marlow, M. Furlong, S. McPhail, A. Munafò","doi":"10.1177/02783649221078612","DOIUrl":"https://doi.org/10.1177/02783649221078612","url":null,"abstract":"The current maturity of autonomous underwater vehicles (AUVs) has made their deployment practical and cost-effective, such that many scientific, industrial and military applications now include AUV operations. However, the logistical difficulties and high costs of operating at sea are still critical limiting factors in further technology development, the benchmarking of new techniques and the reproducibility of research results. To overcome this problem, this paper presents a freely available dataset suitable to test control, navigation, sensor processing algorithms and others tasks. This dataset combines AUV navigation data, sidescan sonar, multibeam echosounder data and seafloor camera image data, and associated sensor acquisition metadata to provide a detailed characterisation of surveys carried out by the National Oceanography Centre (NOC) in the Greater Haig Fras Marine Conservation Zone (MCZ) of the U.K in 2015.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":9.2,"publicationDate":"2022-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"65097872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Inducing structure in reward learning by learning features
IF 9.2 | CAS Q1 | Computer Science
International Journal of Robotics Research Pub Date: 2022-01-18 DOI: 10.1177/02783649221078031
Andreea Bobu, Marius Wiggert, C. Tomlin, A. Dragan
{"title":"Inducing structure in reward learning by learning features","authors":"Andreea Bobu, Marius Wiggert, C. Tomlin, A. Dragan","doi":"10.1177/02783649221078031","DOIUrl":"https://doi.org/10.1177/02783649221078031","url":null,"abstract":"Reward learning enables robots to learn adaptable behaviors from human input. Traditional methods model the reward as a linear function of hand-crafted features, but that requires specifying all the relevant features a priori, which is impossible for real-world tasks. To get around this issue, recent deep Inverse Reinforcement Learning (IRL) methods learn rewards directly from the raw state but this is challenging because the robot has to implicitly learn the features that are important and how to combine them, simultaneously. Instead, we propose a divide-and-conquer approach: focus human input specifically on learning the features separately, and only then learn how to combine them into a reward. We introduce a novel type of human input for teaching features and an algorithm that utilizes it to learn complex features from the raw state space. The robot can then learn how to combine them into a reward using demonstrations, corrections, or other reward learning frameworks. We demonstrate our method in settings where all features have to be learned from scratch, as well as where some of the features are known. By first focusing human input specifically on the feature(s), our method decreases sample complexity and improves generalization of the learned reward over a deep IRL baseline. We show this in experiments with a physical 7-DoF robot manipulator, and in a user study conducted in a simulated environment.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":9.2,"publicationDate":"2022-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45495193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
Self-supervised learning for using overhead imagery as maps in outdoor range sensor localization
IF 9.2 | CAS Q1 | Computer Science
International Journal of Robotics Research Pub Date: 2021-12-01 Epub Date: 2021-09-28 DOI: 10.1177/02783649211045736
Tim Y Tang, Daniele De Martini, Shangzhe Wu, Paul Newman
{"title":"Self-supervised learning for using overhead imagery as maps in outdoor range sensor localization.","authors":"Tim Y Tang, Daniele De Martini, Shangzhe Wu, Paul Newman","doi":"10.1177/02783649211045736","DOIUrl":"10.1177/02783649211045736","url":null,"abstract":"<p><p>Traditional approaches to outdoor vehicle localization assume a reliable, prior map is available, typically built using the same sensor suite as the on-board sensors used during localization. This work makes a different assumption. It assumes that an overhead image of the workspace is available and utilizes that as a map for use for range-based sensor localization by a vehicle. Here, range-based sensors are radars and lidars. Our motivation is simple, off-the-shelf, publicly available overhead imagery such as Google satellite images can be a ubiquitous, cheap, and powerful tool for vehicle localization when a usable prior sensor map is unavailable, inconvenient, or expensive. The challenge to be addressed is that overhead images are clearly not directly comparable to data from ground range sensors because of their starkly different modalities. We present a learned metric localization method that not only handles the modality difference, but is also cheap to train, learning in a self-supervised fashion without requiring metrically accurate ground truth. By evaluating across multiple real-world datasets, we demonstrate the robustness and versatility of our method for various sensor configurations in cross-modality localization, achieving localization errors on-par with a prior supervised approach while requiring no pixel-wise aligned ground truth for supervision at training. We pay particular attention to the use of millimeter-wave radar, which, owing to its complex interaction with the scene and its immunity to weather and lighting conditions, makes for a compelling and valuable use case.</p>","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":9.2,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8721700/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39904384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Learning to solve sequential physical reasoning problems from a scene image
IF 9.2 | CAS Q1 | Computer Science
International Journal of Robotics Research Pub Date: 2021-12-01 DOI: 10.1177/02783649211056967
Danny Driess, Jung-Su Ha, Marc Toussaint
{"title":"Learning to solve sequential physical reasoning problems from a scene image","authors":"Danny Driess, Jung-Su Ha, Marc Toussaint","doi":"10.1177/02783649211056967","DOIUrl":"https://doi.org/10.1177/02783649211056967","url":null,"abstract":"In this article, we propose deep visual reasoning, which is a convolutional recurrent neural network that predicts discrete action sequences from an initial scene image for sequential manipulation problems that arise, for example, in task and motion planning (TAMP). Typical TAMP problems are formalized by combining reasoning on a symbolic, discrete level (e.g., first-order logic) with continuous motion planning such as nonlinear trajectory optimization. The action sequences represent the discrete decisions on a symbolic level, which, in turn, parameterize a nonlinear trajectory optimization problem. Owing to the great combinatorial complexity of possible discrete action sequences, a large number of optimization/motion planning problems have to be solved to find a solution, which limits the scalability of these approaches. To circumvent this combinatorial complexity, we introduce deep visual reasoning: based on a segmented initial image of the scene, a neural network directly predicts promising discrete action sequences such that ideally only one motion planning problem has to be solved to find a solution to the overall TAMP problem. Our method generalizes to scenes with many and varying numbers of objects, although being trained on only two objects at a time. This is possible by encoding the objects of the scene and the goal in (segmented) images as input to the neural network, instead of a fixed feature vector. We show that the framework can not only handle kinematic problems such as pick-and-place (as typical in TAMP), but also tool-use scenarios for planar pushing under quasi-static dynamic models. Here, the image-based representation enables generalization to other shapes than during training. Results show runtime improvements of several orders of magnitudes by, in many cases, removing the need to search over the discrete action sequences.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":9.2,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49360154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
Robotics: Science and Systems (RSS) 2020
IF 9.2 | CAS Q1 | Computer Science
International Journal of Robotics Research Pub Date: 2021-12-01 DOI: 10.1177/02783649211052346
T. Nanayakkara, T. Barfoot, T. Howard
{"title":"Robotics: Science and Systems (RSS) 2020","authors":"T. Nanayakkara, T. Barfoot, T. Howard","doi":"10.1177/02783649211052346","DOIUrl":"https://doi.org/10.1177/02783649211052346","url":null,"abstract":"","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":9.2,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41568869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Enabling impedance-based physical human–multi–robot collaboration: Experiments with four torque-controlled manipulators
IF 9.2 | CAS Q1 | Computer Science
International Journal of Robotics Research Pub Date: 2021-11-24 DOI: 10.1177/02783649211053650
Niels Dehio, Joshua Smith, D. L. Wigand, Pouya Mohammadi, M. Mistry, Jochen J. Steil
{"title":"Enabling impedance-based physical human–multi–robot collaboration: Experiments with four torque-controlled manipulators","authors":"Niels Dehio, Joshua Smith, D. L. Wigand, Pouya Mohammadi, M. Mistry, Jochen J. Steil","doi":"10.1177/02783649211053650","DOIUrl":"https://doi.org/10.1177/02783649211053650","url":null,"abstract":"Robotics research into multi-robot systems so far has concentrated on implementing intelligent swarm behavior and contact-less human interaction. Studies of haptic or physical human-robot interaction, by contrast, have primarily focused on the assistance offered by a single robot. Consequently, our understanding of the physical interaction and the implicit communication through contact forces between a human and a team of multiple collaborative robots is limited. We here introduce the term Physical Human Multi-Robot Collaboration (PHMRC) to describe this more complex situation, which we consider highly relevant in future service robotics. The scenario discussed in this article covers multiple manipulators in close proximity and coupled through physical contacts. We represent this set of robots as fingers of an up-scaled agile robot hand. This perspective enables us to employ model-based grasping theory to deal with multi-contact situations. Our torque-control approach integrates dexterous multi-manipulator grasping skills, optimization of contact forces, compensation of object dynamics, and advanced impedance regulation into a coherent compliant control scheme. For this to achieve, we contribute fundamental theoretical improvements. Finally, experiments with up to four collaborative KUKA LWR IV+ manipulators performed both in simulation and real world validate the model-based control approach. As a side effect, we notice that our multi-manipulator control framework applies identically to multi-legged systems, and we execute it also on the quadruped ANYmal subject to non-coplanar contacts and human interaction.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":9.2,"publicationDate":"2021-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49246810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10