2019 International Conference on Robotics and Automation (ICRA): Latest Publications

Design and Analysis of A Miniature Two-Wheg Climbing Robot with Robust Internal and External Transitioning Capabilities
2019 International Conference on Robotics and Automation (ICRA) Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8793910
Darren C. Y. Koh, A. G. Dharmawan, H. Hariri, G. Soh, S. Foong, Roland Bouffanais, H. Low, K. Wood
{"title":"Design and Analysis of A Miniature Two-Wheg Climbing Robot with Robust Internal and External Transitioning Capabilities","authors":"Darren C. Y. Koh, A. G. Dharmawan, H. Hariri, G. Soh, S. Foong, Roland Bouffanais, H. Low, K. Wood","doi":"10.1109/ICRA.2019.8793910","DOIUrl":"https://doi.org/10.1109/ICRA.2019.8793910","url":null,"abstract":"Plane-to-plane transitioning has been a significant challenge for climbing robots. To accomplish this, additional actuator or robot module is usually required which significantly increases both size and weight of the robot. This paper presents a two-wheg miniature climbing robot with a novel passive vertical tail component which results in robust transitioning capabilities. The design decision was derived from an indepth force analysis of the climbing robot while performing the transition. The theoretical analysis is verified through a working prototype with robust transitioning capabilities whose performance follows closely the analytical prediction. The climbing robot is able to climb any slope angles, 4-way internal transitions, and 4-way external transitions. This work contributes to the understanding and advancement of the transitioning capabilities and the design of a simple climbing robot, which expands the possibilities of scaling down miniature climbing robot further.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"29 1","pages":"9740-9746"},"PeriodicalIF":0.0,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89777004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 8
Accounting for Part Pose Estimation Uncertainties during Trajectory Generation for Part Pick-Up Using Mobile Manipulators
2019 International Conference on Robotics and Automation (ICRA) Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8793501
Shantanu Thakar, P. Rajendran, Vivek Annem, A. Kabir, Satyandra K. Gupta
{"title":"Accounting for Part Pose Estimation Uncertainties during Trajectory Generation for Part Pick-Up Using Mobile Manipulators","authors":"Shantanu Thakar, P. Rajendran, Vivek Annem, A. Kabir, Satyandra K. Gupta","doi":"10.1109/ICRA.2019.8793501","DOIUrl":"https://doi.org/10.1109/ICRA.2019.8793501","url":null,"abstract":"To minimize the operation time, mobile manipulators need to pick-up parts while the mobile base and the gripper are moving. The gripper speed needs to be selected to ensure that the pick-up operation does not fail due to uncertainties in part pose estimation. This, in turn, affects the mobile base trajectory. This paper presents an active learning based approach to construct a meta-model to estimate the probability of successful part pick-up for a given level of uncertainty in the part pose estimate. Using this model, we present an optimization-based framework to generate time-optimal trajectories that satisfy the given level of success probability threshold for picking-up the part.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"74 1","pages":"1329-1336"},"PeriodicalIF":0.0,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86104699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 20
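The abstract above centers on estimating the probability that a pick-up succeeds under part-pose uncertainty. The following sketch is a minimal stand-in, not the authors' meta-model: it estimates that probability by Monte Carlo sampling of a Gaussian pose error with a made-up success predicate in which a higher gripper speed tightens the effective grasp tolerance; the function name and parameters are hypothetical.

```python
# Hedged sketch (not the authors' code): Monte Carlo estimate of the probability
# that a pick-up succeeds given Gaussian uncertainty on the part pose, the kind
# of quantity a learned meta-model could be trained to predict quickly.
import numpy as np

def pickup_success_probability(nominal_pose, pose_cov, gripper_speed,
                               tolerance=0.01, n_samples=2000, seed=0):
    """Estimate P(success) for a planar part pose (x, y, theta) under uncertainty."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(nominal_pose, pose_cov, size=n_samples)
    # Illustrative assumption: faster gripper motion shrinks the effective
    # positional tolerance of the grasp.
    effective_tol = tolerance / (1.0 + gripper_speed)
    position_error = np.linalg.norm(samples[:, :2] - nominal_pose[:2], axis=1)
    return float(np.mean(position_error < effective_tol))

if __name__ == "__main__":
    nominal = np.array([0.5, 0.2, 0.0])
    cov = np.diag([1e-4, 1e-4, 1e-3])
    for v in (0.1, 0.5, 1.0):
        print(v, pickup_success_probability(nominal, cov, gripper_speed=v))
```

In a trajectory optimizer of the kind described in the abstract, such a model would act as a constraint: the gripper speed at the pick-up instant must keep the predicted success probability above the required threshold.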
Robotic Forceps without Position Sensors using Visual SLAM
2019 International Conference on Robotics and Automation (ICRA) Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8794321
Takuya Iwai, T. Kanno, Tetsuro Miyazaki, Toshihiro Kawase, K. Kawashima
{"title":"Robotic Forceps without Position Sensors using Visual SLAM","authors":"Takuya Iwai, T. Kanno, Tetsuro Miyazaki, Toshihiro Kawase, K. Kawashima","doi":"10.1109/ICRA.2019.8794321","DOIUrl":"https://doi.org/10.1109/ICRA.2019.8794321","url":null,"abstract":"In this study, a robotic forceps with a wrist joint using visual SLAM for joint angle sensing was developed. The forceps has a flexible joint connected to the wrist joint at its rear end and the motion of the rear joint is driven by a parallel linkage. A monocular camera attached on the rear of the parallel linkage is in charge of position sensing, and the joint angles are estimated from the pose of the camera. The pose of the camera is obtained by a visual SLAM. The visual servo system realizes a simple attaching mechanism. The static and dynamic positioning experiments are conducted. We confirmed that the visual servoing system controls the forceps tip within the error of 3 deg in the motion range of 50 deg.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"40 1","pages":"6331-6336"},"PeriodicalIF":0.0,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83641020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
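Since the abstract relies on recovering joint angles from the camera pose reported by visual SLAM, here is a minimal sketch of one way such a mapping could look, under the illustrative assumption that the camera rotates with the joint about a single known, fixed axis; this is not the paper's kinematic model of the parallel linkage.

```python
# Hedged sketch (assumption, not the paper's implementation): recover a single
# joint angle from the camera orientation reported by a visual-SLAM system,
# assuming the camera rotates with the joint about a known axis.
import numpy as np

def joint_angle_from_rotation(R_cam, axis=np.array([0.0, 0.0, 1.0])):
    """Rotation angle of R_cam projected onto a known joint axis (radians).
    Assumes the rotation angle is strictly less than pi."""
    angle = np.arccos(np.clip((np.trace(R_cam) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(angle, 0.0):
        return 0.0
    # Rotation vector from the skew-symmetric part of R_cam.
    rot_vec = (1.0 / (2.0 * np.sin(angle))) * np.array([
        R_cam[2, 1] - R_cam[1, 2],
        R_cam[0, 2] - R_cam[2, 0],
        R_cam[1, 0] - R_cam[0, 1],
    ])
    return float(angle * np.dot(rot_vec, axis / np.linalg.norm(axis)))
```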
Integrated UWB-Vision Approach for Autonomous Docking of UAVs in GPS-denied Environments
2019 International Conference on Robotics and Automation (ICRA) Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8793851
Thien-Minh Nguyen, T. Nguyen, Muqing Cao, Zhirong Qiu, Lihua Xie
{"title":"Integrated UWB-Vision Approach for Autonomous Docking of UAVs in GPS-denied Environments","authors":"Thien-Minh Nguyen, T. Nguyen, Muqing Cao, Zhirong Qiu, Lihua Xie","doi":"10.1109/ICRA.2019.8793851","DOIUrl":"https://doi.org/10.1109/ICRA.2019.8793851","url":null,"abstract":"Though vision-based techniques have become quite popular for autonomous docking of Unmanned Aerial Vehicles (UAVs), due to limited field of view (FOV), the UAV must rely on other methods to detect and approach the target before vision can be used. In this paper we propose a method combining Ultra-wideband (UWB) ranging sensor with vision-based techniques to achieve both autonomous approaching and landing capabilities in GPS-denied environments. In the approaching phase, a robust and efficient recursive least-square optimization algorithm is proposed to estimate the position of the UAV relative to the target by using the distance and relative displacement measurements. Using this estimate, UAV is able to approach the target until the landing pad is detected by an onboard vision system, then UWB measurements and vision-derived poses are fused with onboard sensor of UAV to facilitate an accurate landing maneuver. Real-world experiments are conducted to demonstrate the efficiency of our method.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"18 1","pages":"9603-9609"},"PeriodicalIF":0.0,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79363424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 41
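The approaching phase described above estimates the relative position of the target from UWB distances and displacement measurements. The sketch below shows the batch linear least-squares version of that idea, obtained by differencing squared-range equations; a recursive implementation would update the same solution one measurement at a time. It is an assumption about the general technique, not the authors' exact algorithm.

```python
# Hedged sketch: estimate the position of a static docking target from UWB
# ranges d_k and the UAV's own positions x_k (from displacement/odometry),
# using ||p - x_k||^2 = d_k^2 differenced against the first measurement,
# which yields equations linear in the target position p.
import numpy as np

def target_position_from_ranges(positions, ranges):
    """positions: (N, 3) UAV positions; ranges: (N,) UWB distances to target."""
    x0, d0 = positions[0], ranges[0]
    A = -2.0 * (positions[1:] - x0)                      # (N-1) x 3
    b = (ranges[1:] ** 2 - d0 ** 2
         - np.sum(positions[1:] ** 2, axis=1) + np.dot(x0, x0))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_target = np.array([3.0, -2.0, 1.0])
    xs = rng.uniform(-5, 5, size=(30, 3))
    ds = np.linalg.norm(xs - true_target, axis=1) + rng.normal(0, 0.05, 30)
    print(target_position_from_ranges(xs, ds))
```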
Control and Configuration Planning of an Aerial Cable Towed System
2019 International Conference on Robotics and Automation (ICRA) Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8794396
Julian Erskine, A. Chriette, S. Caro
{"title":"Control and Configuration Planning of an Aerial Cable Towed System","authors":"Julian Erskine, A. Chriette, S. Caro","doi":"10.1109/ICRA.2019.8794396","DOIUrl":"https://doi.org/10.1109/ICRA.2019.8794396","url":null,"abstract":"This paper investigates the effect of the robot configuration on the performance of an aerial cable towed system (ACTS) composed of three quadrotors manipulating a point mass payload. The kinematic and dynamic models of the ACTS are derived in a minimal set of geometric coordinates, and a centralized feedback linearization controller is developed. Independent to the payload trajectory, the configuration of the ACTS is controlled and is evaluated using a robustness index named the capacity margin. Experiments are performed with optimal, suboptimal, and wrench infeasible configurations. It is shown that configurations near the point of zero capacity margin allow the ACTS to hover but not to follow dynamic trajectories, and that the ACTS cannot fly with a negative capacity margin. Dynamic tests of the ACTS show the effects of the configuration on the achievable accelerations.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"22 1","pages":"6440-6446"},"PeriodicalIF":0.0,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75930806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 7
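The capacity margin referenced above measures how far the required payload wrench lies from the boundary of the wrench set achievable with admissible cable tensions. The sketch below is a deliberately simplified proxy, not the paper's capacity-margin computation: for a three-cable point-mass payload it solves for the tensions that realize a required force and reports the smallest slack to the tension limits, where negative slack indicates a wrench-infeasible configuration.

```python
# Hedged sketch (simplified stand-in, not the capacity-margin index itself):
# tension slack of a three-quadrotor, point-mass aerial cable towed system.
import numpy as np

def tension_slack(cable_unit_vectors, required_force, t_min=1.0, t_max=20.0):
    """cable_unit_vectors: (3, 3), columns are unit vectors from payload to UAVs."""
    W = np.asarray(cable_unit_vectors)             # wrench matrix for a point mass
    tensions = np.linalg.solve(W, required_force)  # tensions realizing the force
    slack = np.minimum(tensions - t_min, t_max - tensions)
    return tensions, float(np.min(slack))          # negative slack => infeasible

if __name__ == "__main__":
    f_req = np.array([0.0, 0.0, 9.81 * 1.5])       # hold a 1.5 kg payload
    u = np.column_stack([
        [ 0.5,   0.0,   0.866],
        [-0.25,  0.433, 0.866],
        [-0.25, -0.433, 0.866],
    ])
    print(tension_slack(u, f_req))
```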
ModQuad-Vi: A Vision-Based Self-Assembling Modular Quadrotor
2019 International Conference on Robotics and Automation (ICRA) Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8794056
Guanrui Li, Bruno Gabrich, David Saldaña, J. Das, Vijay R. Kumar, Mark H. Yim
{"title":"ModQuad-Vi: A Vision-Based Self-Assembling Modular Quadrotor","authors":"Guanrui Li, Bruno Gabrich, David Saldaña, J. Das, Vijay R. Kumar, Mark H. Yim","doi":"10.1109/ICRA.2019.8794056","DOIUrl":"https://doi.org/10.1109/ICRA.2019.8794056","url":null,"abstract":"Flying modular robots have the potential to rapidly form temporary structures. In the literature, docking actions rely on external systems and indoor infrastructures for relative pose estimation. In contrast to related work, we provide local estimation during the self-assembly process to avoid dependency on external systems. In this paper, we introduce ModQuad-Vi, a flying modular robot that is aimed to operate in outdoor environments. We propose a new robot design and vision-based docking method. Our design is based on a quadrotor platform with onboard computation and visual perception. Our control method is able to accurately align modules for docking actions. Additionally, we present the dynamics and a geometric controller for the aerial modular system. Experiments validate the vision-based docking method with successful results.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"41 1","pages":"346-352"},"PeriodicalIF":0.0,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80168620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 26
Intent-Uncertainty-Aware Grasp Planning for Robust Robot Assistance in Telemanipulation
2019 International Conference on Robotics and Automation (ICRA) Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8793819
Michael Bowman, Songpo Li, Xiaoli Zhang
{"title":"Intent-Uncertainty-Aware Grasp Planning for Robust Robot Assistance in Telemanipulation","authors":"Michael Bowman, Songpo Li, Xiaoli Zhang","doi":"10.1109/ICRA.2019.8793819","DOIUrl":"https://doi.org/10.1109/ICRA.2019.8793819","url":null,"abstract":"Promoting a robot agent’s autonomy level, which allows it to understand the human operator’s intent and provide motion assistance to achieve it, has demonstrated great advantages to the operator’s intent in teleoperation. However, the research has been limited to the target approaching process. We advance the shared control technique one step further to deal with the more challenging object manipulation task. Appropriately manipulating an object is challenging as it requires fine motion constraints for a certain manipulation task. Although these motion constraints are critical for task success, they are subtle to observe from ambiguous human motion. The disembodiment problem and physical discrepancy between the human and robot hands bring additional uncertainty, make the object manipulation task more challenging. Moreover, there is a lack of modeling and planning techniques that can effectively combine the human motion input and robot agent’s motion input while accounting for the ambiguity of the human intent. To overcome this challenge, we built a multi-task robot grasping model and developed an intent-uncertainty-aware grasp planner to generate robust grasp poses given the ambiguous human intent inference inputs. With this validated modeling and planning techniques, it is expected to extend teleoperated robots’ functionality and adoption in practical telemanipulation scenarios.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"30 1","pages":"409-415"},"PeriodicalIF":0.0,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73990089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 12
Visual Guidance and Automatic Control for Robotic Personalized Stent Graft Manufacturing
2019 International Conference on Robotics and Automation (ICRA) Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8794123
Yu Guo, Miao Sun, F. P. Lo, Benny P. L. Lo
{"title":"Visual Guidance and Automatic Control for Robotic Personalized Stent Graft Manufacturing","authors":"Yu Guo, Miao Sun, F. P. Lo, Benny P. L. Lo","doi":"10.1109/ICRA.2019.8794123","DOIUrl":"https://doi.org/10.1109/ICRA.2019.8794123","url":null,"abstract":"Personalized stent graft is designed to treat Abdominal Aortic Aneurysms (AAA). Due to the individual difference in arterial structures, stent graft has to be custom made for each AAA patient. Robotic platforms for autonomous personalized stent graft manufacturing have been proposed in recently which rely upon stereo vision systems for coordinating multiple robots for fabricating customized stent grafts. This paper proposes a novel hybrid vision system for real-time visual-sevoing for personalized stent-graft manufacturing. To coordinate the robotic arms, this system is based on projecting a dynamic stereo microscope coordinate system onto a static wide angle view stereo webcam coordinate system. The multiple stereo camera configuration enables accurate localization of the needle in 3D during the sewing process. The scale-invariant feature transform (SIFT) method and color filtering are implemented for stereo matching and feature identifications for object localization. To maintain the clear view of the sewing process, a visual-servoing system is developed for guiding the stereo microscopes for tracking the needle movements. The deep deterministic policy gradient (DDPG) reinforcement learning algorithm is developed for real-time intelligent robotic control. Experimental results have shown that the robotic arm can learn to reach the desired targets autonomously.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"11 1","pages":"8740-8746"},"PeriodicalIF":0.0,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91294875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
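The hybrid vision system above uses SIFT features for stereo matching to localize the needle in 3D. A minimal OpenCV sketch of that step is given below, assuming the stereo projection matrices come from a prior calibration; the function name and the ratio-test threshold are illustrative choices, not the paper's implementation (cv2.SIFT_create requires OpenCV >= 4.4).

```python
# Hedged sketch: SIFT matching between a stereo pair followed by triangulation
# to localize matched features (e.g. points on a needle) in 3D.
import cv2
import numpy as np

def triangulate_sift_matches(img_left, img_right, P_left, P_right, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_left, None)
    kp2, des2 = sift.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pts1, pts2 = [], []
    for m, n in matcher.knnMatch(des1, des2, k=2):
        if m.distance < ratio * n.distance:          # Lowe's ratio test
            pts1.append(kp1[m.queryIdx].pt)
            pts2.append(kp2[m.trainIdx].pt)
    pts1 = np.float32(pts1).T                        # 2 x N
    pts2 = np.float32(pts2).T
    X_h = cv2.triangulatePoints(P_left, P_right, pts1, pts2)
    return (X_h[:3] / X_h[3]).T                      # N x 3 points in 3D
```

In the paper's setting, color filtering would be applied beforehand to restrict matching to the needle region; here that step is omitted for brevity.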
Improving the Performance of Auxiliary Null Space Tasks via Time Scaling-Based Relaxation of the Primary Task
2019 International Conference on Robotics and Automation (ICRA) Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8794225
Nico Mansfeld, Youssef Michel, T. Bruckmann, S. Haddadin
{"title":"Improving the Performance of Auxiliary Null Space Tasks via Time Scaling-Based Relaxation of the Primary Task","authors":"Nico Mansfeld, Youssef Michel, T. Bruckmann, S. Haddadin","doi":"10.1109/ICRA.2019.8794225","DOIUrl":"https://doi.org/10.1109/ICRA.2019.8794225","url":null,"abstract":"Kinematic redundancy enhances the dexterity and flexibility of robot manipulators. By exploiting the redundant degrees of freedom, auxiliary null space tasks can be carried out in addition to the primary task. Such auxiliary tasks are often formulated in terms of a performance or safety criterion that shall be minimized. If the optimization criterion, however, is defined in global terms, then it is directly affected by the primary task. As a consequence, the task achievement of the auxiliary task may be unnecessarily detrimented by the main task. In addition to modifying the primary task via constraint relaxation, a possible solution for improving the performance of the auxiliary task is to relax the primary task temporarily via time scaling. This gives the null space task more time for achieving its objective. In this paper, we propose several such time scaling schemes and verify their performance for a DLR/KUKA Lightweight Robot with one redundant degree of freedom. Finally, we extend the concept to multiple prioritized tasks and provide a simulation example.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"1 1","pages":"9342-9348"},"PeriodicalIF":0.0,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88867347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
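The mechanism described above, relaxing the primary task via time scaling so that the null space task has more room, can be illustrated with the standard redundancy-resolution law in which the primary task velocity is multiplied by a scaling factor s_dot in [0, 1]. The sketch below is a generic textbook formulation, not the authors' specific time-scaling schemes.

```python
# Hedged sketch: null-space redundancy resolution with a time-scaled primary
# task. Slowing the primary task (smaller s_dot) leaves more joint-velocity
# budget for the auxiliary null-space motion.
import numpy as np

def redundant_joint_velocity(J, xdot_desired, qdot_null, s_dot=1.0):
    """J: task Jacobian (m x n, n > m); qdot_null: auxiliary joint velocity."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J              # null-space projector
    return J_pinv @ (s_dot * xdot_desired) + N @ qdot_null

if __name__ == "__main__":
    J = np.array([[1.0, 0.5, 0.2],                    # toy 2x3 Jacobian
                  [0.0, 1.0, 0.4]])
    xdot = np.array([0.3, 0.1])
    qdot_aux = np.array([0.0, 0.0, 0.5])              # e.g. drive elbow from a limit
    for s in (1.0, 0.5):
        print(s, redundant_joint_velocity(J, xdot, qdot_aux, s_dot=s))
```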
Surfel-Based Dense RGB-D Reconstruction With Global And Local Consistency
2019 International Conference on Robotics and Automation (ICRA) Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8794355
Yi Yang, W. Dong, M. Kaess
{"title":"Surfel-Based Dense RGB-D Reconstruction With Global And Local Consistency","authors":"Yi Yang, W. Dong, M. Kaess","doi":"10.1109/ICRA.2019.8794355","DOIUrl":"https://doi.org/10.1109/ICRA.2019.8794355","url":null,"abstract":"Achieving high surface reconstruction accuracy in dense mapping has been a desirable target for both robotics and vision communities. In the robotics literature, simultaneous localization and mapping (SLAM) systems use RGB-D cameras to reconstruct a dense map of the environment. They leverage the depth input to provide accurate local pose estimation and a locally consistent model. However, drift in the pose tracking over time leads to misalignments and artifacts. On the other hand, offline computer vision methods, such as the pipeline that combines structure-from-motion (SfM) and multi-view stereo (MVS), estimate the camera poses by performing batch optimization. These methods achieve global consistency, but suffer from heavy computation loads. We propose a novel approach that integrates both methods to achieve locally and globally consistent reconstruction. First, we estimate poses of keyframes in the offline SfM pipeline to provide strong global constraints at relatively low cost. Afterwards, we compute odometry between frames driven by off-the-shelf SLAM systems with high local accuracy. We fuse the two pose estimations using factor graph optimization to generate accurate camera poses for dense reconstruction. Experiments on real-world and synthetic datasets demonstrate that our approach produces more accurate models comparing to existing dense SLAM systems, while achieving significant speedup with respect to state-of-the-art SfM-MVS pipelines.","PeriodicalId":6730,"journal":{"name":"2019 International Conference on Robotics and Automation (ICRA)","volume":"9 5","pages":"5238-5244"},"PeriodicalIF":0.0,"publicationDate":"2019-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91480866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
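The fusion step above combines globally consistent SfM keyframe poses with locally accurate SLAM odometry in a factor graph. The toy sketch below illustrates that idea in one dimension, assuming absolute keyframe priors and relative odometry factors fused by weighted linear least squares, which is what a factor-graph optimizer solves in the linear(ized) case; it illustrates the concept only and is not the paper's pipeline.

```python
# Hedged sketch: fuse absolute keyframe estimates (standing in for SfM poses)
# with relative odometry between consecutive frames (standing in for SLAM
# odometry) by weighted linear least squares over 1-D positions.
import numpy as np

def fuse_poses(n_frames, keyframe_priors, odometry, sigma_prior=0.05, sigma_odom=0.01):
    """keyframe_priors: dict {frame_index: absolute position};
       odometry: list where odometry[i] = x[i+1] - x[i]."""
    rows, rhs, weights = [], [], []
    for idx, value in keyframe_priors.items():        # absolute (prior) factors
        r = np.zeros(n_frames); r[idx] = 1.0
        rows.append(r); rhs.append(value); weights.append(1.0 / sigma_prior)
    for i, delta in enumerate(odometry):              # relative (between) factors
        r = np.zeros(n_frames); r[i + 1] = 1.0; r[i] = -1.0
        rows.append(r); rhs.append(delta); weights.append(1.0 / sigma_odom)
    A = np.array(rows) * np.array(weights)[:, None]
    b = np.array(rhs) * np.array(weights)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

if __name__ == "__main__":
    odom = [1.0, 1.02, 0.98, 1.01]                    # slightly drifting odometry
    priors = {0: 0.0, 4: 4.0}                         # two globally consistent keyframes
    print(fuse_poses(5, priors, odom))
```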