2020 IEEE International Conference on Robotics and Automation (ICRA): Latest Publications

PARC: A Plan and Activity Recognition Component for Assistive Robots
2020 IEEE International Conference on Robotics and Automation (ICRA) Pub Date: 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196856
Jean Massardi, Mathieu Gravel, É. Beaudry
Abstract: Mobile robot assistants have many applications, such as helping people in their activities of daily living. These robots must detect and recognize the actions and goals of the humans they assist. While several widespread plan and activity recognition solutions exist for controlled environments with many built-in sensors, such as smart homes, such systems are lacking for mobile robots operating in open settings, such as an apartment. We propose a module for real-time recognition of complex activities and goals of daily living by mobile robots. Our approach recognizes human-object interaction using an RGB-D camera to infer low-level actions, which are sent to a goal recognition algorithm. Results show that our approach runs in real time and requires little computational power, which facilitates its deployment on a mobile, low-cost robotics platform.
Citations: 6
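The pipeline described in the abstract (low-level actions feeding a goal recognizer) can be illustrated with a minimal plan-library sketch. This is not the PARC algorithm; the plan library, action names, and scoring rule below are invented for illustration: each candidate goal has an ordered action sequence, observed actions are scored by how far they advance each plan, and scores are normalized into a posterior over goals.

```python
# Minimal goal-recognition sketch (illustrative only, not the PARC algorithm):
# observed low-level actions are matched, in order, against each candidate
# goal's plan, and the per-goal progress scores are normalized.

def plan_progress(plan, observations):
    """Fraction of `plan` completed by `observations`, matched in order
    as a (possibly non-contiguous) subsequence."""
    i = 0
    for obs in observations:
        if i < len(plan) and obs == plan[i]:
            i += 1
    return i / len(plan)

def rank_goals(plan_library, observations):
    scores = {g: plan_progress(p, observations) for g, p in plan_library.items()}
    total = sum(scores.values()) or 1.0
    return {g: s / total for g, s in scores.items()}

# Hypothetical plan library and observation stream:
plans = {
    "make_tea":   ["grasp_kettle", "fill_kettle", "boil", "pour_cup"],
    "make_toast": ["grasp_bread", "load_toaster", "press_lever"],
}
observed = ["grasp_kettle", "fill_kettle"]
posterior = rank_goals(plans, observed)
```

After two observed actions the "make_tea" plan is half complete while "make_toast" has made no progress, so the posterior concentrates on the tea goal.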
Simultaneous Estimations of Joint Angle and Torque in Interactions with Environments using EMG
2020 IEEE International Conference on Robotics and Automation (ICRA) Pub Date: 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197441
Dongwon Kim, Kyung Koh, Giovanni Oppizzi, Raziyeh Baghi, Li-Chuan Lo, Chunyang Zhang, Li-Qun Zhang
Abstract: We develop a decoding technique that estimates, in real time, both the position and torque of a limb joint interacting with an environment, based on electromyography of the agonist-antagonist muscle pair. A long short-term memory (LSTM) network, capable of learning long time series with varying time lags, serves as the core processor of the proposed technique. Validation on the wrist joint shows agreement of greater than 95% in kinetic (torque) estimation and greater than 85% in kinematic (angle) estimation between the actual and estimated variables during interactions with an environment. We also demonstrate that the proposed decoding method inherits the strengths of the LSTM network in learning EMG signals and their corresponding time-dependent responses.
Citations: 6
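The decoding structure the abstract describes — an EMG time series passed through an LSTM with two read-out heads, one per estimated quantity — can be sketched as below. This is a bare NumPy LSTM forward pass with random, untrained weights; the channel count, hidden size, and read-out design are assumptions for illustration, not the paper's trained model.

```python
import numpy as np

# Minimal single-layer LSTM forward pass with two linear read-out heads
# (angle and torque). Weights are random: this shows only the
# sequence-to-sequence decoding structure, not a trained EMG decoder.

def lstm_decode(x, params):
    """x: (T, n_in) EMG features; returns (T,) angle and (T,) torque estimates."""
    Wx, Wh, b, Wa, Wt = params
    n_h = Wh.shape[0]
    h, c = np.zeros(n_h), np.zeros(n_h)
    angles, torques = [], []
    for t in range(x.shape[0]):
        z = x[t] @ Wx + h @ Wh + b              # stacked gate pre-activations (4*n_h,)
        i, f, o, g = np.split(z, 4)
        i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
        c = f * c + i * np.tanh(g)              # cell-state update
        h = o * np.tanh(c)                      # hidden state
        angles.append(h @ Wa)                   # angle read-out head
        torques.append(h @ Wt)                  # torque read-out head
    return np.array(angles), np.array(torques)

rng = np.random.default_rng(0)
n_in, n_h = 2, 8                                 # e.g. agonist/antagonist EMG channels
params = (rng.normal(size=(n_in, 4*n_h)), rng.normal(size=(n_h, 4*n_h)),
          np.zeros(4*n_h), rng.normal(size=n_h), rng.normal(size=n_h))
emg = rng.normal(size=(100, n_in))               # 100 time steps of EMG features
angle, torque = lstm_decode(emg, params)
```

The gating (forget/input/output) is what lets an LSTM retain information across the varying time lags between muscle activation and joint response that the abstract highlights.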
Magnetic miniature swimmers with multiple rigid flagella
2020 IEEE International Conference on Robotics and Automation (ICRA) Pub Date: 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196531
Johan E. Quispe, S. Régnier
Abstract: In this paper, we introduce novel miniature swimmers with multiple rigid tails based on spherical helices. The tail distribution of these prototypes enhances their swimming performance and allows them to carry objects. The proposed swimmers are actuated by a rotating magnetic field, which rotates the robot and thereby produces sufficient thrust for self-propulsion. The 6-mm prototypes achieved propulsion speeds of up to 6 mm/s at 3.5 Hz. We study the efficiency of different tail distributions for a two-tailed swimmer by varying the angular position between the tails, and show that the swimmers are highly sensitive to changes in tail height. The swimmers also prove effective for cargo-carrying tasks, displacing objects up to 3.5 times their own weight. Finally, the wall effect is studied with multi-tailed swimmers in two containers, 20 and 50 mm wide; speeds increased by up to 59% when the swimmers were actuated in the smaller container.
Citations: 4
Eciton robotica: Design and Algorithms for an Adaptive Self-Assembling Soft Robot Collective
2020 IEEE International Conference on Robotics and Automation (ICRA) Pub Date: 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196565
Melinda J. D. Malley, Bahar Haghighat, Lucie Houel, R. Nagpal
Abstract: Social insects successfully create bridges, rafts, nests, and other structures out of their own bodies, with no centralized control system, simply by following local rules. For example, while traversing rough terrain, army ants (genus Eciton) build bridges that grow and dissolve in response to local traffic. Because these self-assembled structures incorporate smart, flexible materials (i.e., ant bodies) and emerge from local behavior, the bridges are adaptive and dynamic. With the goal of realizing robotic collectives with similar features, we designed a hardware system, Eciton robotica, consisting of flexible robots that can climb over each other to assemble compliant structures and communicate locally using vibration. In simulation, we demonstrate self-assembly of structures: using only local rules and information, robots build and dissolve bridges in response to local traffic and varying terrain. Unlike previous self-assembling robotic systems that focused on lattice-based structures and predetermined shapes, our system takes a new approach in which soft robots attach to create amorphous structures whose final self-assembled shape can adapt to the needs of the group.
Citations: 16
3D Orientation Estimation and Vanishing Point Extraction from Single Panoramas Using Convolutional Neural Network
2020 IEEE International Conference on Robotics and Automation (ICRA) Pub Date: 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196966
Yongjie Shi, Xin Tong, Jingsi Wen, He Zhao, Xianghua Ying, H. Zha
Abstract: 3D orientation estimation is a key component of many important computer vision tasks such as autonomous navigation and 3D scene understanding. This paper presents a new CNN architecture that estimates the 3D orientation of an omnidirectional camera with respect to the world coordinate system from a single spherical panorama. To train the proposed architecture, we leverage VOP60K, a dataset of Google Street View panoramas labeled with 3D orientation: 50 thousand panoramas for training and 10 thousand for testing. Previous approaches usually estimate 3D orientation for pinhole cameras and, because of the much larger field of view, are not suitable for panoramas. We propose an edge extractor layer to exploit the low-level geometric information of the panorama, and an attention module to fuse the features generated by previous layers. A regression loss on two column vectors of the rotation matrix and a classification loss on the positions of the vanishing points are used to optimize our network jointly. The proposed algorithm is validated on our benchmark, and experimental results clearly demonstrate that it outperforms previous methods.
Citations: 0
Agile 3D-Navigation of a Helical Magnetic Swimmer
2020 IEEE International Conference on Robotics and Automation (ICRA) Pub Date: 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197323
J. Leclerc, Haoran Zhao, Daniel Bao, Aaron T. Becker, M. Ghosn, D. Shah
Abstract: Rotating miniature magnetic swimmers are devices that could navigate within the bloodstream to access remote locations of the body and perform minimally invasive procedures. The rotational movement could be used, for example, to abrade a pulmonary embolus. Some regions, such as the heart, are challenging to navigate: cardiac and respiratory motion combined with fast, variable blood flow necessitates a highly agile swimmer, which should minimize contact with blood vessel walls and cardiac structures to mitigate the risk of complications. This paper presents experimental tests of a millimeter-scale magnetic helical swimmer navigating in a blood-mimicking solution and describes its turning capabilities. The step-out frequency and position error were measured for different turn radii. The paper also introduces rapid movements that increase the swimmer's agility and demonstrates them experimentally on a complex 3D trajectory.
Citations: 2
View-Invariant Loop Closure with Oriented Semantic Landmarks
2020 IEEE International Conference on Robotics and Automation (ICRA) Pub Date: 2020-05-01 DOI: 10.1109/ICRA40945.2020.9196886
J. Li, Karim Koreitem, D. Meger, Gregory Dudek
Abstract: Recent work on semantic simultaneous localization and mapping (SLAM) has shown the utility of natural objects as landmarks for improving localization accuracy and robustness. In this paper we present a monocular semantic SLAM system that uses object identity and inter-object geometry for view-invariant loop detection and drift correction. Our system's ability to recognize an area of the scene even under large changes in viewing direction allows it to surpass the mapping accuracy of ORB-SLAM, which uses only local appearance-based features that are not robust to large viewpoint changes. Experiments on real indoor scenes show that our method achieves a mean drift reduction of 70% compared directly with ORB-SLAM. Additionally, we propose a method for object orientation estimation that leverages the tracked pose of a moving camera in the SLAM setting to overcome ambiguities caused by object symmetry. This allows our SLAM system to produce geometrically detailed semantic maps with object orientation, translation, and scale.
Citations: 14
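Why inter-object geometry gives view invariance can be shown with a toy sketch: pairwise distances between labeled object centroids are unchanged by any rigid camera motion, so two observations of the same place match even under a large viewpoint change. This is an illustration of the underlying idea only, not the paper's loop-detection method; the object labels and tolerance are invented.

```python
import numpy as np

# Illustrative sketch (not the paper's method): a scene signature built from
# labeled inter-object distances is invariant to camera rotation/translation.

def scene_signature(objects):
    """objects: {label: 3D centroid}; returns [(label_i, label_j, distance), ...]."""
    labels = sorted(objects)
    sig = []
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            d = np.linalg.norm(np.asarray(objects[labels[i]], dtype=float)
                               - np.asarray(objects[labels[j]], dtype=float))
            sig.append((labels[i], labels[j], d))
    return sig

def is_loop_closure(sig_a, sig_b, tol=0.05):
    """Same object pairs with (near-)identical distances => same place."""
    if [s[:2] for s in sig_a] != [s[:2] for s in sig_b]:
        return False
    return all(abs(a[2] - b[2]) < tol for a, b in zip(sig_a, sig_b))

scene = {"chair": [0, 0, 0], "table": [1.0, 0, 0], "lamp": [0, 2.0, 0]}
# The same scene observed after a 90-degree camera rotation about z:
rotated = {k: [-v[1], v[0], v[2]] for k, v in scene.items()}
# A different room with the same object classes but different layout:
other = {"chair": [0, 0, 0], "table": [3.0, 0, 0], "lamp": [0, 2.0, 0]}
```

Local appearance features (as in ORB-SLAM) fail under such a rotation, whereas the distance signature is identical by construction.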
Hand Pose Estimation for Hand-Object Interaction Cases using Augmented Autoencoder
2020 IEEE International Conference on Robotics and Automation (ICRA) Pub Date: 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197299
Shile Li, Haojie Wang, Dongheui Lee
Abstract: Hand pose estimation in the presence of objects is challenging due to object occlusion and the lack of large annotated datasets. To tackle these issues, we propose an Augmented Autoencoder-based deep learning method that uses augmented clean hand data. Our method takes the 3D point cloud of a hand with an augmented object as input and encodes it into a latent representation of the hand, from which the 3D hand pose is decoded; an auxiliary point cloud decoder assists the formation of the latent space. Through quantitative and qualitative evaluation on both a synthetic dataset and real captured data containing objects, we demonstrate state-of-the-art performance for hand pose estimation with objects, even when using only a small number of annotated hand-object samples.
Citations: 6
Radar Sensors in Collaborative Robotics: Fast Simulation and Experimental Validation
2020 IEEE International Conference on Robotics and Automation (ICRA) Pub Date: 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197180
Christian Stetco, Barnaba Ubezio, Stephan Mühlbacher-Karrer, H. Zangl
Abstract: With the availability of small system-in-package realizations, radar systems are becoming increasingly attractive for a variety of applications in robotics, particularly collaborative robotics. As the simulation of robot systems in realistic scenarios has become an important tool, not only for design and optimization but also for machine learning approaches, realistic sensor models are needed. For radar sensors, this means producing more realistic results than simple proximity-sensor models, e.g., in the presence of multiple objects and/or humans, objects with different relative velocities, and differentiation between background and foreground movement. Because of the short wavelength in the millimeter range, we propose to utilize methods known from computer graphics (e.g., the z-buffer and the Lambertian reflectance model) to quickly acquire depth images and reflection estimates. This information is used to estimate the received signal of a Frequency Modulated Continuous Wave (FMCW) radar by superposing the corresponding signal contributions. Owing to its moderate computational complexity, the approach can be used with various simulation environments such as V-REP or Gazebo. The validity and benefits of the approach are demonstrated by comparison with experimental data obtained with a radar sensor on a UR10 arm in different scenarios.
Citations: 10
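The FMCW signal model that such a simulator superposes contributions into can be shown for a single reflector. After mixing the received chirp with the transmitted one, a target at range R yields a beat tone at f_b = 2RB/(cT), where B is the sweep bandwidth and T the chirp duration; the range is recovered from the FFT peak. This is a textbook single-target sketch with arbitrary parameter values, not the paper's simulator.

```python
import numpy as np

# Single-target FMCW dechirped-signal sketch: a reflector at range R_true
# produces a beat tone whose frequency encodes the range. Parameters are
# arbitrary illustrative values.

c = 3e8                           # speed of light [m/s]
B, T = 4e9, 1e-3                  # sweep bandwidth [Hz], chirp duration [s]
fs = 1e6                          # ADC sample rate [Hz]
R_true = 5.0                      # target range [m]

t = np.arange(int(fs * T)) / fs
f_beat = 2 * R_true * B / (c * T)              # beat frequency for this range
beat = np.cos(2 * np.pi * f_beat * t)          # dechirped (mixed-down) signal

spec = np.abs(np.fft.rfft(beat))               # range spectrum
k = int(np.argmax(spec))                       # peak bin
f_est = k * fs / len(t)                        # estimated beat frequency
R_est = f_est * c * T / (2 * B)                # back to range [m]
```

A full scene simulation, as in the paper, sums one such contribution per visible depth-image pixel, weighted by the estimated reflectivity; the FFT then yields peaks for every object at its respective range.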
Evaluation of Perception Latencies in a Human-Robot Collaborative Environment
2020 IEEE International Conference on Robotics and Automation (ICRA) Pub Date: 2020-05-01 DOI: 10.1109/ICRA40945.2020.9197067
Atle Aalerud, G. Hovland
Abstract: The latency of vision-based sensor systems used in human-robot collaborative environments is an important safety parameter that has, in most cases, been neglected by researchers. The main reason for this neglect is the lack of an accurate, minimal-delay ground-truth sensor system against which the vision sensors can be benchmarked. In this paper, the latencies of 3D vision-based sensors are experimentally evaluated and analyzed using an accurate laser-tracker system that communicates on a dedicated EtherCAT channel with minimal delay. The experimental results demonstrate that the latency of the vision-based sensor system is many orders of magnitude higher than the latency of the control and actuation system.
Citations: 1