2022 International Conference on Robotics and Automation (ICRA): Latest Publications

Deep Reinforcement Learning for Next-Best-View Planning in Agricultural Applications
2022 International Conference on Robotics and Automation (ICRA) Pub Date: 2022-05-23 DOI: 10.1109/icra46639.2022.9811800
Xiangyu Zeng, Tobias Zaenker, Maren Bennewitz
{"title":"Deep Reinforcement Learning for Next-Best-View Planning in Agricultural Applications","authors":"Xiangyu Zeng, Tobias Zaenker, Maren Bennewitz","doi":"10.1109/icra46639.2022.9811800","DOIUrl":"https://doi.org/10.1109/icra46639.2022.9811800","url":null,"abstract":"Automated agricultural applications, i.e., fruit picking require spatial information about crops and, especially, their fruits. In this paper, we present a novel deep reinforcement learning (DRL) approach to determine the next best view for automatic exploration of 3D environments with a robotic arm equipped with an RGB-D camera. We process the obtained images into an octree with labeled regions of interest (ROIs), i.e., fruits. We use this octree to generate 3D observation maps that serve as encoded input to the DRL network. We hereby do not only rely on known information about the environment, but explicitly also represent information about the unknown space to force exploration. Our network takes as input the encoded 3D observation map and the temporal sequence of camera view pose changes, and outputs the most promising camera movement direction. Our experimental results show an improved ROI targeted exploration performance resulting from our learned network in comparison to a state-of-the-art method.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130504529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
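The abstract does not detail how the observation map is encoded or how the network is structured. The following Python sketch is only a rough illustration of the core idea, i.e., feeding a voxel map with explicit unknown-space labels, together with a pose-change history, into a policy network that scores camera movement directions; all label values, tensor sizes, and layers are assumptions, not the authors' design.

    # Illustrative sketch only; labels, sizes, and architecture are assumed.
    import numpy as np
    import torch
    import torch.nn as nn

    UNKNOWN, FREE, OCCUPIED, ROI = 0, 1, 2, 3      # unknown space is represented explicitly

    class NBVPolicy(nn.Module):
        def __init__(self, n_directions=6):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv3d(4, 16, 3, stride=2), nn.ReLU(),
                nn.Conv3d(16, 32, 3, stride=2), nn.ReLU(),
            )
            self.head = nn.LazyLinear(n_directions)  # one score per movement direction

        def forward(self, voxels, pose_changes):
            feat = self.conv(voxels).flatten(1)
            feat = torch.cat([feat, pose_changes.flatten(1)], dim=1)
            return self.head(feat)

    grid = np.random.randint(0, 4, (32, 32, 32))             # stand-in observation map
    voxels = torch.from_numpy(np.eye(4, dtype=np.float32)[grid]).permute(3, 0, 1, 2)[None]
    pose_changes = torch.zeros(1, 10, 6)                     # last 10 camera pose deltas
    best_direction = NBVPolicy()(voxels, pose_changes).argmax(dim=1)
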
RangeBird: Multi View Panoptic Segmentation of 3D Point Clouds with Neighborhood Attention
2022 International Conference on Robotics and Automation (ICRA) Pub Date: 2022-05-23 DOI: 10.1109/icra46639.2022.9811998
Fabian Duerr, H. Weigel, J. Beyerer
{"title":"RangeBird: Multi View Panoptic Segmentation of 3D Point Clouds with Neighborhood Attention","authors":"Fabian Duerr, H. Weigel, J. Beyerer","doi":"10.1109/icra46639.2022.9811998","DOIUrl":"https://doi.org/10.1109/icra46639.2022.9811998","url":null,"abstract":"Panoptic segmentation of point clouds is one of the key challenges of 3D scene understanding, requiring the simultaneous prediction of semantics and object instances. Tasks like autonomous driving strongly depend on these information to get a holistic understanding of their 3D environment. This work presents a novel proposal free framework for lidar-based panoptic segmentation, which exploits three different point cloud representations, leveraging their strengths and compensating their weaknesses. The efficient projection-based range view and bird's eye view are combined and further extended by a point-based network with a novel attention-based neighborhood aggregation for improved semantic features. Cluster-based object recognition in bird's eye view enables an efficient and high-quality instance segmentation. Semantic and instance segmentation are fused and further refined by a novel instance classification for the final panoptic segmentation. The results on two challenging large-scale datasets, nuScenes and SemanticKITTI, show the success of the proposed framework, which outperforms all existing approaches on nuScenes and achieves state-of-the-art results on SemanticKITTI.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129217141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
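As a hedged illustration of the two projection-based representations the framework combines, the sketch below rasterizes a point cloud into a spherical range image and a bird's eye view height map. The field-of-view bounds and grid resolutions are assumptions chosen for illustration, not values from the paper.

    # Illustrative projections only; FOV bounds and resolutions are assumed.
    import numpy as np

    def range_view(points, H=64, W=1024, fov_up=np.deg2rad(3.0), fov_down=np.deg2rad(-25.0)):
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        r = np.linalg.norm(points, axis=1)
        yaw, pitch = np.arctan2(y, x), np.arcsin(z / r)
        u = ((1 - (yaw / np.pi + 1) / 2) * W).astype(int) % W
        v = ((fov_up - pitch) / (fov_up - fov_down) * H).clip(0, H - 1).astype(int)
        img = np.zeros((H, W), dtype=np.float32)
        img[v, u] = r                                # last-written range per pixel
        return img

    def birds_eye_view(points, extent=50.0, cells=256):
        ij = ((points[:, :2] + extent) / (2 * extent) * cells).astype(int)
        keep = ((ij >= 0) & (ij < cells)).all(axis=1)
        bev = np.zeros((cells, cells), dtype=np.float32)
        bev[ij[keep, 0], ij[keep, 1]] = points[keep, 2]   # simple height map
        return bev

    pts = np.random.randn(1000, 3) * 10
    rv, bev = range_view(pts), birds_eye_view(pts)
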
Kinematic Structure Estimation of Arbitrary Articulated Rigid Objects for Event Cameras
2022 International Conference on Robotics and Automation (ICRA) Pub Date: 2022-05-23 DOI: 10.1109/icra46639.2022.9812430
Urbano Miguel Nunes, Y. Demiris
{"title":"Kinematic Structure Estimation of Arbitrary Articulated Rigid Objects for Event Cameras","authors":"Urbano Miguel Nunes, Y. Demiris","doi":"10.1109/icra46639.2022.9812430","DOIUrl":"https://doi.org/10.1109/icra46639.2022.9812430","url":null,"abstract":"We propose a novel method that estimates the Kinematic Structure (KS) of arbitrary articulated rigid objects from event-based data. Event cameras are emerging sensors that asynchronously report brightness changes with a time resolution of microseconds, making them suitable candidates for motion-related perception. By assuming that an articulated rigid object is composed of body parts whose shape can be approximately described by a Gaussian distribution, we jointly segment the different parts by combining an adapted Bayesian inference approach and incremental event-based motion estimation. The respective KS is then generated based on the segmented parts and their respective biharmonic distance, which is estimated by building an affinity matrix of points sampled from the estimated Gaussian distributions. The method outperforms frame-based methods in sequences obtained by simulating events from video sequences and achieves a solid performance on new high-speed motions sequences, which frame-based KS estimation methods can not handle.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130674387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
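The sketch below conveys two of the abstract's steps in simplified form: soft-assigning event locations to assumed Gaussian body parts (a Bayesian responsibility computation) and building an affinity matrix from points sampled from those Gaussians. Note that the paper derives biharmonic distances from such an affinity matrix; here a plain Gaussian kernel stands in, so this is a loose proxy rather than the authors' method.

    # Loose proxy: Gaussian-kernel affinity instead of biharmonic distance.
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)
    events = rng.normal(size=(500, 2))                      # stand-in event coordinates
    parts = [dict(mean=np.array([0., 0.]), cov=np.eye(2)),
             dict(mean=np.array([3., 0.]), cov=np.eye(2))]  # assumed part Gaussians

    # Responsibilities of each part for each event (Bayesian soft assignment)
    dens = np.stack([multivariate_normal(p["mean"], p["cov"]).pdf(events) for p in parts])
    resp = dens / dens.sum(axis=0, keepdims=True)
    labels = resp.argmax(axis=0)

    # Affinity between points sampled from the estimated part distributions
    samples = np.vstack([rng.multivariate_normal(p["mean"], p["cov"], 50) for p in parts])
    d2 = ((samples[:, None] - samples[None]) ** 2).sum(-1)
    affinity = np.exp(-d2 / (2 * 1.0 ** 2))                 # kernel width chosen arbitrarily
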
A Continuum Robot Surface of Woven, McKibben Muscles Embedded in and Giving Shape to Rooms
2022 International Conference on Robotics and Automation (ICRA) Pub Date: 2022-05-23 DOI: 10.1109/icra46639.2022.9811987
G. Tan, Harrison Hidalgo, H. Kao, I. Walker, K. Green
{"title":"A Continuum Robot Surface of Woven, McKibben Muscles Embedded in and Giving Shape to Rooms","authors":"G. Tan, Harrison Hidalgo, H. Kao, I. Walker, K. Green","doi":"10.1109/icra46639.2022.9811987","DOIUrl":"https://doi.org/10.1109/icra46639.2022.9811987","url":null,"abstract":"Robots are typically designed as occupants of rooms, adapting to, and navigating within them. “Robot surfaces,” an emerging robot typology, are not occupants of but integral with rooms, physically shaping rooms to support human activity. We report on an advancement of robot surfaces formed by weaving McKibben Pneumatic Air Muscles that, when actuated, morph a 2D planar surface to generate 3D geometries including a “spherical cap.” Following our foundational study at different scales with different materials, we developed a full-scale prototype that offers an intimate and private space for people meeting in open plan environments. We report on our research, focusing on a design case, and validate the full-scale prototype as compared to our Non-Uniform Rational B-Splines (NURBS) model for three useful configurations. Our quantitative and qualitative results suggest that our robot surface can support human activity as envisioned. This research contributes foundational understanding of an emerging category of robotics from which our team and peers can build.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"31 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114085593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Looking for Trouble: Informative Planning for Safe Trajectories with Occlusions
2022 International Conference on Robotics and Automation (ICRA) Pub Date: 2022-05-23 DOI: 10.1109/icra46639.2022.9811994
Barry Gilhuly, Armin Sadeghi, P. Yadmellat, K. Rezaee, Stephen L. Smith
{"title":"Looking for Trouble: Informative Planning for Safe Trajectories with Occlusions","authors":"Barry Gilhuly, Armin Sadeghi, P. Yadmellat, K. Rezaee, Stephen L. Smith","doi":"10.1109/icra46639.2022.9811994","DOIUrl":"https://doi.org/10.1109/icra46639.2022.9811994","url":null,"abstract":"Planning a safe trajectory for an ego vehicle through an environment with occluded regions is a challenging task. Existing methods use some combination of metrics to evaluate a trajectory, either taking a worst case view or allowing for some probabilistic estimate, to eliminate or minimize the risk of collision respectively. Typically, these approaches assume occluded regions of the environment are unsafe and must be avoided, resulting in overly conservative trajectories-particularly when there are no hidden risks present. We propose a local trajectory planning algorithm which generates safe trajectories that maximize observations on un-certain regions. In particular, we seek to gain information on occluded areas that are most likely to pose a risk to the ego vehicle on its future path. Calculating the information gain is a computationally complex problem; our method approximates the maximum information gain and results in vehicle motion that remains safe but is less conservative than state-of-the-art approaches. We evaluate the performance of the proposed method within the CARLA simulator in different scenarios.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114615387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
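The abstract's key quantity is the (approximate) information gained about occluded regions along a candidate trajectory. The toy below scores candidates by counting unknown grid cells that would fall inside an assumed circular sensor footprint; this is a far cruder proxy than the paper's approximation, but it conveys the idea.

    # Toy proxy for information gain: newly observable unknown cells along a path.
    import numpy as np

    rng = np.random.default_rng(1)
    unknown = rng.random((100, 100)) < 0.3          # True where the map is unobserved

    def info_gain(path, sensing_radius=5.0):
        ys, xs = np.mgrid[0:100, 0:100]
        seen = np.zeros_like(unknown)
        for px, py in path:                         # mark cells in range of each pose
            seen |= (xs - px) ** 2 + (ys - py) ** 2 <= sensing_radius ** 2
        return int((seen & unknown).sum())

    candidates = {
        "straight": [(x, 50) for x in range(10, 90, 5)],
        "swerve":   [(x, 50 + 20 * np.sin(x / 10)) for x in range(10, 90, 5)],
    }
    best = max(candidates, key=lambda k: info_gain(candidates[k]))
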
Perception Engine Using a Multi-Sensor Head to Enable High-level Humanoid Robot Behaviors
2022 International Conference on Robotics and Automation (ICRA) Pub Date: 2022-05-23 DOI: 10.1109/icra46639.2022.9812178
Bhavyansh Mishra, Duncan Calvert, Brendon Ortolano, M. Asselmeier, Luke Fina, Stephen McCrory, H. Sevil, Robert J. Griffin
{"title":"Perception Engine Using a Multi-Sensor Head to Enable High-level Humanoid Robot Behaviors","authors":"Bhavyansh Mishra, Duncan Calvert, Brendon Ortolano, M. Asselmeier, Luke Fina, Stephen McCrory, H. Sevil, Robert J. Griffin","doi":"10.1109/icra46639.2022.9812178","DOIUrl":"https://doi.org/10.1109/icra46639.2022.9812178","url":null,"abstract":"For achieving significant levels of autonomy, legged robot behaviors require perceptual awareness of both the terrain for traversal, as well as structures and objects in their surroundings for planning, obstacle avoidance, and high-level decision making. In this work, we present a perception engine for legged robots that extracts the necessary information for developing semantic, contextual, and metric awareness of their surroundings. Our custom sensor configuration consists of (1) an active depth sensor, (2) two monocular cameras looking sideways, (3) a passive stereo sensor observing the terrain, (4) a forward facing active depth camera, and (5) a rotating 3D LIDAR with a large vertical field-of-view (FOV). The mutual overlap in the sensors' FOVs allows us to redundantly detect and track objects of both dynamic and static types. We fuse class masks generated by a semantic segmentation model with LIDAR and depth data to accurately identify and track individual instances of dynamically moving objects. In parallel, active depth and passive stereo streams of the terrain are also fused to map the terrain using the on-board GPU. We evaluate the engine using two different humanoid behaviors, (1) look-and-step and (2) track-and-follow, on the Boston Dynamics Atlas.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116251121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
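One fusion step the abstract describes, combining semantic class masks with depth data to obtain labeled 3D points, can be sketched as a pinhole back-projection. The intrinsics, image size, and class id below are placeholders, not the robot's actual calibration.

    # Minimal sketch: attach per-pixel semantic labels to back-projected depth points.
    import numpy as np

    fx = fy = 525.0; cx, cy = 319.5, 239.5               # assumed pinhole intrinsics
    depth = np.full((480, 640), 2.0, dtype=np.float32)   # stand-in depth image (meters)
    mask = np.zeros((480, 640), dtype=np.int32)
    mask[200:280, 300:340] = 7                           # stand-in class id from a seg model

    v, u = np.mgrid[0:480, 0:640]
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    labels = mask.reshape(-1)
    tracked = points[labels == 7]                        # e.g. points of one dynamic object
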
Tightly-coupled GNSS-aided Visual-Inertial Localization
2022 International Conference on Robotics and Automation (ICRA) Pub Date: 2022-05-23 DOI: 10.1109/icra46639.2022.9811362
W. Lee, Patrick Geneva, Yulin Yang, G. Huang
{"title":"Tightly-coupled GNSS-aided Visual-Inertial Localization","authors":"W. Lee, Patrick Geneva, Yulin Yang, G. Huang","doi":"10.1109/icra46639.2022.9811362","DOIUrl":"https://doi.org/10.1109/icra46639.2022.9811362","url":null,"abstract":"A navigation system which can output drift-free global trajectory estimation with local consistency holds great potential for autonomous vehicles and mobile devices. We propose a tightly-coupled GNSS-aided visual-inertial navigation system (GAINS) which is able to leverage the complementary sensing modality from a visual-inertial sensing pair, which provides high-frequency local information, and a Global Navigation Satellite System (GNSS) receiver with low-frequency global observations. Specifically, the raw GNSS measurements (including pseudorange, carrier phase changes, and Doppler frequency shift) are carefully leveraged and tightly fused within a visual-inertial framework. The proposed GAINS can accurately model the raw measurement uncertainties by canceling the atmospheric effects (e.g., ionospheric and tropospheric delays) which requires no prior model information. A robust state initialization procedure is presented to facilitate the fusion of global GNSS information with local visual-inertial odometry, and the spatiotemporal calibration between IMU-GNSS are also optimized in the estimator. The proposed GAINS is evaluated on extensive Monte-Carlo simulations on a trajectory generated from a large-scale urban driving dataset with specific verification for each component (i.e., online calibration and system initialization). GAINS also demonstrates competitive performance against existing state-of-the-art methods on a publicly available dataset with ground truth.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116322557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
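To make the notion of tightly fusing raw GNSS measurements concrete, here is a heavily simplified pseudorange residual as it might enter such an estimator. Atmospheric delay terms, which the paper cancels without a prior model, and the satellite clock are omitted; all numbers are placeholders.

    # Simplified pseudorange residual; atmospheric and satellite-clock terms omitted.
    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def pseudorange_residual(p_receiver, clock_bias_s, p_sat, measured_range_m):
        geometric = np.linalg.norm(p_sat - p_receiver)   # receiver-to-satellite distance
        predicted = geometric + C * clock_bias_s         # plus receiver clock error
        return measured_range_m - predicted

    p_rx = np.array([4.0e6, 3.0e6, 3.9e6])               # receiver position guess (ECEF, m)
    p_sat = np.array([1.5e7, 1.0e7, 2.0e7])              # broadcast satellite position
    r = pseudorange_residual(p_rx, 1e-4, p_sat, 2.38e7)
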
Underwater Dock Detection through Convolutional Neural Networks Trained with Artificial Image Generation
2022 International Conference on Robotics and Automation (ICRA) Pub Date: 2022-05-23 DOI: 10.1109/icra46639.2022.9812143
Jalil Chavez-Galaviz, N. Mahmoudian
{"title":"Underwater Dock Detection through Convolutional Neural Networks Trained with Artificial Image Generation","authors":"Jalil Chavez-Galaviz, N. Mahmoudian","doi":"10.1109/icra46639.2022.9812143","DOIUrl":"https://doi.org/10.1109/icra46639.2022.9812143","url":null,"abstract":"Autonomous Underwater Vehicles (AUVs) are a vital element for ocean exploration in various applications; however, energy sustainability still limits long-term operations. An option to overcome this problem is using underwater docking for power and data transfer. To robustly guide an AUV into a docking station, we propose an underwater vision algorithm for short-distance detection. In this paper, we present a Convolutional Neural Network architecture to accurately estimate the dock position during the terminal homing stage of the docking. Additionally, to alleviate the lack of available underwater datasets, two methods are proposed to generate synthetic datasets, one using a CycleGAN network, and another using Artistic Style transfer network. Both methods are used to train the same CNN architecture to compare the results. Finally, implementation details of the CNN are presented under the backseat architecture and ROS framework, running on an IVER3 AUV.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"17 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113964910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
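The paper's network architecture is not given in the abstract; the following is a hypothetical stand-in showing the general shape of a CNN that regresses a normalized dock position from a camera frame. It is not the authors' model, and the CycleGAN / style-transfer data generation is not reproduced here.

    # Hypothetical stand-in network; not the paper's architecture.
    import torch
    import torch.nn as nn

    class DockRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, 2), nn.Sigmoid(),          # normalized (u, v) in [0, 1]
            )

        def forward(self, img):
            return self.net(img)

    frame = torch.rand(1, 3, 224, 224)                   # stand-in underwater camera frame
    u, v = DockRegressor()(frame)[0]                     # estimated dock image position
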
A Novel Limbs-Free Variable Structure Wheelchair based on Face-Computer Interface (FCI) with Shared Control
2022 International Conference on Robotics and Automation (ICRA) Pub Date: 2022-05-23 DOI: 10.1109/icra46639.2022.9811571
Bo Zhu, Daohui Zhang, Yaqi Chu, Xingang Zhao
{"title":"A Novel Limbs-Free Variable Structure Wheelchair based on Face-Computer Interface (FCI) with Shared Control","authors":"Bo Zhu, Daohui Zhang, Yaqi Chu, Xingang Zhao","doi":"10.1109/icra46639.2022.9811571","DOIUrl":"https://doi.org/10.1109/icra46639.2022.9811571","url":null,"abstract":"In order to meet the mobility and physical activity needs of people with impaired limbs function, a novel limbs-free variable structure wheelchair system controled by face-computer interface (FCI) was developed in this study. FCI used facial electromyography (fEMG) as a human intention recognition method from 6 facial movements, and the accuracy of intent recognition reached 97.6% under a series of offline optimization including channel optimization based on the Hilbert transform to obtain the envelope of fEMG, features optimization, and channel-independent model optimization. A collection of finite state machines (FSM) was used to control the movement and structural changes of the wheelchair. A shared control strategy called “ Keep Action after Take Over (KAaTO) “ that can reduce user fatigue while increasing safety was used in long-distance movement control of wheelchair. To test the performance of the system, in the braking distance test experiment, the result of 0.429m under KAaTO was better than the EMG-based discrete command control and speech command control method. Finally, an outdoor long-distance control pilot experiment proved the superior performance of the developed system.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114796405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
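The envelope extraction mentioned in the abstract (Hilbert transform of fEMG) can be sketched directly with standard signal-processing tools. The sampling rate, band-pass range, and synthetic signal below are assumptions for illustration only.

    # Envelope of a band-passed signal via the analytic signal (Hilbert transform).
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1000.0                                          # assumed sampling rate, Hz
    t = np.arange(0, 2, 1 / fs)
    femg = np.sin(2 * np.pi * 80 * t) * (t > 1) + 0.05 * np.random.randn(t.size)

    b, a = butter(4, [20, 450], btype="band", fs=fs)     # assumed EMG band
    filtered = filtfilt(b, a, femg)
    envelope = np.abs(hilbert(filtered))                 # activity level per sample
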
An MPC Framework For Planning Safe & Trustworthy Robot Motions
2022 International Conference on Robotics and Automation (ICRA) Pub Date: 2022-05-23 DOI: 10.1109/icra46639.2022.9812160
Moritz Eckhoff, R. J. Kirschner, Elena Kern, Saeed Abdolshah, S. Haddadin
{"title":"An MPC Framework For Planning Safe & Trustworthy Robot Motions","authors":"Moritz Eckhoff, R. J. Kirschner, Elena Kern, Saeed Abdolshah, S. Haddadin","doi":"10.1109/icra46639.2022.9812160","DOIUrl":"https://doi.org/10.1109/icra46639.2022.9812160","url":null,"abstract":"Strategies for safe human-robot interaction (HRI), such as the well-established Safe Motion Unit, provide a velocity scaling for biomechanically safe robot motion. In addition, psychologically-based safety approaches are required for trustworthy HRI. Such schemes can be very conservative and robot motion complying with such safety approaches should be time efficient within the robot motion planning. In this study, we improve the efficiency of a previously introduced approach for psychologically-based safety in HRI via a Model Predictive Control robot motion planner that simultaneously adjusts Cartesian path and speed to minimise the distance to the target pose as fast as possible. A subordinate real-time motion generator ensures human physical safety by integrating the Safe Motion Unit. Our motion planner is validated by two experiments. The simultaneous adjustment of path and velocity accomplishes highly time efficient robot motion, while considering the human physical and psychological safety. Compared to direct path velocity scaling approaches our planner enables 28 % faster motion execution.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125579286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
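As a toy illustration of the interplay the abstract describes, the sketch below couples a one-step greedy planner (a crude stand-in for the paper's MPC) with a Safe-Motion-Unit-style rule that scales velocity with distance to the human. All gains, distances, and limits are invented for illustration.

    # One-step greedy stand-in for MPC, with distance-based velocity scaling.
    import numpy as np

    def safe_speed(dist_to_human, v_max=1.0, d_stop=0.3, d_free=1.5):
        # linear ramp: full speed beyond d_free, standstill inside d_stop
        return v_max * np.clip((dist_to_human - d_stop) / (d_free - d_stop), 0.0, 1.0)

    def plan_step(x, target, human, dt=0.1, n_candidates=16):
        v = safe_speed(np.linalg.norm(x - human))        # subordinate safety layer
        angles = np.linspace(0, 2 * np.pi, n_candidates, endpoint=False)
        best, best_cost = x, np.inf
        for a in angles:                                 # enumerate motion directions
            nxt = x + v * dt * np.array([np.cos(a), np.sin(a)])
            cost = np.linalg.norm(nxt - target)
            if cost < best_cost:
                best, best_cost = nxt, cost
        return best

    x = np.array([0.0, 0.0])
    for _ in range(50):
        x = plan_step(x, target=np.array([2.0, 1.0]), human=np.array([1.0, 0.5]))
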