2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR): Latest Publications

On 3D simulators for multi-robot systems in ROS: MORSE or Gazebo?
2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR) | Pub Date: 2017-10-01 | DOI: 10.1109/SSRR.2017.8088134
F. Noori, David Portugal, R. Rocha, M. Couceiro
{"title":"On 3D simulators for multi-robot systems in ROS: MORSE or Gazebo?","authors":"F. Noori, David Portugal, R. Rocha, M. Couceiro","doi":"10.1109/SSRR.2017.8088134","DOIUrl":"https://doi.org/10.1109/SSRR.2017.8088134","url":null,"abstract":"Realistically simulating a population of robots has been an important subject to the robotics community for the last couple of decades. Multi-robot systems are often challenging to deploy in the real world due to the complexity involved, and researchers often develop and validate coordination mechanisms and collaborative robotic behavior preliminarily in simulations. Thus, choosing a useful, flexible and realistic simulator becomes an important task. In this paper, we overview several 3D multi-robot simulators, focusing on those that support the Robot Operating System (ROS). We also provide a comparative analysis, discussing two popular open-source 3D simulators compatible with ROS - MORSE and Gazebo -, using a multi-robot patrolling application, i.e. a distributed security task, as a case study.","PeriodicalId":403881,"journal":{"name":"2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122479752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 38
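Since the case study above is a multi-robot patrolling task driven from ROS, a minimal sketch of how such a patrol might be scripted against either simulator is given below. It assumes a standard ROS navigation stack (move_base) running under each robot's namespace; the waypoint list and node name are hypothetical, and this is not the authors' patrolling framework.

```python
#!/usr/bin/env python
# Minimal patrol sketch: cycles one robot through fixed waypoints via move_base.
# Waypoints and node name are illustrative only, not taken from the paper.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

WAYPOINTS = [(2.0, 1.0), (5.0, 1.0), (5.0, 4.0), (2.0, 4.0)]  # hypothetical (x, y) route in the map frame

def patrol():
    rospy.init_node('simple_patroller')
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()
    while not rospy.is_shutdown():
        for x, y in WAYPOINTS:
            goal = MoveBaseGoal()
            goal.target_pose.header.frame_id = 'map'
            goal.target_pose.header.stamp = rospy.Time.now()
            goal.target_pose.pose.position.x = x
            goal.target_pose.pose.position.y = y
            goal.target_pose.pose.orientation.w = 1.0  # identity orientation
            client.send_goal(goal)
            client.wait_for_result()

if __name__ == '__main__':
    patrol()
```

Launched once per robot namespace, the relative action name 'move_base' resolves to that robot's own navigation stack, which is the usual way the same script is reused across a robot team in either simulator.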
Tempered point clouds and octomaps: A step towards true 3D temperature measurement in unknown environments
2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR) | Pub Date: 2017-10-01 | DOI: 10.1109/SSRR.2017.8088145
B. Zeise, Bernardo Wagner
{"title":"Tempered point clouds and octomaps: A step towards true 3D temperature measurement in unknown environments","authors":"B. Zeise, Bernardo Wagner","doi":"10.1109/SSRR.2017.8088145","DOIUrl":"https://doi.org/10.1109/SSRR.2017.8088145","url":null,"abstract":"Although the generation of 3D temperature maps has become a frequently used technique, not only in search and rescue applications but also during inspection tasks, the remote measurement of a surface's true temperature is still a huge challenge. In this work, we face the problem of creating corrected 3D temperature maps in unknown environments without prior knowledge of surface emissivities. Using a calibrated sensor stack consisting of a 3D laser range finder and a thermal imaging camera, we generate Tempered Point Clouds (TPCs). With the help of the TPCs, we show how to perform a basic material classification, i.e. to make a distinction between metal and dielectric surface areas. For this purpose, we investigate measurements taken from different viewing angles. With the help of this approach, it is also possible to estimate corrected surface temperatures. The presented methods are evaluated making use of the OctoMap framework.","PeriodicalId":403881,"journal":{"name":"2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127253077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
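A minimal sketch of the core step behind a tempered point cloud, assuming a pinhole thermal camera with known intrinsics K and LIDAR-to-camera extrinsics (R, t): each 3D point is projected into the thermal image and tagged with the radiometric value it lands on. The calibration numbers are placeholders, and the paper's emissivity correction and material classification are not covered here.

```python
import numpy as np

# Hypothetical calibration: thermal-camera intrinsics K and LIDAR-to-camera extrinsics (R, t).
K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.05, 0.0, 0.0])  # metres

def temper_point_cloud(points_lidar, thermal_img):
    """Return (points, temperatures) for LIDAR points that project into the thermal image."""
    pts_cam = points_lidar @ R.T + t            # transform points into the camera frame
    in_front = pts_cam[:, 2] > 0.1              # keep only points in front of the camera
    pts_cam = pts_cam[in_front]
    uvw = pts_cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]               # perspective division -> pixel coordinates
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    h, w = thermal_img.shape
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    return points_lidar[in_front][valid], thermal_img[v[valid], u[valid]]
```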
Position estimation of tethered micro unmanned aerial vehicle by observing the slack tether
2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR) | Pub Date: 2017-10-01 | DOI: 10.1109/SSRR.2017.8088157
Seiga Kiribayashi, Kaede Yakushigawa, K. Nagatani
{"title":"Position estimation of tethered micro unmanned aerial vehicle by observing the slack tether","authors":"Seiga Kiribayashi, Kaede Yakushigawa, K. Nagatani","doi":"10.1109/SSRR.2017.8088157","DOIUrl":"https://doi.org/10.1109/SSRR.2017.8088157","url":null,"abstract":"At disaster sites, the use of Micro Unmanned Aerial Vehicles (MUAVs) is expected for human safety. One application is to support first-phase emergency restoration work conducted by teleoperated construction machines. To extend the operation time of a MUAV, the authors proposed a powerfeeding tethered MUAV to provide an overhead view of the site to operators. The target application is to be used outdoors, so a robust and simple position estimation method for the MUAV is required. Therefore, in this paper, the authors propose a position estimation method for the MUAV by observing the slack tether instead of using the Global Positioning System (GPS), vision sensors, or a laser rangefinder. The tether shape is assumed to be a catenary curve that can be estimated by measuring the tether's length, tension, and outlet direction. To evaluate the proposed method, the authors developed a prototype of a helipad with a tether winding mechanism for the tethered MUAV, which contains a measurement function of the tether status. Some indoor experimental results proved the feasibility of the proposed method.","PeriodicalId":403881,"journal":{"name":"2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132533599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
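For reference, the slack-tether model mentioned above is the classical catenary; writing its shape, arc length and slope explicitly shows why the tether's length, tension, and outlet direction suffice to locate the MUAV (textbook form with the vertex at the origin, not necessarily the authors' exact parameterization):

```latex
% Catenary with vertex at the origin and parameter a = T_h / (\rho g),
% where T_h is the horizontal tension component and \rho the tether's mass per unit length.
y(x) = a\left(\cosh\frac{x}{a} - 1\right), \qquad
s(x) = a\,\sinh\frac{x}{a}, \qquad
\tan\theta(x) = \frac{\mathrm{d}y}{\mathrm{d}x} = \sinh\frac{x}{a}
```

The horizontal tension component (measured tension times the cosine of the outlet angle) and the tether's weight per unit length fix the parameter a, the measured outlet angle fixes the winch's position on the curve via x_0 = a * asinh(tan(theta_0)), and following the measured tether length along the arc from that point yields the MUAV endpoint.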
Safe navigation in dynamic, unknown, continuous, and cluttered environments
2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR) | Pub Date: 2017-10-01 | DOI: 10.1109/SSRR.2017.8088169
Mike D'Arcy, Pooyan Fazli, D. Simon
{"title":"Safe navigation in dynamic, unknown, continuous, and cluttered environments","authors":"Mike D'Arcy, Pooyan Fazli, D. Simon","doi":"10.1109/SSRR.2017.8088169","DOIUrl":"https://doi.org/10.1109/SSRR.2017.8088169","url":null,"abstract":"We introduce ProbLP, a probabilistic local planner, for safe navigation of an autonomous robot in dynamic, unknown, continuous, and cluttered environments. We combine the proposed reactive planner with an existing global planner and evaluate the hybrid in challenging simulated environments. The experiments show that our method achieves a 77% reduction in collisions over the straight-line local planner we use as a benchmark.","PeriodicalId":403881,"journal":{"name":"2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122773240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Autonomous observation of multiple USVs from UAV while prioritizing camera tilt and yaw over UAV motion
2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR) | Pub Date: 2017-10-01 | DOI: 10.1109/SSRR.2017.8088154
C. Krishna, Mengdie Cao, R. Murphy
{"title":"Autonomous observation of multiple USVs from UAV while prioritizing camera tilt and yaw over UAV motion","authors":"C. Krishna, Mengdie Cao, R. Murphy","doi":"10.1109/SSRR.2017.8088154","DOIUrl":"https://doi.org/10.1109/SSRR.2017.8088154","url":null,"abstract":"This paper proposes a scheme for observing cooperative Unmanned Surface Vehicles (USV), using a rotorcraft Unmanned Aerial Vehicle (UAV) with camera movements (tilt and yaw) prioritized over UAV movements. Most of the current researches consider a fixed-wing type UAV for surveillance of multiple moving targets (MMT), whose functionality is limited to just UAV movements. Experiments in simulation are conducted and verified that, prioritizing camera movements increased the number of times each USV is visited (on an average by 5.68 times more), decreased the percentage of the duration that the UAV is not observing any USV (on an average by 19.8%) and increased the efficiency by decreasing the distance traveled by the UAV (on an average by 747 pixels) for the six test cases. Autonomous repositioning of the UAV at regular intervals to observe USVs during a disaster scenario will provide the operator with better situational awareness. Using a rotorcraft over a fixed-wing type UAV provides the operator with a flexibility of observing the target for the required duration by hovering and freedom of unrestricted movements, which help improve the efficiency of target observation.","PeriodicalId":403881,"journal":{"name":"2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116279673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
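A hedged sketch of the "camera first, platform second" policy described above: point the gimbal at the selected USV when the required yaw and tilt are within limits, and only command a UAV reposition otherwise. The gimbal limits, frame conventions and fallback step size are assumptions for illustration, not the paper's controller.

```python
import math

TILT_LIMITS = (-90.0, 0.0)    # assumed gimbal tilt range, degrees (0 = horizon, -90 = straight down)
YAW_LIMITS = (-170.0, 170.0)  # assumed gimbal yaw range, degrees

def observe(uav_xyz, target_xy):
    """Return ('gimbal', yaw, tilt) if camera motion alone can point at the target,
    otherwise ('move', dx, dy) asking the UAV to translate toward it."""
    dx = target_xy[0] - uav_xyz[0]
    dy = target_xy[1] - uav_xyz[1]
    rng = math.hypot(dx, dy)
    # Yaw is computed in the world frame for simplicity; a real system would
    # subtract the UAV heading before comparing against gimbal limits.
    yaw = math.degrees(math.atan2(dy, dx))
    tilt = -math.degrees(math.atan2(uav_xyz[2], rng))   # depression angle toward a surface target
    if YAW_LIMITS[0] <= yaw <= YAW_LIMITS[1] and TILT_LIMITS[0] <= tilt <= TILT_LIMITS[1]:
        return ('gimbal', yaw, tilt)
    return ('move', 0.5 * dx, 0.5 * dy)  # fall back to moving part of the way toward the target
```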
Vehicle detection and localization on bird's eye view elevation images using convolutional neural network
2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR) | Pub Date: 2017-10-01 | DOI: 10.1109/SSRR.2017.8088147
Shang-Lin Yu, Thomas Westfechtel, Ryunosuke Hamada, K. Ohno, S. Tadokoro
{"title":"Vehicle detection and localization on bird's eye view elevation images using convolutional neural network","authors":"Shang-Lin Yu, Thomas Westfechtel, Ryunosuke Hamada, K. Ohno, S. Tadokoro","doi":"10.1109/SSRR.2017.8088147","DOIUrl":"https://doi.org/10.1109/SSRR.2017.8088147","url":null,"abstract":"For autonomous vehicles, the ability to detect and localize surrounding vehicles is critical. It is fundamental for further processing steps like collision avoidance or path planning. This paper introduces a convolutional neural network- based vehicle detection and localization method using point cloud data acquired by a LIDAR sensor. Acquired point clouds are transformed into bird's eye view elevation images, where each pixel represents a grid cell of the horizontal x-y plane. We intentionally encode each pixel using three channels, namely the maximal, median and minimal height value of all points within the respective grid. A major advantage of this three channel representation is that it allows us to utilize common RGB image-based detection networks without modification. The bird's eye view elevation images are processed by a two stage detector. Due to the nature of the bird's eye view, each pixel of the image represent ground coordinates, meaning that the bounding box of detected vehicles correspond directly to the horizontal position of the vehicles. Therefore, in contrast to RGB-based detectors, we not just detect the vehicles, but simultaneously localize them in ground coordinates. To evaluate the accuracy of our method and the usefulness for further high-level applications like path planning, we evaluate the detection results based on the localization error in ground coordinates. Our proposed method achieves an average precision of 87.9% for an intersection over union (IoU) value of 0.5. In addition, 75% of the detected cars are localized with an absolute positioning error of below 0.2m.","PeriodicalId":403881,"journal":{"name":"2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129558129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 47
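The three-channel bird's eye view encoding described above (maximal, median and minimal height per grid cell) can be sketched in a few lines of NumPy; the grid extent and 0.1 m resolution here are illustrative defaults, not necessarily the paper's settings.

```python
import numpy as np
from collections import defaultdict

def bev_elevation_image(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.1):
    """Encode an (N, 3) point cloud as an HxWx3 image holding the max, median and
    min height of the points falling into each ground-plane grid cell."""
    h = int((y_range[1] - y_range[0]) / cell)
    w = int((x_range[1] - x_range[0]) / cell)
    cols = ((points[:, 0] - x_range[0]) / cell).astype(int)
    rows = ((points[:, 1] - y_range[0]) / cell).astype(int)
    ok = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)

    cells = defaultdict(list)                       # (row, col) -> heights of points in that cell
    for r, c, z in zip(rows[ok], cols[ok], points[ok, 2]):
        cells[(r, c)].append(z)

    img = np.zeros((h, w, 3), dtype=np.float32)
    for (r, c), heights in cells.items():
        img[r, c] = (max(heights), float(np.median(heights)), min(heights))
    return img
```

Because every pixel maps back to a fixed ground-plane cell, a bounding box predicted on this image can be converted to metric vehicle coordinates by scaling with the cell size, which is what makes detection and localization a single step.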
Monocular visual-inertial state estimation on 3D large-scale scenes for UAVs navigation
2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR) | Pub Date: 2017-10-01 | DOI: 10.1109/SSRR.2017.8088162
J. Su, Xutao Li, Yunming Ye, Yan Li
{"title":"Monocular visual-inertial state estimation on 3D large-scale scenes for UAVs navigation","authors":"J. Su, Xutao Li, Yunming Ye, Yan Li","doi":"10.1109/SSRR.2017.8088162","DOIUrl":"https://doi.org/10.1109/SSRR.2017.8088162","url":null,"abstract":"Direct method for visual odometry has gained popularity, it needs not to compute feature descriptor and uses the actual values of camera sensors directly. Hence, it is very fast. However, its accuracy and consistency are not satisfactory. Based on these considerations, we propose a tightly-coupled, optimization-based method to fuse inertial measurement unit (IMU) and visual measurement, in which uses IMU preintegration to provide prior state for semi-direct method tracking and uses precise state estimation of visual odometry to optimizate IMU state estimation. Furthermore, we incorporate Kanade-Lucas-Tomasi tracking and a probabilistic depth filter such that the pixels in environments with little or high- frequency texture can be efficiently tracked. Our approach is able to obtain the gravity orientation in initial IMU body frame and the scale information by using the monocular camera and IMU. More importantly, we do not need any prior landmark points. Our monocular visual-inertial state estimation is much faster and achieves better accuracy on benchmark datasets.","PeriodicalId":403881,"journal":{"name":"2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130893144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
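The IMU prior that seeds the semi-direct tracker can be pictured with the standard discrete IMU kinematics between two image frames (textbook form; the paper's preintegration additionally factors these terms so they need not be recomputed when the bias estimates change):

```latex
% Discrete IMU propagation over \Delta t with measured acceleration a_m and angular rate \omega_m,
% accelerometer/gyro biases b_a, b_g, body-to-world rotation R_k and gravity g:
p_{k+1} = p_k + v_k\,\Delta t + \tfrac{1}{2}\bigl(R_k(a_m - b_a) + g\bigr)\Delta t^2 \\
v_{k+1} = v_k + \bigl(R_k(a_m - b_a) + g\bigr)\Delta t \\
R_{k+1} = R_k\,\exp\!\bigl((\omega_m - b_g)^{\wedge}\,\Delta t\bigr)
```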
Visual pose stabilization of tethered small unmanned aerial system to assist drowning victim recovery
2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR) | Pub Date: 2017-10-01 | DOI: 10.1109/SSRR.2017.8088149
J. Dufek, Xuesu Xiao, R. Murphy
{"title":"Visual pose stabilization of tethered small unmanned aerial system to assist drowning victim recovery","authors":"J. Dufek, Xuesu Xiao, R. Murphy","doi":"10.1109/SSRR.2017.8088149","DOIUrl":"https://doi.org/10.1109/SSRR.2017.8088149","url":null,"abstract":"This paper proposes a method for visual pose stabilization of Fotokite, a tethered small unmanned aerial system, using a forward facing monocular camera. Conventionally, Fotokite stabilizes itself only relative to its tether and not relative to the global frame. It is, therefore, susceptible to environmental disturbances (especially wind) or motion of its ground station. Related work proposed visual stabilization for unmanned aerial systems using a downward facing camera and homography estimation. The major disadvantage of this approach is that all the features used in the homography estimation must be in the same plane. The method proposed in this paper works for features in different planes and can be used with a forward-facing camera. This paper is the part of a bigger project on saving drowning victims using lifesaving unmanned surface vehicle visually servoed by Fotokite to reach the victims. Some of the used algorithms are motion sensitive and, therefore, it is desirable for Fotokite to keep its pose relative to the world. The method presented in this paper will enable to prevent gradual drifting of Fotokite in windy conditions typical for coastal areas or when the ground station is on a boat. The quality of pose stabilization was quantitatively analyzed in 9 trials by measuring metric displacement from the initial pose. The achieved mean metric displacement was 34 cm. The results were also compared to 3 trials with no stabilization.","PeriodicalId":403881,"journal":{"name":"2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124515582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
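To illustrate why a forward-facing camera with features in different planes still admits pose feedback, here is a hedged OpenCV sketch that tracks features from a stored reference frame and recovers the relative rotation and (scale-free) translation direction via the essential matrix. Unlike a homography, this does not require coplanar features; it is not the Fotokite's actual stabilization pipeline, and the camera matrix is a placeholder.

```python
import cv2
import numpy as np

K = np.array([[600.0, 0.0, 320.0],   # placeholder camera intrinsics
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

def drift_from_reference(ref_gray, cur_gray):
    """Estimate relative rotation R and unit translation t of the current frame
    with respect to a stored reference frame."""
    ref_pts = cv2.goodFeaturesToTrack(ref_gray, maxCorners=400, qualityLevel=0.01, minDistance=8)
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(ref_gray, cur_gray, ref_pts, None)
    good = status.ravel() == 1
    p0, p1 = ref_pts[good], cur_pts[good]
    E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
    return R, t   # feed into a controller commanding the vehicle back toward the reference pose
```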
Robotic bridge statics assessment within strategic flood evacuation planning using low-cost sensors
2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR) | Pub Date: 2017-10-01 | DOI: 10.1109/SSRR.2017.8088133
Maik Benndorf, T. Haenselmann, Maximilian Garsch, N. Gebbeken, Christian A. Mueller, Tobias Fromm, T. Luczynski, A. Birk
{"title":"Robotic bridge statics assessment within strategic flood evacuation planning using low-cost sensors","authors":"Maik Benndorf, T. Haenselmann, Maximilian Garsch, N. Gebbeken, Christian A. Mueller, Tobias Fromm, T. Luczynski, A. Birk","doi":"10.1109/SSRR.2017.8088133","DOIUrl":"https://doi.org/10.1109/SSRR.2017.8088133","url":null,"abstract":"Scenario: A rescue team needs to cross a partially damaged bridge in a flooded area. It is unknown whether the construction is still able to carry a vehicle. Assessing the construction's integrity can be accomplished by the analysis of the bridge's eigenfrequencies. Rather than using proprietary expensive Vibration Measurement Systems (VMS) we propose to utilize off-the-shelf smartphones as sensors - which still require to be placed at the spot on the bridge best suited for picking up vibrations. Within this work, we use an Unmanned Ground Vehicle (UGV) featuring a robotic manipulator. It allows a non-technician operator to optimally place the device semi- automatically. We evaluate our approach in a real-life scenario. Demo video: https://youtu.be/u_3pe0nZ5tw","PeriodicalId":403881,"journal":{"name":"2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116238468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
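Estimating a bridge's lowest eigenfrequencies from a smartphone accelerometer log essentially reduces to picking peaks in the vibration spectrum; a hedged SciPy sketch follows (the sampling rate, window length and prominence threshold are assumptions, not the authors' processing chain).

```python
import numpy as np
from scipy.signal import welch, find_peaks

def dominant_frequencies(accel_z, fs=100.0, n_peaks=3):
    """Return the strongest spectral peaks (Hz) of a vertical-acceleration record,
    which approximate the structure's lowest eigenfrequencies under ambient excitation."""
    accel_z = np.asarray(accel_z, dtype=float)
    accel_z = accel_z - np.mean(accel_z)             # remove gravity / DC offset
    freqs, psd = welch(accel_z, fs=fs, nperseg=4096) # averaged power spectral density
    peaks, _ = find_peaks(psd, prominence=np.max(psd) * 0.05)
    order = np.argsort(psd[peaks])[::-1]             # strongest peaks first
    return freqs[peaks][order][:n_peaks]
```

Shifts in these peak frequencies relative to the undamaged structure are what such an assessment would look for; the sketch only covers the signal-processing step, not the structural interpretation.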
Intelligent vehicle for search, rescue and transportation purposes
2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR) | Pub Date: 2017-10-01 | DOI: 10.1109/SSRR.2017.8088148
Abdulla Al-Kaff, Francisco Miguel Moreno, A. D. L. Escalera, Jose M. Armingol
{"title":"Intelligent vehicle for search, rescue and transportation purposes","authors":"Abdulla Al-Kaff, Francisco Miguel Moreno, A. D. L. Escalera, Jose M. Armingol","doi":"10.1109/SSRR.2017.8088148","DOIUrl":"https://doi.org/10.1109/SSRR.2017.8088148","url":null,"abstract":"Recent development in micro-electronics technologies as well as the computer vision techniques increased demand to use Unmanned Aerial Vehicles (UAVs) in several industrial and civil applications. This paper proposed a vision based system, that is used in UAVs for search, rescue and transportation purposes. The proposed system is divided into two main parts: Vision-based object detection and classification, in which, a Kinect V2 sensor is used; to extract the objects from the ground plane, and estimate the distance to the UAV. In addition, Support Vector Machine (SVM) human detector based on Histograms of Oriented Gradients (HOG) features is applied to classify the human bodies from the all detected objects. Secondly, a semi-autonomous reactive control for visual servoing system is implemented; to control the position and the velocity of the UAV for performing safe approaching maneuvers to the detected objects. The proposed system has been validated by performing several real flights, and the obtained results show the high robustness and accuracy of the system.","PeriodicalId":403881,"journal":{"name":"2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124874345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
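The HOG + SVM human detector mentioned above is available off the shelf in OpenCV; a minimal sketch of that standard pedestrian detector follows (it is not necessarily the authors' trained model or parameters).

```python
import cv2

# OpenCV's pretrained HOG + linear SVM pedestrian detector (Dalal-Triggs style).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(bgr_image):
    """Return bounding boxes (x, y, w, h) of detected people in the image."""
    boxes, weights = hog.detectMultiScale(bgr_image, winStride=(8, 8), padding=(8, 8), scale=1.05)
    return [tuple(box) for box in boxes]
```

detectMultiScale returns one rectangle per detection together with its SVM score, which can be thresholded to trade recall against false positives before the boxes are fused with the depth-based object candidates.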