{"title":"Hazardous workspace modeling for manipulators using spatial hazard functions","authors":"Brian O'Neil, Cheryl Brabec, M. Pryor","doi":"10.1109/SSRR.2012.6523880","DOIUrl":"https://doi.org/10.1109/SSRR.2012.6523880","url":null,"abstract":"This paper describes an approach to motion planning that includes hazards individually characterized as continuous functions of space. These functions contribute to a complex and discontinuous yet practical representation of the hazards found in a manipulator's workspace. The hazard model is a smoothed and scaled representation of the actual physical hazard sampled discretely over the robot's workspace. This research is primarily motivated by the need to reduce damage to manipulators working in high-radiation environments, but it is easily extended to other spatial hazards, including heat sources, overlapping workspaces, etc. The gradient of the hazard function is used to generate a force that can be incorporated into an artificial potential field motion planner, easing its integration with other existing techniques used for obstacle avoidance, target acquisition, etc. The motion planner additionally scales the robot's velocity in proportion to the magnitude of the hazard model and determines the path. This results in motion biased toward the direction of greatest hazard reduction, at a speed that reduces the time the robot is subject to abnormally high hazard. These techniques are demonstrated on a port-deployed glovebox manipulator in a simulated hazardous environment. Over the course of a demonstration task, the radiation exposure to the robot is reduced by over 50%.","PeriodicalId":408300,"journal":{"name":"2012 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128056094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Selecting a small unmanned air vehicle system using the DARPA crowdsourcing model","authors":"S. Prior, Mehmet Ali Erbil, Mantas Brazinskas, Witold Mielniczek","doi":"10.1109/SSRR.2012.6523914","DOIUrl":"https://doi.org/10.1109/SSRR.2012.6523914","url":null,"abstract":"The UAVForge challenge, announced in July 2011, was designed to bring together a diverse group of UAV enthusiasts to develop the next generation of low-cost, small unmanned aerial systems for perch-and-stare operations in a SSRR context. The challenge combined a collaborative website with a live competitive fly-off event held at Fort Stewart, Georgia in May 2012. UAVForge was a Defense Advanced Research Projects Agency (DARPA) and Space and Naval Warfare Systems Center, Atlantic (SSC Atlantic) initiative to leverage the exchange of ideas among an international community united through common interests and inspired by creative thought. More than 140 teams and 3,500 registered citizen scientists from 153 countries participated in this year-long event. After several selection rounds, a core of nine teams competed in the fly-off event, and in June 2012 Team HALO from the UK was declared the winner, scoring 47.7 points out of a maximum possible 60 points with their co-axial tri-rotor Y6 design of mass 2.5 kg (30 min endurance).","PeriodicalId":408300,"journal":{"name":"2012 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"228 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116188452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Social head gaze and proxemics scaling for an affective robot used in victim management","authors":"Vasant Srinivasan, Zachary Henkel, R. Murphy","doi":"10.1109/SSRR.2012.6523916","DOIUrl":"https://doi.org/10.1109/SSRR.2012.6523916","url":null,"abstract":"This paper evaluates the use of social head gaze and proxemic scaling in an affective robot for victim management using two large-scale simulated Urban Search and Rescue (US&R) scenario studies. On average, between four and ten hours pass from the time a victim is discovered to the time of their extrication [2], [5]. During this time an urban search and rescue robot remains with the victim to monitor their condition and the environment. Throughout this critical period, it is important that the robot interact with the victim in a socially appropriate way in order to reduce stress levels and keep the victim calm, at ease, positive, and engaged until assistance arrives, while preventing a condition known as shock.","PeriodicalId":408300,"journal":{"name":"2012 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125748578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual place recognition for multi-robots maps merging","authors":"Zhao Li, S. R. U. N. Jafri, R. Chellali","doi":"10.1109/SSRR.2012.6523870","DOIUrl":"https://doi.org/10.1109/SSRR.2012.6523870","url":null,"abstract":"This paper presents a new approach allowing a team of robots to merge their individual maps without any a priori knowledge about their relative positions. While moving, each robot acquires laser data and a video stream. The laser data are used to create individual maps (SLAM), while the image sequences are used to derive invariant visual descriptions of the visited places. The latter are then exchanged between robots to determine a probability of being close to each other and sharing a common place, in order to initiate a map merging process. In this way, ambiguous situations that usually occur when a single sensor is used are reduced. We present the solution we developed to extract a compact visual description that remains constant regardless of the actual pose of the robots. Two or more robots with such descriptions can verify whether or not they are present at the same place before merging their respective individual maps. This scheme, i.e., the fusion of laser-range data and visual information, enhances and accelerates the construction of the global map and consequently disambiguates individual maps. The validation of the approach has been performed on various indoor environments, mainly office-like spaces and houses.","PeriodicalId":408300,"journal":{"name":"2012 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131244947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MOTHERSHIP — A serpentine tread/limb hybrid marsupial robot for USAR","authors":"Justin Huff, Stephen A. Conyers, R. Voyles","doi":"10.1109/SSRR.2012.6523893","DOIUrl":"https://doi.org/10.1109/SSRR.2012.6523893","url":null,"abstract":"In this paper we present a novel serpentine marsupial robot that consists of three articulated modules of 10 treads each, ringing the circumference. The treads of each cylindrical tread module move in two dimensions, resulting in holonomic behavior. The differentially-driven planetary gears permit both transverse and longitudinal tread motion while both motors remain stationary in the module frame of reference. Active articulated joints between the modules enable the navigation of obstacles and climbing steep gradients and steps larger than the diameter of the modules. As a marsupial robot team, the larger MOTHERSHIP carries small, rubble-penetrating CRAWLER robots to assist a search, either via teleoperation or, eventually, via autonomous teaming.","PeriodicalId":408300,"journal":{"name":"2012 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124937026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-objective sensor-based replanning for a car-like robot","authors":"D. Grady, Mark Moll, C. Hegde, Aswin C. Sankaranarayanan, Richard Baraniuk, L. Kavraki","doi":"10.1109/SSRR.2012.6523898","DOIUrl":"https://doi.org/10.1109/SSRR.2012.6523898","url":null,"abstract":"This paper studies a core problem in multi-objective mission planning for robots governed by differential constraints. The problem considered is the following. A car-like robot computes a plan to move from a start configuration to a goal region. The robot is equipped with a sensor that can alert it if an anomaly appears within some range while the robot is moving. In that case, the robot tries to deviate from its computed path and gather more information about the target without incurring considerable delays in fulfilling its primary mission, which is to move to its final destination. This problem is important in, e.g., surveillance, where inspection of possible threats needs to be balanced with completing a nominal route. The paper presents a simple and intuitive framework to study the trade-offs present in the above problem. Our work utilizes a state-of-the-art sampling-based planner, which employs both a high-level discrete guide and low-level planning. We show that modifications to the distance function used by the planner and to the weights that the planner employs to compute the high-level guide can help the robot react online to new secondary objectives that were unknown at the outset of the mission. The modifications are computed using information obtained from a conventional camera model. We find that for small percentage increases in path length, the robot can achieve significant gains in information about an unexpected target.","PeriodicalId":408300,"journal":{"name":"2012 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115797638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive mapping in 3D using RGB-D data","authors":"P. Vieira, R. Ventura","doi":"10.1109/SSRR.2012.6523879","DOIUrl":"https://doi.org/10.1109/SSRR.2012.6523879","url":null,"abstract":"The task of 3D mapping indoor environments in Search and Rescue missions can be very useful for providing detailed spatial information to human teams. This can be accomplished using field robots equipped with sensors capable of obtaining depth and color data, such as the Kinect sensor. Several methods have been proposed in the literature to address the problem of automatic 3D reconstruction from depth data. Most methods rely on the minimization of the matching error among individual depth frames. However, ambiguity in sensor data often leads to erroneous matching (due to local minima), which is hard to cope with in a purely automatic approach. This paper is targeted at 3D reconstruction from RGB-D data, and proposes a semi-automatic approach, denoted Interactive Mapping, involving a human operator in the process of detecting and correcting erroneous matches. Instead of allowing the operator complete freedom in correcting the matching on a frame-by-frame basis, the proposed method constrains human intervention along the degrees of freedom with most uncertainty. The user is able to translate and rotate individual RGB-D point clouds, with the help of a force field-like reaction to the movement of each point cloud. A dataset was obtained using a Kinect mounted on the tracked wheel robot RAPOSA-NG, developed for Search and Rescue missions. Some preliminary results are presented, illustrating the advantages of the method.","PeriodicalId":408300,"journal":{"name":"2012 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"28 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133701411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The SHERPA project: Smart collaboration between humans and ground-aerial robots for improving rescuing activities in alpine environments","authors":"L. Marconi, C. Melchiorri, M. Beetz, Dejan Pangercic, R. Siegwart, Stefan Leutenegger, R. Carloni, S. Stramigioli, H. Bruyninckx, P. Doherty, A. Kleiner, V. Lippiello, Alberto Finzi, B. Siciliano, A. Sala, N. Tomatis","doi":"10.1109/SSRR.2012.6523905","DOIUrl":"https://doi.org/10.1109/SSRR.2012.6523905","url":null,"abstract":"The goal of the paper is to present the foreseen research activity of the European project “SHERPA”, whose activities will officially start on February 1st, 2013. The goal of SHERPA is to develop a mixed ground and aerial robotic platform to support search and rescue activities in a real-world hostile environment, like the alpine scenario that is specifically targeted in the project. Looking into the technological platform and the alpine rescuing scenario, we plan to address a number of research topics about cognition and control. What makes the project potentially very rich from a scientific viewpoint is the heterogeneity and the capabilities of the different actors of the SHERPA system: the human rescuer is the “busy genius”, working in a team with the ground vehicle, the “intelligent donkey”, and with the aerial platforms, i.e. the “trained wasps” and “patrolling hawks”. Indeed, the research activity focuses on how the “busy genius” and the “SHERPA animals” interact and collaborate with each other, with their own features and capabilities, toward the achievement of a common goal.","PeriodicalId":408300,"journal":{"name":"2012 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"79 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131544240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A short overview of recent advances in map evaluation","authors":"S. Schwertfeger, A. Birk","doi":"10.1109/SSRR.2012.6523906","DOIUrl":"https://doi.org/10.1109/SSRR.2012.6523906","url":null,"abstract":"Mapping is an important task for mobile robots in general and in Safety, Security, and Rescue Robotics (SSRR) in particular - often maps are even a core mission deliverable for SSRR applications. The assessment of the quality of maps in a simple, efficient, and automated way is hence of high interest, but it is not trivial and remains an ongoing research topic. Here, an overview of advances in a new approach to map evaluation is presented. This structure-based method makes use of a Topology Graph, a topological, abstracted representation of the map. The Topology Graph is constructed by generating and processing a Voronoi Diagram from a 2D grid map. Given a ground truth map, the Topology Graphs of both maps are matched using similarity metrics on the vertices as well as structural matching of subgraph isomorphisms.","PeriodicalId":408300,"journal":{"name":"2012 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131851593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proposal of EARLI for the snake robot's obstacle aided locomotion","authors":"T. Kamegawa, Ryoma Kuroki, M. Travers, H. Choset","doi":"10.1109/SSRR.2012.6523889","DOIUrl":"https://doi.org/10.1109/SSRR.2012.6523889","url":null,"abstract":"In this paper, EARLI (Extended Asymmetrical Reverse Lateral Inhibition) is proposed for the snake robot's obstacle-aided locomotion and behavior. The idea of EARLI starts from the original idea of lateral inhibition, although joints rotate in the reverse direction compared with original lateral inhibition, and contact information affects not only adjacent joints but also a couple of neighboring joints away from the contacting link. Furthermore, the distribution of added torque is empirically set asymmetrically in order to propel the snake robot forward. The EARLI algorithm is implemented in ODE (Open Dynamics Engine) to observe its behavior in simulation environments and to verify its effectiveness. As a result, a behavior emerges in which the snake robot pushes obstacles for longer times and moves greater distances than when using original lateral inhibition. In addition, continuous pushing behavior is also observed when an obstacle is located behind the snake robot.","PeriodicalId":408300,"journal":{"name":"2012 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117154071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}