{"title":"A deliberative architecture for AUV control","authors":"Conor McGann, F. Py, K. Rajan, H. Thomas, R. Henthorn, R. McEwen","doi":"10.1109/ROBOT.2008.4543343","DOIUrl":"https://doi.org/10.1109/ROBOT.2008.4543343","url":null,"abstract":"Autonomous Underwater Vehicles (AUVs) are an increasingly important tool for oceanographic research demonstrating their capabilities to sample the water column in depths far beyond what humans are capable of visiting, and doing so routinely and cost-effectively. However, control of these platforms to date has relied on fixed sequences for execution of pre-planned actions limiting their effectiveness for measuring dynamic and episodic ocean phenomenon. In this paper we present an agent architecture developed to overcome this limitation through on-board planning using Constraint- based Reasoning. Preliminary versions of the architecture have been integrated and tested in simulation and at sea.","PeriodicalId":351230,"journal":{"name":"2008 IEEE International Conference on Robotics and Automation","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126724875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High quality 3D laser ranging under general vehicle motion","authors":"A. Harrison, P. Newman","doi":"10.1109/ROBOT.2008.4543179","DOIUrl":"https://doi.org/10.1109/ROBOT.2008.4543179","url":null,"abstract":"This paper describes an end-to-end system capable of generating high-quality 3D point clouds from the popular LMS200 laser on a continuously moving platform. We describe the hardware, data capture, calibration and data stream processing we have developed which yields remarkable detail in the generated point clouds of urban scenes. Given the increasing interest in outdoor 3D navigation and scene reconstruction by mobile platforms, our aim is to provide a level of hardware and algorithmic detail suitable for replication of our system by interested parties who do not wish to invest in dedicated 3D laser rangers.","PeriodicalId":351230,"journal":{"name":"2008 IEEE International Conference on Robotics and Automation","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126856975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robotic airship trajectory tracking control using a backstepping methodology","authors":"F. Repoulias, E. Papadopoulos","doi":"10.1109/ROBOT.2008.4543207","DOIUrl":"https://doi.org/10.1109/ROBOT.2008.4543207","url":null,"abstract":"This paper considers the design of a novel closed- loop trajectory tracking controller for an underactuated robotic airship having 6 degrees of freedom (DOF) and 3 controls, on forward, yaw and pitch motions using two side thrusters. A backstepping methodology is adopted as a design tool, since it is suitable for the cascaded nature of the vehicle dynamics. It also offers design flexibility and robustness against parametric uncertainties which are often encountered in aerodynamic modeling and air stream disturbances. Indeed, in our simulations we assume a 10% error in all dynamic parameters and yet the controller performs position, orientation, linear and angular velocities tracking successfully. We also impose an additional air stream disturbance and the controller corrects the vehicle's trajectory successfully too.","PeriodicalId":351230,"journal":{"name":"2008 IEEE International Conference on Robotics and Automation","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114884577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Active target search from UAVs in urban environments","authors":"Christopher Geyer","doi":"10.1109/ROBOT.2008.4543567","DOIUrl":"https://doi.org/10.1109/ROBOT.2008.4543567","url":null,"abstract":"In this paper we consider the problem of searching for a target from a camera-equipped unmanned aerial vehicle (UAV) flying in an urban area. Urban areas present challenges because buildings can hamper the ability to see regions on the ground. We describe an algorithm that constructs paths that take into account obstructions due to buildings or other large objects. The approach combines search trees and a particle filters to evaluate a large number of possible paths, while at the same time performing all the Bayes' filter innovations that would need to occur during the evaluation of each path.","PeriodicalId":351230,"journal":{"name":"2008 IEEE International Conference on Robotics and Automation","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115105942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning to dribble on a real robot by success and failure","authors":"Martin A. Riedmiller, Roland Hafner, S. Lange, M. Lauer","doi":"10.1109/ROBOT.2008.4543536","DOIUrl":"https://doi.org/10.1109/ROBOT.2008.4543536","url":null,"abstract":"Learning directly on real world systems such as autonomous robots is a challenging task, especially if the training signal is given only in terms of success or failure (reinforcement learning). However, if successful, the controller has the advantage of being tailored exactly to the system it eventually has to control. Here we describe, how a neural network based RL controller learns the challenging task of ball dribbling directly on our middle-size robot. The learned behaviour was actively used throughout the RoboCup world championship tournament 2007 in Atlanta, where we won the first place. This constitutes another important step within our Brainstormers project. The goal of this project is to develop an intelligent control architecture for a soccer playing robot, that is able to learn more and more complex behaviours from scratch.","PeriodicalId":351230,"journal":{"name":"2008 IEEE International Conference on Robotics and Automation","volume":"141 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116417341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection and real-time correction of faulty visual feedback in atomic force microscopy based nanorobotic manipulation","authors":"Lianqing Liu, N. Xi, Yilun Luo, Yuechao Wang, Jiangbo Zhang, Guangyong Li","doi":"10.1109/ROBOT.2008.4543245","DOIUrl":"https://doi.org/10.1109/ROBOT.2008.4543245","url":null,"abstract":"One of the main roadblocks to Atomic Force Microscope (AFM) based nanomanipulation is lack of real time visual feedback. Although the model based visual feedback can partly solve this problem, its unguaranteed reliability due to the inaccurate models in nano-environment still limits the efficiency of AFM based nanomanipulation. This paper introduce a Realtime Fault Detection and Correction (RFDC) method to improve the reliability of the visual feedback. By utilizing Kalman filter and local scan technologies, the RFDC method not only can realtime detect the fault display caused by the modeling error, but also can on-line correct it without interrupting manipulation. In this way, the visual feedback keeps consistent with the true environment changes during manipulation, which makes several operations being finished without a image scanning in between. The theoretical study and the implementation of the RFDC method are elaborated in this paper. Experiments of manipulating nano-particles have been carried out to demonstrate the effectiveness and efficiency of the proposed method.","PeriodicalId":351230,"journal":{"name":"2008 IEEE International Conference on Robotics and Automation","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122487354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hierarchical two stage planner for little dog","authors":"B. Bonnlander, John R. Rebula, P. Neuhaus, Matt Johnson, Greg Hill, Carlos Pérez, John Carff, William Howell, J. Pratt","doi":"10.1109/ROBOT.2008.4543533","DOIUrl":"https://doi.org/10.1109/ROBOT.2008.4543533","url":null,"abstract":"We first developed a single stage footstep planner that is capable of solving local search problems for locating a goal. It is implemented with a modified A* search algorithm that utilizes a crows-fly heuristic for measuring the distance to the goal. However, pathfinding over extreme terrain with this method can take a long time: the planner becomes \"stuck\" if an obstacle lies along the crows-fly path. In this video, the dog struggles while searching for valid footsteps near sharp discontinuities in the terrain. In addition, the planner gives preference to solutions where the robot always faces the goal, forcing the robot to sidestep around obstacles. This can lead to unnatural footstep sequences To address the shortcomings of a single stage planner in maze-like terrain, we developed a two-stage planner that first looks at the terrain for a smooth body trajectory from the starting point to the prescribed goal location. The body trajectory is then passed to the second stage, which finds a sequence of footsteps close to that body trajectory. The first stage of our algorithm produces a terrain cost map from terrain height data that quantifies the expected difficulty of finding a path through a particular point on the terrain. The terrain cost map takes into account three main conditions. The first condition measures whether the four patches of ground for all four feet are relatively flat. This is calculated for a given body location by fitting a plane to the four terrain patches that represent locations that the dog's feet can comfortably reach. The second condition measures the amount of clearance for the robot's underbelly by comparing the terrain's highest point under the body against a preset height above the feet. The third condition measures the likelihood of all four feet finding a safe footstep away from sharp terrain discontinuities. We multiply all three scores for the given terrain location to produce a final score. To complete the first stage, we utilize this terrain cost map to search for a connected path that minimizes the average expected difficulty of crossing the terrain. The search algorithm is A* utilizing a crows-fly heuristic similar to the one employed in the original footstep planner, but the state space is much smaller. Therefore, it runs quickly, even for large, complicated terrains. In the second stage we run the footstep planner, but with a modified heuristic: the search gives preference to footstep configurations …","PeriodicalId":351230,"journal":{"name":"2008 IEEE International Conference on Robotics and Automation","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122494136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simulated regulator to synthesize ZMP manipulation and foot location for autonomous control of biped robots","authors":"T. Sugihara","doi":"10.1109/ROBOT.2008.4543377","DOIUrl":"https://doi.org/10.1109/ROBOT.2008.4543377","url":null,"abstract":"An autonomous biped controller synthesizing the ZMP manipulation and the foot location is proposed; each of them has a strongly nonlinear property, so that they had been hard to be synthesized without referential trajectories given as functions of time. The former is equivalent to the partial indirect manipulation of the reaction force through the contact points with the environment to control the center of mass (COM) under the current supporting state. The latter means discontinuous relocation of grounding feet in order to deform the supporting region to include the desired but unachievable ZMP in the future. They run on an identical control system without any confliction, since they originate from the same simulated regulator in the sense that the feasible region of ZMP is not bounded. It is also shown that a cyclic walk is automatically generated without giving a walking period explicitly by coupling the support-state transition and the goal-state transition in a simulation.","PeriodicalId":351230,"journal":{"name":"2008 IEEE International Conference on Robotics and Automation","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122951459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Target detection and position likelihood using an aerial image sensor","authors":"Zuwhan Kim, R. Sengupta","doi":"10.1109/ROBOT.2008.4543187","DOIUrl":"https://doi.org/10.1109/ROBOT.2008.4543187","url":null,"abstract":"Sensor-based control is an emerging challenge in UAV applications. It is essential in a sensing task to account for sensor measurement errors when computing a target position estimate. Source of measurement error includes those in vehicle position and orientation measurements as well as algorithm failures such as missed detections or false detections. Incorporating such errors in aerial sensors is non-trival because of the camera's perspective geometry. This paper is about a method to incorporate such errors into target position estimates and a calibration methodology to measure the error distributions. A preliminary experiment with real flight data is presented.","PeriodicalId":351230,"journal":{"name":"2008 IEEE International Conference on Robotics and Automation","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114542602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Experimental comparison of several posture estimation solutions for biped robot Rabbit","authors":"Y. Aoustin, F. Plestan, V. Lebastard","doi":"10.1109/ROBOT.2008.4543378","DOIUrl":"https://doi.org/10.1109/ROBOT.2008.4543378","url":null,"abstract":"Experimental validation of absolute orientation estimation solutions is displayed for the dynamical stable five-link biped robot Rabbit during a walking gait. The objective is to prove the technical feasibility of posture online software estimation in order to remove sensors. Finally, this paper presents the first experimental results of walking biped robot posture estimation.","PeriodicalId":351230,"journal":{"name":"2008 IEEE International Conference on Robotics and Automation","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121955600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}