{"title":"IMU and cable encoder data fusion for in-pipe mobile robot localization","authors":"Andreu Corominas Murtra, J. M. M. Tur","doi":"10.1109/TePRA.2013.6556377","DOIUrl":"https://doi.org/10.1109/TePRA.2013.6556377","url":null,"abstract":"Inner pipe inspection of sewer networks is a hard and tedious task, due to the nature of the environment, which is narrow, dark, wet and dirty. So, mobile robots can play an important role to solve condition assessment of such huge civil infrastructures, resulting in a clear benefit for citizens. One of the fundamental tasks that a mobile robot should solve is localization, but in such environments GPS signal is completely denied, so alternative methods have to be developed. Visual odometry and visual SLAM are promising techniques to be applied in such environments, but they require a populated set of visual feature tracks, which is a requirement that can not be fulfilled in such environments in a continuous way. With the aim of designing robust and reliable robot systems, this paper proposes and evaluates a complementary approach to localize a mobile robot, which is based on sensor data fusion of an inertial measurement unit and of a cable encoder, which measures the length of an unfolded cable, from the starting point of operations up to the tethered robot. Data fusion is based on optimization of a set of windowed states given the sensor measurements in that window. 
The paper details theoretical basis, practical implementation issues and results obtained in testing pipe scenarios.","PeriodicalId":102284,"journal":{"name":"2013 IEEE Conference on Technologies for Practical Robot Applications (TePRA)","volume":"3 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133007526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
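The windowed-state optimization described in the abstract can be sketched as a small nonlinear least-squares problem. Everything below (the 2-D state layout, the residual design, the step length, the straight test pipe) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(flat, thetas, lengths, step):
    # first state anchored at the starting point of operations (cable reel)
    p = np.vstack([[0.0, 0.0], flat.reshape(-1, 2)])
    res = []
    for k in range(1, len(p)):
        d = p[k] - p[k - 1]
        # IMU residual: each step should follow the measured heading
        res.append(d[0] - step * np.cos(thetas[k]))
        res.append(d[1] - step * np.sin(thetas[k]))
        # cable residual: path length so far matches the unfolded cable length
        seg = np.linalg.norm(np.diff(p[:k + 1], axis=0), axis=1).sum()
        res.append(seg - lengths[k])
    return np.array(res)

N, step = 6, 0.5                       # window of 6 states, 0.5 m per step
thetas = np.zeros(N)                   # IMU headings: straight pipe along +x
lengths = step * np.arange(N)          # ideal cable encoder readings
guess = np.column_stack([0.4 * np.arange(1, N), 0.01 * np.arange(1, N)])
sol = least_squares(residuals, guess.ravel(), args=(thetas, lengths, step))
p_hat = np.vstack([[0.0, 0.0], sol.x.reshape(-1, 2)])   # estimated window
```

With consistent noise-free measurements the optimizer recovers the true trajectory; with real data the two residual types trade off against each other inside the window.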
{"title":"Use of unobtrusive human-machine interface for rehabilitation of stroke victims through robot assisted mirror therapy","authors":"Gautam Narang, Arjun Narang, Soumya Singh, J. Lempiäinen","doi":"10.1109/TePRA.2013.6556363","DOIUrl":"https://doi.org/10.1109/TePRA.2013.6556363","url":null,"abstract":"Stroke is one of the leading causes of long-term disability worldwide. Present techniques employed for rehabilitation of victims suffering from partial paralysis or loss of function, such as mirror therapy, require substantial amount of resources, which may not be readily available. In traditional mirror therapy, patients place a mirror beside the functional limb, blocking their view of the affected limb, creating the illusion that both the limbs are working properly, which enhances recovery by enlisting direct simulation. This paper proposes an alternate robot based concept, named Wear-A-BAN, where the rehabilitative task will be carried out by a normal articulated industrial robot. During the proposed rehabilitative procedure, the patients are made to wear a smart sleeve on the functional limb. Movement of this limb is monitored in real-time, by wireless Body-Area Network (BAN) sensors placed inside the sleeve, and copied over the sagittal plane to the affected limb. This procedure results in considerable savings in terms of money and personnel, as even though this procedure does not make the rehabilitation process autonomous, but one therapist can monitor various patients at a time. The industrial robot used is suitable for this purpose due to safety aspects naturally existing in the robot, is relatively cheap in price, and allows comprehensive 3-D motions of the limb. Also, unlike traditional therapy, this procedure allows actual movement of the affected limb. 
The sensors can also be used for other applications, such as gaming and daily life personal activity monitoring.","PeriodicalId":102284,"journal":{"name":"2013 IEEE Conference on Technologies for Practical Robot Applications (TePRA)","volume":"156 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116413943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
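The step of copying motion across the sagittal plane has a simple geometric core. As a toy illustration, assuming a measurement frame whose sagittal plane is y = 0 (our convention, not stated in the paper):

```python
# Reflect a recorded functional-limb position across the sagittal plane
# (assumed here to be the y = 0 plane of the measurement frame).
def mirror_over_sagittal(point):
    x, y, z = point
    return (x, -y, z)

# a short recorded trajectory of the functional limb, in metres
trajectory = [(0.30, 0.25, 1.10), (0.35, 0.20, 1.15)]
# the mirrored trajectory would be replayed on the affected side
mirrored = [mirror_over_sagittal(p) for p in trajectory]
```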
{"title":"Proposal for the initiation of general and military specific benchmarking of robotic convoys","authors":"P. Maxwell, Joshua Rykowski, Gregory Hurlock","doi":"10.1109/TePRA.2013.6556355","DOIUrl":"https://doi.org/10.1109/TePRA.2013.6556355","url":null,"abstract":"This paper identifies the need for a standard method of benchmarking emerging robotic systems with a focus on military, multi-robot convoys. Benchmarking is commonly used throughout academia and industry as a method of evaluating and comparing products. In this paper we propose a generic form that these benchmarks may take in the future. Classification categories, such as, obstacle avoidance, area mapping, and convoy coherence are all possible elements of this benchmark. The goal is a standard benchmark that can be used to evaluate military multi-robot convoy systems.","PeriodicalId":102284,"journal":{"name":"2013 IEEE Conference on Technologies for Practical Robot Applications (TePRA)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126912347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Motion planning of ladder climbing for humanoid robots","authors":"Yajia Zhang, Jingru Luo, Kris K. Hauser, Robert Ellenberg, P. Oh, Hyungjun Park, Manas Paldhe, C. S. G. Lee","doi":"10.1109/TePRA.2013.6556364","DOIUrl":"https://doi.org/10.1109/TePRA.2013.6556364","url":null,"abstract":"This paper describes preliminary steps toward providing the Hubo-U+ humanoid robot with ladder climbing capabilities. Ladder climbing is an essential mode of locomotion for navigating industrial environments and conducting maintenance tasks in buildings, trees, and other man-made structures (e.g., utility poles). Although seemingly straightforward for humans, this task is quite challenging for humanoid robots due to differences from human kinematics, significant physical stresses, simultaneous coordination of four limbs in contact, and limited motor torques. We present a planning strategy for the Hubo-U+ robot that automatically generates multi-limbed locomotion sequences that satisfy contact, collision, and torque limit constraints for a given ladder specification. This method is used to automatically test climbing strategies on a variety of ladders in simulation. 
This planner-aided design paradigm allows us to employ extensive simulation in order to rapidly design, test, and verify novel climbing strategies, as well as to test how candidate hardware changes would affect the robot's ladder climbing capabilities.","PeriodicalId":102284,"journal":{"name":"2013 IEEE Conference on Technologies for Practical Robot Applications (TePRA)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122337598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
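One of the constraints such a planner must enforce, the motor torque limit, reduces in the static case to checking tau = J^T f against per-joint bounds. A minimal sketch for a planar two-link limb follows; the link lengths, contact force, and torque limit are made-up values, not Hubo-U+ parameters:

```python
import numpy as np

def jacobian_2link(q1, q2, l1=0.3, l2=0.3):
    """Planar two-link position Jacobian (assumed toy limb model)."""
    s1, s12 = np.sin(q1), np.sin(q1 + q2)
    c1, c12 = np.cos(q1), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def torque_feasible(q, f_contact, tau_max):
    # static joint torques needed to exert f_contact at the hand
    tau = jacobian_2link(*q).T @ f_contact
    return bool(np.all(np.abs(tau) <= tau_max))

# hand pulling down on a rung with 100 N while the elbow is bent 90 degrees
ok = torque_feasible((0.0, np.pi / 2), np.array([0.0, -100.0]), tau_max=40.0)
```

A planner can use such a predicate to reject candidate stances before committing to a full climbing sequence.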
{"title":"An application of fuzzy DL-based semantic perception to soil container classification","authors":"M. Eich","doi":"10.1109/TePRA.2013.6556369","DOIUrl":"https://doi.org/10.1109/TePRA.2013.6556369","url":null,"abstract":"Semantic perception and object labeling are key requirements for robots interacting with objects on a higher level. Symbolic annotation of objects allows the usage of planning algorithms for object interaction, for instance in a typical fetchand-carry scenario. In current research, perception is usually based on 3D scene reconstruction and geometric model matching, where trained features are matched with a 3D sample point cloud. In this work we propose a semantic perception method which is based on spatio-semantic features. These features are defined in a natural, symbolic way, such as geometry and spatial relation. In contrast to point-based model matching methods, a spatial ontology is used where objects are rather described how they \"look like\", similar to how a human would described unknown objects to another person. A fuzzy based reasoning approach matches perceivable features with a spatial ontology of the objects. The approach provides a method which is able to deal with senor noise and occlusions. Another advantage is that no training phase is needed in order to learn object features. The use-case of the proposed method is the detection of soil sample containers in an outdoor environment which have to be collected by a mobile robot. 
The approach is verified using real world experiments.","PeriodicalId":102284,"journal":{"name":"2013 IEEE Conference on Technologies for Practical Robot Applications (TePRA)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122726505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
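The fuzzy matching idea can be illustrated with a toy example: each symbolic feature gets a fuzzy membership in [0, 1], and a candidate object matches a concept by aggregating memberships with a t-norm. The concept definition and thresholds below are our own invented stand-ins, not the paper's ontology:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside [a, d], 1 on [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def match_container(width_m, planarity):
    # assumed concept: a soil sample container is roughly 0.2-0.6 m wide
    # and has fairly planar faces (planarity score in [0, 1])
    mu_size = trapezoid(width_m, 0.1, 0.2, 0.6, 0.8)
    mu_flat = trapezoid(planarity, 0.5, 0.8, 1.0, 1.01)
    return min(mu_size, mu_flat)       # t-norm (min) aggregation

score = match_container(width_m=0.4, planarity=0.9)
```

Because the memberships are graded rather than binary, partially occluded or noisy segments still receive a nonzero match score instead of being rejected outright.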
{"title":"Human-human interaction using a behavioural control strategy","authors":"Paramin Neranon, R. Bicker","doi":"10.1109/TePRA.2013.6556361","DOIUrl":"https://doi.org/10.1109/TePRA.2013.6556361","url":null,"abstract":"This paper presents an outline of human-human interaction to establish a framework to understand how a behaviour based approach can be developed in the design of a human-robot interactive strategy. To approach the conceptual design guidelines for an interactive human-robot strategy, the mathematical model of human behaviour during transferring the compliant object to a receiver without any types of communication has been strategically analysed. The Auto Regressive Moving Average with Exogenous Input (ARMAX) system identification has been applied to identify the human arm model. A set of experiments have been designed (based on BoxBehnken), along with the influence variables affecting the human forces, which consist of mass, friction and target displacement. The estimated ARMAX models were shown to be good matching with the actual experimental data, where the best-fit percentages of human force profiles are between 88.73%-97.2%; the proposed models can then be used to present the human arm characteristics effectively.","PeriodicalId":102284,"journal":{"name":"2013 IEEE Conference on Technologies for Practical Robot Applications (TePRA)","volume":"270 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130722782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"tf: The transform library","authors":"Tully Foote","doi":"10.1109/TePRA.2013.6556373","DOIUrl":"https://doi.org/10.1109/TePRA.2013.6556373","url":null,"abstract":"The tf library was designed to provide a standard way to keep track of coordinate frames and transform data within an entire system such that individual component users can be confident that the data is in the coordinate frame that they want without requiring knowledge of all the coordinate frames in the system. During early development of the Robot Operating System (ROS), keeping track of coordinate frames was identified as a common pain point for developers. The complexity of this task made it a common place for bugs when developers improperly applied transforms to data. The problem is also a challenge due to the often distributed sources of information about transformations between different sets of coordinate frames. This paper will explain the complexity of the problem and distill the requirements. Then it will discuss the design of the tf library in relation to the requirements. A few use cases will be presented to demonstrate successful deployment of the library. And powerful extensions to the core capabilities such as being able to transform data in time as well as in space.","PeriodicalId":102284,"journal":{"name":"2013 IEEE Conference on Technologies for Practical Robot Applications (TePRA)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131278532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"“Off the grid”: Self-contained landmarks for improved indoor probabilistic localization","authors":"E. McCann, M. Medvedev, Daniel J. Brooks, Kate Saenko","doi":"10.1109/TePRA.2013.6556349","DOIUrl":"https://doi.org/10.1109/TePRA.2013.6556349","url":null,"abstract":"Indoor localization is a challenging problem, especially in dynamically changing environments and in the presence of sensor errors such as odometry drift. We present a method for robustly localizing a robot in realistic indoor environments. We improve a popular probabilistic approach called Monte Carlo localization, which estimates the robot's position using depth features of the environment and is prone to errors when the topology changes (e.g., due to a moved piece of furniture). We propose a technique that improves localization by augmenting the environment with a set of QR code landmarks. Each landmark embeds information about its 3D pose relative to the world coordinate system, the same coordinate system as the map. Our algorithm detects the landmarks in images from an RGB-D camera, uses depth information to estimates their pose relative to the robot, and incorporates the resulting position evidence in a probabilistic manner. 
We conducted experiments on an iRobot ATRV-JR robot and show that our method is more reliable in dynamic environments than the exclusively probabilistic localization method.","PeriodicalId":102284,"journal":{"name":"2013 IEEE Conference on Technologies for Practical Robot Applications (TePRA)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130850617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
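The landmark evidence step can be sketched as a particle weight update: each particle predicts the landmark observation implied by its pose and is scored by the likelihood of the innovation. The range/bearing measurement model and noise values below are illustrative assumptions, not the authors' exact formulation:

```python
import math

def weight_particles(particles, landmark_xy, meas_range, meas_bearing,
                     sigma_r=0.2, sigma_b=0.1):
    """Score particles (x, y, theta) against one observed landmark."""
    weights = []
    for x, y, theta in particles:
        dx, dy = landmark_xy[0] - x, landmark_xy[1] - y
        pred_r = math.hypot(dx, dy)              # predicted range
        pred_b = math.atan2(dy, dx) - theta      # predicted bearing
        # Gaussian likelihood of the range and bearing innovations
        w = (math.exp(-(meas_range - pred_r) ** 2 / (2 * sigma_r ** 2))
             * math.exp(-(meas_bearing - pred_b) ** 2 / (2 * sigma_b ** 2)))
        weights.append(w)
    total = sum(weights)
    return [w / total for w in weights]

# landmark known (from its embedded payload) to sit at (2, 0) on the map;
# the camera sees it 2 m straight ahead, so the particle at the origin fits
particles = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (5.0, 5.0, 0.0)]
weights = weight_particles(particles, (2.0, 0.0), 2.0, 0.0)
```

Because each landmark's world pose is self-contained, this evidence stays valid even when the surrounding depth map no longer matches the environment.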
Thane R. Hunt, Christopher J. Berthelette, Marko B. Popovic
{"title":"Linear One-to-Many (OTM) system","authors":"Thane R. Hunt, Christopher J. Berthelette, Marko B. Popovic","doi":"10.1109/TePRA.2013.6556359","DOIUrl":"https://doi.org/10.1109/TePRA.2013.6556359","url":null,"abstract":"We report on progress on the \"One-To-Many\" (OTM) concept that allows a single electric motor to store energy in the form of elastic potential energy to drive multiple (e.g. hundred) motor units or independently controlled mechanical degrees of freedom. Critical to this concept is the OTM architecture which utilizes light weight, high-speed, energy efficient, robust, and cost-effective clutches that provide positional feedback. Here, we address linear springs as elastic mediums for energy storage and bi-stable solenoid based clutches that require minimal energy to transition between states. We analyze the power transfer of the system, discuss current and future designs and suggest avenues for potential applications of this practical technology.","PeriodicalId":102284,"journal":{"name":"2013 IEEE Conference on Technologies for Practical Robot Applications (TePRA)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128508572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Developing baxter","authors":"C. Fitzgerald","doi":"10.1109/TePRA.2013.6556344","DOIUrl":"https://doi.org/10.1109/TePRA.2013.6556344","url":null,"abstract":"This paper provides insight into the process that engineers at Rethink Robotics are using to develop a user-friendly experience for Baxter, a new industrial robot with common sense. For the purpose of this paper, the user experience centers around three core areas of development: the functionality of the robot, the intuitive UI and Use Cases being tested, along with the ensuing applications for the initial release of the robot's software. Personas aid UI designers in establishing accurate assumptions about users and Use Cases enable Software Quality Assurance engineers to test Baxter on real world applications to ensure that the robot successfully performs the types of tasks that are needed by U.S. Manufacturers. The paper also includes a number of customer applications that are well suited for Baxter.","PeriodicalId":102284,"journal":{"name":"2013 IEEE Conference on Technologies for Practical Robot Applications (TePRA)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134044237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}