Proceedings of the 2005 IEEE International Conference on Robotics and Automation: Latest Publications

Clamping Tools of a Capsule for Monitoring the Gastrointestinal Tract: Problem Analysis and Preliminary Technological Activity
Proceedings of the 2005 IEEE International Conference on Robotics and Automation. Pub Date: 2005-04-18. DOI: 10.1109/ROBOT.2005.1570296
A. Menciassi, S. Gorini, A. Moglia, G. Pernorio, C. Stefanini, P. Dario
Abstract: This paper describes the development of an active clamping mechanism to be integrated into a swallowable pill for the diagnosis of the gastrointestinal (GI) tract. The clamping system allows the pill to be stopped at desired sites in the GI tract for long-term monitoring. After discussing the major technical constraints, the design of the clamping system, based on Finite Element Analysis (FEA), is illustrated together with its fabrication process. The clamping unit is actuated by Shape Memory Alloy (SMA) elements in wire and spring configurations and is driven by a dedicated electrical interface; fine tuning has been performed in order to limit the power consumption. A working prototype has been fabricated and preliminarily tested, demonstrating a grasping capability of over 40 g.
Citations: 18
A Fixed-Camera Controller for Visual Guidance of Mobile Robots via Velocity Fields
Proceedings of the 2005 IEEE International Conference on Robotics and Automation. Pub Date: 2005-04-18. DOI: 10.1109/ROBOT.2005.1570593
R. Kelly, Victor Sanchez, E. Bugarin, Humberto Rodríguez
Abstract: Navigation of a mobile robot in the configuration space requires measurement of the robot pose for the implementation of control algorithms. Calibrated fixed cameras may be utilized for such measurements. A promising approach in which calibration can be avoided is navigation in the image space. This leads to the concept of direct image-based control, where absolute measurement of the robot pose in the configuration space is obviated; only information extracted from the images, together with partial camera calibration, is used for guidance of the robot. This paper adopts this concept and applies the velocity field control philosophy to the visual guidance of wheeled mobile robots. The main feature of this approach is that the robot task is encoded by means of a specified velocity field in image space.
Citations: 5
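As a rough illustration of the velocity-field idea summarized above, the sketch below commands a unicycle-type robot to follow an image-space velocity field. It is not the paper's controller: the convergent field, the gains, and the assumption that the robot's image position and heading are available from the fixed camera (with partial calibration) are all illustrative.

```python
import numpy as np

def velocity_field(p, goal):
    """Illustrative image-space velocity field: converge toward a goal pixel.
    p, goal: 2D image coordinates (pixels). Returns a desired pixel velocity."""
    k = 0.5                                   # convergence gain (assumed)
    return k * (goal - p)

def unicycle_command(p, theta, goal):
    """Map the desired image-space velocity to (v, omega) for a unicycle robot.
    theta: robot heading expressed in the image frame (assumed measurable)."""
    v_des = velocity_field(p, goal)
    heading = np.array([np.cos(theta), np.sin(theta)])
    v = float(heading @ v_des)                # forward speed: project field onto heading
    ang_des = np.arctan2(v_des[1], v_des[0])
    omega = 2.0 * np.arctan2(np.sin(ang_des - theta),
                             np.cos(ang_des - theta))  # turn toward the field direction
    return v, omega

# Example: robot at pixel (100, 80) with heading 0 rad, goal at pixel (320, 240)
print(unicycle_command(np.array([100.0, 80.0]), 0.0, np.array([320.0, 240.0])))
```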
Variable Baseline Stereo Tracking Vision System Using High-Speed Linear Slider
Proceedings of the 2005 IEEE International Conference on Robotics and Automation. Pub Date: 2005-04-18. DOI: 10.1109/ROBOT.2005.1570337
Y. Nakabo, T. Mukai, Yusuke Hattori, Y. Takeuchi, N. Ohnishi
Abstract: In this research, we have developed a variable baseline stereo tracking vision system using a 1 ms high-speed vision system and a high-speed linear slider. The vision system tracks a moving object, captures a stereo image pair and estimates its distance in only 1 ms. It also computes stereo matching and estimates the object's 3D shape every 2 ms. The high-speed linear slider moves the stereo cameras horizontally and independently so that the baseline length can be changed. We also propose a baseline control method that adapts to the distance to the object. When the object appears large in the image, the system reconstructs its 3D shape; when the object appears small, the system estimates only the 3D position of its center of gravity. The experimental results show that the proposed baseline control method improves depth accuracy and that the system can track and estimate the 3D shapes of fast moving objects.
Citations: 28
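The benefit of a variable baseline follows from the standard rectified-stereo relation Z = f·B/d: widening the baseline B increases the disparity d for a given depth Z, improving depth resolution. The sketch below shows one simple way a baseline controller could target a nominal disparity; it assumes rectified pinhole cameras, and the target disparity and slider limits are illustrative, not values from the paper.

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Standard rectified-stereo relation: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def adapt_baseline(f_px, current_depth_m, target_disparity_px,
                   b_min=0.05, b_max=0.50):
    """Choose a baseline that keeps the tracked object's disparity near a target
    value, so depth resolution stays roughly constant as the object moves.
    b_min/b_max are illustrative slider travel bounds."""
    b = target_disparity_px * current_depth_m / f_px
    return max(b_min, min(b_max, b))

# Example: 800 px focal length, object about 2 m away, aim for about 40 px disparity
b = adapt_baseline(800.0, 2.0, 40.0)
print(b, depth_from_disparity(800.0, b, 40.0))   # -> 0.1 m baseline, 2.0 m depth
```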
Autonomous Modular Optical Underwater Robot (AMOUR) Design, Prototype and Feasibility Study
Proceedings of the 2005 IEEE International Conference on Robotics and Automation. Pub Date: 2005-04-18. DOI: 10.1109/ROBOT.2005.1570343
I. Vasilescu, Paulina Varshavskaya, K. Kotay, D. Rus
Abstract: We propose a novel modular underwater robot which can self-reconfigure by stacking and unstacking its component modules. Applications for this robot include underwater monitoring, exploration, and surveillance. Our current prototype is a single module which contains several subsystems that will later be segregated into different modules. This robot functions as a testbed for the subsystems needed in the modular implementation. We describe the module design and discuss the propulsion, docking, and optical ranging subsystems in detail. Experimental results demonstrate depth control, linear motion, target module detection, and docking capabilities.
Citations: 64
Active Task-Space Sensing and Localization of Autonomous Vehicles
Proceedings of the 2005 IEEE International Conference on Robotics and Automation. Pub Date: 2005-04-18. DOI: 10.1109/ROBOT.2005.1570695
G. Nejat, B. Benhabib, A. Membre
Abstract: In this paper, an active line-of-sight-sensing (LOS) methodology is proposed for the docking of autonomous vehicles/robotic end-effectors. The novelty of the overall system is its applicability to cases that do not allow for direct proximity measurement of the vehicle's pose (position and orientation). In such instances, a guidance-based technique must be employed to move the vehicle to its desired pose using corrective actions at the final stages of its motion. The objective of the proposed guidance method is, thus, to minimize the systematic errors of the vehicle accumulated after a long-range motion, while allowing it to converge within the random noise limits via a three-step procedure: active LOS realignment, determination of the new (actual) location of the vehicle, and implementation of a corrective action. The proposed system was successfully tested in simulation for a three-degree-of-freedom (DOF) planar robotic platform and via experiments.
Citations: 3
C-space Subdivision and Integration in Feature-Sensitive Motion Planning
Proceedings of the 2005 IEEE International Conference on Robotics and Automation. Pub Date: 2005-04-18. DOI: 10.1109/ROBOT.2005.1570589
M. Morales, Lydia Tapia, R. Pearce, S. Rodríguez, N. Amato
Abstract: There are many randomized motion planning techniques, but it is often difficult to determine which planning method will best solve a problem. Planners have their own strengths and weaknesses, and each one is best suited to a specific type of problem. In previous work, we proposed a meta-planner that, through analysis of the problem features, subdivides the instance into regions and determines which planner to apply in each region. The results obtained with our prototype system were very promising even though it utilized simplistic strategies for all components. Even so, we did determine that the strategies for problem subdivision and for combination of partial regional solutions have a crucial impact on performance. In this paper, we propose new methods for these steps to improve the performance of the meta-planner. For problem subdivision, we propose two new methods: a method based on 'gaps' and a method based on information theory. For combining partial solutions, we propose two new methods that concentrate on neighboring areas of the regional solutions. We present results that show the performance gain achieved by utilizing these new strategies.
Citations: 32
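To make the meta-planner concept concrete, the following sketch assigns a planner to each C-space region based on a simple sampled feature (fraction of collision-free samples). It is only a minimal illustration of feature-sensitive planner selection, not the paper's method: the feature, thresholds, and planner choices are assumptions.

```python
import random

def classify_region(free_fraction):
    """Map a simple region feature to a planner choice (thresholds illustrative)."""
    if free_fraction > 0.8:
        return "PRM-uniform"    # mostly free space: uniform sampling suffices
    elif free_fraction > 0.3:
        return "OBPRM"          # moderately cluttered: sample near obstacles
    else:
        return "RRT-dense"      # narrow passages: dense tree growth

def meta_plan(regions):
    """regions: list of (region_id, sample_points, is_free collision test).
    Returns the planner assigned to each region based on its sampled feature."""
    assignment = {}
    for region_id, points, is_free in regions:
        free_fraction = sum(1 for q in points if is_free(q)) / len(points)
        assignment[region_id] = classify_region(free_fraction)
    return assignment

# Toy example: region "open" is obstacle-free, region "cluttered" is blocked for x > 0.5
pts = lambda n: [(random.random(), random.random()) for _ in range(n)]
regions = [("open",      pts(200), lambda q: True),
           ("cluttered", pts(200), lambda q: q[0] <= 0.5)]
print(meta_plan(regions))
```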
Poincaré-Map-Based Reinforcement Learning For Biped Walking
Proceedings of the 2005 IEEE International Conference on Robotics and Automation. Pub Date: 2005-04-18. DOI: 10.1109/ROBOT.2005.1570469
J. Morimoto, J. Nakanishi, G. Endo, G. Cheng, C. Atkeson, G. Zeglin
Abstract: We propose a model-based reinforcement learning algorithm for biped walking in which the robot learns to appropriately modulate an observed walking pattern. Via-points are detected from the observed walking trajectories using the minimum jerk criterion. The learning algorithm modulates the via-points as control actions to improve the walking trajectories. This decision is based on a learned model of the Poincaré map of the periodic walking pattern. The model maps from a state in the single support phase and the control actions to a state in the next single support phase. We applied this approach to both a simulated robot model and an actual biped robot, and we show that successful walking policies are acquired.
Citations: 58
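The core object here is a learned step-to-step (Poincaré) map: from the state at one single-support section and the via-point action to the state at the next section. The sketch below fits a linear model of that map by least squares and chooses actions greedily toward a desired fixed point. This is a deliberately simplified stand-in for the paper's reinforcement learning algorithm; the linear model, greedy policy, and toy dynamics are all assumptions.

```python
import numpy as np

class PoincareMapModel:
    """Fit x_{k+1} ~= A x_k + B u_k + c from observed step-to-step transitions,
    where x is the reduced state at the single-support section and u is the
    via-point modulation (a minimal least-squares sketch)."""
    def fit(self, X, U, X_next):
        Z = np.hstack([X, U, np.ones((len(X), 1))])
        W, *_ = np.linalg.lstsq(Z, X_next, rcond=None)
        self.A = W[:X.shape[1]].T
        self.B = W[X.shape[1]:-1].T
        self.c = W[-1]
        return self

    def predict(self, x, u):
        return self.A @ x + self.B @ u + self.c

def greedy_action(model, x, x_star, candidate_us):
    """Pick the via-point modulation whose predicted next section state is
    closest to the desired fixed point x_star (one-step greedy policy)."""
    return min(candidate_us,
               key=lambda u: np.linalg.norm(model.predict(x, u) - x_star))

# Toy data: 1-D state and action, true return map x' = 0.8 x + 0.5 u
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1)); U = rng.normal(size=(200, 1))
model = PoincareMapModel().fit(X, U, 0.8 * X + 0.5 * U)
print(greedy_action(model, np.array([1.0]), np.array([0.0]),
                    [np.array([u]) for u in np.linspace(-2, 2, 41)]))  # ~ -1.6
```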
Goal-Directed Imitation in a Humanoid Robot
Proceedings of the 2005 IEEE International Conference on Robotics and Automation. Pub Date: 2005-04-18. DOI: 10.1109/ROBOT.2005.1570135
S. Calinon, F. Guenter, A. Billard
Abstract: Our work aims at developing a robust discriminant controller for robot programming by demonstration. It addresses two core issues of imitation learning, namely "what to imitate" and "how to imitate". This paper presents a method by which a robot extracts the goals of a demonstrated task and determines the imitation strategy that best satisfies these goals. The method is validated on a humanoid platform, taking inspiration from an influential experiment in developmental psychology.
Citations: 104
An Object Tracking and Visual Servoing System for the Visually Impaired
Proceedings of the 2005 IEEE International Conference on Robotics and Automation. Pub Date: 2005-04-18. DOI: 10.1109/ROBOT.2005.1570653
Duane J. Jacques, R. Rodrigo, K. McIsaac, J. Samarabandu
Abstract: In this work, we have taken the first step towards the creation of a computerized seeing-eye guide dog. The system we present extends the development of assistive technology for the visually impaired into a new area: object tracking and visual servoing. The system uses computer vision to provide a kind of surrogate sight for the human user, sensing information from the environment and communicating it through haptic signalling. Our proof-of-concept prototype is a low-cost wearable system which uses a colour camera to analyze a scene and recognize a desired object, then generates tactile cues that steer the wearer's hand towards the object. We have validated the system in trials with random users in an unstructured environment.
Citations: 9
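The guidance step this abstract describes reduces to mapping the image-space offset between the detected hand and the target object to a discrete tactile cue. The sketch below shows one minimal way to do that; the cue vocabulary, tolerance, and detections-as-pixel-coordinates interface are illustrative assumptions, not details from the paper.

```python
def haptic_cue(hand_px, target_px, reach_tol=20):
    """Map the image-space offset between the hand and the target object to a
    simple directional cue (thresholds and cue names are illustrative)."""
    dx = target_px[0] - hand_px[0]
    dy = target_px[1] - hand_px[1]
    if abs(dx) < reach_tol and abs(dy) < reach_tol:
        return "grasp"                       # hand is over the object
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"        # image y grows downward

# Example: hand detected at (120, 200), object detected at (300, 180)
print(haptic_cue((120, 200), (300, 180)))    # -> "right"
```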
Information Driven Coordinated Air-Ground Proactive Sensing
Proceedings of the 2005 IEEE International Conference on Robotics and Automation. Pub Date: 2005-04-18. DOI: 10.1109/ROBOT.2005.1570441
B. Grocholsky, Rahul Swaminathan, J. Keller, Vijay R. Kumar, George J. Pappas
Abstract: This paper concerns the problem of actively searching for and localizing ground features with a coordinated team of air and ground robotic sensor platforms. The approach builds on well-known Decentralized Data Fusion (DDF) methodology. In particular, it brings together established representations developed for identification and linearized estimation problems to jointly address feature detection and localization. This provides transparent and scalable integration of sensor information from air and ground platforms. As in previous studies, an information-theoretic utility measure and a local control strategy drive the robots to uncertainty-reducing team configurations. Complementary characteristics in terms of coverage and accuracy are revealed through analysis of the observation uncertainty for air and ground on-board cameras. Implementation results for a detection and localization example indicate the ability of this approach to scalably and efficiently realize such collaborative potential.
Citations: 41
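For Gaussian estimates maintained in information (inverse-covariance) form, as is common in DDF, an information-theoretic utility of a candidate observation is the expected entropy reduction, which reduces to a log-determinant difference. The sketch below computes that gain and greedily picks a sensing action; the linearized observation models and the specific air/ground noise numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np

def entropy_gain(Y_prior, H, R):
    """Expected information gain of one linear-Gaussian observation:
        I = 0.5 * ( logdet(Y_prior + H^T R^-1 H) - logdet(Y_prior) )
    Y_prior: prior information matrix of the feature estimate.
    H: linearized observation Jacobian, R: observation noise covariance."""
    Y_post = Y_prior + H.T @ np.linalg.inv(R) @ H
    return 0.5 * (np.linalg.slogdet(Y_post)[1] - np.linalg.slogdet(Y_prior)[1])

def best_vantage(Y_prior, candidate_sensors):
    """Greedy utility-driven choice among candidate (H, R) sensing actions,
    e.g. an air camera view versus a ground camera view."""
    return max(range(len(candidate_sensors)),
               key=lambda i: entropy_gain(Y_prior, *candidate_sensors[i]))

# Example: 2-D feature position; the 'air' view sees both axes coarsely,
# the 'ground' view sees one axis accurately (numbers are illustrative)
Y0 = np.diag([0.1, 0.1])
air    = (np.eye(2),              np.diag([4.0, 4.0]))
ground = (np.array([[1.0, 0.0]]), np.array([[0.25]]))
print(best_vantage(Y0, [air, ground]))   # index of the more informative view
```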