Proceedings of the 2005 IEEE International Conference on Robotics and Automation: Latest Publications

A Real-Time Haptic/Graphic Demonstration of how Error Augmentation can Enhance Learning
Proceedings of the 2005 IEEE International Conference on Robotics and Automation Pub Date : 2005-04-18 DOI: 10.1109/ROBOT.2005.1570798
Y. Wei, J. Patton, P. Bajaj, R. Scheidt
{"title":"A Real-Time Haptic/Graphic Demonstration of how Error Augmentation can Enhance Learning","authors":"Y. Wei, J. Patton, P. Bajaj, R. Scheidt","doi":"10.1109/ROBOT.2005.1570798","DOIUrl":"https://doi.org/10.1109/ROBOT.2005.1570798","url":null,"abstract":"We developed a real-time controller for a 2 degree-of-freedom robotic system using xPC Target. This system was used to investigate how different methods of performance error feedback can lead to faster and more complete motor learning in individuals asked to compensate for a novel visuo-motor transformation (a 30 degree rotation). Four groups of human subjects were asked to reach with their unseen arm to visual targets surrounding a central starting location. A cursor tracking hand motion was provided during each reach. For one group of subjects, deviations from the “ideal” compensatory hand movement (i.e. trajectory errors) were amplified with a gain of 2 whereas another group was provided visual feedback with a gain of 3.1. Yet another group was provided cursor feedback wherein the cursor was rotated by an additional (constant) offset angle. We compared the rates at which the hand paths converged to the steady-state trajectories. Our results demonstrate that error-augmentation can improve the rate and extent of motor learning of visuomotor rotations in healthy subjects. Furthermore, our results suggest that both error amplification and offset-augmentation may facilitate neuro-rehabilitation strategies that restore function in brain injuries such as stroke.","PeriodicalId":350878,"journal":{"name":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131412403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 55
Analysis and Removal of Artifacts in 3-D LADAR Data
Proceedings of the 2005 IEEE International Conference on Robotics and Automation Pub Date : 2005-04-18 DOI: 10.1109/ROBOT.2005.1570440
J. Tuley, N. Vandapel, M. Hebert
{"title":"Analysis and Removal of Artifacts in 3-D LADAR Data","authors":"J. Tuley, N. Vandapel, M. Hebert","doi":"10.1109/ROBOT.2005.1570440","DOIUrl":"https://doi.org/10.1109/ROBOT.2005.1570440","url":null,"abstract":"Errors in laser based range measurements can be divided into two categories: intrinsic sensor errors (range drift with temperature, systematic and random errors), and errors due to the interaction of the laser beam with the environment. The former have traditionally received attention and can be modeled. The latter in contrast have long been observed but not well characterized. We propose to do so in this paper. In addition, we present a sensor independent method to remove such artifacts. The objective is to improve the overall quality of 3-D scene reconstruction to perform terrain classification of scenes with vegetation.","PeriodicalId":350878,"journal":{"name":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127312631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 75
Dynamic-Domain RRTs: Efficient Exploration by Controlling the Sampling Domain
Proceedings of the 2005 IEEE International Conference on Robotics and Automation Pub Date : 2005-04-18 DOI: 10.1109/ROBOT.2005.1570709
A. Yershova, L. Jaillet, T. Siméon, S. LaValle
{"title":"Dynamic-Domain RRTs: Efficient Exploration by Controlling the Sampling Domain","authors":"A. Yershova, L. Jaillet, T. Siméon, S. LaValle","doi":"10.1109/ROBOT.2005.1570709","DOIUrl":"https://doi.org/10.1109/ROBOT.2005.1570709","url":null,"abstract":"Sampling-based planners have solved difficult problems in many applications of motion planning in recent years. In particular, techniques based on the Rapidly-exploring Random Trees (RRTs) have generated highly successful single-query planners. Even though RRTs work well on many problems, they have weaknesses which cause them to explore slowly when the sampling domain is not well adapted to the problem. In this paper we characterize these issues and propose a general framework for minimizing their effect. We develop and implement a simple new planner which shows significant improvement over existing RRT-based planners. In the worst cases, the performance appears to be only slightly worse in comparison to the original RRT, and for many problems it performs orders of magnitude better.","PeriodicalId":350878,"journal":{"name":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","volume":"252 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115613623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 303
3D Motion Planning for Image-Based Visual Servoing Tasks
Proceedings of the 2005 IEEE International Conference on Robotics and Automation Pub Date : 2005-04-18 DOI: 10.1109/ROBOT.2005.1570435
B. Allotta, D. Fioravanti
{"title":"3D Motion Planning for Image-Based Visual Servoing Tasks","authors":"B. Allotta, D. Fioravanti","doi":"10.1109/ROBOT.2005.1570435","DOIUrl":"https://doi.org/10.1109/ROBOT.2005.1570435","url":null,"abstract":"The execution of positioning tasks by using image-based visual servoing can be easier if a trajectory planning mechanism exists. This paper deals with the problem of generating image plane trajectories (a trajectory is made of a path plus a time law) for tracked points in an eye-in-hand system which has to be positioned with respect to a fixed object. The generated image plane paths must be feasible i.e. they must be compliant with rigid body motion of the camera with respect to the object so as to avoid image jacobian singularities and local minima problems. In addition, the image plane trajectories must generate camera velocity screws which are smooth and within the allowed bounds of the robot. We show that a scaled 3D motion planning algorithm can be devised in order to generate feasible image plane trajectories.","PeriodicalId":350878,"journal":{"name":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115748271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 37
The Role of Motion Information in Learning Human-Robot Joint Attention
Proceedings of the 2005 IEEE International Conference on Robotics and Automation Pub Date : 2005-04-18 DOI: 10.1109/ROBOT.2005.1570418
Y. Nagai
{"title":"The Role of Motion Information in Learning Human-Robot Joint Attention","authors":"Y. Nagai","doi":"10.1109/ROBOT.2005.1570418","DOIUrl":"https://doi.org/10.1109/ROBOT.2005.1570418","url":null,"abstract":"To realize natural human-robot interactions and investigate the developmental mechanism of human communication, an effective approach is to construct models by which a robot imitates cognitive functions of humans. Focusing on the knowledge that humans utilize motion information of others’ action, this paper presents a learning model that enables a robot to acquire the ability to establish joint attention with a human by utilizing both static and motion information. As the motion information, the robot uses the optical flow detected when observing a human who is shifting his/her gaze from looking at the robot to looking at another object. As the static information, it extracts the edge image of the human face when he/she is gazing at the object. The static and motion information have complementary characteristics. The former gives the exact direction of gaze, even though it is difficult to interpret. On the other hand, the latter provides a rough but easily understandable relationship between the direction of gaze shift and motor output to follow the gaze. The learning model utilizing both static and motion information acquired from observing a human’s gaze shift enables the robot to efficiently acquire joint attention ability and to naturally interact with the human. Experimental results show that the motion information accelerates the learning of joint attention while the static information improves the task performance. The results are discussed in terms of analogy with cognitive development in human infants.","PeriodicalId":350878,"journal":{"name":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","volume":"878 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124163340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 28
Deployment Strategy for Mobile Robots with Energy and Timing Constraints
Proceedings of the 2005 IEEE International Conference on Robotics and Automation Pub Date : 2005-04-18 DOI: 10.1109/ROBOT.2005.1570540
Yongguo Mei, Yung-Hsiang Lu, Y.C. Hu, C.S.G. Lee
{"title":"Deployment Strategy for Mobile Robots with Energy and Timing Constraints","authors":"Yongguo Mei, Yung-Hsiang Lu, Y.C. Hu, C.S.G. Lee","doi":"10.1109/ROBOT.2005.1570540","DOIUrl":"https://doi.org/10.1109/ROBOT.2005.1570540","url":null,"abstract":"Mobile robots usually carry limited energy and have to accomplish their tasks before deadlines. Examples of these tasks include search and rescue, landmine detection, and carpet cleaning. Many researchers have been studying control, sensing, and coordination for these tasks. However, one major problem has not been fully addressed: the initial deployment of mobile robots. The deployment problem considers the number of robots needed and their initial locations. In this paper, we present a solution for the deployment problem when robots have limited energy and time to collectively accomplish coverage tasks. Simulation results show that our method uses 26% fewer robots comparing with two heuristics for covering the same size of area.","PeriodicalId":350878,"journal":{"name":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124190038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 46
Vision-based Control for Car Platooning using Homography Decomposition
Proceedings of the 2005 IEEE International Conference on Robotics and Automation Pub Date : 2005-04-18 DOI: 10.1109/ROBOT.2005.1570433
Selim Benhimane, E. Malis, P. Rives, J. Azinheira
{"title":"Vision-based Control for Car Platooning using Homography Decomposition","authors":"Selim Benhimane, E. Malis, P. Rives, J. Azinheira","doi":"10.1109/ROBOT.2005.1570433","DOIUrl":"https://doi.org/10.1109/ROBOT.2005.1570433","url":null,"abstract":"In this paper, we present a complete system for car platooning using visual tracking. The visual tracking is achieved by directly estimating the projective transformation (in our case a homography) between a selected reference template attached to the leading vehicle and the corresponding area in the current image. The relative position and orientation of the servoed car with regard to the leading one is computed by decomposing the homography. The control objective is stated in terms of path following task in order to cope with the non-holonomic constraints of the vehicles.","PeriodicalId":350878,"journal":{"name":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114460933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 92
Multi-aided Inertial Navigation for Ground Vehicles in Outdoor Uneven Environments
Proceedings of the 2005 IEEE International Conference on Robotics and Automation Pub Date : 2005-04-18 DOI: 10.1109/ROBOT.2005.1570846
Bingbing Liu, M. Adams, J. Guzman
{"title":"Multi-aided Inertial Navigation for Ground Vehicles in Outdoor Uneven Environments","authors":"Bingbing Liu, M. Adams, J. Guzman","doi":"10.1109/ROBOT.2005.1570846","DOIUrl":"https://doi.org/10.1109/ROBOT.2005.1570846","url":null,"abstract":"A good localization ability is essential for an autonomous vehicle to perform any functions. For ground vehicles operating in outdoor, uneven and unstructured environments, the localization task becomes much more difficult than in indoor environments. In urban or forest environments where high buildings or tall trees exist, GPS sensors also fail easily. The main contribution of this paper is that a multi-aided inertial based localization system has been developed to solve the outdoor localization problem. The multi-aiding information is from odometry, an accurate gyroscope and vehicle constraints. Contrary to previous work, a kinematic model is developed to estimate the inertial sensor’s lateral velocity. This is particularly important when cornering at speed, and side slip occurs. Experimental results are presented of this system which is able to provide a vehicle’s position, velocity and attitude estimation accurately, even when the testing vehicle runs in outdoor uneven environments.","PeriodicalId":350878,"journal":{"name":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","volume":"300 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114481734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 49
Dynamics Model of Paramecium Galvanotaxis for Microrobotic Application
Proceedings of the 2005 IEEE International Conference on Robotics and Automation Pub Date : 2005-04-18 DOI: 10.1109/ROBOT.2005.1570286
N. Ogawa, H. Oku, K. Hashimoto, M. Ishikawa
{"title":"Dynamics Model of Paramecium Galvanotaxis for Microrobotic Application","authors":"N. Ogawa, H. Oku, K. Hashimoto, M. Ishikawa","doi":"10.1109/ROBOT.2005.1570286","DOIUrl":"https://doi.org/10.1109/ROBOT.2005.1570286","url":null,"abstract":"We propose a dynamics model of galvanotaxis (locomotor response to electrical stimulus) of the protozoan Paramecium. Our purpose is to utilize microorganisms as micro-robots by using galvanotaxis. For precise and advanced actuation, it is necessary to describe the dynamics of galvanotaxis in a mathematical and quantitative manner in the framework of robotics. However, until now the explanation of Paramecium galvanotaxis in previous works has remained only qualitative. In this paper, we construct a novel model of galvanotaxis as a minimal step to utilizing Paramecium cells as micro-robots. Numerical experiments for our model demonstrate realistic behaviors, such as U-turn motions, like those of real cells.","PeriodicalId":350878,"journal":{"name":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114777935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
A Hierarchical Multiple-Target Tracking Algorithm for Sensor Networks
Proceedings of the 2005 IEEE International Conference on Robotics and Automation Pub Date : 2005-04-18 DOI: 10.1109/ROBOT.2005.1570439
Songhwai Oh, L. Schenato, S. Sastry
{"title":"A Hierarchical Multiple-Target Tracking Algorithm for Sensor Networks","authors":"Songhwai Oh, L. Schenato, S. Sastry","doi":"10.1109/ROBOT.2005.1570439","DOIUrl":"https://doi.org/10.1109/ROBOT.2005.1570439","url":null,"abstract":"Multiple-target tracking is a canonical application of sensor networks as it exhibits different aspects of sensor networks such as event detection, sensor information fusion, multi-hop communication, sensor management and decision making. The task of tracking multiple objects in a sensor network is challenging due to constraints on a sensor node such as short communication and sensing ranges, a limited amount of memory and limited computational power. In addition, since a sensor network surveillance system needs to operate autonomously without human operators, it requires an autonomous tracking algorithm which can track an unknown number of targets. In this paper, we develop a scalable hierarchical multiple-target tracking algorithm that is autonomous and robust against transmission failures, communication delays and sensor localization error.","PeriodicalId":350878,"journal":{"name":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114995025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 100