2008 IEEE International Conference on Robotics and Automation: Latest Publications

Removal of adherent noises from image sequences by spatio-temporal image processing
2008 IEEE International Conference on Robotics and Automation | Pub Date: 2008-05-19 | DOI: 10.1109/ROBOT.2008.4543570
Authors: A. Yamashita, Isao Fukuchi, T. Kaneko, K. Miura
Abstract: This paper describes a method for removing adherent noises from image sequences. In outdoor environments, scenes captured by a camera are often degraded by adherent noises such as waterdrops on the surface of the lens-protecting glass of the camera. To solve this problem, our method takes advantage of image sequences captured with a moving camera. The method builds a spatio-temporal image and extracts the regions of adherent noises by examining the difference in track slopes, in cross-section images, between adherent noises and other objects. Finally, noise regions are eliminated by replacing them with image data from the corresponding object regions. Experimental results show the effectiveness of our method.
Citations: 26
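The abstract's pipeline (spatio-temporal cross-sections, then replacement of noise regions) can be sketched roughly with NumPy. This is an illustrative sketch, not the authors' implementation: the noise mask is assumed to be given (the paper derives it from differences in track slopes in the cross-section images), and replacement uses a simple per-pixel temporal median.

```python
import numpy as np

def xt_slice(frames, row):
    """Spatio-temporal cross-section: one image row stacked over time (t x width)."""
    return np.stack([f[row, :] for f in frames], axis=0)

def remove_adherent_noise(frames, noise_mask):
    """Replace pixels flagged as adherent noise with the per-pixel temporal
    median of the sequence: with a moving camera, the scene behind a static
    waterdrop is revealed in other frames.  `noise_mask` is a boolean HxW
    array; here it is simply given, whereas the paper derives it from
    track-slope differences in the cross-section images."""
    stack = np.stack(frames, axis=0).astype(float)   # t x H x W, grayscale
    background = np.median(stack, axis=0)
    cleaned = []
    for f in frames:
        g = f.astype(float).copy()
        g[noise_mask] = background[noise_mask]
        cleaned.append(g)
    return cleaned
```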
Pose detection of 3-D objects using S2-correlated images and discrete spherical harmonic transforms
2008 IEEE International Conference on Robotics and Automation | Pub Date: 2008-05-19 | DOI: 10.1109/ROBOT.2008.4543334
Authors: R. Hoover, A. A. Maciejewski, R. Roberts
Abstract: The pose detection of three-dimensional (3-D) objects from two-dimensional (2-D) images is an important issue in computer vision and robotics applications. Specific examples include automated assembly, automated part inspection, robotic welding, and human-robot interaction, among others. Eigendecomposition is a common technique for this task and has been applied to sets of correlated images. Unfortunately, for the pose detection of 3-D objects, a very large number of correlated images must be captured from many different orientations, so the eigendecomposition of this large image set is computationally expensive. In this work, we present a method for capturing images of objects from many locations by sampling S2 appropriately. Using this spherical sampling pattern, the computational burden of the eigendecomposition can be reduced by using the spherical harmonic transform to "condense" information arising from the correlation in S2. We propose a computationally efficient algorithm for approximating the eigendecomposition based on the spherical harmonic transform analysis. Experimental results compare the algorithm against the true eigendecomposition and quantify the computational savings.
Citations: 15
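A rough sketch of the eigendecomposition step described above, assuming grayscale NumPy images captured at orientations sampled over S2. The spherical-harmonic "condensation" that makes the paper's method efficient is not reproduced; a plain truncated SVD stands in for it, and all function names are illustrative.

```python
import numpy as np

def principal_images(images, k):
    """Eigendecomposition of a correlated image set: stack vectorized views,
    subtract the pixel-wise mean, and keep the top-k eigenimages plus the
    eigenspace coordinates of each training view."""
    X = np.stack([im.ravel().astype(float) for im in images], axis=1)  # pixels x views
    mean = X.mean(axis=1, keepdims=True)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    coords = S[:k, None] * Vt[:k]            # k x views
    return U[:, :k], mean, coords

def pose_match(query, eigenimages, mean, coords):
    """Project a query image into the eigenspace and return the index of the
    closest training view (i.e. the best-matching sampled orientation)."""
    q = eigenimages.T @ (query.ravel().astype(float)[:, None] - mean)
    return int(np.argmin(np.linalg.norm(coords - q, axis=0)))
```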
Two-channel-based voice activity detection for humanoid robots in noisy home environments
2008 IEEE International Conference on Robotics and Automation | Pub Date: 2008-05-19 | DOI: 10.1109/ROBOT.2008.4543745
Authors: Hyun-Don Kim, Kazunori Komatani, T. Ogata, Hiroshi G. Okuno
Abstract: The purpose of this research is to accurately classify speech signals originating from the front, even in noisy home environments. This ability helps robots improve speech recognition and spot keywords. We therefore developed a new voice activity detection (VAD) method based on the complex spectrum circle centroid (CSCC) method. It classifies the speech signals received at the front of two microphones by comparing the spectral energy of the observed signals with that of the target signals estimated by CSCC. It works in real time, without training filter coefficients beforehand, even in noisy environments (SNR > 0 dB), and can cope with speech noise generated by audio-visual equipment such as televisions and audio devices. Since the CSCC method requires the directions of the noise signals, we also developed a sound source localization system that integrates cross-power spectrum phase (CSP) analysis with an expectation-maximization (EM) algorithm. This system was demonstrated to enable a robot to cope with multiple sound sources using two microphones.
Citations: 8
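A toy two-microphone VAD in the spirit of the abstract, assuming two time-aligned NumPy signals of equal length. It only compares the spectral energy of the in-phase sum against the inter-channel difference to flag frontal frames; the paper's CSCC estimate of the target spectrum and its CSP/EM localization are not reproduced.

```python
import numpy as np

def frame_spectra(x, frame_len=512, hop=256):
    """Magnitude spectra of overlapping Hann-windowed frames (assumes len(x) >= frame_len)."""
    win = np.hanning(frame_len)
    n = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop:i * hop + frame_len] * win for i in range(n)])
    return np.abs(np.fft.rfft(frames, axis=1))

def two_channel_vad(left, right, energy_ratio=2.0):
    """Toy frontal-speech detector: a source directly in front arrives in
    phase at both microphones, so the in-phase sum reinforces it while the
    channel difference cancels it.  Frames whose summed energy exceeds
    `energy_ratio` times the difference energy are flagged as frontal speech."""
    e_sum = (frame_spectra(left + right) ** 2).sum(axis=1)
    e_diff = (frame_spectra(left - right) ** 2).sum(axis=1) + 1e-12
    return e_sum > energy_ratio * e_diff
```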
Information value-driven approach to path clearance with multiple scout robots
2008 IEEE International Conference on Robotics and Automation | Pub Date: 2008-05-19 | DOI: 10.1109/ROBOT.2008.4543612
Authors: M. Likhachev, A. Stentz
Abstract: In the path clearance problem, the robot needs to reach its goal as quickly as possible without being detected by enemies. The robot does not know the precise locations of enemies, but has a list of their possible locations. These locations can be sensed, and the robot can go through them if no enemy is present, or must take a detour otherwise. We have previously developed an approach to the path clearance problem in which the robot itself senses possible enemy locations. In this paper we investigate path clearance when the robot can use multiple scout robots to sense the possible enemy locations. This becomes a high-dimensional planning-under-uncertainty problem. We propose an efficient and scalable approach to it. While the approach requires centralized planning, it scales to very large environments and to a large number of scouts, and it allows the scouts to be heterogeneous. The experimental results show the benefits of using our approach when multiple scout robots are available.
Citations: 3
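The information-value trade-off behind the approach can be illustrated with a toy single-location calculation; the cost names below are hypothetical, and the paper itself plans over many possible enemy locations and multiple heterogeneous scouts.

```python
def expected_cost_direct(p_enemy, cost_through, cost_detour_after_sensing):
    """Robot drives to the possible enemy location, senses it, and detours
    only if an enemy turns out to be present."""
    return (1 - p_enemy) * cost_through + p_enemy * cost_detour_after_sensing

def value_of_scouting(p_enemy, cost_through, cost_detour_after_sensing,
                      cost_detour_upfront, scout_overhead):
    """Toy 'information value' of sending a scout ahead: with a scout, the
    robot learns the location's status before committing and pays the cheaper
    informed option; without one, it must either commit to the detour up
    front or risk sensing the enemy itself.  Positive values favour
    dispatching the scout."""
    without_scout = min(cost_detour_upfront,
                        expected_cost_direct(p_enemy, cost_through,
                                             cost_detour_after_sensing))
    with_scout = ((1 - p_enemy) * cost_through
                  + p_enemy * cost_detour_upfront + scout_overhead)
    return without_scout - with_scout
```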
A constrained optimization approach to virtual fixtures for multi-handed tasks
2008 IEEE International Conference on Robotics and Automation | Pub Date: 2008-05-19 | DOI: 10.1109/ROBOT.2008.4543730
Authors: A. Kapoor, R. Taylor
Abstract: In this work, we extend the concept of constrained motion control of robots to surgical tasks that require multiple robots. We present virtual fixtures that guide the motion of multiple robots so that spatial and temporal relationships are maintained between them. At the same time, our algorithm keeps the surgeon in the control loop. Moreover, we show that these virtual fixtures allow bimanual tasks to be completed using input for a single robot; that is, the user needs only one hand to cooperatively control multiple robots. This reduces the cognitive load on the surgeon and makes a multiple-robot setup for surgery more practical. We demonstrate this architecture on the example of manipulating a surgical knot to position it at a target point. A significant improvement in accuracy is observed when bimanual virtual fixture assistance is provided. Moreover, the accuracy when using a single input from the user is similar to the accuracy obtained with bimanual assistance.
Citations: 34
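A minimal sketch of the constrained-optimization view of virtual fixtures, assuming linearized guidance constraints of the form A dx <= b and using SciPy's SLSQP solver; this is not the authors' formulation or solver, and the example constraint is made up.

```python
import numpy as np
from scipy.optimize import minimize

def constrained_motion(dx_desired, A, b, W=None):
    """Find the incremental tool motion dx closest (in a weighted
    least-squares sense) to the surgeon's commanded motion dx_desired,
    subject to linearized virtual-fixture constraints A @ dx <= b
    (e.g. 'stay on this side of a plane', 'keep two tools within a given
    relative offset')."""
    n = len(dx_desired)
    W = np.eye(n) if W is None else W
    obj = lambda dx: float((dx - dx_desired) @ W @ (dx - dx_desired))
    cons = [{"type": "ineq", "fun": lambda dx: b - A @ dx}]
    res = minimize(obj, x0=np.zeros(n), constraints=cons, method="SLSQP")
    return res.x

# Example: command a 3-D translation but forbid any motion below the plane z = 0.
dx = constrained_motion(np.array([1.0, 0.0, -0.5]),
                        A=np.array([[0.0, 0.0, -1.0]]), b=np.array([0.0]))
```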
Probabilistic localization with a blind robot
2008 IEEE International Conference on Robotics and Automation | Pub Date: 2008-05-19 | DOI: 10.1109/ROBOT.2008.4543472
Authors: Lawrence H. Erickson, Joseph Knuth, J. O’Kane, S. LaValle
Abstract: Researchers have addressed the localization problem for mobile robots using many different kinds of sensors, including rangefinders, cameras, and odometers. In this paper, we consider localization using a robot that is virtually "blind", having only a clock and a contact sensor at its disposal. This represents a drastic reduction in sensing requirements, even in light of existing work that considers localization with limited sensing. We present probabilistic techniques that represent and update the robot's position uncertainty, and algorithms to reduce this uncertainty. We demonstrate the experimental effectiveness of these methods using a Roomba autonomous vacuum-cleaner robot in laboratory environments.
Citations: 47
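The clock-plus-contact-sensor setting lends itself to a discrete Bayes filter over a known map; the sketch below is an illustrative grid-world version, not the authors' exact formulation.

```python
import numpy as np

FREE, WALL = 0, 1

def blocked(grid, r, c, move):
    """True if the commanded move from cell (r, c) runs into a wall or the map edge."""
    nr, nc = r + move[0], c + move[1]
    rows, cols = grid.shape
    return not (0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == FREE)

def bayes_step(belief, grid, move, bumped):
    """One localization step for a robot with only a contact sensor: keep the
    probability mass consistent with the bump reading, then shift it by the
    commanded move if the robot actually moved."""
    new = np.zeros_like(belief)
    for (r, c), p in np.ndenumerate(belief):
        if p == 0 or grid[r, c] == WALL:
            continue
        if blocked(grid, r, c, move) == bumped:       # consistent with the reading
            if bumped:
                new[r, c] += p                        # robot stayed in place
            else:
                new[r + move[0], c + move[1]] += p    # robot moved one cell
    s = new.sum()
    return new / s if s > 0 else new

# Example: 1x5 corridor; two "move right, no bump" readings localize the robot.
grid = np.array([[WALL, FREE, FREE, FREE, WALL]])
belief = np.where(grid == FREE, 1.0, 0.0)
belief /= belief.sum()
for _ in range(2):
    belief = bayes_step(belief, grid, move=(0, 1), bumped=False)
print(belief.round(2))   # all mass ends up in the rightmost free cell
```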
Toward a multi-disciplinary model for bio-robotic systems
2008 IEEE International Conference on Robotics and Automation | Pub Date: 2008-05-19 | DOI: 10.1109/ROBOT.2008.4543218
Authors: Richard Primerano, D. Wilkie, W. Regli
Abstract: The design of robotic systems involves contributions from several areas of science and engineering: electrical, mechanical, and software components must be integrated to form the final system. Increasingly, simulation tools are being introduced into the design flow as a means to verify the performance of particular subsystems. In order to accurately simulate the complete robotic system, we propose a framework that allows designers to describe the robotic system as an interconnection of mechanical, electrical, and software components with well-defined mechanisms for communicating with each other. Through this, we form a multi-disciplinary model that captures both the dynamics of the individual subsystems and the dynamics resulting from their interconnection. As a case study, we apply the framework to a biologically inspired robotic snake.
Citations: 4
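A minimal sketch of the component-interconnection idea: subsystems expose named input/output ports and a step() method, and a coordinator advances them together while copying signals across declared connections. All classes, port names, and gains are illustrative, not the paper's framework.

```python
class Component:
    """Base class for an electrical, mechanical, or software subsystem."""
    def __init__(self, name):
        self.name, self.inputs, self.outputs = name, {}, {}

    def step(self, dt):
        raise NotImplementedError

class Motor(Component):
    def step(self, dt):
        # Toy electrical-to-mechanical map: torque proportional to drive voltage.
        self.outputs["torque"] = 0.1 * self.inputs.get("voltage", 0.0)

class Controller(Component):
    def step(self, dt):
        # Toy software subsystem: proportional control toward a setpoint angle.
        error = self.inputs.get("setpoint", 0.0) - self.inputs.get("angle", 0.0)
        self.outputs["voltage"] = 2.0 * error

def simulate(components, connections, steps, dt=0.01):
    """Advance all components and route signals.
    connections: list of ((src_component, out_port), (dst_component, in_port))."""
    for _ in range(steps):
        for comp in components:
            comp.step(dt)
        for (src, sp), (dst, dp) in connections:
            dst.inputs[dp] = src.outputs.get(sp, 0.0)
```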
An easy calibration for oblique-viewing endoscopes
2008 IEEE International Conference on Robotics and Automation | Pub Date: 2008-05-19 | DOI: 10.1109/ROBOT.2008.4543402
Authors: Chenyu Wu, B. Jaramaz
Abstract: Oblique-viewing endoscopes (oblique scopes) are widely used in minimally invasive surgery. The viewing direction of an oblique endoscope can be changed by rotating the scope cylinder, which enables a larger field of view but makes the scope calibration process more difficult. Calibration is a critical step for incorporating oblique scopes into computer-assisted surgical procedures (robotics, navigation, augmented reality), yet few calibration methods for oblique endoscopes have been developed. Yamaguchi et al. [1] first modelled and calibrated the oblique scope. They directly tracked the camera head and incorporated the scope cylinder's rotation into the camera model as an extrinsic parameter; their method requires five additional parameters to be estimated. In this work, we track the scope cylinder instead. Since the rotation of the camera head with respect to the cylinder only causes a rotation of the image plane, fewer parameters need to be estimated. Experiments demonstrate the ease, simplicity, and accuracy of our method.
Citations: 10
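The key modeling point (rotation of the camera head relative to the tracked cylinder only rotates the image plane) can be sketched as a homography applied after a standard pinhole projection; symbols and function names below are illustrative, not the paper's notation.

```python
import numpy as np

def rotate_about_principal_point(K, theta):
    """3x3 homography rotating pixel coordinates by `theta` about the
    principal point (cx, cy) taken from the intrinsic matrix K.  Under the
    image-plane-rotation model, this accounts for the camera head's rotation
    relative to the tracked scope cylinder."""
    cx, cy = K[0, 2], K[1, 2]
    c, s = np.cos(theta), np.sin(theta)
    T = np.array([[1.0, 0.0, -cx], [0.0, 1.0, -cy], [0.0, 0.0, 1.0]])
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return np.linalg.inv(T) @ R @ T

def project(K, T_cam_from_world, theta, X_world):
    """Standard pinhole projection of a 3-D point, followed by the
    rotation-compensating homography (returns homogeneous pixel coordinates)."""
    Xc = T_cam_from_world[:3, :3] @ X_world + T_cam_from_world[:3, 3]
    uv = K @ (Xc / Xc[2])
    return rotate_about_principal_point(K, theta) @ uv
```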
A passive force amplifier
2008 IEEE International Conference on Robotics and Automation | Pub Date: 2008-05-19 | DOI: 10.1109/ROBOT.2008.4543513
Authors: B. Cagneau, G. Morel, D. Bellot, N. Zemiti, Ginluca A. d'Agostino
Abstract: The proposed robotic system provides the surgeon with an augmented sensation of the interaction forces between the instrument and the organ. Such a system aims to increase the surgeon's dexterity for tasks requiring that only small forces be applied to the organ (e.g., micro-surgery). In the proposed setup, the surgeon manipulates a handle mounted on the instrument; this is a comanipulation system because the surgeon and the robot simultaneously manipulate the instrument. The proposed control scheme provides augmented force control: the control law ensures that the instrument applies to the organ the same forces that the surgeon applies to the handle, decreased by a scale factor. As a consequence, the forces sensed by the surgeon are the forces between the instrument and the organ, amplified by that scale factor. The control scheme is proven stable through a passivity study; indeed, passivity analysis is a useful tool for analyzing the stability of a robot interacting with its environment. Experimental results are presented on a robot dedicated to minimally invasive surgery.
Citations: 20
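The force-scaling law stated in the abstract is easy to write down; the controller below is a toy proportional loop around it, not the paper's passivity-based design, and the gains and scale factor are placeholders.

```python
import numpy as np

def scaled_force_target(f_handle, scale=10.0):
    """Force-amplification law from the abstract: the instrument should apply
    to the organ the force the surgeon applies to the handle, divided by
    `scale`; equivalently, the surgeon feels the tool/organ force multiplied
    by `scale`."""
    return np.asarray(f_handle) / scale

def force_controller_step(f_handle, f_tool_measured, scale=10.0, kp=0.5):
    """One step of a toy proportional force controller: command a
    tool-velocity correction proportional to the error between the scaled
    target force and the measured tool/organ force."""
    error = scaled_force_target(f_handle, scale) - np.asarray(f_tool_measured)
    return kp * error   # velocity command passed to the robot's low-level loop
```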
Towards robust place recognition for robot localization
2008 IEEE International Conference on Robotics and Automation | Pub Date: 2008-05-19 | DOI: 10.1109/ROBOT.2008.4543261
Authors: M. M. Ullah, Andrzej Pronobis, B. Caputo, Jie Luo, P. Jensfelt, H. Christensen
Abstract: Localization and context interpretation are two key competences for mobile robot systems. Visual place recognition, as opposed to purely geometric models, holds the promise of higher flexibility and of associating semantics with the model. Ideally, a place recognition algorithm should be robust to dynamic changes, and it should perform consistently when recognizing a room type (for instance, a corridor) in different geographical locations. It should also be able to categorize places, a crucial capability for knowledge transfer and continuous learning. In order to test the suitability of visual recognition algorithms for these tasks, this paper presents a new database acquired in three different labs across Europe. It contains image sequences of several rooms under dynamic changes, acquired at the same time with a perspective and an omnidirectional camera mounted on a socket. We assess this new database with an appearance-based algorithm that combines local features with support vector machines through an ad-hoc kernel. Results show the effectiveness of the approach and the value of the database.
Citations: 103
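A rough sketch of combining local features with an SVM through a precomputed kernel, using scikit-learn. The match kernel here is a simplified stand-in for the paper's ad-hoc kernel (and is not guaranteed to be positive semi-definite); descriptor extraction is assumed to happen elsewhere.

```python
import numpy as np
from sklearn.svm import SVC

def match_kernel(A, B):
    """Similarity between two images, each represented as a set of
    L2-normalized local descriptors (one per row): average best-match cosine
    similarity in both directions, symmetrized."""
    S = A @ B.T
    return 0.5 * (S.max(axis=1).mean() + S.max(axis=0).mean())

def gram(sets_a, sets_b):
    """Kernel matrix between two lists of descriptor sets."""
    return np.array([[match_kernel(a, b) for b in sets_b] for a in sets_a])

def train_place_classifier(train_sets, labels):
    """`train_sets`: list of descriptor arrays (one per image); `labels`: room identities."""
    clf = SVC(kernel="precomputed")
    clf.fit(gram(train_sets, train_sets), labels)
    return clf

def classify(clf, train_sets, test_sets):
    """Predict the room label for each test image's descriptor set."""
    return clf.predict(gram(test_sets, train_sets))
```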