Proceedings of IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, MFI2003: Latest publications

Vision-based mobile robot speed control using a probabilistic occupancy map
Y. Negishi, J. Miura, Y. Shirai
DOI: 10.1109/MFI-2003.2003.1232601 (https://doi.org/10.1109/MFI-2003.2003.1232601)
Published: 2003-09-23
Abstract: This paper describes a method of controlling robot speed using a probabilistic occupancy map. It is usually necessary for a robot to make sure that a target region is free before entering it. If the robot is not confident in the state (free or occupied) of the region, the robot has to make enough observations to be confident. If the distance to the region is long, the robot can expect to have enough observations until reaching it. If the distance is short, however, the robot may have to slow down to make enough observations. Based on this consideration, we develop a method for controlling robot speed by considering the state of a target region. The method is applied to our mobile robot with omnidirectional stereo and a laser range finder. The robot successfully moved around in an unknown environment by adaptively controlling its speed.
Cited by: 6
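The core idea of the entry above — slow down when the remaining distance is too short to gather enough observations — can be illustrated with a minimal sketch. This is not the authors' algorithm; the function name, the linear confidence model, and all thresholds are hypothetical.

```python
def speed_limit(distance_m, confidence, rate_hz,
                v_max=1.0, conf_threshold=0.95, obs_per_update=5):
    """Cap speed so enough observations can be made before reaching the
    target region (illustrative; thresholds are made up)."""
    if confidence >= conf_threshold:
        return v_max  # region already confidently classified as free
    # crude linear estimate of how many more observations are still needed
    needed = obs_per_update * (conf_threshold - confidence) / conf_threshold
    needed = max(needed, 1e-6)
    time_needed = needed / rate_hz          # seconds of observation required
    # travel no faster than lets us finish observing before arrival
    return min(v_max, distance_m / time_needed)
```

A distant, uncertain region leaves the speed at `v_max`; a close, uncertain one forces the robot to slow down.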
Multi-sensor guided behaviors in whole body tendon-driven humanoid Kenta
T. Yoshikai, S. Yoshida, I. Mizuuchi, D. Sato, M. Inaba, H. Inoue
DOI: 10.1109/MFI-2003.2003.1232570 (https://doi.org/10.1109/MFI-2003.2003.1232570)
Published: 2003-09-23
Abstract: In generating sensor-based behavior of a robot that has various sensors and many degrees of freedom, the key point is how it uses information from many sensors, in multiple modalities, to move its body flexibly. In our experimental systems for generating multi-sensor guided behavior, we have introduced the following methods: 1) Integration with reflexes: integrating objective behavior (behavior aimed at achieving specific goals) with various kinds of reflexes (behavior that instantly reacts to changes in the environment); 2) Posture-sensor data memory: memorizing robot postures and relating them to changes in various sensors; this memory can also be used to predict human intention from previous experiences; 3) ABC-Net (attention-based conditional network): a behavior description network whose nodes are expressed by the states of the attended sensors and whose arcs are expressed by the actions that transit between nodes. Behavior experiments to verify the effectiveness of these methods have been conducted in both real and virtual environments using Kenta, a multi-DOF, multi-sensor humanoid that we have developed. In this paper, the design and implementation of the whole behavior system, including the above three methods for realizing multi-sensor guided behavior, are described, and the results of the behavior experiments using Kenta are shown.
Cited by: 6
Measurement of ground surface displacement using stereo vision and mechanical sensors on humanoid robots
T. Yoshimi, Y. Kawai, Y. Fukase, H. Araki, F. Tomita
DOI: 10.1109/MFI-2003.2003.1232644 (https://doi.org/10.1109/MFI-2003.2003.1232644)
Published: 2003-09-23
Abstract: This paper presents a system for measuring ground surface displacement using stereo vision and mechanical sensors on a humanoid robot. The stereo vision system generates a set of range data for each view by a correlation method, as ground surfaces can be regarded as smoothed surfaces with random texture patterns. The mechanical sensors are encoders, gyro sensors and acceleration sensors that measure the angles of joints, angular velocity and acceleration of the robot. With those angles and the link geometry, the transfer matrices between different views are generated and used to register sets of range data.
Cited by: 3
Super-resolution modeling
H. Nagahara, Y. Yagi, M. Yachida
DOI: 10.1109/MFI-2003.2003.1232660 (https://doi.org/10.1109/MFI-2003.2003.1232660)
Published: 2003-09-23
Abstract: Recently, many virtual reality and robotics applications have been called on to create an environmental model from real scenes, because virtual views can be freely created from the model. However, the sensor's resolution limits that of the created virtual views. In this paper, we propose a super-resolution modeling concept that estimates a higher-resolution texture on an object from an image sequence. This concept is suitable for environmental modeling for virtual reality applications. We also propose a modeling system with an omnidirectional image sensor, and evaluate the system to confirm the effectiveness of super-resolution modeling.
Cited by: 1
Analysis and synthesis of human dance motions
A. Nakazawa, Shin'ichiro Nakaoka, Takaaki Shiratori, K. Ikeuchi
DOI: 10.1109/MFI-2003.2003.1232637 (https://doi.org/10.1109/MFI-2003.2003.1232637)
Published: 2003-09-23
Abstract: This paper presents a method for synthesizing stylistic human motions through visual observations. The human motion data is acquired from a motion capture system. The whole motion sequence is divided into motion elements and clustered into groups according to the correlation of end-effectors' trajectories. We call these segments 'motion primitives'. Concatenating these motion primitives, we can generate new dance motions. We further consider that a motion primitive consists of a basic motion and a motion style: the basic motion is common to all dancers, and the style represents their characteristics. We extracted these two components through further analysis steps. The experimental results show the validity of our approach.
Cited by: 20
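The segmentation-and-correlation step described above can be sketched in a few lines: split a trajectory at near-zero-speed frames, then compare segments by correlation after resampling them to a common length. This is a minimal illustration, not the authors' pipeline; the speed threshold and resampling scheme are assumptions.

```python
import numpy as np

def segment_by_speed(traj, thresh=0.05):
    """Split an end-effector trajectory (frames x dims) at frames where the
    speed drops below a threshold (illustrative segmentation rule)."""
    speed = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    cuts = [0] + [i for i in range(1, len(speed))
                  if speed[i] < thresh <= speed[i - 1]] + [len(traj)]
    return [traj[a:b] for a, b in zip(cuts[:-1], cuts[1:]) if b - a > 1]

def correlation(seg_a, seg_b, n=20):
    """Similarity of two segments: resample each to n frames, flatten,
    and take the Pearson correlation coefficient."""
    def resample(s):
        idx = np.linspace(0, len(s) - 1, n)
        return np.array([s[int(round(i))] for i in idx]).ravel()
    a, b = resample(seg_a), resample(seg_b)
    return float(np.corrcoef(a, b)[0, 1])
```

Segments whose pairwise correlation exceeds some threshold would then be clustered into one motion primitive.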
Learning Bayesian network structure from environment and sensor planning for mobile robot localization
Hongjun Zhou, S. Sakane
DOI: 10.1109/MFI-2003.2003.1232636 (https://doi.org/10.1109/MFI-2003.2003.1232636)
Published: 2003-09-23
Abstract: In this paper, we propose a method of sensor planning for a mobile robot localization problem. We represent the causal relations between local sensing results, actions, and the belief of the global localization using a Bayesian network. Initially, the structure of the Bayesian network is learned from the complete data of the environment using the K2 algorithm combined with a GA (genetic algorithm). In the execution phase, the robot plans its sensing actions by taking into account the trade-off between the sensing cost and the global localization belief, which is obtained by inference in the Bayesian network. We have validated the learning and planning algorithm by simulation experiments in an office environment.
Cited by: 10
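The execution-phase trade-off described above — weigh how much an observation improves the localization belief against what it costs — can be illustrated with a toy sketch. This is not the paper's Bayesian-network inference; the Bernoulli belief update, the utility form, and all names are hypothetical.

```python
def update_belief(prior, likelihood_true, likelihood_false, observed=True):
    """Bayes update of the probability that a pose hypothesis is correct,
    given one binary sensor reading (illustrative)."""
    if observed:
        num = likelihood_true * prior
        den = num + likelihood_false * (1.0 - prior)
    else:
        num = (1.0 - likelihood_true) * prior
        den = num + (1.0 - likelihood_false) * (1.0 - prior)
    return num / den

def plan_sensing(actions, belief_gain, cost, weight=1.0):
    """Pick the sensing action with the best gain-minus-cost utility
    (a stand-in for inference over the learned network)."""
    return max(actions, key=lambda a: belief_gain[a] - weight * cost[a])
```

With `weight` controlling how heavily sensing cost is penalized, a cheap but less informative sensor can win over an expensive, accurate one.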
Map generation based on the interaction between robot body and its surrounding environment
T. Nakamura, Y. Uezu, H. Wu, T. Wada
DOI: 10.1109/MFI-2003.2003.1232635 (https://doi.org/10.1109/MFI-2003.2003.1232635)
Published: 2003-09-23
Abstract: This paper presents a method for map generation based on the interaction between a robot body and its surrounding environment. While a robot moves in the environment, the robot interacts with its surrounding environment; if the effect of the environment on the robot changes, such interactions also change. By observing the robot's body, our method detects such changes of the interaction and generates a description representing the type of change and the location where the change is observed. In the current implementation, we assume that there are two types of change in the interaction. Real-robot experiments are conducted in order to show the validity of our method.
Cited by: 0
Visual data mining using omni-directional sensor
K. Kato, T. Shibata, H. Ishiguro
DOI: 10.1109/MFI-2003.2003.1232671 (https://doi.org/10.1109/MFI-2003.2003.1232671)
Published: 2003-09-23
Abstract: In this paper, we propose a method that analyzes images recorded over a long term. Conventional methods used feature extraction and geometric techniques when such images were analyzed; however, these methods suffer from a lack of information. We therefore propose a method that uses pattern conversion of the images and statistical techniques. Experiments verified the effectiveness of our method.
Cited by: 1
Pose estimation of a vehicle on rough terrain by using the Sun direction
S. Furuki, S. Yuta
DOI: 10.1109/MFI-2003.2003.1232599 (https://doi.org/10.1109/MFI-2003.2003.1232599)
Published: 2003-09-23
Abstract: We are interested in the pose measurement of a rover using the Sun direction while it moves on rough terrain. First, we derived a sensor fusion formula to estimate the rover pose from a three-dimensional gyro sensor, a Sun direction sensor and a gravity sensor. We developed a prototype Sun direction sensor that can achieve fast and precise measurement even when the rover pose changes dynamically. This sensor consists of a two-DOF mechanically rotational sensor base and an optical position sensing device; it keeps its sensor head directed towards the Sun by controlling the two-DOF sensor base. We constructed a test system consisting of the Sun direction sensor, three gyro sensors, and an accelerometer on our mobile robot platform Yamabico AQURO.
Cited by: 6
Integrated estimation of facial scale and position
T. Hirayama, Y. Iwai, M. Yachida
DOI: 10.1109/MFI-2003.2003.1232661 (https://doi.org/10.1109/MFI-2003.2003.1232661)
Published: 2003-09-23
Abstract: Face detection incurs the highest computational cost in the process of automatic face recognition. To localize a face having scale variations, there needs to be a trade-off between accuracy and efficiency. In this paper, we integrate estimation of facial position and scale, and we propose a method that estimates facial position in parallel with facial scale. The method is composed of four states: position estimation by global scanning, position estimation by local scanning, scale conversion, and verification. The scale conversion estimates facial scale efficiently. Facial position and scale are estimated by changing these states, and are updated by using a beam search. We demonstrate the advantages of the proposed method through face localization experiments using images taken under various conditions. The proposed method can accurately localize the face having scale variations at a small computational cost.
Cited by: 3