Proceedings of IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, MFI2003: Latest Publications

A method of personal positioning based on sensor data fusion of wearable camera and self-contained sensors
M. Kourogi, T. Kurata
DOI: 10.1109/MFI-2003.2003.1232672 (published 2003-09-23)
Abstract: In this paper, we propose a method of personal positioning that combines images taken by a wearable camera with data from self-contained sensors attached to the user, using a Kalman filter as the data-integration mechanism. The proposed method estimates the user's position and direction by image registration between the input images from the camera and a database of images captured beforehand at known positions and directions. It updates the estimate of the user's position and direction with pedestrian dead-reckoning, by detecting the user's walking behavior and estimating the heading direction of the body with the self-contained sensors.
(An illustrative code sketch of the dead-reckoning/Kalman fusion follows this entry.)
Citations: 47
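The fusion described above can be illustrated with a minimal sketch: a 2D position state is advanced by pedestrian dead-reckoning (one detected step along the sensed heading) and occasionally corrected by an absolute position fix from image registration through a Kalman update. This is not the authors' implementation; the function name, step length, and noise parameters are invented, and the step detection, heading estimation, and image registration are assumed to be given.

```python
# Hedged sketch (not the authors' code): a minimal 2-D position filter in which
# pedestrian dead-reckoning (step length + heading from self-contained sensors)
# drives the prediction and an image-registration fix supplies the measurement.
import numpy as np

def pdr_kalman_step(x, P, step_len, heading, fix=None, q_step=0.05, r_fix=0.5):
    """One predict/update cycle; state x = [east, north]."""
    # Predict: advance the position by one detected step along the sensed heading.
    x = x + step_len * np.array([np.sin(heading), np.cos(heading)])
    P = P + (q_step ** 2) * np.eye(2)          # process noise grows with each step
    if fix is not None:                        # camera gave an absolute position
        R = (r_fix ** 2) * np.eye(2)
        K = P @ np.linalg.inv(P + R)           # Kalman gain (H = I)
        x = x + K @ (np.asarray(fix) - x)
        P = (np.eye(2) - K) @ P
    return x, P

# Toy usage: walk north for five steps, with one camera fix correcting the drift.
x, P = np.zeros(2), np.eye(2)
for k in range(5):
    fix = np.array([0.1, 3.4]) if k == 4 else None
    x, P = pdr_kalman_step(x, P, step_len=0.7, heading=0.0, fix=fix)
print(x)
```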
Detecting moving humans using color and infrared video
Ju Han, B. Bhanu
DOI: 10.1109/MFI-2003.2003.1232662 (published 2003-09-23)
Abstract: We approach the task of human silhouette extraction from color and infrared video using automatic image registration. Image registration between color and thermal images is a challenging problem due to the difficulties associated with finding correspondence. However, the moving people in a static scene provide cues to address this problem. In this paper, we propose a hierarchical scheme to automatically find the correspondence between the preliminary human silhouettes extracted from color and infrared video for image registration. Next, we discuss some strategies for probabilistically combining cues from registered color and thermal images. It is shown that the proposed approach achieves good results for image registration and human silhouette extraction. Experimental results also show a comparison of sensor fusion strategies and demonstrate the improvement in performance for human silhouette extraction.
(An illustrative code sketch of the cue-combination step follows this entry.)
Citations: 42
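As a rough illustration of combining cues from registered color and thermal images (not the paper's specific strategy), the sketch below fuses two per-pixel foreground-probability maps with a weighted sum and thresholds the result; the probability maps, weights, and threshold are assumed inputs.

```python
# Hedged sketch: probabilistically combining per-pixel foreground cues from a
# registered color image and a thermal image. The probability maps themselves
# (e.g. from background subtraction in each modality) are assumed to be given.
import numpy as np

def fuse_silhouette(p_color, p_thermal, w_color=0.4, w_thermal=0.6, thr=0.5):
    """Weighted-sum fusion of two foreground-probability maps -> binary mask."""
    p = w_color * p_color + w_thermal * p_thermal
    return p > thr

# Toy example: thermal is confident about the person, color adds supporting detail.
p_c = np.array([[0.2, 0.6], [0.7, 0.1]])
p_t = np.array([[0.1, 0.9], [0.8, 0.2]])
print(fuse_silhouette(p_c, p_t).astype(int))
```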
Color blending based on viewpoint and surface normal for generating images from any viewpoint using multiple cameras
Y. Mukaigawa, Daisuke Genda, R. Yamane, T. Shakunaga
DOI: 10.1109/MFI-2003.2003.1232639 (published 2003-09-23)
Abstract: A color blending method for generating high-quality images of human motion is presented. The 3D (three-dimensional) human shape is reconstructed by volume intersection and expressed as a set of voxels. Because each voxel is observed as different colors by different cameras, the voxel color needs to be assigned appropriately from several candidates. We present a color blending method that calculates the voxel color from a linear combination of the colors observed by multiple cameras. The weightings in the linear combination are calculated based on both viewpoint and surface normal. Because the surface normal is taken into account, images with clear texture can be generated; because the viewpoint is also taken into account, high-quality images free of unnatural warping can be generated. To examine the effectiveness of the algorithm, a traditional dance motion was captured and new images were generated from arbitrary viewpoints. Compared to existing methods, the quality at the boundaries was confirmed to improve.
(An illustrative code sketch of the blending weights follows this entry.)
Citations: 3
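A minimal sketch of the weighting idea, assuming the per-voxel viewing directions, the surface normal, and the observed colors are already available: each camera's weight is the product of a viewpoint-agreement term and a normal-agreement term, and the voxel color is the weight-normalized linear combination. The exponents alpha and beta are invented tuning knobs, not parameters from the paper.

```python
# Hedged sketch: blending the colors a voxel receives from several cameras with
# weights that favour cameras (a) close in direction to the virtual viewpoint and
# (b) facing the surface (normal agreement), loosely following the abstract.
import numpy as np

def blend_voxel_color(colors, cam_dirs, view_dir, normal, alpha=1.0, beta=1.0):
    """colors: (N,3); cam_dirs/view_dir/normal: unit vectors (voxel -> camera, etc.)."""
    cam_dirs = np.asarray(cam_dirs, float)
    w_view = np.clip(cam_dirs @ view_dir, 0.0, None) ** alpha   # viewpoint term
    w_norm = np.clip(cam_dirs @ normal, 0.0, None) ** beta      # surface-normal term
    w = w_view * w_norm
    w = w / w.sum() if w.sum() > 0 else np.full(len(colors), 1.0 / len(colors))
    return w @ np.asarray(colors, float)         # linear combination of colors

# Toy example with two cameras: the front camera dominates for a front-facing voxel.
print(blend_voxel_color(colors=[[255, 0, 0], [0, 0, 255]],
                        cam_dirs=[[0, 0, 1], [1, 0, 0]],
                        view_dir=np.array([0, 0, 1]),
                        normal=np.array([0, 0, 1])))
```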
Wide view imaging system by using multiple random access image sensors and mirrors
R. Kawahara, S. Shimizu, T. Hamamoto
DOI: 10.1109/MFI-2003.2003.1232654 (published 2003-09-23)
Abstract: We describe a system for wide view imaging, which uses multiple image sensors and mirrors. In this system, each image obtained by the multiple sensors covers a non-overlapping area and is equivalent to a partial image of the wide view obtained by an imaginary sensor. Therefore, depth estimation from the sensor to each object is not required for combination. In this paper, we describe the wide view imaging system built from random access image sensors we have designed and an FPGA (field-programmable gate array). We can control the system to show a panoramic image or a partial image in real time. The new image sensor has 128 × 128 pixels, its main functions being random access and interpolation of pixel values on a quarter pitch. We show results obtained with the chip.
(An illustrative code sketch of the tile combination follows this entry.)
Citations: 0
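Because the sub-images are non-overlapping and share one imaginary wide view, the combination step amounts to tiling, and a "partial image" is just a window read from the panorama. The sketch below illustrates this under the simplifying assumption of equally sized, pre-rectified tiles; it does not model the mirror geometry or the sensor's quarter-pitch interpolation.

```python
# Hedged sketch: each sensor sees a non-overlapping strip of one imaginary wide
# view, so combination reduces to placing tiles side by side; a "random access"
# read-out of a partial window is then just a slice of the panorama.
import numpy as np

def assemble_panorama(tiles):
    """Concatenate equally sized, non-overlapping tiles left to right."""
    return np.hstack(tiles)

def read_window(panorama, x, y, w, h):
    """Random-access read of a partial view (no depth estimation needed)."""
    return panorama[y:y + h, x:x + w]

tiles = [np.full((128, 128), i, dtype=np.uint8) for i in range(4)]  # four sensors
pano = assemble_panorama(tiles)
print(pano.shape, read_window(pano, x=120, y=0, w=16, h=8).shape)
```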
Integration of real-time binocular stereo vision and whole body information for dynamic walking navigation of humanoid robot
K. Okada, M. Inaba, H. Inoue
DOI: 10.1109/MFI-2003.2003.1232645 (published 2003-09-23)
Abstract: A method for segmenting floor and obstacle regions from images is a fundamental function for robots in the real world. This paper describes a floor detection method that integrates binocular stereo vision and whole-body information for walking-direction control of a humanoid robot. We developed a humanoid robot navigation system using a vision-based local floor map. The system consists of a map-building stage and a walking-direction control stage. In the map-building stage, the system builds a local floor map around the robot by integrating floor-region information from visual input with whole-body posture information. The plane segment finder (PSF) algorithm, which extracts planar surfaces from 3D vision input, is used to segment floor and obstacle regions. The floor-region segmentation of the input images is represented in view coordinates, and the whole-body posture information is then used to transform it from view coordinates to body coordinates to build the local floor map. In the control stage, the system searches for an open-space direction on the local floor map and steers the walking direction toward the open space to avoid obstacles. Finally, walking-navigation experiments based on floor detection with a life-size humanoid robot are shown.
(An illustrative code sketch of the map-building and direction-selection steps follows this entry.)
Citations: 38
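The sketch below illustrates the two stages in miniature, assuming that the floor points in view coordinates and the view-to-body transform (from the whole-body posture) are already available: floor points are mapped into body coordinates and accumulated in a small robot-centred grid, and the candidate heading whose ray crosses the most free-floor cells is chosen. The grid resolution, candidate headings, and function names are invented; this is not the PSF algorithm itself.

```python
# Hedged sketch: build a local floor map in body coordinates from floor points
# detected in view coordinates, then pick the heading with the most free floor.
import numpy as np

def build_floor_map(floor_pts_view, T_body_from_view, cell=0.1, size=40):
    """Mark cells of a robot-centred (size x size) grid that contain floor points."""
    homo = np.c_[floor_pts_view, np.ones(len(floor_pts_view))]      # (N,4)
    pts = (T_body_from_view @ homo.T).T[:, :2]                      # x (forward), y (lateral)
    grid = np.zeros((size, size), dtype=bool)
    idx = np.floor(pts / cell).astype(int) + size // 2
    ok = (idx >= 0).all(1) & (idx < size).all(1)
    grid[idx[ok, 0], idx[ok, 1]] = True
    return grid

def open_space_direction(grid, candidates_deg=(-30, 0, 30), cell=0.1, reach=1.5):
    """Return the candidate heading whose ray crosses the most free-floor cells."""
    size = grid.shape[0]
    best, best_score = 0.0, -1
    for deg in candidates_deg:
        th = np.radians(deg)
        steps = np.arange(cell, reach, cell)
        ray = np.c_[steps * np.cos(th), steps * np.sin(th)]
        idx = np.floor(ray / cell).astype(int) + size // 2
        score = grid[idx[:, 0], idx[:, 1]].sum()
        if score > best_score:
            best, best_score = deg, score
    return best

# Toy usage: a few floor points straight ahead, identity posture transform.
pts = np.array([[0.5, 0.0, 0.0], [1.0, 0.1, 0.0], [1.2, -0.1, 0.0]])
grid = build_floor_map(pts, np.eye(4))
print(open_space_direction(grid))
```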
3D modeling of micro transparent object with integrated vision
K. Ohara, M. Mizukawa, K. Ohba, K. Taki
DOI: 10.1109/MFI-2003.2003.1232641 (published 2003-09-23)
Abstract: This paper describes a 3D (three-dimensional) modeling technique for micro transparent objects, such as crystals and cells, using an integrated vision system. First, several 3D modeling techniques for transparent objects, mainly based on polarization, are surveyed. Second, the polarization characteristics of micro transparent objects are evaluated. Third, the integrated vision system for micro objects is briefly reviewed. Then, to address the problems that arise when a transparent body is handled with this integrated vision system, several new methods based on volume rendering and edge information are proposed.
(An illustrative code sketch of per-pixel polarization fitting follows this entry.)
Citations: 10
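One common way to evaluate polarization characteristics per pixel (used here purely as an illustration, not as the paper's method) is to fit I(θ) = I0 + A·cos(2θ − 2φ) to intensities captured through a rotating polarizer and take A/I0 as the degree of polarization. The sketch below does this with a linear least-squares fit on synthetic data.

```python
# Hedged sketch: per-pixel polarization fit I(theta) = I0 + A*cos(2*theta - 2*phi),
# rewritten as the linear model I = a + b*cos(2t) + c*sin(2t). The degree of
# polarization A/I0 is one cue such methods use; angles and data are synthetic.
import numpy as np

def fit_polarization(intensities, angles):
    """intensities: (K, H, W) images taken at K polarizer angles (radians)."""
    K = len(angles)
    M = np.c_[np.ones(K), np.cos(2 * np.asarray(angles)), np.sin(2 * np.asarray(angles))]
    coef, *_ = np.linalg.lstsq(M, intensities.reshape(K, -1), rcond=None)
    a, b, c = coef
    dop = np.sqrt(b**2 + c**2) / np.maximum(a, 1e-6)    # degree of polarization
    phase = 0.5 * np.arctan2(c, b)                      # polarization phase
    return dop.reshape(intensities.shape[1:]), phase.reshape(intensities.shape[1:])

angles = np.radians([0, 45, 90, 135])
imgs = 100 + 30 * np.cos(2 * angles[:, None, None] - 0.8) + np.zeros((4, 2, 2))
dop, phase = fit_polarization(imgs, angles)
print(dop[0, 0], phase[0, 0])   # approximately 0.3 and 0.4
```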
Recording multiple videos and 3D video display in a large-scale space
I. Kitahara, Y. Ohta
DOI: 10.1109/MFI-2003.2003.1232646 (published 2003-09-23)
Abstract: In this paper, we introduce our research for realizing a 3D (three-dimensional) video display system in a very large-scale space such as a soccer stadium or concert hall. We have developed a 4D (four-dimensional) digitization system of a large-scale space. We place sixteen progressive-scan cameras around the space. The videos from those cameras are digitized into PCs (personal computers) that are connected via Ethernet. A camera calibration technique for a large-scale space that is accurate enough to apply CV-based algorithms is developed. We propose a method for describing the shape of a 3D object with a set of planes in order to synthesize a view of the object effectively. The most effective layout of the planes can be determined based on the relative locations of an observer's viewing position, multiple cameras, and 3D objects. The data size of the 3D model and the processing time can be reduced drastically. The effectiveness of the proposed method is demonstrated by experimental results.
(An illustrative code sketch of a plane-based shape approximation follows this entry.)
Citations: 0
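The plane-based shape description can be loosely illustrated by snapping an object's 3D points onto a few depth slices perpendicular to the observer's viewing direction, so the model is reduced to a handful of view-dependent planes. This is only one possible layout rule and is not claimed to match the paper's; the number of planes and the point data are invented.

```python
# Hedged sketch (not necessarily the paper's layout rule): approximate an object's
# point set with n_planes planes perpendicular to the observer's viewing direction,
# snapping each point to its nearest depth slice to trade detail for data size.
import numpy as np

def plane_slices(points, observer, target, n_planes=4):
    """Snap 3-D points onto n_planes depth slices facing the observer."""
    view = target - observer
    view = view / np.linalg.norm(view)
    depth = (points - observer) @ view                       # depth along the view ray
    edges = np.linspace(depth.min(), depth.max(), n_planes)  # plane depths
    idx = np.abs(depth[:, None] - edges[None, :]).argmin(1)  # nearest plane per point
    snapped = points + (edges[idx] - depth)[:, None] * view  # project onto that plane
    return snapped, idx

pts = np.random.default_rng(0).uniform(-1, 1, size=(500, 3))
snapped, plane_id = plane_slices(pts, observer=np.array([0., 0., -5.]), target=np.zeros(3))
print(np.bincount(plane_id))   # how many points each plane absorbs
```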
Simple and robust tracking of hands and objects for video-based multimedia production
Masatsugu Itoh, Motoyuki Ozeki, Yuichi Nakamura, Y. Ohta
DOI: 10.1109/MFI-2003.2003.1232666 (published 2003-09-23)
Abstract: We propose a simple and robust method for detecting hands and hand-held objects involved in desktop manipulation, and for using the detections to index the videos. To achieve robust tracking with few constraints, we use multiple image sensors: an RGB (red, green, blue) camera, a stereo camera, and an infrared (IR) camera. By integrating these sensors, our system achieves robust tracking without prior knowledge of the object, even when people or objects move in the background. We experimentally verified the object-tracking performance and evaluated the effectiveness of the integration.
(An illustrative code sketch of the mask integration follows this entry.)
Citations: 10
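The sensor integration can be sketched as a per-pixel conjunction of three masks, assuming the RGB, depth, and temperature images are already registered: a crude skin-color rule, a working-distance gate from stereo depth, and a body-temperature gate from the IR camera. The thresholds and the skin rule are invented for illustration and are not the authors' detectors.

```python
# Hedged sketch: combine registered RGB, stereo-depth, and IR cues so that
# background motion alone cannot trigger the hand tracker. Thresholds are invented.
import numpy as np

def hand_mask(rgb, depth_m, ir_temp_c,
              depth_range=(0.3, 1.0), temp_range=(28.0, 38.0)):
    r, g, b = rgb[..., 0].astype(int), rgb[..., 1].astype(int), rgb[..., 2].astype(int)
    skin = (r > 95) & (r > g) & (r > b) & (r - np.minimum(g, b) > 15)  # crude skin rule
    near = (depth_m > depth_range[0]) & (depth_m < depth_range[1])     # desktop range
    warm = (ir_temp_c > temp_range[0]) & (ir_temp_c < temp_range[1])   # body temperature
    return skin & near & warm   # all three sensors must agree

rgb = np.zeros((2, 2, 3), np.uint8); rgb[0, 0] = (180, 120, 90)
depth = np.array([[0.6, 2.0], [0.6, 0.6]])
temp = np.array([[33.0, 33.0], [20.0, 20.0]])
print(hand_mask(rgb, depth, temp).astype(int))   # only the top-left pixel passes
```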
Outdoor scene reconstruction from multiple image sequences captured by a hand-held video camera
T. Sato, M. Kanbara, N. Yokoya
DOI: 10.1109/MFI-2003.2003.1232642 (published 2003-09-23)
Abstract: Three-dimensional (3D) models of outdoor scenes can be widely used in fields such as object recognition, navigation, scenic simulation, and mixed reality. Such models are often made manually at high cost, so automatic 3D reconstruction has been widely investigated. In related work, dense 3D models are generated by using a stereo method. However, such approaches cannot use several hundred images together, nor perform dense depth estimation of large structures and urban environments, because it is difficult to accurately calibrate a large number of cameras. This paper proposes a dense 3D reconstruction method that uses multiple image sequences. Our method first estimates the extrinsic camera parameters of each image sequence, and then reconstructs a dense 3D model of the scene using an extended multi-baseline stereo and a voxel-voting technique.
(An illustrative code sketch of voxel voting follows this entry.)
Citations: 8
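Voxel voting can be sketched as follows, assuming per-frame 3D points (e.g., from multi-baseline stereo) are already expressed in a common world frame: every point casts a vote into a voxel grid, and only voxels supported by enough frames are kept, which suppresses per-frame outliers. The grid extent, voxel size, and vote threshold are invented for the example.

```python
# Hedged sketch: every per-frame depth estimate casts a vote into a world-aligned
# voxel grid; only voxels supported by many frames survive, suppressing outliers.
import numpy as np

def voxel_voting(point_clouds, origin, voxel=0.25, shape=(40, 40, 40), min_votes=3):
    votes = np.zeros(shape, dtype=np.int32)
    for pts in point_clouds:                        # one 3-D point set per frame
        idx = np.floor((pts - origin) / voxel).astype(int)
        ok = ((idx >= 0) & (idx < np.array(shape))).all(axis=1)
        np.add.at(votes, tuple(idx[ok].T), 1)       # accumulate votes per voxel
    return votes >= min_votes                       # keep well-supported voxels only

rng = np.random.default_rng(1)
clouds = [np.array([[2.12, 2.12, 1.12]]) + 0.02 * rng.standard_normal((1, 3))
          for _ in range(5)]                        # the same point seen in 5 frames
occupied = voxel_voting(clouds, origin=np.zeros(3))
print(occupied.sum())   # a single voxel supported by all frames
```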
A multiagent multisensor based security system for intelligent building
R. Luo, Shin Yao Lin, K. Su
DOI: 10.1109/MFI-2003.2003.1232676 (published 2003-09-23)
Abstract: The security of buildings, homes, laboratories, offices, and factories is essential to human life, and unfortunate events are often caused by human negligence. We have developed a multiagent, multisensor-based security system for intelligent buildings. The system can be widely employed in daily life and detects dangerous situations using sensors. Its structure is divided into four parts: a fire detection/diagnosis agent, an intruder detection/diagnosis agent, an environment detection/diagnosis agent, and a power detection/diagnosis agent. In this paper, we use an adaptive data fusion method in the fire detection/diagnosis agent, a rule-based method in the intruder detection/diagnosis agent, statistical signal detection theory in the environment detection/diagnosis agent, and a fault detection and isolation procedure (FDIP) in the power detection/diagnosis agent. The security system thus comprises four kinds of detection/diagnosis agents. Finally, we implement these methods in computer simulation and achieve quite satisfactory results.
(An illustrative code sketch of a rule-based intruder agent follows this entry.)
Citations: 23
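A toy version of the rule-based intruder detection/diagnosis agent, with invented sensors, rules, and alarm levels: sensor readings become facts, and the first matching rule in a small table determines the verdict. The other three agents (adaptive fusion, statistical detection, FDIP) are not sketched here.

```python
# Hedged sketch: a toy rule-based intruder detection/diagnosis agent in the spirit
# of the abstract. The sensors, rules, and alarm levels are invented for illustration.
def intruder_agent(pir_motion, door_open, window_vibration, system_armed):
    facts = {"motion": pir_motion, "door": door_open,
             "window": window_vibration, "armed": system_armed}
    rules = [
        (lambda f: f["armed"] and f["door"] and f["motion"], "ALARM: entry + motion"),
        (lambda f: f["armed"] and f["window"],               "ALARM: window breach"),
        (lambda f: f["armed"] and f["motion"],               "WARNING: motion only"),
    ]
    for condition, verdict in rules:          # first matching rule wins
        if condition(facts):
            return verdict
    return "OK"

print(intruder_agent(pir_motion=True, door_open=False,
                     window_vibration=False, system_armed=True))
```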