2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI): Latest Publications

Distributed optimal fusion estimation for multi-sensor systems subject to random delay and packet drop
Jiabing Sun, Chengjin Zhang
DOI: 10.1109/MFI.2012.6343075
Abstract: This paper considers the distributed optimal (i.e., linear minimum-variance) fusion estimation problem for two classes of networked multi-sensor systems. In the first class, sensor measurements are transmitted over unreliable digital communication networks (DCNs) to local estimators, which compute local estimates of the state; the local estimates are then sent to the fusion center, not over a DCN, and fused there. In the second class, sensor measurements are processed locally to obtain local state estimates, which are then transmitted over a DCN to the fusion center and fused. In both classes, transmission over the DCN is subject to random delay and packet drop. Local estimation results for these systems are available in the literature; this paper derives the estimation error cross-covariances between local estimates and, using the fusion rule weighted by matrices, develops the distributed optimal fusion estimators.
Citations: 1
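The fusion rule weighted by matrices referenced in the abstract has a standard linear minimum-variance form once the local estimates and their (cross-)covariances are in hand. The NumPy sketch below illustrates only that general form; it is not the paper's derivation for delayed or dropped packets, and the function name and toy numbers are purely illustrative.

```python
import numpy as np

def fuse_matrix_weighted(estimates, cov_blocks):
    """Matrix-weighted fusion of L local estimates.

    estimates : list of L state vectors, each of shape (n,)
    cov_blocks: L x L nested list; cov_blocks[i][j] is the (cross-)covariance
                P_ij between the errors of local estimates i and j, shape (n, n)
    Returns the fused estimate and its error covariance.
    """
    L = len(estimates)
    n = estimates[0].shape[0]
    Sigma = np.block(cov_blocks)                 # (L*n, L*n) joint error covariance
    e = np.tile(np.eye(n), (L, 1))               # (L*n, n) stack of identity blocks
    Sigma_inv_e = np.linalg.solve(Sigma, e)      # Sigma^{-1} e
    P_fused = np.linalg.inv(e.T @ Sigma_inv_e)   # fused error covariance
    W = Sigma_inv_e @ P_fused                    # W[i*n:(i+1)*n].T is the weight A_i
    x_stack = np.concatenate(estimates)
    x_fused = W.T @ x_stack                      # sum_i A_i x_i, with sum_i A_i = I
    return x_fused, P_fused

# toy usage with two local estimates of a 2-state system
x1 = np.array([1.0, 0.5]); x2 = np.array([1.2, 0.4])
P11 = np.diag([0.2, 0.3]); P22 = np.diag([0.3, 0.2]); P12 = 0.05 * np.eye(2)
x_f, P_f = fuse_matrix_weighted([x1, x2], [[P11, P12], [P12.T, P22]])
```

The weights are obtained by minimizing the trace of the fused covariance subject to the unbiasedness constraint that the weights sum to the identity, which is why the cross-covariances derived in the paper are needed.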
Improving robustness of robotic grasping by fusing multi-sensor
Jun Zhang, Caixia Song, Ying Hu, Bin Yu
DOI: 10.1109/MFI.2012.6343002
Abstract: Because a visual system is susceptible to changes in lighting and surroundings, object localization in a grasping system based on visual servoing alone is often inaccurate, which lowers the grasping success rate and the robustness of the whole system. To address this, we propose fusing a binocular camera with monocular vision, IR sensors, tactile sensors, and encoders to build a reliable and robust grasping system that provides real-time feedback. To avoid the robot grasping nothing, binocular vision supplemented by the monocular camera and IR sensors is used for accurate localization. Based on an analysis of the contact model and the pressure between the gripper and the object, a durable, non-slip rubber coating is designed to increase fingertip friction. Furthermore, a Fuzzy Neural Network (FNN) is applied to fuse the information from the multiple sensors in our robot system. By monitoring force and position throughout the grasp, the system reduces slippage and crushing of the object and greatly improves grasping stability. Experimental results show the effectiveness of the system.
Citations: 20
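As a rough illustration of fuzzy fusion of heterogeneous grasp sensors, the toy zero-order Takagi-Sugeno rules below combine a hypothetical IR distance reading and a fingertip force reading into a slip-risk score. This is a hand-written stand-in, not the paper's trained FNN; all membership centers, widths, and rule consequents are invented.

```python
import numpy as np

def gauss_mf(x, c, sigma):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def grasp_slip_risk(ir_distance_cm, fingertip_force_n):
    """Toy zero-order Takagi-Sugeno fusion of an IR range reading and a
    fingertip force reading into a slip-risk score in [0, 1]."""
    far   = gauss_mf(ir_distance_cm, 10.0, 3.0)    # object still far from the gripper
    near  = gauss_mf(ir_distance_cm, 1.0, 1.0)     # object between the fingers
    light = gauss_mf(fingertip_force_n, 0.5, 0.5)  # weak contact force
    firm  = gauss_mf(fingertip_force_n, 4.0, 1.5)  # firm contact force

    # rule firing strengths (product t-norm) and constant rule consequents
    rules = [
        (near * light, 0.9),   # near but barely touching -> high slip risk
        (near * firm,  0.1),   # near and firmly held     -> low slip risk
        (far  * light, 0.5),   # not yet grasped          -> undecided
        (far  * firm,  0.5),
    ]
    num = sum(w * y for w, y in rules)
    den = sum(w for w, _ in rules) + 1e-9
    return num / den

print(grasp_slip_risk(ir_distance_cm=1.2, fingertip_force_n=0.6))
```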
Sensor fault detection for industrial systems using a hierarchical clustering-based graphical user interface
Yu Zhang, C. Bingham, M. Gallimore, Zhijing Yang, Jun Chen
DOI: 10.1109/MFI.2012.6343071
Abstract: The paper presents an effective and efficient method for sensor fault detection and identification within a large group of sensors based on hierarchical cluster analysis. Fingerprints of the hierarchical clustering dendrograms are obtained for normal operation using normalized data, and sensor faults are detected through cluster changes occurring in the dendrogram. The proposed strategy is built into a user-friendly graphical interface, which is applied to a sub-15 MW industrial gas turbine. Using real-time operational data, it is shown that in-operation sensor faults can be detected and identified with the hierarchical clustering-based graphical user interface.
Citations: 4
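In the spirit of the dendrogram-fingerprint idea, the sketch below clusters normalized sensor channels hierarchically with SciPy and flags sensors whose cluster membership changes relative to a healthy window. The synthetic data, the forced two-cluster split, and the minority-cluster criterion are simplifications for illustration; they are not the paper's GUI or its exact detection rule.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def minority_sensors(readings):
    """Normalize each sensor channel, hierarchically cluster the sensors
    (columns), force a two-way split, and return the indices of the sensors
    in the smaller cluster."""
    z = (readings - readings.mean(axis=0)) / (readings.std(axis=0) + 1e-9)
    labels = fcluster(linkage(z.T, method="average"), t=2, criterion="maxclust")
    counts = np.bincount(labels)[1:]             # cluster labels start at 1
    minority_label = int(np.argmin(counts)) + 1
    return set(np.where(labels == minority_label)[0])

# toy data: six sensors tracking the same signal, sensor 3 later drifts
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
healthy = base + 0.1 * rng.normal(size=(200, 6))
faulty = healthy.copy()
faulty[:, 3] += np.linspace(0.0, 5.0, 200)

# a cluster change relative to the healthy "fingerprint" flags a suspect sensor
print("suspects:", minority_sensors(faulty) - minority_sensors(healthy))
```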
Estimation of occupant distribution by detecting the entrance and leaving events of zones in building
Hengtao Wang, Q. Jia, Yu Lei, Qianchuan Zhao, X. Guan
DOI: 10.1109/MFI.2012.6343074
Abstract: For energy saving and security in buildings, knowing the number of occupants in each zone is very important. This paper addresses the estimation of zone occupancy in a building by detecting entrance and leaving events. We first formulate the problem under a Markov-chain assumption and, based on a theoretical analysis of the model, propose an occupant distribution estimation method that can be implemented in a distributed manner. The method counts occupants by detecting the entrance and leaving events of zones in real time and uses prior information about each zone's entrance and leaving events to reduce the estimation error, which improves the accuracy of the estimated occupant distribution. Numerical experiments, including simulation and a field test, demonstrate the performance of the method.
Citations: 10
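A bare-bones version of event-driven occupancy counting, without the paper's Markov-chain prior or error-reduction step, can look like the sketch below; the zone names and event types are hypothetical.

```python
from collections import defaultdict

class OccupancyEstimator:
    """Toy per-zone occupant counter driven by detected entrance/leaving
    events; counts are clamped at zero so a missed entrance cannot drive a
    zone negative."""
    def __init__(self):
        self.count = defaultdict(int)

    def on_event(self, zone, kind):
        if kind == "enter":
            self.count[zone] += 1
        elif kind == "leave":
            self.count[zone] = max(0, self.count[zone] - 1)

    def on_transition(self, src, dst):
        """A person detected moving from zone `src` to zone `dst`."""
        self.on_event(src, "leave")
        self.on_event(dst, "enter")

est = OccupancyEstimator()
for zone, kind in [("lobby", "enter"), ("lobby", "enter"), ("office", "enter")]:
    est.on_event(zone, kind)
est.on_transition("lobby", "office")
print(dict(est.count))   # {'lobby': 1, 'office': 2}
```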
Robust visual tracking based on adaptive depth-color-cue integration using range sensor
Can Wang, Hong Liu
DOI: 10.1109/MFI.2012.6343012
Abstract: Multi-cue integration has been studied extensively in visual tracking, but purely color-based methods still suffer from illumination changes, color-similar backgrounds, and complete occlusion. To overcome these shortcomings, this paper presents an adaptive depth-color-cue integration framework for mean-shift tracking. The conventional 2D rectangle evolves into a 3D cube for representing the target region, and depth and color cues are combined to represent target appearance. Moreover, a novel depth-based motion detection method is introduced to obtain more reliable motion cues during tracking. Furthermore, a reliability evaluation function is proposed to tune cue weights, based on the assumption that the most reliable cues are those that best discriminate between the target region and background regions. Finally, the cues' probability distribution maps are integrated for mean-shift tracking. Extensive experiments under various conditions demonstrate the reliability and robustness of this depth-color-integrated tracking framework.
Citations: 5
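One simple way to realize discriminability-based cue weighting, loosely following the assumption stated in the abstract, is sketched below: each cue's likelihood map is scored by how much more probability mass it places on the target region than on the background, and the maps are fused with those weights. The scoring function and toy maps are illustrative, not the paper's reliability evaluation function.

```python
import numpy as np

def cue_weight(prob_map, target_mask):
    """Score a cue by how much more probability it puts on the target region
    than on the background (a stand-in for a reliability evaluation)."""
    fg = prob_map[target_mask].mean()
    bg = prob_map[~target_mask].mean()
    return max(fg - bg, 1e-6)

def fuse_cues(color_map, depth_map, target_mask):
    """Weight the color and depth likelihood maps by their discriminability
    and return the normalized fused map used to drive mean-shift."""
    wc = cue_weight(color_map, target_mask)
    wd = cue_weight(depth_map, target_mask)
    return (wc * color_map + wd * depth_map) / (wc + wd)

# toy 8x8 example: depth separates target from background better than color
rng = np.random.default_rng(1)
mask = np.zeros((8, 8), dtype=bool); mask[2:6, 2:6] = True
color = 0.5 + 0.05 * rng.random((8, 8))               # nearly uninformative cue
depth = np.where(mask, 0.9, 0.1) + 0.05 * rng.random((8, 8))
fused = fuse_cues(color, depth, mask)
```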
Space encoding based compressive multiple human tracking with distributed binary pyroelectric infrared sensor networks
Jiang Lu, Jiaqi Gong, Qi Hao, Fei Hu
DOI: 10.1109/MFI.2012.6342997
Abstract: This paper presents a distributed, compressive multiple human tracking system based on binary pyroelectric infrared (PIR) sensor networks. The goal of our research is to develop an energy-efficient, low-data-throughput infrared surveillance system for various indoor applications. The compressive measurements are achieved by using techniques of (1) multiplex binary sensing and (2) space encoding. The target positions are reconstructed from the binary compressive measurements through (1) an expectation-maximization (EM) framework for space decoding, (2) representing the prior knowledge of target / sampling geometries with statistical parameters, and (3) hierarchical space encoding / decoding for multiple targets tracking. A wireless networked PIR sensor system is designed to demonstrate the improved sensing efficiency and system scalability of the proposed distributed multiple human tracking system. The proposed compressive tracking framework can be extended to various binary sensing modalities.
Citations: 26
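To make the space-encoding idea concrete, the sketch below performs single-target maximum-likelihood decoding of binary PIR readings over a discretized space with known visibility masks. It is a deliberate simplification: the paper uses an EM framework and hierarchical codes to handle multiple targets, and the visibility matrix and detection probabilities here are invented.

```python
import numpy as np

def decode_position(visibility, measurement, p_detect=0.95, p_false=0.05):
    """Single-target maximum-likelihood decoding over a discretized space.

    visibility  : (n_sensors, n_cells) binary matrix; entry (s, c) = 1 if a
                  target in cell c triggers sensor s (the space code).
    measurement : (n_sensors,) binary vector of sensor outputs.
    Returns the index of the most likely cell.
    """
    vis = visibility.astype(bool)
    m = measurement.astype(bool)[:, None]
    p_on = np.where(vis, p_detect, p_false)      # P(sensor fires | target in cell)
    like = np.where(m, p_on, 1.0 - p_on)         # per-sensor Bernoulli likelihoods
    log_like = np.log(like).sum(axis=0)
    return int(np.argmax(log_like))

# toy code: 4 sensors, 8 cells along a corridor, overlapping fields of view
visibility = np.array([
    [1, 1, 1, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 0],
    [0, 1, 0, 0, 0, 1, 0, 1],
])
print(decode_position(visibility, np.array([0, 1, 1, 1])))  # -> cell 5
```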
A sensing cushion using simple pressure distribution sensors
Lishuang Xu, Gang Chen, Jiajun Wang, R. Shen, Shen Zhao
DOI: 10.1109/MFI.2012.6343048
Abstract: Information technologies make it possible to provide personalized services that improve our living environment. Sitting is one of the most frequent activities in daily life, especially for people who work in offices. In this paper, we present the design and implementation of a sensing cushion. The cushion's information about seated postures can be used to help avoid the adverse effects of sitting for long periods of time or to predict seated activities for a human-computer interface. The cushion's seat pan and backrest surface are fitted with simple binary-valued pressure distribution sensors. The pressure distribution data is scanned by a microcontroller unit and then transmitted to a personal computer over a Bluetooth module. Recognition is performed on the PC side, which can recognize 9 different seated postures in real time with high accuracy.
Citations: 25
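Posture recognition from such binary pressure maps can be prototyped with a nearest-neighbour rule over Hamming distance, as in the toy sketch below; the 4x4 grid, templates, and labels are fabricated, and the paper's actual classifier for the 9 postures is not reproduced here.

```python
import numpy as np

def classify_posture(templates, labels, frame):
    """Nearest-neighbour posture classification of a binary pressure frame.

    templates : (n_templates, n_cells) binary reference frames
    labels    : posture label for each template
    frame     : (n_cells,) binary frame scanned from the cushion
    """
    hamming = np.count_nonzero(templates != frame, axis=1)
    return labels[int(np.argmin(hamming))]

# toy 4x4 seat-pan grid flattened to 16 binary cells
upright   = np.array([0,1,1,0, 1,1,1,1, 1,1,1,1, 0,1,1,0])
lean_left = np.array([1,1,0,0, 1,1,1,0, 1,1,1,0, 1,1,0,0])
templates = np.stack([upright, lean_left])
labels = np.array(["upright", "lean-left"])

observed = upright.copy(); observed[5] = 0              # one noisy cell
print(classify_posture(templates, labels, observed))    # -> "upright"
```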
Map segmentation based SLAM using embodied data
J. Schwendner
DOI: 10.1109/MFI.2012.6343018
Abstract: Autonomous mobile robots offer the prospect of extending our knowledge of remote places in the solar system or in the ocean. They also have the potential to improve everyday life with ever increasing adaptability to a large variety of environments. One of the key technological elements is the ability to navigate unknown and uncooperative environments. A range of solutions for the simultaneous localisation and mapping (SLAM) problem have emerged in the last decade. One factor which is often neglected is the fact that the robot has a body which interacts with the environment. In this paper a method is presented which utilises this information and uses visual and non-visual correlations to generate accurate local map segments. Further, a method is presented to combine particle filter based local map segments and constraint graph based global pose optimization into a single coherent map representation. The method is evaluated on a leg/wheel hybrid mobile robot and the resulting maps compared against high resolution environment models generated with a commercial laser scanner.
Citations: 0
Study on impedance generation using an exoskeleton device for upper-limb rehabilitation
Zhibin Song, Shuxiang Guo
DOI: 10.1109/MFI.2012.6343006
Abstract: Rehabilitation robotics has become one of the most important branches of robotics. In particular, exoskeleton devices for upper-limb rehabilitation have developed rapidly in recent years, but most of them are heavy and large. In this paper, we propose a light, wearable exoskeleton device with potential for home rehabilitation, able to perform both passive and active training. We propose a method to implement active rehabilitation based on the upper-limb exoskeleton rehabilitation device (ULERD). It offers a broadly applicable approach to the human-machine interface (HMI) for a device that has high friction and is non-backdrivable, where contact force information is difficult to obtain directly. The method measures the motion of the human body rather than the motion of the device, and is implemented with the passive DoFs unlocked during elbow flexion and extension. In comparative experiments, three levels of resistance are generated and presented to the user. Surface electromyography (sEMG) signals detected from the biceps and triceps were recorded and processed via the wavelet packet transform (WPT) to evaluate the effect of the method.
Citations: 6
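For the WPT processing of sEMG mentioned above, a minimal sketch using the PyWavelets package (assumed available) is shown below: it computes relative wavelet-packet band energies for a pre-filtered sEMG window. The surrogate signal, wavelet choice, and decomposition level are illustrative, not the paper's settings.

```python
import numpy as np
import pywt

def wpt_band_energies(semg, wavelet="db4", level=3):
    """Wavelet-packet-transform a (pre-filtered) sEMG window and return the
    relative energy in each frequency band at the given decomposition level;
    such energy vectors can be compared across resistance levels."""
    wp = pywt.WaveletPacket(data=semg, wavelet=wavelet, mode="symmetric",
                            maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    energies = np.array([np.sum(np.square(node.data)) for node in nodes])
    return energies / energies.sum()

# toy surrogate signal standing in for a biceps sEMG window (1 s at 1 kHz)
t = np.arange(1000) / 1000.0
semg = np.sin(2 * np.pi * 80 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
print(wpt_band_energies(semg).round(3))
```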
Tumbling target reconstruction and pose estimation through fusion of monocular vision and sparse-pattern range data
J. Padial, M. Hammond, S. Augenstein, S. Rock
DOI: 10.1109/MFI.2012.6343026
Abstract: A framework for 3D target reconstruction and relative pose estimation through fusion of vision and sparse-pattern range data (e.g., line-scanning LIDAR) is presented. The algorithm augments previous work in monocular vision-only SLAM/SfM to incorporate range data into the overall solution. The aim of this work is to enable a more dense reconstruction with accurate relative pose estimation that is unambiguous in scale. In order to incorporate range data, a linear estimator is presented to estimate the overall scale factor using vision-range correspondence. A motivating mission is the use of resource-constrained micro- and nano-satellites to perform autonomous rendezvous and docking operations with uncommunicative, tumbling targets, about which little or no prior information is available. The rationale for the approach is explained, and an algorithm is presented. The implementation using a modified Rao-Blackwellised particle filter is described and tested. Results from numerical simulations are presented that demonstrate the performance and viability of the approach.
Citations: 12
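One way a linear estimator of the overall scale factor can be built from vision-range correspondences is a one-parameter least-squares fit between up-to-scale monocular depths and metric ranges at corresponding pixels, as in the hedged sketch below. The closed form and toy data are illustrative and do not reproduce the paper's estimator or its integration into the Rao-Blackwellised particle filter.

```python
import numpy as np

def estimate_scale(sfm_depths, lidar_ranges):
    """Least-squares estimate of the global scale factor s minimizing
    sum_i (r_i - s * d_i)^2, where d_i are up-to-scale depths from the
    monocular reconstruction and r_i are metric ranges at the
    corresponding pixels."""
    d = np.asarray(sfm_depths, dtype=float)
    r = np.asarray(lidar_ranges, dtype=float)
    return float(d @ r) / float(d @ d)

# toy correspondence set: true scale 2.5 with mild range noise
rng = np.random.default_rng(2)
d = rng.uniform(1.0, 4.0, size=20)
r = 2.5 * d + 0.02 * rng.normal(size=20)
print(estimate_scale(d, r))   # close to 2.5
```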