2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI): Latest Publications

Map-based drone homing using shortcuts
D. Bender, W. Koch, D. Cremers
{"title":"Map-based drone homing using shortcuts","authors":"D. Bender, W. Koch, D. Cremers","doi":"10.1109/MFI.2017.8170371","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170371","url":null,"abstract":"Up to the present day, GPS signals are the key component in almost all outdoor navigation tasks of robotic platforms. To obtain the platform pose, comprising the position as well as the orientation, and receive information at a higher frequency, the GPS signals are commonly used in a GPS-corrected inertial navigation system (INS). The GPS is a critical single point of failure, especially for autonomous drones. We propose an approach which creates a metric map of the observed area by fusing camera images with inertial and GPS data during its normal operation and use this map to steer a drone efficiently to its home position in the case of an GPS outage. A naive approach would follow the previously traveled path and get accurate pose estimates by comparing the current camera image with the previously created map. The presented procedure allows the usage of shortcuts through unexplored areas to minimize the travel distance. Thereby, we ensure to reach the starting point by taking into consideration the maximal positional drift while performing pure visual navigation in unknown areas. We achieved close to optimal results in intensive numerical studies and we demonstrate the usability of the algorithm in a realistic simulation environment.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"11 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116069103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 13
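
As a reading aid, the following minimal Python sketch illustrates the kind of shortcut-feasibility check the abstract describes: a shortcut through unexplored terrain is taken only if the worst-case visual-odometry drift accumulated over its length stays within the radius in which the drone can re-localize against the map. The drift rate and re-localization radius are illustrative assumptions, not values from the paper.

def shortcut_is_safe(shortcut_length_m, drift_rate, reloc_radius_m):
    """Return True if a straight-line shortcut through unexplored terrain keeps
    the worst-case visual-odometry drift within the radius in which the drone
    can re-localize against the previously built map."""
    worst_case_drift_m = drift_rate * shortcut_length_m
    return worst_case_drift_m <= reloc_radius_m

def pick_route(mapped_path_m, shortcut_m, drift_rate=0.02, reloc_radius_m=5.0):
    """Take the shortcut only when it is shorter than retracing the mapped path
    and provably safe under the assumed drift bound."""
    if shortcut_m < mapped_path_m and shortcut_is_safe(shortcut_m, drift_rate, reloc_radius_m):
        return "take shortcut"
    return "retrace mapped path"

print(pick_route(mapped_path_m=400.0, shortcut_m=180.0))  # -> take shortcut
print(pick_route(mapped_path_m=400.0, shortcut_m=350.0))  # drift too large -> retrace mapped path
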
Design & implementation of distributed congestion control scheme for heterogeneous traffic in wireless sensor networks 无线传感器网络中异构流量分布式拥塞控制方案的设计与实现
A. Khan, S. Ghani, S. Siddiqui
{"title":"Design & implementation of distributed congestion control scheme for heterogeneous traffic in wireless sensor networks","authors":"A. Khan, S. Ghani, S. Siddiqui","doi":"10.1109/MFI.2017.8170384","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170384","url":null,"abstract":"Recently emerging wireless sensor technologies integrate different types of sensor nodes in a network for information collection. The heterogeneous Wireless Sensor Network (WSN) imposes complex design challenges as nodes in such a network often have different requirements in terms of latency and bandwidth. Therefore, the channel access for nodes needs to be managed ensuring differentiated quality of service for each priority. This paper aims at developing and evaluating a distributed congestion control scheme for CSMA to make it feasible for prioritized heterogeneous traffic. For this purpose, a model earlier developed for 802.15.4 has been enhanced and integrated with the duty-cycled CSMA. Heterogeneous Traffic of three different priorities has been used for evaluating the performance of proposed scheme. The scheme has been implemented in nes-C for the mica2 platform. It has been revealed that for heterogeneous traffic, the throughput of CSMA integrated with our proposed scheme has a significant advantage over basic CSMA.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124073579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
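
A minimal sketch of priority-differentiated channel access of the kind the abstract describes: each traffic priority draws its random backoff from its own contention window, stretched by the locally observed congestion level. The window bounds and the congestion scaling are assumptions for illustration, not the paper's actual scheme.

import random

# Assumed contention-window bounds per traffic priority (smaller windows let a
# node access the channel earlier); the values are illustrative only.
CW_BOUNDS = {"high": (0, 7), "medium": (0, 15), "low": (0, 31)}

def backoff_slots(priority, congestion_level):
    """Draw a random backoff for a node, stretched by the locally observed
    congestion level (0.0 = idle channel, 1.0 = fully congested)."""
    lo, hi = CW_BOUNDS[priority]
    return int(random.randint(lo, hi) * (1.0 + congestion_level))

for prio in ("high", "medium", "low"):
    print(prio, backoff_slots(prio, congestion_level=0.5))
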
Image brightness adjustment system based on ANFIS by RGB and CIE L∗a∗b∗
Eunkyeong Kim, Hyunhak Cho, Hansoo Lee, Jongeun Park, Sungshin Kim
{"title":"Image brightness adjustment system based on ANFIS by RGB and CIE L∗a∗b∗","authors":"Eunkyeong Kim, Hyunhak Cho, Hansoo Lee, Jongeun Park, Sungshin Kim","doi":"10.1109/MFI.2017.8170365","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170365","url":null,"abstract":"This paper proposes the method to adjust brightness information by applying CIE L∗a∗b∗ color space and adaptive neuro-fuzzy inference system. The image which is already captured by vision sensor should be adjusted brightness to recognize objects in an image. In case of proper intensity of lights, the clarity of an image is good to recognize objects. However, in case of improper intensity of lights, the image has darkish regions. It will leads to reduce success of object recognition. To make up for this week point, we adjust the image, which is a darkish image, by controlling brightness information of an image. Brightness information can be represented by CIE L∗a∗b∗ color space. So based on CIE L∗a∗b∗ color space, adaptive neuro-fuzzy inference system is implemented as control function. Control function carries out adjusting of brightness information by dealing with the value of L component of CIE L∗a∗b∗ color space. L component describes brightness information of an image. The values which is calculated by adaptive neuro-fuzzy inference system is called the adjustment coefficient. Finally, the adjustment coefficient is added to L component for adjusting brightness information. To verify the propose method, we calculated color difference with respect to RGB and CIE L∗a∗b∗ color space. As experimental results, the propose method can reduce color difference and makes the target image will be similar with reference image under proper intensity of lights.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"13 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117265221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
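
The brightness-adjustment pipeline can be sketched as follows: convert RGB to CIE L∗a∗b∗, compute an adjustment coefficient from the L channel, add it to L, and convert back. The linear adjustment_coefficient function below is only a stand-in for the trained ANFIS, and the target lightness and gain are assumed values; scikit-image is used for the color conversions.

import numpy as np
from skimage import color

def adjustment_coefficient(mean_L, target_L=65.0, gain=0.6):
    """Stand-in for the trained ANFIS: an additive L-channel correction computed
    from the image's mean lightness. The paper learns this mapping with an ANFIS;
    the linear rule and its parameters here are illustrative assumptions."""
    return gain * (target_L - mean_L)

def adjust_brightness(rgb_uint8):
    """Convert to CIE L*a*b*, shift the L component by the adjustment
    coefficient, and convert back to RGB."""
    lab = color.rgb2lab(rgb_uint8.astype(np.float64) / 255.0)
    lab[..., 0] = np.clip(lab[..., 0] + adjustment_coefficient(lab[..., 0].mean()), 0.0, 100.0)
    return (color.lab2rgb(lab) * 255.0).astype(np.uint8)

dark = (np.random.rand(64, 64, 3) * 80).astype(np.uint8)   # synthetic darkish image
print(adjust_brightness(dark).mean() > dark.mean())        # True: brighter on average
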
Object map building on various terrains for a Wheeled mobile robot
J. Oh, Beomhee Lee
{"title":"Object map building on various terrains for a Wheeled mobile robot","authors":"J. Oh, Beomhee Lee","doi":"10.1109/MFI.2017.8170350","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170350","url":null,"abstract":"This paper presents an objects-based topological mapping algorithm on different floors with various objects using a wheeled mobile robot. The extended Kalman filter (EKF) with adaptive measurement noise according to the terrain type is proposed to estimate the position of the robot. If an infrared distance sensor detects an object, the robot moves around the object to obtain the shape information. The rowwise max-pooling with a convolutional neural network (CNN) is proposed to classify objects regardless of the starting position of the observation. Finally, the object map consisting of nodes and edges generated from the classified objects and the distance between objects. Experimental results showed that the proposed algorithm could improve an accuracy of position estimation of the robot and efficiently generated the object map on various terrains.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130076650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
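
A minimal sketch of the terrain-adaptive measurement update the abstract mentions: a standard EKF position update in which the measurement covariance R is chosen per terrain type. The terrain labels and noise values are assumptions for illustration; the paper's state, models, and adaptation rule are not reproduced here.

import numpy as np

# Assumed measurement-noise standard deviations (metres) per terrain type.
TERRAIN_SIGMA = {"tile": 0.02, "carpet": 0.05, "gravel": 0.12}

def ekf_position_update(x, P, z, terrain):
    """One EKF measurement update of a 2-D position state with H = I and a
    terrain-dependent measurement covariance R."""
    H = np.eye(2)
    R = (TERRAIN_SIGMA[terrain] ** 2) * np.eye(2)
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

x, P = np.array([1.0, 2.0]), 0.1 * np.eye(2)
print(ekf_position_update(x, P, z=np.array([1.1, 1.9]), terrain="carpet"))
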
Detection-level fusion for multi-object perception in dense traffic environment
Bin Huang, Hui Xiong, Jianqiang Wang, Qing Xu, Xiaofei Li, Keqiang Li
{"title":"Detection-level fusion for multi-object perception in dense traffic environment","authors":"Bin Huang, Hui Xiong, Jianqiang Wang, Qing Xu, Xiaofei Li, Keqiang Li","doi":"10.1109/MFI.2017.8170355","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170355","url":null,"abstract":"Due to much imperfect detection performance of onboard sensors in dense driving scenarios, the accurate and explicit perception of surrounding objects for Advanced Driver Assistance Systems and Autonomous Driving is challenging. This paper proposes a novel detection-level fusion approach for multi-object perception in dense traffic environment based on evidence theory. In order to remove uninterested targets and keep tracking important, we integrate four states of track life into a generic fusion framework to improve the performance of multi-object perception. The information of object type, position and velocity is made use of to reduce erroneous data association between tracks and detections. Several experiments in real dense traffic environment on highways and urban roads are conducted. The results verify the proposed fusion approach achieves low false and missing tracking.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125276954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
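
Since the fusion framework is based on evidence theory, a small sketch of Dempster's rule of combination may help: two sensors' mass functions over a frame of discernment are combined and the conflicting mass is renormalized away. The radar/camera masses below are made-up numbers, not from the paper.

from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions whose focal elements
    are frozensets over the frame of discernment."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

CAR, PED = frozenset({"car"}), frozenset({"pedestrian"})
m_radar = {CAR: 0.6, CAR | PED: 0.4}            # evidence from a radar detection
m_camera = {CAR: 0.7, PED: 0.1, CAR | PED: 0.2} # evidence from a camera classifier
print(dempster_combine(m_radar, m_camera))
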
First approach of an optical localization and tracking method applied to a micro-conveying system
Saly Malak, Hani Al Hajjar, E. Dupont, F. Lamarque
{"title":"First approach of an optical localization and tracking method applied to a micro-conveying system","authors":"Saly Malak, Hani Al Hajjar, E. Dupont, F. Lamarque","doi":"10.1109/MFI.2017.8170424","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170424","url":null,"abstract":"In this paper, a study has been conducted to present a high resolution optical localization and tracking method for micro-robots or micro-conveyors moved over a smart surface in a context of micro-factory. The first approach of this work is presented here, the localization and tracking principles are described, the algorithm is presented and finally, experimentation work on system calibration and open-loop tracking is illustrated. The scanning of the surface as well as the tracking of the mobile micro-conveyor will be ensured by steering a laser beam via a MEMS mirror. Depending on the light power received by a photodetector, the conveyor will be localized and tracked. This technique will ensure the achievement of different micro-robots tasks depending on their priorities without collision between them and avoiding defective cells.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125911833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
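
A minimal sketch of the localization idea, under the assumption that the conveyor is found where the photodetector power peaks while the MEMS mirror raster-scans the laser beam over the surface; read_power is a hypothetical hardware hook and the Gaussian simulator only stands in for the real optics.

import numpy as np

def locate_conveyor(scan_angles_x, scan_angles_y, read_power):
    """Raster-scan the MEMS-mirror steering angles and return the (x, y) angle
    pair at which the photodetector reports the highest power, taken here as
    the conveyor location. read_power(ax, ay) is a hypothetical hardware hook."""
    best_power, best_angles = -np.inf, None
    for ax in scan_angles_x:
        for ay in scan_angles_y:
            p = read_power(ax, ay)
            if p > best_power:
                best_power, best_angles = p, (ax, ay)
    return best_angles

# Simulated photodetector: strongest return when the beam hits a reflector at (2, -1).
simulated = lambda ax, ay: np.exp(-((ax - 2.0) ** 2 + (ay + 1.0) ** 2))
angles = np.linspace(-5.0, 5.0, 41)
print(locate_conveyor(angles, angles, simulated))   # ~ (2.0, -1.0)
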
Development of an upper limb exoskeleton for rehabilitation training in virtual environment
Qingcong Wu, Xingsong Wang
{"title":"Development of an upper limb exoskeleton for rehabilitation training in virtual environment","authors":"Qingcong Wu, Xingsong Wang","doi":"10.1109/MFI.2017.8170425","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170425","url":null,"abstract":"In recent years, a great many robot-assisted therapy systems have been developed and applied in neural rehabilitation. In this paper, we develop a wearable upper limb exoskeleton robot for the purpose of assisting the disable patients to execute effective rehabilitation. The proposed exoskeleton system consists of 7 degrees of freedom (DOFs) and is capable of providing naturalistic assistance of shoulder, elbow, forearm, and wrist. The major hardware of the robotic system is introduced. The Denavit-Hartenburg (D-H) approach and Monte Carlo method are utilized to establish the kinematic model and analyze the accessible workspace of exoskeleton. Besides, a salient feature of this work is the development of an admittance-based control strategy which can provide patient-active rehabilitation training in virtual environment. Two preliminary comparison experiments are implemented on a healthy subject wearing the exoskeleton. The experimental results verify the effectiveness of the developed robotic rehabilitation system and control strategy.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127009410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
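
To make the admittance-based control strategy concrete, here is a single-joint sketch: the measured interaction torque drives a virtual mass-damper model whose output velocity becomes the reference for the joint controller. The virtual mass and damping values are illustrative assumptions, not the paper's tuned parameters.

def admittance_step(v, torque, dt=0.001, M=2.0, B=8.0):
    """One integration step of a single-joint admittance model M*a + B*v = torque:
    the measured interaction torque is turned into a velocity reference that the
    velocity-controlled joint then follows. M and B are assumed virtual gains."""
    a = (torque - B * v) / M
    return v + a * dt

v = 0.0
for _ in range(1000):                 # 1 s of a constant 4 N*m interaction torque
    v = admittance_step(v, torque=4.0)
print(round(v, 3))                    # approaches the steady state torque / B = 0.5 rad/s
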
LIDAR-data accumulation strategy to generate high definition maps for autonomous vehicles
Mohammad Aldibaja, Naoki Suganuma, Keisuke Yoneda
{"title":"LIDAR-data accumulation strategy to generate high definition maps for autonomous vehicles","authors":"Mohammad Aldibaja, Noaki Suganuma, Keisuke Yoneda","doi":"10.1109/MFI.2017.8170357","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170357","url":null,"abstract":"Mapping is a very critical issue for enabling autonomous driving. This paper proposes a robust approach to generate high definition maps based on LIDAR point clouds and post-processed localization measurements. Many problems are addressed including quality, saving size, global labeling and processing time. High quality is guaranteed by accumulating and killing the sparsity of the point clouds in a very efficient way. The storing size is decreased using sub-image sampling of the entire map. The global labeling is achieved by continuously considering the top-left corner of the map images as a reference regardless to the driven distance and the vehicle orientation. The processing time is discussed in terms of using the generated maps in autonomous driving. Moreover, the paper highlights a method to increase the density of online LIDAR frames to be compatible with the intensity level of the generated maps. The proposed method was used since 2015 to generate maps of different areas and courses in Japan and USA with very high stability and accuracy.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133028453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 21
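
A minimal sketch of the accumulation idea as described in the abstract: globally referenced LIDAR points are binned into a grid image whose top-left corner serves as the global reference, intensities are averaged over many frames to remove sparsity, and only non-empty sub-image tiles are kept to limit storage. The resolution, tile size, and image layout are assumptions for illustration.

import numpy as np

RES_M = 0.125        # assumed grid resolution, metres per pixel
TILE = 512           # assumed sub-image (tile) size in pixels

def accumulate(grid_sum, grid_cnt, points, origin_xy):
    """Bin one globally referenced LIDAR frame (N x 3: x, y, intensity) into
    running sum/count grids whose top-left corner is the global reference."""
    cols = ((points[:, 0] - origin_xy[0]) / RES_M).astype(int)
    rows = ((origin_xy[1] - points[:, 1]) / RES_M).astype(int)   # image rows grow downward
    ok = (rows >= 0) & (rows < grid_sum.shape[0]) & (cols >= 0) & (cols < grid_sum.shape[1])
    np.add.at(grid_sum, (rows[ok], cols[ok]), points[ok, 2])
    np.add.at(grid_cnt, (rows[ok], cols[ok]), 1)

def tiles(grid_sum, grid_cnt):
    """Yield averaged, non-empty sub-images so only occupied tiles are stored."""
    mean = np.divide(grid_sum, grid_cnt, out=np.zeros_like(grid_sum), where=grid_cnt > 0)
    for r in range(0, mean.shape[0], TILE):
        for c in range(0, mean.shape[1], TILE):
            tile = mean[r:r + TILE, c:c + TILE]
            if tile.any():
                yield (r, c), tile

grid_sum, grid_cnt = np.zeros((1024, 1024)), np.zeros((1024, 1024), dtype=int)
frame = np.array([[10.0, -20.0, 0.8], [10.1, -20.1, 0.6]])       # x, y, intensity
accumulate(grid_sum, grid_cnt, frame, origin_xy=(0.0, 0.0))
print(sum(1 for _ in tiles(grid_sum, grid_cnt)))                 # 1 occupied tile
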
Side channel attack on digital door lock with vibration signal analysis: Longer password does not guarantee higher security level
Young-Mok Ha, Soohee Jang, Kwang-Won Kim, J. Yoon
{"title":"Side channel attack on digital door lock with vibration signal analysis: Longer password does not guarantee higher security level","authors":"Young-Mok Ha, Soohee Jang, Kwang-Won Kim, J. Yoon","doi":"10.1109/MFI.2017.8170414","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170414","url":null,"abstract":"Digital door lock system is a widely used physical security system. It restricts unauthorized accesses and protects assets or private spaces. However, once its password has been exposed to unauthorized people, it becomes useless. In this paper, we propose a novel side channel attack model, which enables a cracking of a digital door lock password. We noted that when people press the key-lock button, irrespective of how careful they are, the generated vibrations differ with the location of the button pressed. Our model uses and analyzes the natural phenomenon of vibration to infer passwords. Under our attack, the ease of password inference depends on the number of distinguishable buttons rather than password length. The results of our experiments contradict the commonly held security principle that a longer password guarantees a higher level of security.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133309497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
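
The paper's central claim, that inference difficulty scales with the number of distinguishable buttons rather than with password length, can be illustrated with a small counting sketch: if the vibration trace reveals which group of indistinguishable buttons each keypress belongs to, the remaining candidate set is the product of the group sizes. The grouping below is an assumed example, not the paper's measured clustering.

from math import prod

def candidate_count(observed_groups, group_sizes):
    """Number of password candidates left after a vibration trace reveals, for
    each keypress, which group of indistinguishable buttons was hit."""
    return prod(group_sizes[g] for g in observed_groups)

# Assumed clustering: vibration analysis splits a 10-key pad into 4 distinguishable
# groups of sizes 3, 3, 2 and 2 (illustrative, not the paper's measured clusters).
group_sizes = {0: 3, 1: 3, 2: 2, 3: 2}
print(candidate_count([0, 2, 1, 3], group_sizes))                 # 4 digits -> 36 candidates
print(candidate_count([0, 2, 1, 3, 0, 2, 1, 3], group_sizes))     # 8 digits -> 1296, still tiny vs 10**8
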
A deep neural network approach to fusing vision and heteroscedastic motion estimates for low-SWaP robotic applications
Jared Shamwell, W. Nothwang, D. Perlis
{"title":"A deep neural network approach to fusing vision and heteroscedastic motion estimates for low-SWaP robotic applications","authors":"Jared Shamwell, W. Nothwang, D. Perlis","doi":"10.1109/MFI.2017.8170407","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170407","url":null,"abstract":"Due both to the speed and quality of their sensors and restrictive on-board computational capabilities, current state-of-the-art (SOA) size, weight, and power (SWaP) constrained autonomous robotic systems are limited in their abilities to sample, fuse, and analyze sensory data for state estimation. Aimed at improving SWaP-constrained robotic state estimation, we present Multi-Hypothesis DeepEfference (MHDE) — an unsupervised, deep convolutional-deconvolutional sensor fusion network that learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. This new multi-hypothesis formulation of our previous architecture, DeepEfference [1], has been augmented to handle dynamic heteroscedastic sensor and motion noise and computes hypothesis image mappings and predictions at 150–400 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel architectural pathways and n (1, 2, 4, or 8 in this work) multi-hypothesis generation subpathways to generate n pixel-level predictions and correspondences between source and target images. We evaluated MHDE on the KITTI Odometry dataset [2] and benchmarked it against DeepEfference [1] and DeepMatching [3] by mean pixel error and runtime. MHDE with 8 hypotheses outperformed DeepEfference in root mean squared (RMSE) pixel error by 103% in the maximum heteroscedastic noise condition and by 18% in the noise-free condition. MHDE with 8 hypotheses was over 5, 000% faster than DeepMatching with only a 3% increase in RMSE.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129357830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
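
As a rough illustration of scoring a multi-hypothesis predictor, the sketch below computes a winner-take-all RMSE: only the best of the n hypotheses is scored, which is a common way to evaluate multi-hypothesis outputs. This is a generic sketch, not the MHDE training objective or evaluation code.

import numpy as np

def winner_take_all_rmse(hypotheses, target):
    """Score a multi-hypothesis predictor by the RMSE of its best hypothesis only,
    so the predictor is free to spread hypotheses over several plausible motions."""
    errs = [np.sqrt(np.mean((h - target) ** 2)) for h in hypotheses]
    return min(errs), int(np.argmin(errs))

rng = np.random.default_rng(0)
target = rng.random((64, 64))
hyps = [target + rng.normal(0.0, s, target.shape) for s in (0.30, 0.10, 0.05, 0.20)]
best_err, best_idx = winner_take_all_rmse(hyps, target)
print(best_idx, round(best_err, 3))   # the 0.05-noise hypothesis wins
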