Latest publications from the 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)

Localization based on multiple visual-metric maps
Adi Sujiwo, E. Takeuchi, Luis Yoichi Morales Saiki, Naoki Akai, Y. Ninomiya, M. Edahiro
DOI: 10.1109/MFI.2017.8170431 (https://doi.org/10.1109/MFI.2017.8170431)
Abstract: This paper presents a fusion of monocular camera-based metric localization, IMU, and odometry in the dynamic environments of public roads. We build multiple vision-based maps and use them simultaneously in the localization phase. In the mapping phase, visual maps are built with ORB-SLAM together with accurate metric positioning from LiDAR-based NDT scan matching; this external positioning corrects the scale drift inherent in all vision-based SLAM methods. In the localization phase, these embedded positions are used to estimate the vehicle pose in metric global coordinates using a monocular camera alone. To increase robustness, we further propose using multiple maps and fusing with odometry and IMU through a particle filter. Experiments were performed over as much as 170 km of public roads at different times of day to compare the localization results of vision-only, GNSS, and sensor-fusion methods. The results show that the sensor-fusion method offers lower average errors than GNSS and better coverage than the vision-only method.
Citations: 8
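As a rough illustration of the particle-filter fusion described in the abstract above, here is a minimal 2D position-only sketch: particles are propagated by an odometry increment and reweighted by a Gaussian likelihood of the camera-based pose fix. The dynamics, noise parameters, and function names are hypothetical and simplified (no heading state, no IMU channel), not the authors' implementation.

```python
import numpy as np

def particle_filter_step(particles, weights, odom_delta, vision_pose,
                         sigma_odom=0.1, sigma_vis=0.5):
    """One predict/update cycle: propagate particles by the odometry
    increment, weight by a Gaussian likelihood of the visual pose fix,
    then resample to keep the particle set well-conditioned."""
    rng = np.random.default_rng(0)
    # Predict: apply the odometry increment with additive noise.
    particles = particles + odom_delta + rng.normal(0.0, sigma_odom, particles.shape)
    # Update: Gaussian likelihood of the camera-based (x, y) fix.
    d2 = np.sum((particles - vision_pose) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / sigma_vis ** 2)
    weights = weights / np.sum(weights)
    # Resample according to the weights; reset to uniform weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```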
3D reconstruction of line features using multi-view acoustic images in underwater environment
Ngoc Trung Mai, Hanwool Woo, Yonghoon Ji, Y. Tamura, A. Yamashita, H. Asama
DOI: 10.1109/MFI.2017.8170447 (https://doi.org/10.1109/MFI.2017.8170447)
Abstract: To understand the underwater environment, it is essential to use sensing methods that can perceive the three-dimensional (3D) structure of the explored site. Sonar sensors are commonly employed in underwater exploration. This paper presents a novel methodology for retrieving 3D information about underwater objects. The proposed solution employs an acoustic camera, which represents the next generation of sonar sensors, to extract and track line features of underwater objects, which serve as visual features for the image processing algorithm. The work concentrates on artificial underwater environments such as dams and bridges; in these structured environments, line segments are preferred over point features because they represent structural information more effectively. A method for automatic extraction and correspondence matching of line features is also developed. The approach enables 3D measurement of underwater objects from arbitrary viewpoints based on an extended Kalman filter (EKF); the probabilistic formulation allows 3D reconstruction even in the presence of uncertainty in the control input of the camera's motion. Experiments performed in real environments show the effectiveness and accuracy of the proposed solution.
Citations: 5
Development of robot manipulation technology in ROS environment
Dong-Eon Kim, Dongju Park, Jeong-Hwan Moon, Ki-Seo Kim, Jin‐Hyun Park, Jangmyung Lee
DOI: 10.1109/MFI.2017.8170364 (https://doi.org/10.1109/MFI.2017.8170364)
Abstract: A new manipulation strategy is proposed for stably grasping various objects with a dual-arm robotic system in the ROS environment. The grasping pose of the dual arm is determined by the shape of the object, which is identified by a pan/tilt camera. For stable grasping, an operability index of the dual-arm robot (OPIND) is defined from the current values applied to the motors for a given grasping pose. In analyzing manipulator motion, the manipulability index of both arms is derived from the Jacobian, which relates the joint velocity vector to the workspace velocity vector and yields an elliptical region representing how easily the arm can move. In experiments, the dual-arm system is compared with and without OPIND applied.
Citations: 1
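The Jacobian-derived manipulability index mentioned in the abstract above is commonly computed as Yoshikawa's measure w = sqrt(det(J Jᵀ)), which is proportional to the volume of the velocity ellipsoid. A minimal sketch for an illustrative planar 2-link arm (the arm model and link lengths are assumptions for illustration, not the paper's robot):

```python
import numpy as np

def planar_2link_jacobian(q1, q2, l1=1.0, l2=1.0):
    """Position Jacobian of a planar 2-link arm's end effector."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def manipulability(J):
    """Yoshikawa's manipulability measure w = sqrt(det(J J^T)).
    It vanishes at singular configurations (e.g. a fully
    stretched arm) and peaks where the velocity ellipsoid is largest."""
    # Clamp tiny negative determinants caused by round-off.
    return float(np.sqrt(max(np.linalg.det(J @ J.T), 0.0)))
```

For this arm, w reduces to l1·l2·|sin(q2)|, so the elbow angle alone decides how close the pose is to singular.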
Wearable gesture control of agile micro quadrotors
Yunho Choi, Inhwan Hwang, Songhwai Oh
DOI: 10.1109/MFI.2017.8170439 (https://doi.org/10.1109/MFI.2017.8170439)
Abstract: Quadrotor unmanned aerial vehicles (UAVs) have seen a surge of use in various applications due to their structural simplicity and high maneuverability. However, conventional joystick control keeps novices from learning to maneuver quadrotors in a short time. This paper suggests using a wearable device, such as a smart watch, as a new remote controller for a quadrotor. The user's commands are recognized as gestures from the wearable device's 9-DoF inertial measurement unit (IMU) by a recurrent neural network (RNN) with long short-term memory (LSTM) cells. The implementation also makes it possible to align the heading of the quadrotor with the heading of the user. Nine different gestures are supported, and the trained RNN performs real-time gesture recognition for controlling a micro quadrotor. The proposed system exploits the sensors available in the wearable device and the quadrotor as much as possible to make gesture-based control intuitive. The performance of the system was experimentally validated using a Samsung Gear S smart watch and a Crazyflie Nano Quadcopter.
Citations: 5
Detection and classification of stochastic features using a multi-Bayesian approach
J. J. Steckenrider, T. Furukawa
DOI: 10.1109/MFI.2017.8170421 (https://doi.org/10.1109/MFI.2017.8170421)
Abstract: This paper introduces a multi-Bayesian framework for detecting and classifying features in environments rich in error-inducing noise. The approach applies Bayesian correction and classification in three distinct stages. The corrective scheme extracts useful but highly stochastic features from a data source, vision-based or otherwise, to aid higher-level classification. Unlike conventional methods, the features' uncertainties are characterized so that test data can be correctively cast into the feature space as probability distribution functions, which are then integrated over class decision boundaries created by a quadratic Bayesian classifier. The approach is specifically formulated for road crack detection and characterization, one of its potential applications. For the test images assessed with this technique, ground truth was estimated accurately and consistently with effective Bayesian correction, showing a 25% improvement in recall over standard classification, and application to road cracks demonstrated successful detection and classification in a practical domain. The approach is especially effective for characterizing highly probabilistic features in noisy environments when several correlated observations are available, either from multiple sensors or from data obtained sequentially by a single sensor.
Citations: 5
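The quadratic Bayesian classifier named in the abstract above is, in its standard form, a Gaussian class model with per-class covariances: with unequal covariances the log-posterior decision boundary is quadratic in the feature vector. A minimal sketch of that standard form (function names and toy parameters are illustrative, not the paper's implementation):

```python
import numpy as np

def qda_log_posterior(x, mean, cov, prior):
    """Unnormalized log posterior of a Gaussian class model.
    Per-class covariances make the decision boundary quadratic."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return np.log(prior) - 0.5 * logdet - 0.5 * d @ np.linalg.solve(cov, d)

def classify(x, class_params):
    """Assign x to the class with the highest log posterior.
    class_params is a list of (mean, covariance, prior) triples."""
    scores = [qda_log_posterior(x, m, c, p) for m, c, p in class_params]
    return int(np.argmax(scores))
```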
UJI RobInLab's approach to the Amazon Robotics Challenge 2017
A. P. Pobil, Majd Kassawat, A. J. Duran, M. Arias, N. Nechyporenko, Arijit Mallick, E. Cervera, Dipendra Subedi, Ilia Vasilev, D. Cardin, Emanuele Sansebastiano, Ester Martínez-Martín, A. Morales, Gustavo A. Casañ, A. Arenal, B. Goriatcheff, C. Rubert, G. Recatalá
DOI: 10.1109/MFI.2017.8170448 (https://doi.org/10.1109/MFI.2017.8170448)
Abstract: This paper describes the approach taken by the team from the Robotic Intelligence Laboratory at Jaume I University to the Amazon Robotics Challenge 2017. The goal of the challenge is to automate pick-and-place operations in unstructured environments, specifically the shelves in an Amazon warehouse. RobInLab's approach is based on a Baxter Research robot and a customized storage system. The system's modular architecture, based on ROS, allows communication between two computers, two Arduinos, and the Baxter. It integrates 9 hardware components along with 10 different algorithms to accomplish the pick and stow tasks. The paper describes the main components and pipelines of the system, along with some experimental results.
Citations: 9
Design of multiple classifier systems based on testing sample pairs
Gaochao Feng, Deqiang Han, Yi Yang, Jiankun Ding
DOI: 10.1109/MFI.2017.8170429 (https://doi.org/10.1109/MFI.2017.8170429)
Abstract: A new multiple classifier system (MCS) is proposed based on CTSP (classification based on testing sample pairs), an applicable and efficient classification method. The original CTSP outputs only crisp class labels; to exploit more of the information the classifier provides, this paper models the CTSP output with a membership function. The fuzzy-cautious ordered weighted averaging approach with evidential reasoning (FCOWA-ER) is then used to combine the membership functions produced by the different member classifiers. Experimental results show that the proposed MCS effectively improves classification performance.
Citations: 0
On state estimation and fusion with elliptical constraints
Qiang Liu, N. Rao
DOI: 10.1109/MFI.2017.8170411 (https://doi.org/10.1109/MFI.2017.8170411)
Abstract: We consider tracking of a target whose motion dynamics are subject to elliptical nonlinear constraints. State estimates are generated by sensors and sent over long-haul links to a remote fusion center for fusion. We show that the constraints can be incorporated into the estimation and fusion process by projecting estimates onto the known ellipse. Two projection methods are discussed, based on (i) the direct connection to the center and (ii) the shortest distance to the ellipse. A tracking example illustrates the performance of the projection-based methods with various fusers in a lossy long-haul tracking environment.
Citations: 3
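Method (i) in the abstract above, projection along the direct connection to the ellipse center, has a closed form when the ellipse is axis-aligned and centered at the origin: scale the point along its ray to the center until it satisfies x²/a² + y²/b² = 1. A minimal sketch under those assumptions (the paper's ellipse parameterization may differ):

```python
import numpy as np

def project_to_ellipse_center(p, a, b):
    """Project point p = (x, y) onto the ellipse x^2/a^2 + y^2/b^2 = 1
    along the straight line through the ellipse center.
    Requires p != (0, 0), since the center has no unique ray."""
    x, y = p
    # s equals 1 exactly on the ellipse, >1 outside, <1 inside.
    s = np.sqrt((x / a) ** 2 + (y / b) ** 2)
    return np.array([x / s, y / s])
```

Note that this is generally not the closest ellipse point; method (ii), the shortest-distance projection, requires solving a quartic or iterating numerically.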
3D handheld scanning based on multiview 3D registration using Kinect sensing device
Shirazi Muhammad Ayaz, Danish Khan, M. Y. Kim
DOI: 10.1109/MFI.2017.8170450 (https://doi.org/10.1109/MFI.2017.8170450)
Abstract: This paper describes the implementation of a 3D handheld scanning approach based on the Kinect. Real-time scanning devices such as the Kinect deliver 3D scans at very high rates and have been utilized in several applications, but the scans lack accuracy and reliability, which makes their use difficult. The proposed approach renders 3D point cloud data from different views and registers the clouds using visual navigation and ICP; several ICP variants are also compared with the proposed method. The approach can be used for 3D modeling applications, especially in the medical domain. Experiments and results demonstrate the feasibility of generating reliable 3D reconstructions from the Kinect's point clouds.
Citations: 1
A nearest neighbour ensemble Kalman filter for multi-object tracking
Fabian Sigges, M. Baum
DOI: 10.1109/MFI.2017.8170433 (https://doi.org/10.1109/MFI.2017.8170433)
Abstract: This paper presents an approach to multi-object tracking (MOT) based on the ensemble Kalman filter (EnKF). The EnKF is a standard algorithm for data assimilation in high-dimensional state spaces, mainly used in the geosciences, but it has so far attracted little attention for object tracking problems. In this approach, the optimal subpattern assignment (OSPA) distance is used to cope with unlabeled, noisy measurements, and robust covariance estimation with FastMCD handles possible outliers due to false detections. The algorithm is evaluated against a global nearest neighbour Kalman filter (NNKF) and a recently proposed JPDA ensemble Kalman filter (JPDA-EnKF) in a simulated scenario with multiple objects and false detections.
Citations: 4
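The OSPA distance used in the abstract above combines a cutoff-limited optimal assignment cost with a cardinality penalty. A minimal sketch of the standard definition, using brute-force assignment enumeration (adequate for the small per-scan set sizes of a tracking snapshot; the cutoff c and order p below are illustrative defaults):

```python
import itertools
import numpy as np

def ospa(X, Y, c=2.0, p=2):
    """OSPA metric between two finite point sets X and Y:
    the best one-to-one assignment under a per-point cutoff c,
    plus a penalty of c per unmatched point, order-p averaged."""
    if len(X) > len(Y):
        X, Y = Y, X          # ensure |X| <= |Y|
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0           # both sets empty
    # Brute-force optimal assignment of the m points of X into Y.
    best = min(
        sum(min(c, np.linalg.norm(x - Y[j])) ** p for x, j in zip(X, perm))
        for perm in itertools.permutations(range(n), m)
    )
    return ((best + c ** p * (n - m)) / n) ** (1.0 / p)
```

For larger sets the inner minimization is usually solved with the Hungarian algorithm instead of enumeration.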