2019 22th International Conference on Information Fusion (FUSION): Latest Publications

Secure Method for De-Identifying and Anonymizing Large Panel Datasets
2019 22th International Conference on Information Fusion (FUSION) · Pub Date: 2019-07-01 · DOI: 10.23919/fusion43075.2019.9011394
Mohanad Ajina, B. Yousefi, Jim Jones, Kathryn B. Laskey
Abstract: Government agencies, as well as private companies, may need to share private information with third-party organizations for various reasons. There are legitimate concerns about disclosing the information of individuals, sensitive details of agencies and organizations, and other private information. Consequently, information shared with external parties may be redacted to hide confidential information about individuals and companies while still providing the essential data third parties need to perform their duties. This paper presents a method to de-identify and anonymize large-scale panel data from an organization. The method can handle a variety of data types and is scalable to datasets of any size. The challenge in de-identifying and anonymizing a large and diverse dataset is to protect individual identities while retaining useful data in the presence of unstructured field data and unpredictable frequency distributions. This is addressed by analyzing the dataset and applying a filtering and aggregation method, accompanied by a streamlined implementation and post-validation process that ensures the security of the organization's data and the computational efficiency of the approach when handling large-scale panel datasets.
Citations: 0
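The abstract does not spell out the filtering-and-aggregation step, but a common building block for this kind of anonymization is suppressing categorical values whose frequency falls below a threshold, aggregating rare (potentially re-identifying) values into a catch-all bucket. A minimal sketch, in which the threshold and the `OTHER` label are illustrative assumptions, not details from the paper:

```python
from collections import Counter

def suppress_rare_values(column, min_count=5, other_label="OTHER"):
    """Replace values occurring fewer than min_count times with a
    catch-all label, so rare (potentially identifying) values are
    aggregated away while frequent values survive unchanged."""
    counts = Counter(column)
    return [v if counts[v] >= min_count else other_label for v in column]

# "Eve" appears only once, so it is suppressed into the OTHER bucket.
col = ["A", "A", "A", "B", "B", "B", "Eve"]
print(suppress_rare_values(col, min_count=2))
```

This is the aggregation side only; a full pipeline, as the abstract notes, would also validate the output against the unpredictable frequency distributions of the source data.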
Visible and Infrared Image Fusion Framework based on RetinaNet for Marine Environment
2019 22th International Conference on Information Fusion (FUSION) · Pub Date: 2019-07-01 · DOI: 10.23919/fusion43075.2019.9011182
F. Farahnakian, Jussi Poikonen, Markus Laurinen, D. Makris, J. Heikkonen
Abstract: Safety and security are critical issues in the maritime environment. Automatic and reliable object detection based on multi-sensor data fusion is an efficient way to address them in intelligent systems. In this paper, we propose an early fusion framework to achieve robust object detection. The framework first applies a fusion strategy that combines visible and infrared images into fused images. The fused images are then processed by a simple dense convolutional neural network based detector, RetinaNet, to predict multiple 2D box hypotheses and their confidences. To evaluate the proposed framework, we collected a real marine dataset using a sensor system onboard a vessel in the Finnish archipelago. This system is used for developing autonomous vessels and records data under a range of operating, climatic, and lighting conditions. The experimental results show that the proposed fusion framework detects objects of interest surrounding the vessel substantially better than the baseline approaches.
Citations: 4
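The abstract does not specify the exact fusion strategy; the simplest form of early fusion is a per-pixel weighted average of the registered visible and infrared images, computed before the detector sees them. A minimal sketch, where the weight `alpha` is an assumption rather than a value from the paper:

```python
import numpy as np

def early_fuse(visible, infrared, alpha=0.5):
    """Per-pixel weighted average of two registered images.
    visible, infrared: float arrays in [0, 1] with identical shape."""
    if visible.shape != infrared.shape:
        raise ValueError("images must be registered to the same shape")
    return alpha * visible + (1.0 - alpha) * infrared

vis = np.random.rand(480, 640, 3)
ir = np.random.rand(480, 640, 3)
fused = early_fuse(vis, ir)
print(fused.shape)  # the fused image keeps the input resolution
```

Because fusion happens at the pixel level, the downstream detector (here RetinaNet) needs no architectural changes to consume the fused input.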
Gibbs Sampling of Measurement Partitions and Associations for Extended Multi-Target Tracking
2019 22th International Conference on Information Fusion (FUSION) · Pub Date: 2019-07-01 · DOI: 10.23919/fusion43075.2019.9011272
J. Honer, Fabian Schmieder
Abstract: In this paper we propose a novel approach to the extended-target association problem in multi-target tracking for conjugate priors such as the δ-GLMB, LMB, PMBM, and MBM. By introducing dependencies between partition cells, we are able to employ a Gibbs sampler to simultaneously sample from partitions of the measurement set and association mappings. This formulation allows for a reduction of the approximation error as well as a more efficient implementation.
Citations: 3
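To illustrate the Gibbs-sampling idea in its simplest form: association variables assign each measurement to a cell, and the sampler repeatedly resamples each assignment conditional on the others. The toy below uses independent Gaussian cells, so the conditional collapses to a per-measurement posterior; the paper's contribution is precisely the harder case where partition cells are coupled. This is a didactic sketch, not the paper's algorithm:

```python
import math
import random

random.seed(0)

def gibbs_partition(measurements, centers, sigma=1.0, sweeps=20):
    """Toy Gibbs sampler over association variables: each scalar
    measurement is assigned to one of K cells (Gaussian centers),
    and every sweep resamples each assignment in turn from its
    conditional distribution given the current state."""
    z = [0] * len(measurements)
    for _ in range(sweeps):
        for i, m in enumerate(measurements):
            # Unnormalized conditional weight of each cell for m.
            weights = [math.exp(-0.5 * ((m - c) / sigma) ** 2) for c in centers]
            u, acc = random.random() * sum(weights), 0.0
            for k, wk in enumerate(weights):
                acc += wk
                if u <= acc:
                    z[i] = k
                    break
    return z

# Two well-separated cells: the sampler recovers the obvious partition.
z = gibbs_partition([0.1, -0.2, 10.3, 9.8], centers=[0.0, 10.0])
print(z)
```

With the cells this far apart the stationary distribution concentrates almost all mass on the assignment [0, 0, 1, 1].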
Feature-Aided Multitarget Tracking for Optical Belt Sorters
2019 22th International Conference on Information Fusion (FUSION) · Pub Date: 2019-07-01 · DOI: 10.23919/fusion43075.2019.9011447
Tobias Kronauer, F. Pfaff, B. Noack, Wei Tiant, G. Maier, U. Hanebeck
Abstract: Industrial optical belt sorters are highly versatile for sorting bulk material or food, especially when mechanical properties alone are not sufficient for adequate sorting quality. In previous work, we showed that sorting quality can be enhanced by replacing the commonly used line scan camera with an area scan camera. By performing multitarget tracking within the field of view, the precision of the separation mechanism can be improved. The employed kinematics-based multitarget tracking crucially depends on the ability to associate detection hypotheses of the same particle across multiple frames. In this work, we propose a procedure to incorporate the visual similarity of the detected particles into the kinematics-based multitarget tracking; the procedure is generic and evaluates visual similarity independently of the kinematics. For evaluating visual similarity, we use the Kernelized Correlation Filter, the Large Margin Nearest Neighbor method, and Normalized Cross-Correlation. Although no clear superiority of any of these visual similarity measures could be determined, an improvement in all considered error metrics was attained.
Citations: 1
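Of the three similarity measures named in the abstract, Normalized Cross-Correlation has the most compact closed form: both patches are mean-centered, then correlated and normalized, giving a score in [-1, 1] with 1 for identical patches. A self-contained sketch:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equally sized patches.
    Returns a similarity in [-1, 1]; 1 means identical up to affine
    intensity (brightness/contrast) changes."""
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

p = np.random.rand(32, 32)
print(round(ncc(p, p), 6))  # → 1.0
```

In a feature-aided tracker, such a score can be combined with the kinematic association cost, e.g. as an extra term in the assignment matrix, without touching the motion model.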
Fusion of LiDAR and Camera Images in End-to-end Deep Learning for Steering an Off-road Unmanned Ground Vehicle
2019 22th International Conference on Information Fusion (FUSION) · Pub Date: 2019-07-01 · DOI: 10.23919/fusion43075.2019.9011341
N. Warakagoda, Johann A. Dirdal, Erlend Faxvaag
Abstract: We consider the task of learning a steering policy based on deep learning for an off-road autonomous vehicle. The goal is to train a system end-to-end to make steering predictions from input images delivered by a single optical camera and a LiDAR sensor. To achieve this, we propose a neural network-based information fusion approach and study several architectures. In one study, focusing on late fusion, we investigate a system comprising two convolutional networks and a fully-connected network. The convolutional nets are trained on camera images and LiDAR images, respectively, whereas the fully-connected net is trained on the combined features from these networks. Our experimental results show that fusing image and LiDAR information yields more accurate steering predictions on our dataset than considering each data source separately. In another study, we consider several early-fusion architectures that circumvent the expensive full concatenation at the raw image level. Even though the proposed early fusion approaches performed better than unimodal systems, they were significantly inferior to the best late-fusion system. Overall, through fusion of camera and LiDAR images in an off-road setting, the normalized RMSE can be brought down to a level comparable to that of on-road environments.
Citations: 5
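The late-fusion design described above, separate feature extractors whose outputs are concatenated and passed to a fully-connected head, can be sketched with plain numpy. The feature dimensions and the single linear head are placeholders, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image, weights):
    """Stand-in for a convolutional feature extractor: one linear map
    followed by ReLU. A real system would use a trained CNN here."""
    return np.maximum(weights @ image.ravel(), 0.0)

# Two modality-specific extractors with (hypothetical) 64-dim outputs.
w_cam = rng.normal(size=(64, 32 * 32))
w_lidar = rng.normal(size=(64, 32 * 32))
w_head = rng.normal(size=(1, 128))  # fully-connected head on fused features

camera_img = rng.random((32, 32))
lidar_img = rng.random((32, 32))

# Late fusion: concatenate per-modality features, then predict steering.
fused = np.concatenate([extract_features(camera_img, w_cam),
                        extract_features(lidar_img, w_lidar)])
steering = float(w_head @ fused)  # scalar steering prediction
print(fused.shape, steering)
```

The design choice being illustrated: each modality keeps its own feature pathway, so fusion cost grows with the feature size (128 here), not with the raw image size.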
Nonlinear Decentralized Data Fusion with Generalized Inverse Covariance Intersection
2019 22th International Conference on Information Fusion (FUSION) · Pub Date: 2019-07-01 · DOI: 10.23919/fusion43075.2019.9011163
B. Noack, U. Orguner, U. Hanebeck
Abstract: Decentralized data fusion is a challenging task even for linear estimation problems. Nonlinear estimation renders data fusion even more difficult, as dependencies among the nonlinear estimates require complicated parameterizations; it is nearly impossible to reconstruct or keep track of these dependencies. Therefore, conservative approaches have become a popular solution to nonlinear data fusion. As a generalization of Covariance Intersection, exponential mixture densities have been widely applied to nonlinear fusion, but this approach inherits the conservativeness of Covariance Intersection. For this reason, the less conservative fusion rule Inverse Covariance Intersection is studied in this paper and generalized to nonlinear data fusion. The generalization employs a conservative approximation of the common information shared by the estimates to be fused; this bound on the common information is subtracted from the fusion result. In doing so, less conservative fusion results can be attained, as an empirical analysis demonstrates.
Citations: 6
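Standard Covariance Intersection, the rule this paper generalizes, fuses two estimates with unknown cross-correlation via a convex combination in information space: P⁻¹ = ω P₁⁻¹ + (1−ω) P₂⁻¹ and P⁻¹x = ω P₁⁻¹x₁ + (1−ω) P₂⁻¹x₂. A minimal sketch with a fixed ω; in practice ω is optimized, e.g. to minimize the trace of the fused covariance:

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega=0.5):
    """Fuse two estimates with unknown cross-correlation using
    Covariance Intersection: a convex combination of the two
    information matrices and information vectors."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(omega * I1 + (1.0 - omega) * I2)
    x = P @ (omega * I1 @ x1 + (1.0 - omega) * I2 @ x2)
    return x, P

x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([1.2, 0.2]), np.diag([4.0, 1.0])
x, P = covariance_intersection(x1, P1, x2, P2)
print(x, np.diag(P))
```

Inverse Covariance Intersection tightens this by additionally estimating a bound on the common information and subtracting it, which is what the abstract carries over to the nonlinear case.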
Deep Correlation Filter based Real-Time Tracker
2019 22th International Conference on Information Fusion (FUSION) · Pub Date: 2019-07-01 · DOI: 10.23919/fusion43075.2019.9011354
Lei Pu, Xinxi Feng, Z. Hou, Wangsheng Yu, Yufei Zha, Sugang Ma
Abstract: Visual tracking is a challenging task in computer vision. Recently, correlation filter based trackers have gained much attention due to their high efficiency and impressive performance. Several methods have been developed to utilize deep features extracted from Convolutional Neural Networks (CNNs) for correlation filter tracking. Despite their success, most of these approaches suffer from low tracking speed due to the high computational burden of the deep models. In this paper, we propose a CNN-based real-time deep tracker for robust visual tracking. We first exploit hierarchical deep features as the target representation, which can better distinguish the target from the background in complex situations. Before being applied to learn correlation filters, the channel count of the hierarchical features is reduced by principal component analysis (PCA), which decreases both feature redundancy and computation. We then construct a reliability map from features of the last convolutional layer, as they encode the most semantic information; the map constrains the search area to where the target most likely exists. To further handle long-term tracking, we improve the model's discrimination capability by introducing the original template into the current correlation filter model. Since target movement between adjacent frames is usually small, we reuse the deep features from the localization step to update the current model, applying a circular shift by the target's position displacement, which increases tracking speed significantly. Extensive experimental results on large benchmarks show that the proposed tracker runs in real time and achieves state-of-the-art performance.
Citations: 0
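The PCA-based channel reduction mentioned above, compressing a C-channel deep feature map to d ≪ C channels before learning the correlation filter, can be sketched as follows. The channel counts are illustrative, not the paper's configuration:

```python
import numpy as np

def reduce_channels(feature_map, d):
    """Project an (H, W, C) feature map onto its d principal channel
    directions, treating each pixel as one C-dimensional sample."""
    h, w, c = feature_map.shape
    X = feature_map.reshape(-1, c)    # rows: pixels, columns: channels
    X = X - X.mean(axis=0)            # center before PCA
    # Rows of Vt are principal directions in channel space.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return (X @ Vt[:d].T).reshape(h, w, d)

features = np.random.rand(50, 50, 512)   # e.g. one conv-layer output
compact = reduce_channels(features, d=64)
print(compact.shape)  # → (50, 50, 64)
```

Because correlation filter cost scales with the channel count, an 8x channel reduction like this directly cuts both filter learning and detection time.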
Extended target Poisson multi-Bernoulli mixture trackers based on sets of trajectories
2019 22th International Conference on Information Fusion (FUSION) · Pub Date: 2019-07-01 · DOI: 10.23919/fusion43075.2019.9011181
Yuxuan Xia, Karl Granstrom, L. Svensson, Ángel F. García-Fernández, Jason L. Williams
Abstract: The Poisson multi-Bernoulli mixture (PMBM) is a multi-target distribution that is closed under prediction and update. By applying the random finite set (RFS) framework to multi-target tracking with sets of trajectories as the variable of interest, PMBM trackers can efficiently estimate the set of target trajectories. This paper derives two trajectory RFS filters for extended target tracking, called extended target PMBM trackers. Compared to the extended target PMBM filter based on sets of targets, the extended target PMBM trackers provide explicit track continuity between time steps.
Citations: 25
Derived Heading Using Lagrange Five Point Difference Method and Fusion (poster)
2019 22th International Conference on Information Fusion (FUSION) · Pub Date: 2019-07-01 · DOI: 10.23919/fusion43075.2019.9011213
B. Sindhu, J. Valarmathi, S. Christopher
Abstract: In general, bearing measurements combined with derived heading increase the accuracy of the target state estimate in bearings-only tracking (BOT). Here, a Lagrange five-point difference (LFPD) method is proposed for deriving the heading measurements. The method is compared with the Lagrange three-point difference (LTPD) and centered difference (CD) methods in a scenario of two sensors tracking a single target. The bearing and derived heading measurements from the two sensors are fused using a measurement fusion (MF) technique. A nonlinear Extended Kalman filter (EKF) is applied to the fused measurements to obtain the optimized target state estimate. Performance of LFPD is analyzed against LTPD and CD, both without and with fusion. Simulation results indicate that MF-LFPD performs comparatively better.
Citations: 0
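The Lagrange five-point (central) difference approximates a derivative from five equally spaced samples: f'(t) ≈ [f(t−2h) − 8f(t−h) + 8f(t+h) − f(t+2h)] / (12h). Applied to x- and y-position histories, the heading follows as atan2(ẏ, ẋ). A sketch, with the sampling interval h chosen for illustration:

```python
import math

def five_point_derivative(samples, h):
    """Lagrange five-point central difference at the middle of five
    equally spaced samples [f(t-2h), f(t-h), f(t), f(t+h), f(t+2h)]."""
    f_m2, f_m1, _, f_p1, f_p2 = samples
    return (f_m2 - 8.0 * f_m1 + 8.0 * f_p1 - f_p2) / (12.0 * h)

def derived_heading(xs, ys, h):
    """Heading (radians) from five x and five y position samples."""
    return math.atan2(five_point_derivative(ys, h),
                      five_point_derivative(xs, h))

# Target moving northeast at constant velocity (vx = vy = 1 m/s):
h = 1.0
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 1.0, 2.0, 3.0, 4.0]
print(derived_heading(xs, ys, h))  # → 0.7853981... (45 degrees)
```

The five-point stencil has O(h⁴) truncation error versus O(h²) for the three-point and centered differences, which is the accuracy advantage the paper exploits.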
Fusion based estimation of the a-priori probability distribution of unknown non-stationary processes
2019 22th International Conference on Information Fusion (FUSION) · Pub Date: 2019-07-01 · DOI: 10.23919/fusion43075.2019.9011183
M. Junghans, A. Leich
Abstract: Non-stationary processes can be hard to handle, particularly if one would like to know their characteristic time-dependent probability functions. In this paper, the a-priori probability distributions of unknown non-stationary processes are estimated with different combinations of weakly coupled sensors. To quantify the unknown a-priori probabilities, Bayesian Networks (BN) are adopted for data fusion, and Dirichlet functions are applied to non-stationary, time-dependent maximum likelihood (ML) parameter learning. Several experiments demonstrate the adaptation of the non-stationary a-priori probability density functions, and the accuracy of data fusion for underlying process variables with different characteristics is determined quantitatively. It is shown that the proposed algorithm can improve data fusion when conditions on specific process and sensor characteristics are met.
Citations: 0
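Using a Dirichlet prior to learn time-dependent categorical probabilities admits a simple conjugate update: the posterior concentration is the prior plus the observed counts, and for non-stationary processes old evidence can be discounted with a forgetting factor so the estimate tracks drift. A sketch; the forgetting factor is an assumption, not a value from the paper:

```python
def dirichlet_update(alpha, counts, forgetting=0.9):
    """Conjugate Dirichlet update with exponential forgetting:
    discount old evidence, then add the new observation counts."""
    return [forgetting * a + c for a, c in zip(alpha, counts)]

def posterior_mean(alpha):
    """Posterior mean of the category probabilities."""
    total = sum(alpha)
    return [a / total for a in alpha]

alpha = [1.0, 1.0, 1.0]  # uniform prior over three process states
for counts in ([5, 0, 0], [4, 1, 0], [0, 0, 6]):  # drifting process
    alpha = dirichlet_update(alpha, counts)
probs = posterior_mean(alpha)
print(probs)  # the last batch pulls mass toward the third state
```

Without forgetting (factor 1.0) this reduces to ordinary Bayesian counting, which cannot follow a drifting distribution; the discount is what makes the estimate time-dependent.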