Secure Method for De-Identifying and Anonymizing Large Panel Datasets
Mohanad Ajina, B. Yousefi, Jim Jones, Kathryn B. Laskey
2019 22nd International Conference on Information Fusion (FUSION), July 2019. DOI: 10.23919/fusion43075.2019.9011394
Abstract: Government agencies, as well as private companies, may need to share private information with third-party organizations for various reasons. There are legitimate concerns about disclosing the information of individuals, sensitive details of agencies and organizations, and other private information. Consequently, information shared with external parties may be redacted to hide confidential information about individuals and companies while still providing the essential data third parties require to perform their duties. This paper presents a method to de-identify and anonymize large-scale panel data from an organization. The method can handle a variety of data types and is scalable to datasets of any size. The challenge of de-identifying and anonymizing a large-scale, diverse dataset is to protect individual identities while retaining useful data in the presence of unstructured field data and unpredictable frequency distributions. This is addressed by analyzing the dataset and applying a filtering and aggregation method, accompanied by a streamlined implementation and post-validation process that ensures the security of the organization's data and the computational efficiency of the approach on large-scale panel datasets.

Visible and Infrared Image Fusion Framework based on RetinaNet for Marine Environment
F. Farahnakian, Jussi Poikonen, Markus Laurinen, D. Makris, J. Heikkonen
2019 22nd International Conference on Information Fusion (FUSION), July 2019. DOI: 10.23919/fusion43075.2019.9011182
Abstract: Safety and security are critical issues in the maritime environment. Automatic and reliable object detection based on multi-sensor data fusion is an efficient way to address these issues in intelligent systems. In this paper, we propose an early fusion framework to achieve robust object detection. The framework first applies a fusion strategy that combines visible and infrared images into fused images. The fused images are then processed by a simple dense convolutional neural network based detector, RetinaNet, to predict multiple 2D box hypotheses and the corresponding confidences. To evaluate the proposed framework, we collected a real marine dataset using a sensor system onboard a vessel in the Finnish archipelago. This system is used for developing autonomous vessels and records data under a range of operating, climatic, and lighting conditions. The experimental results show that the proposed fusion framework identifies objects of interest surrounding the vessel substantially better than the baseline approaches.
{"title":"Gibbs Sampling of Measurement Partitions and Associations for Extended Multi-Target Tracking","authors":"J. Honer, Fabian Schmieder","doi":"10.23919/fusion43075.2019.9011272","DOIUrl":"https://doi.org/10.23919/fusion43075.2019.9011272","url":null,"abstract":"In this paper we propose a novel approach to handle the extended target association problem in multi-target tracking for conjugate priors like the $delta$-GLMB, LMB, PMBM and MBM. By introducing dependencies between partition cells we are able to employ a Gibbs sampler to simultaneously sample from partitions of the measurement set and association mappings. This formulation allows for a reduction of the approximation error as well as a more efficient implementation.","PeriodicalId":348881,"journal":{"name":"2019 22th International Conference on Information Fusion (FUSION)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114761902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Feature-Aided Multitarget Tracking for Optical Belt Sorters
Tobias Kronauer, F. Pfaff, B. Noack, Wei Tiant, G. Maier, U. Hanebeck
2019 22nd International Conference on Information Fusion (FUSION), July 2019. DOI: 10.23919/fusion43075.2019.9011447
Abstract: Industrial optical belt sorters are highly versatile in sorting bulk material or food, especially when mechanical properties alone are not sufficient for adequate sorting quality. In previous work, we showed that sorting quality can be enhanced by replacing the commonly used line scan camera with an area scan camera. By performing multitarget tracking within the field of view, the precision of the separation mechanism can be improved. The employed kinematics-based multitarget tracking crucially depends on the ability to associate detection hypotheses of the same particle across multiple frames. In this work, we propose a generic procedure that incorporates the visual similarity of the detected particles into the kinematics-based multitarget tracking and evaluates visual similarity independently of the kinematics. As similarity measures, we use the Kernelized Correlation Filter, the Large Margin Nearest Neighbor method, and the Normalized Cross-Correlation. Although no single visual similarity measure proved clearly superior, all considered error metrics improved.
{"title":"Fusion of LiDAR and Camera Images in End-to-end Deep Learning for Steering an Off-road Unmanned Ground Vehicle","authors":"N. Warakagoda, Johann A. Dirdal, Erlend Faxvaag","doi":"10.23919/fusion43075.2019.9011341","DOIUrl":"https://doi.org/10.23919/fusion43075.2019.9011341","url":null,"abstract":"We consider the task of learning the steering policy based on deep learning for an off-road autonomous vehicle. The goal is to train a system in an end-to-end fashion to make steering predictions from input images delivered by a single optical camera and a LiDAR sensor. To achieve this, we propose a neural network-based information fusion approach and study several architectures. In one study focusing on late fusion, we investigate a system comprising two convolutional networks and a fully-connected network. The convolutional nets are trained on camera images and LiDAR images, respectively, whereas the fully-connected net is trained on combined features from each of these networks. Our experimental results show that fusing image and LiDAR information yields more accurate steering predictions on our dataset, than considering each data source separately. In another study we consider several architectures performing early fusion that circumvent the expensive full concatenation at raw image level. Even though the proposed early fusion approaches performed better than unimodal systems, they were significantly inferior to the best system based on late fusion. Overall, through fusion of camera and LiDAR images in an off-road setting, the normalized RMSE errors can be brought down to a region comparable to that for on-road environments.","PeriodicalId":348881,"journal":{"name":"2019 22th International Conference on Information Fusion (FUSION)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114976505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nonlinear Decentralized Data Fusion with Generalized Inverse Covariance Intersection","authors":"B. Noack, U. Orguner, U. Hanebeck","doi":"10.23919/fusion43075.2019.9011163","DOIUrl":"https://doi.org/10.23919/fusion43075.2019.9011163","url":null,"abstract":"Decentralized data fusion is a challenging task even for linear estimation problems. Nonlinear estimation renders data fusion even more difficult as dependencies among the nonlinear estimates require complicated parameterizations. It is nearly impossible to reconstruct or keep track of dependencies. Therefore, conservative approaches have become a popular solution to nonlinear data fusion. As a generalization of Covariance Intersection, exponential mixture densities have been widely applied for nonlinear fusion. However, this approach inherits the conservativeness of Covariance Intersection. For this reason, the less conservative fusion rule Inverse Covariance Intersection is studied in this paper and also generalized to nonlinear data fusion. This generalization employs a conservative approximation of the common information shared by the estimates to be fused. This bound of the common information is subtracted from the fusion result. In doing so, less conservative fusion results can be attained as an empirical analysis demonstrates.","PeriodicalId":348881,"journal":{"name":"2019 22th International Conference on Information Fusion (FUSION)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125742527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Deep Correlation Filter based Real-Time Tracker
Lei Pu, Xinxi Feng, Z. Hou, Wangsheng Yu, Yufei Zha, Sugang Ma
2019 22nd International Conference on Information Fusion (FUSION), July 2019. DOI: 10.23919/fusion43075.2019.9011354
Abstract: Visual tracking is a challenging task in computer vision. Recently, correlation filter based trackers have gained much attention due to their high efficiency and impressive performance. Several methods have been developed to exploit deep features extracted from convolutional neural networks (CNNs) for correlation filter tracking. Despite their success, most of these approaches suffer from low tracking speed due to the high computational burden of the deep models. In this paper, we propose a CNN-based real-time deep tracker for robust visual tracking. We first exploit hierarchical deep features as the target representation, which better distinguish the target from the background in complex situations. Before the features are used to learn correlation filters, the number of feature channels is reduced by principal component analysis (PCA), which decreases both feature redundancy and computation. We then construct a reliability map from the features of the last convolutional layer, as they encode the most semantic information; this map is used to constrain the search area to where the target most likely exists. To further handle long-term tracking, we improve the model's discrimination capability by introducing the original template into the current correlation filter model. Since the target movement between adjacent frames is usually small, we reuse the deep features from the localization step to update the current model: the features are reused by applying a circular shift corresponding to the target's position displacement, which increases the tracking speed significantly. Extensive experimental results on large benchmarks show that the proposed tracker runs in real time and achieves state-of-the-art performance.

Extended target Poisson multi-Bernoulli mixture trackers based on sets of trajectories
Yuxuan Xia, Karl Granström, L. Svensson, Ángel F. García-Fernández, Jason L. Williams
2019 22nd International Conference on Information Fusion (FUSION), July 2019. DOI: 10.23919/fusion43075.2019.9011181
Abstract: The Poisson multi-Bernoulli mixture (PMBM) is a multi-target distribution for which the prediction and update are closed, i.e., both steps preserve the PMBM form. By applying the random finite set (RFS) framework to multi-target tracking with sets of trajectories as the variable of interest, PMBM trackers can efficiently estimate the set of target trajectories. This paper derives two trajectory RFS filters for extended target tracking, called extended target PMBM trackers. Compared to the extended target PMBM filter based on sets of targets, the extended target PMBM trackers provide explicit track continuity between time steps.
{"title":"Derived Heading Using Lagrange Five Point Difference Method and Fusion (poster)","authors":"B. Sindhu, J. Valarmathi, S. Christopher","doi":"10.23919/fusion43075.2019.9011213","DOIUrl":"https://doi.org/10.23919/fusion43075.2019.9011213","url":null,"abstract":"In general, bearing measurements combined with the derived heading increases the accuracy of target state estimate in bearing only tracking (BOT). Here, Lagrange five point difference (LFPD) method is proposed for deriving the heading measurements. This method is compared with Lagrange three point difference (LTPD) and centered difference (CD) method by considering the scenario of two sensors tracking a single target. The bearing and derived heading measurements from two sensors are fused using measurement fusion (MF) technique. The nonlinear Extended Kalman filter (EKF) is implemented using fused measurement to obtain the optimized target state estimate. Performance analyses are made between LFPD compared with LTPD and CD by considering without and with fusion technique. Simulation results indicate MF-LFPD performs comparatively better.","PeriodicalId":348881,"journal":{"name":"2019 22th International Conference on Information Fusion (FUSION)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129360919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fusion based estimation of the a-priori probability distribution of unknown non-stationary processes","authors":"M. Junghans, A. Leich","doi":"10.23919/fusion43075.2019.9011183","DOIUrl":"https://doi.org/10.23919/fusion43075.2019.9011183","url":null,"abstract":"Non-stationary processes can be hard to handle, particular if one would like to know their characterizing time dependent probability functions. In this paper the a-priori probability distributions of unknown non-stationary processes are estimated with different combinations of weakly coupled sensors. For quantification of the unknown a-priori probabilities Bayesian Networks (BN) are adopted for data fusion and Dirichlet functions are applied on non-stationary, time-dependent maximum likelihood (ML) parameter learning. In several experiments the adaption of the non-stationary a-priori probability density functions is shown and the accuracy of data fusion regarding the underlying process variables with different characteristics are determined quantitatively. It is shown that the proposed algorithm can improve data fusion in case conditions for specific process and sensor characteristics are met.","PeriodicalId":348881,"journal":{"name":"2019 22th International Conference on Information Fusion (FUSION)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130069342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}