Signal Processing, Sensor/Information Fusion, and Target Recognition XXVII: Latest Publications

Estimation of single-point sea-surface brightness statistics (Conference Presentation)
K. Nielson
{"title":"Estimation of single-point sea-surface brightness statistics (Conference Presentation)","authors":"K. Nielson","doi":"10.1117/12.2304912","DOIUrl":"https://doi.org/10.1117/12.2304912","url":null,"abstract":"","PeriodicalId":115861,"journal":{"name":"Signal Processing, Sensor/Information Fusion, and Target Recognition XXVII","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123672174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Impact of emerging quantum information technologies (QIT) on information fusion: panel summary (Conference Presentation)
Erik Blasch, B. Balaji, I. Kadar
{"title":"Impact of emerging quantum information technologies (QIT) on information fusion: panel summary (Conference Presentation)","authors":"Erik Blasch, B. Balaji, I. Kadar","doi":"10.1117/12.2305578","DOIUrl":"https://doi.org/10.1117/12.2305578","url":null,"abstract":"Quantum physics has a growing influence on sensor technology; particularly, in the areas of quantum computer science, quantum communications, and quantum sensing based on recent insights from atomic, molecular and optical physics. These quantum contributions have the potential to impact information fusion techniques. Quantum information technology (QIT) methods of interest suggest benefits for information fusion, so a panel was organized to articulate methods of importance for the community. The panel discussion presented many ideas from which the leading impact for information fusion is directly related to the sub-Rayleigh sensing that reduces uncertainty for object assessment through enhanced resolution. The second areas of importance is in the cyber security of data that supports data, sensor, and information fusion. Some elements of QIT that require further analysis is in quantum computing for which only a limited set of information fusion techniques can harness the methods associated with quantum computer architectures. The panel reviewed various aspects of QIT for information fusion which provides a foundation to identify future alignment between quantum and information fusion techniques.","PeriodicalId":115861,"journal":{"name":"Signal Processing, Sensor/Information Fusion, and Target Recognition XXVII","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134019892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Multiscale synthetic SAR and IR imagery features generation in the cluttered virtual environment (Conference Presentation)
A. Shirkhodaie, Yuanyuan Zhou, Leila Borooshak
{"title":"Multiscale synthetic SAR and IR imagery features generation in the cluttered virtual environment (Conference Presentation)","authors":"A. Shirkhodaie, Yuanyuan Zhou, Leila Borooshak","doi":"10.1117/12.2305539","DOIUrl":"https://doi.org/10.1117/12.2305539","url":null,"abstract":"","PeriodicalId":115861,"journal":{"name":"Signal Processing, Sensor/Information Fusion, and Target Recognition XXVII","volume":"199 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132764687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-camera multi-target perceptual activity recognition via meta-data fusion (Conference Presentation)
A. Shirkhodaie, Kalyankumar Bogi
{"title":"Multi-camera multi-target perceptual activity recognition via meta-data fusion (Conference Presentation)","authors":"A. Shirkhodaie, Kalyankumar Bogi","doi":"10.1117/12.2305283","DOIUrl":"https://doi.org/10.1117/12.2305283","url":null,"abstract":"Human activity detection and recognition capabilities have broad applications for civilian, military, and homeland security. However, monitoring of human activities are very complicated and tedious tasks especially when multiple persons involved perform activities in confined spaces that impose significant obstruction, occultation and observability uncertainty. These applications require fast and reliable tracking systems to observe and inference dynamic objects from multiple coherent video sequences. In compact surveillance systems utilization of multi-cameras monitoring system is highly imperative for tracking, inference, and recognition of variety of group activities. With multi-cameras systems, complexity of occultation can be dealt with by finding and correlating the correspondences from within multiple cameras views observing the same target at once. In this paper, we demonstrate one such a multi-person tracking system developed in a virtual environment. By example, we demonstrate an efficient and effective technique for multi-target tracking, discrimination, and activity recognition in confined spaces. The exemplary scenario considered under this study represents a bus activity where multiple passengers arrive, take seats, and leave while being monitoring by four concurrently operating surveillance camera systems. In this paper, we present how processing tasks of multiple cameras are shared, what objects features they detect, track, and identify jointly. Furthermore, we present the computational intelligence techniques for processing multi-camera images for recognition of objects of interest as well as for annotation of observed individual and group activities via meta-data imagery fusion. The proposed multi-camera processing system is shown to have efficiency and effectively to track multiple targets with different degree of social interactions either with one another or with objects involved with their activities.","PeriodicalId":115861,"journal":{"name":"Signal Processing, Sensor/Information Fusion, and Target Recognition XXVII","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116354618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
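The abstract above relies on correlating correspondences across camera views so that the same passenger seen by several cameras is fused into a single track before activity recognition. The paper does not spell out its algorithm, so the following Python sketch only illustrates one common way to realize that step under assumed conditions: detections are projected into a shared world frame, merged across cameras by proximity, and then associated to tracks by greedy nearest-neighbour matching. All class and function names (Detection, Track, cluster_detections, associate) and the distance thresholds are illustrative, not from the paper.

```python
import math
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Detection:
    """One camera's observation of a target, projected into a shared world frame."""
    camera_id: int
    world_xy: Tuple[float, float]   # metres, after per-camera calibration/homography
    label: str = "person"

@dataclass
class Track:
    track_id: int
    positions: List[Tuple[float, float]] = field(default_factory=list)

def cluster_detections(detections: List[Detection], merge_dist: float = 0.5):
    """Cross-camera correspondence: detections from different cameras that fall
    within merge_dist of each other are assumed to be the same physical target."""
    clusters: List[List[Tuple[float, float]]] = []
    for det in detections:
        for c in clusters:
            cx = sum(p[0] for p in c) / len(c)
            cy = sum(p[1] for p in c) / len(c)
            if math.hypot(det.world_xy[0] - cx, det.world_xy[1] - cy) <= merge_dist:
                c.append(det.world_xy)
                break
        else:
            clusters.append([det.world_xy])
    # one fused (x, y) per physical target
    return [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c)) for c in clusters]

def associate(tracks: List[Track], fused_points, max_dist: float = 0.75):
    """Greedy nearest-neighbour association of fused observations to existing tracks;
    observations no track claims start new tracks."""
    unmatched = list(fused_points)
    for trk in tracks:
        if not unmatched:
            break
        last = trk.positions[-1]
        best = min(unmatched, key=lambda p: math.hypot(p[0] - last[0], p[1] - last[1]))
        if math.hypot(best[0] - last[0], best[1] - last[1]) <= max_dist:
            trk.positions.append(best)
            unmatched.remove(best)
    for p in unmatched:
        tracks.append(Track(track_id=len(tracks), positions=[p]))
    return tracks
```

Per frame one would call associate(tracks, cluster_detections(frame_detections)); in the bus scenario, a passenger visible to two of the four cameras yields two detections that collapse into one fused point, so only one global track is updated.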
Object recognition and tracking based on multiscale synthetic SAR and IR in the virtual environment (Conference Presentation)
A. Shirkhodaie, Cheng Zhang, Leila Borooshak, Yuanyuan Zhou
{"title":"Object recognition and tracking based on multiscale synthetic SAR and IR in the virtual environment (Conference Presentation)","authors":"A. Shirkhodaie, Cheng Zhang, Leila Borooshak, Yuanyuan Zhou","doi":"10.1117/12.2305540","DOIUrl":"https://doi.org/10.1117/12.2305540","url":null,"abstract":"Identification and tracking of dynamic 3D objects from Synthetic Aperture Radar (SAR) and Infrared (IR) Thermal imaging in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we primarily present an approach for 3D objects recognition and tracking based on their multi-modality (e.g., SAR and IR) imagery signatures and discuss a multi-scale scheme for multi-modality imagery salient keypoint descriptors extraction from 3D objects. Next, we describe how to cluster local salient keypoints and model them as signature surface patch features suitable for object detection and recognition. During our supervised training phase, multiple views of test model are presented to the system where a set of multi-scale invariant surface features are extracted from each model and registered as the object’s class signature exemplar. These features are employed during the online recognition phase to generate recognition hypotheses. When each object of interest is verified and recognized, the object’s attributes are annotated semantically. The coded semantic annotations are then efficiently presented to a Hidden Markov Model (HMM) for spatiotemporal object state discovery and tracking. Through this process, corresponding features of same objects from multiple sequential multi-modality imagery data are realized and tracked overtime. The proposed algorithm was tested using IRIS simulation model where two test scenarios were constructed. One scenario is used for activity recognition of ground-based vehicles, and the other one is used for classification of Unmanned Aerial Vehicles (UAV’s). In both scenarios, synthetic SAR and IR imagery are generated using IRIS simulation model for the purpose of training and testing of newly developed algorithms. Experimental results show that our algorithms offer significant efficiency and effectiveness.","PeriodicalId":115861,"journal":{"name":"Signal Processing, Sensor/Information Fusion, and Target Recognition XXVII","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127874851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
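The abstract above feeds coded semantic annotations into a Hidden Markov Model for spatiotemporal object state discovery. The paper's state space and probabilities are not given here, so the snippet below is only a minimal, self-contained Viterbi decoder over an invented two-state toy model ("stationary"/"moving") and an invented observation alphabet, to show how a sequence of per-frame annotation codes maps to a most likely hidden-state sequence; none of these names or numbers come from the paper.

```python
import math

# Hypothetical two-state model; the paper's actual states and probabilities are not published here.
STATES = ("stationary", "moving")
LOG_START = {"stationary": math.log(0.6), "moving": math.log(0.4)}
LOG_TRANS = {
    "stationary": {"stationary": math.log(0.8), "moving": math.log(0.2)},
    "moving":     {"stationary": math.log(0.3), "moving": math.log(0.7)},
}
# Observations are semantic annotation codes produced per frame by the recognizer (invented alphabet).
LOG_EMIT = {
    "stationary": {"no_shift": math.log(0.9), "shift": math.log(0.1)},
    "moving":     {"no_shift": math.log(0.2), "shift": math.log(0.8)},
}

def viterbi(observations):
    """Return the most likely hidden-state sequence for a list of annotation codes."""
    trellis = [{s: (LOG_START[s] + LOG_EMIT[s][observations[0]], None) for s in STATES}]
    for obs in observations[1:]:
        column = {}
        for s in STATES:
            best_prev = max(STATES, key=lambda p: trellis[-1][p][0] + LOG_TRANS[p][s])
            score = trellis[-1][best_prev][0] + LOG_TRANS[best_prev][s] + LOG_EMIT[s][obs]
            column[s] = (score, best_prev)
        trellis.append(column)
    # backtrack from the best final state
    state = max(STATES, key=lambda s: trellis[-1][s][0])
    path = [state]
    for column in reversed(trellis[1:]):
        state = column[state][1]
        path.append(state)
    return list(reversed(path))

# Example: viterbi(["no_shift", "shift", "shift", "no_shift"])
# -> ["stationary", "moving", "moving", "stationary"]
```

Swapping in the actual annotation alphabet and trained transition/emission probabilities would turn this toy decoder into the kind of state-tracking step the abstract describes.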
Deep learning of group activities from partially observable surveillance video streams (Conference Presentation)
A. Shirkhodaie
{"title":"Deep learning of group activities from partially observable surveillance video streams (Conference Presentation)","authors":"A. Shirkhodaie","doi":"10.1117/12.2305286","DOIUrl":"https://doi.org/10.1117/12.2305286","url":null,"abstract":"","PeriodicalId":115861,"journal":{"name":"Signal Processing, Sensor/Information Fusion, and Target Recognition XXVII","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130513936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Poisson maximum likelihood spectral inference (Conference Presentation)
D. Emge
{"title":"Poisson maximum likelihood spectral inference (Conference Presentation)","authors":"D. Emge","doi":"10.1117/12.2305198","DOIUrl":"https://doi.org/10.1117/12.2305198","url":null,"abstract":"","PeriodicalId":115861,"journal":{"name":"Signal Processing, Sensor/Information Fusion, and Target Recognition XXVII","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127143270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FLYSEC: A comprehensive control, command and information (C2I) system for risk-based security
A. Zalonis, S. Thomopoulos, D. Kyriazanos
{"title":"FLYSEC: A comprehensive control, command and Information (C2I) system for risk-based security","authors":"A. Zalonis, S. Thomopoulos, D. Kyriazanos","doi":"10.1117/12.2500144","DOIUrl":"https://doi.org/10.1117/12.2500144","url":null,"abstract":"Increased passenger flows at airports and the need for enhanced security measures from ever increasing and more complex threats lead to long security lines, increased waiting times, as well as often intrusive and disproportionate security measures that result in passenger dissatisfaction and escalating costs. As expressed by the International Air Transport Association (IATA), the Airports Council International, (ACI) and the respective industry, todays airport security model is not sustainable in the long term. The vision for a seamless and continuous journey throughout the airport and efficient security resources allocation based on intelligent risk analysis, set the challenging objectives for the Smart Security of the airport of the future. FLYSEC, a research and innovation project funded by the European Commission under the Horizon 2020 Framework Programme, developed and demonstrated an innovative integrated and risk-based end-to-end airport security process for passengers, while enabling a guided and streamlined procedure from landside to airside and into the boarding gates, offering for the first time an operationally validated innovative concept for end-to-end aviation security. With a consortium of eleven highly specialised partners, coordinated by the National Center for Scientific Research “Demokritos,” FLYSEC developed and tested an integrated risk-based security system with a POC (Proof Of Concept) validation field trial at the Schonhagen Airport in Berlin, and a final pilot demonstration under operational conditions at the Luxembourg International Airport.","PeriodicalId":115861,"journal":{"name":"Signal Processing, Sensor/Information Fusion, and Target Recognition XXVII","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134345209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
Embedding a distributed simulator in a fully-operational control and command airport security system
Stelios Daveas, S. Thomopoulos
{"title":"Embedding a distributed simulator in a fully-operational control and command airport security system","authors":"Stelios Daveas, S. Thomopoulos","doi":"10.1117/12.2500143","DOIUrl":"https://doi.org/10.1117/12.2500143","url":null,"abstract":"Command and Control (C2) airport security systems have developed over time, both in terms of technology and in terms of increased security features. Airport control check points are required to operate and maintain modern security systems preventing malicious actions. This paper describes the architecture of embedding a fully distributed, sophisticated simulation platform within a fully operational and robust, state-of-the-art, C2 security system in the context of airport security. The overall system, i.e. the C2, the classification tool and the embedded simulator, delivers a fully operating, validated platform which focuses on: (a) the end-to-end airport security process for passengers, airports and airlines, and (b) the ability to test and validate all security subsystems, processes, as well as the entire security system, via realistically generated and simulated scenarios both in vitro and in vivo. The C2 system has been integrated with iCrowd, a Crowd Simulation platform developed by the Integrated Systems Lab of the Institute of Informatics and Telecommunications in NCSR Demokritos, that features a highly-configurable, high-fidelity agent-based behavior simulator. iCrowd provides a realistic environment inciting behaviors of simulated actors (e.g. passengers, personnel, malicious actors), instantiates the functionality of hardware security technologies (e.g. Beacons, RFID scanners and RFID tags for carry-on luggage tracking) and simulates passengers’ facilitation and customer service. To create a realistic and domain agnostic scenario, multiple simulation instances undertake different kind of entities - whose plans and actions would be naturally unknown to each other - and run in sync constituting a Distributed Simulation Platform. Primary goal is to enable a guided and streamlined procedure from land-side to air-side and into the boarding gates, while offering an operationally validated innovative concept for testing end-to-end aviation security processes, procedures and infrastructure.","PeriodicalId":115861,"journal":{"name":"Signal Processing, Sensor/Information Fusion, and Target Recognition XXVII","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121183933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 10
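The abstract above centres on iCrowd, an agent-based behavior simulator whose virtual passengers move through the terminal while simulated sensors (e.g. Beacons, RFID scanners) observe them. iCrowd's actual API is not shown in the abstract, so the sketch below is a hypothetical, minimal illustration of the agent-update pattern such a simulator typically follows: each agent steps toward its current goal every tick, and a simulated RFID scanner emits read events for tagged agents in range. All class names, fields, and parameters are invented for illustration.

```python
import math
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Agent:
    agent_id: int
    pos: Tuple[float, float]        # (x, y) in metres
    goal: Tuple[float, float]       # current waypoint, e.g. a security lane or boarding gate
    speed: float = 1.2              # walking speed in m/s
    rfid_tag: Optional[str] = None  # carry-on luggage tag, if any

    def step(self, dt: float) -> None:
        """Move toward the goal by at most speed*dt (straight-line steering)."""
        dx, dy = self.goal[0] - self.pos[0], self.goal[1] - self.pos[1]
        dist = math.hypot(dx, dy)
        if dist < 1e-9:
            return
        move = min(self.speed * dt, dist)
        self.pos = (self.pos[0] + dx / dist * move, self.pos[1] + dy / dist * move)

@dataclass
class RfidScanner:
    pos: Tuple[float, float]
    radius: float = 2.0             # read range in metres

    def read(self, agents: List[Agent]):
        """Return tag-read events for tagged agents inside the read range."""
        return [(a.agent_id, a.rfid_tag) for a in agents
                if a.rfid_tag and math.hypot(a.pos[0] - self.pos[0],
                                             a.pos[1] - self.pos[1]) <= self.radius]

def run(agents: List[Agent], scanners: List[RfidScanner], ticks: int = 100, dt: float = 0.5):
    """Advance the whole scene tick by tick and collect simulated sensor events."""
    events = []
    for t in range(ticks):
        for a in agents:
            a.step(dt)
        for s in scanners:
            events.extend((t, *e) for e in s.read(agents))
    return events
```

Running several such instances in lock-step, each owning a different kind of entity (passengers, staff, malicious actors) and exchanging only positions and sensor events per tick, mirrors the distributed simulation set-up the abstract describes.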
Front Matter: Volume 10646
{"title":"Front Matter: Volume 10646","authors":"","doi":"10.1117/12.2500434","DOIUrl":"https://doi.org/10.1117/12.2500434","url":null,"abstract":"","PeriodicalId":115861,"journal":{"name":"Signal Processing, Sensor/Information Fusion, and Target Recognition XXVII","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115145273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0