Latest Publications: SPIE Defense + Security

Electronic and structural response of materials to fast intense laser pulses, including light-induced superconductivity
SPIE Defense + Security Pub Date: 2016-06-02 DOI: 10.1117/12.2225129
R. Allen
Abstract: This is a very brief discussion of some experimental and theoretical studies of materials responding to fast intense laser pulses, with emphasis on cases where the electronic and structural responses are both potentially important (and ordinarily coupled). Examples are nonthermal insulator-to-metal transitions and light-induced superconductivity in cuprates, fullerenes, and an organic Mott insulator.
Citations: 3
Comparison of turbulence mitigation algorithms
SPIE Defense + Security Pub Date: 2016-06-02 DOI: 10.1117/12.2225617
Stephen T. Kozacik, Aaron L. Paolini, J. Bonnett, E. Kelmelis
Abstract: When capturing image data over long distances (0.5 km and above), images are often degraded by atmospheric turbulence, especially when imaging paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. These algorithms differ in their computational requirements, usability demands, and degree of independence from camera sensors, and they produce different degrees of enhancement when applied to turbulent imagery. Additionally, some are applicable to real-time operational scenarios, while others are only suitable for post-processing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005 as part of our ATCOM [1] image processing suite. In this paper, we compare techniques from the literature with our commercially available real-time GPU-accelerated turbulence mitigation software suite, as well as with in-house research algorithms. These comparisons are made using real, experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics include image quality, video latency, computational complexity, and potential for real-time operation.
Citations: 5
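The abstract does not spell out the compared algorithms, so as an illustration of the trade-offs being measured (enhancement versus latency), here is a crude temporal-averaging baseline. This is not ATCOM or any method from the paper, only a minimal sketch: averaging co-registered frames suppresses time-varying warping and scintillation at the cost of added latency.

```python
import numpy as np

def temporal_average(frames, window=5):
    """Sliding-window mean over a stack of co-registered frames (T, H, W).

    A crude turbulence-mitigation baseline: time-varying distortions partly
    cancel in the mean, but each output frame lags by ~window/2 frames.
    """
    frames = np.asarray(frames, dtype=float)
    out = np.empty_like(frames)
    for t in range(len(frames)):
        lo = max(0, t - window // 2)
        hi = min(len(frames), t + window // 2 + 1)
        out[t] = frames[lo:hi].mean(axis=0)
    return out
```

The window size directly trades video latency against the degree of stabilization, which is exactly the kind of trade-off the paper's comparison metrics quantify.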
A comparative study of multi-focus image fusion validation metrics
SPIE Defense + Security Pub Date: 2016-05-31 DOI: 10.1117/12.2224349
Michael Giansiracusa, Adam Lutz, Neal Messer, Soundararajan Ezekiel, M. Alford, Erik Blasch, A. Bubalo, Michael Manno
Abstract: Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. Image fusion is particularly useful when fusing imagery captured at multiple levels of focus: different focus levels create different visual quality in different regions of the imagery, so fused imagery can provide much more visual information to analysts. Multi-focus image fusion would benefit users through automation, which requires evaluating fused images to determine whether the focused regions of each source image were properly fused. Many no-reference metrics, such as information theory based, image feature based, and structural similarity based metrics, have been developed for such comparisons. However, an accurate assessment of visual quality is hard to scale, which requires validating these metrics for different types of applications. To this end, human-perception-based validation methods have been developed, particularly using receiver operating characteristic (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods, in order to determine which should be used for multi-focus data. Preliminary results show that the Tsallis and spatial frequency (SF) metrics are consistent with image quality and peak signal-to-noise ratio (PSNR).
Citations: 2
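Two of the quantities named in the abstract are easy to state concretely. A minimal sketch of the spatial frequency (SF) metric and PSNR, assuming grayscale NumPy arrays; this illustrates the standard definitions, not the paper's exact implementation:

```python
import numpy as np

def spatial_frequency(img: np.ndarray) -> float:
    """No-reference sharpness measure: SF = sqrt(RF^2 + CF^2), where RF/CF
    are the RMS first differences along rows and columns. Higher SF suggests
    better-preserved detail in a fused image."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return float(np.hypot(rf, cf))

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB against a reference image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Note that PSNR needs a reference image, while SF does not, which is why SF-style metrics matter for validating fusion when no ground truth exists.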
Multi-focus and multi-modal fusion: a study of multi-resolution transforms
SPIE Defense + Security Pub Date: 2016-05-31 DOI: 10.1117/12.2224347
Michael Giansiracusa, Adam Lutz, Soundararajan Ezekiel, M. Alford, Erik Blasch, A. Bubalo, M. Thomas
Abstract: Automated image fusion has a wide range of applications across fields such as biomedical diagnostics, night vision, and target recognition. Automation in image fusion is difficult because many types of imagery data can be fused using different multi-resolution transforms, each of which provides different coefficients for fusion, creating a large number of possibilities. This paper seeks to understand how automation could be conceived for selecting the multi-resolution transform for different applications, starting in the multi-focus and multi-modal image sub-domains. The study analyzes which transforms are most effective in each sub-domain, identifying one or two transforms that are most effective for image fusion. The transform techniques are compared comprehensively to find a correlation between the characteristics of the fusion inputs and the optimal transform. The assessment uses no-reference image fusion metrics, including information theory based, image feature based, and structural similarity based methods.
Citations: 3
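As a concrete instance of multi-resolution fusion, here is a minimal single-level Haar wavelet fusion sketch using a common rule (average the approximation band, keep the larger-magnitude detail coefficient). The paper compares a broader set of transforms and rules; this only illustrates the general mechanism.

```python
import numpy as np

def haar2(img):
    """Single-level 2-D Haar transform of an even-sized image.
    Returns the (LL, LH, HL, HH) subbands."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return out

def haar_fuse(img_a, img_b):
    """Fuse two registered grayscale images: average the approximation band,
    take the larger-magnitude coefficient in each detail band."""
    sub_a, sub_b = haar2(img_a.astype(float)), haar2(img_b.astype(float))
    ll = (sub_a[0] + sub_b[0]) / 2
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(sub_a[1:], sub_b[1:])]
    return ihaar2(ll, *details)
```

Swapping the transform (wavelet, bandelet, contourlet) while keeping the same coefficient rule is exactly the design space whose automation the paper studies.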
Bandelet-based image fusion: a comparative study for multi-focus images
SPIE Defense + Security Pub Date: 2016-05-31 DOI: 10.1117/12.2224329
Michael Giansiracusa, Adam Lutz, Neal Messer, Soundararajan Ezekiel, E. Blasch, M. Alford
Abstract: There is a strong initiative to maximize the visual information in a single image by fusing the salient data from multiple images. Many multi-focus imaging systems could provide better image data if their images were fused together; a fused image allows an analyst to make decisions based on a single image rather than cross-referencing multiple images. The bandelet transform has proven to be an effective multi-resolution transform for both denoising and image fusion through its ability to calculate geometric flow in localized regions and decompose the image over an orthogonal basis in the direction of the flow. Many studies have developed and validated algorithms for wavelet image fusion, but the bandelet has not been well investigated. This study investigates the use of bandelet coefficients in place of wavelet coefficients in modified versions of image fusion algorithms. There are many methods for fusing these coefficients for multi-focus and multi-modal images, such as the simple average, absolute minimum and maximum, principal component analysis (PCA), and a weighted average. This paper compares the image fusion methods using a variety of no-reference image fusion metrics, including information theory based, image feature based, and structural similarity based assessments.
Citations: 2
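A bandelet implementation is beyond a short sketch, but the PCA fusion rule named in the abstract can be illustrated directly. The weights come from the dominant eigenvector of the 2x2 covariance of the two sources; applying it to raw pixel intensities rather than bandelet coefficients is an illustrative simplification, not the paper's setup.

```python
import numpy as np

def pca_fuse(img_a, img_b):
    """PCA fusion rule: a weighted average whose weights are taken from the
    dominant principal component of the two sources, so the source with more
    variance (more detail) contributes more."""
    stack = np.stack([np.ravel(img_a), np.ravel(img_b)]).astype(float)
    cov = np.cov(stack)                    # 2x2 covariance, rows as variables
    _, vecs = np.linalg.eigh(cov)          # eigenvalues ascending
    w = np.abs(vecs[:, -1])                # dominant eigenvector, made positive
    w = w / w.sum()                        # normalize to a convex combination
    return w[0] * np.asarray(img_a, dtype=float) + w[1] * np.asarray(img_b, dtype=float)
```

Because the weights are nonnegative and sum to one, the fused result is always a pixelwise convex combination of the two inputs.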
Optimal multi-focus contourlet-based image fusion algorithm selection
SPIE Defense + Security Pub Date: 2016-05-31 DOI: 10.1117/12.2224325
Adam Lutz, Michael Giansiracusa, Neal Messer, Soundararajan Ezekiel, E. Blasch, M. Alford
Abstract: Multi-focus image fusion is becoming increasingly prevalent, as there is a strong initiative to maximize the visual information in a single image by fusing the salient data from multiple images for visualization. This allows an analyst to make decisions more efficiently, based on a larger amount of information, because multiple images need not be cross-referenced. The contourlet transform has proven to be an effective multi-resolution transform for both denoising and image fusion through its ability to capture directional and anisotropic properties while being designed to decompose the discrete two-dimensional domain. Many studies have developed and validated algorithms for wavelet image fusion, but the contourlet has not been as thoroughly studied. Substituting contourlet coefficients for wavelet coefficients in image fusion algorithms yields contourlet image fusion. There is a multitude of methods for fusing these coefficients, and the results demonstrate an opportunity for fusing coefficients in the contourlet domain for multi-focus images. This paper compares the algorithms using a variety of no-reference image fusion metrics, including information theory based, image feature based, and structural similarity based assessments, to select the image fusion method.
Citations: 5
Moving human full body and body parts detection, tracking, and applications on human activity estimation, walking pattern and face recognition
SPIE Defense + Security Pub Date: 2016-05-31 DOI: 10.1117/12.2224319
Hai-Wen Chen, Mike McGurr
Abstract: We have developed a new approach to detection and tracking of the human full body and body parts using color (intensity) patch morphological segmentation and adaptive thresholding for security surveillance cameras. An adaptive threshold scheme handles body size changes, illumination condition changes, and cross-camera parameter changes. Tests with the PETS 2009 and 2014 datasets show a high probability of detection and a low probability of false alarm for the full body, and the results indicate that our full-body detection method can considerably outperform current state-of-the-art methods in both detection performance and computational complexity. Furthermore, we have developed several methods using color features for detection and tracking of human body parts (arms, legs, torso, head, etc.). For example, we developed a human skin color sub-patch segmentation algorithm that first applies an RGB to YIQ transformation and then a subtractive I/Q image fusion with morphological operations. With this method, we can reliably detect and track skin-colored body parts such as the face, neck, arms, and legs. Reliable body-part (e.g., head) detection allows us to continuously track an individual person even when multiple closely spaced persons merge, and accordingly we developed a new algorithm to split a merged detection blob back into individual detections based on the detected head positions. Detected body parts also allow us to extract important local constellation features of the body-part positions and angles relative to the full body. These features are useful for human walking gait pattern recognition and human pose estimation (e.g., standing or falling down) for potential abnormal behavior and accidental event detection, as evidenced by our experimental tests. Furthermore, based on reliable head (face) tracking, we applied a super-resolution algorithm to enhance face resolution for improved face recognition performance.
Citations: 4
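The RGB-to-YIQ step uses the standard NTSC matrix; the subtractive I/Q threshold below is an illustrative assumption, since the abstract does not publish the authors' exact values or morphological post-processing. A minimal sketch of the skin-color segmentation idea:

```python
import numpy as np

def rgb_to_yiq(rgb):
    """Convert an HxWx3 RGB image (floats in [0, 1]) to YIQ using the
    standard NTSC transformation matrix."""
    m = np.array([[0.299, 0.587, 0.114],
                  [0.596, -0.274, -0.322],
                  [0.211, -0.523, 0.312]])
    return rgb @ m.T

def skin_mask(rgb, i_min=0.05):
    """Subtractive I/Q fusion: skin tones have a strongly positive I and a
    small Q, so thresholding (I - Q) yields a rough skin-colored-pixel mask.
    The i_min threshold is a hypothetical value for illustration."""
    yiq = rgb_to_yiq(rgb)
    return (yiq[..., 1] - yiq[..., 2]) > i_min
```

In the paper's pipeline, a mask like this would then be cleaned with morphological operations before body-part tracking.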
Efficient sidelobe ASK based dual-function radar-communications
SPIE Defense + Security Pub Date: 2016-05-26 DOI: 10.1117/12.2227252
A. Hassanien, M. Amin, Yimin D. Zhang, F. Ahmad
Abstract: Recently, dual-function radar-communications (DFRC) has been proposed as a means to mitigate the spectrum congestion problem. Existing amplitude-shift keying (ASK) methods for information embedding do not take full advantage of the highest permissible sidelobe level. In this paper, a new ASK-based signaling strategy for enhancing the signal-to-noise ratio (SNR) at the communication receiver is proposed. The proposed method employs one reference waveform and simultaneously transmits a number of orthogonal waveforms equal to the number of 1's in the binary sequence being embedded. A 3 dB SNR gain is achieved compared to existing sidelobe ASK methods. The effectiveness of the proposed information embedding strategy is verified using simulation examples.
Citations: 9
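A heavily simplified sketch of the signaling idea described above: one reference waveform plus one orthogonal waveform per '1' bit, with the receiver recovering bits by matched filtering. The waveform construction (QR-orthonormalized noise), dimensions, and detection threshold are illustrative assumptions, not the authors' signal model, and the noiseless channel sidesteps the SNR analysis entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 4, 64  # bits per pulse, samples per waveform (illustrative sizes)

# Orthonormal waveforms: QR of a random matrix gives orthonormal columns.
q, _ = np.linalg.qr(rng.standard_normal((N, K + 1)))
reference, data_wf = q[:, 0], q[:, 1:]

def embed(bits):
    """Transmit the reference plus one orthogonal waveform per '1' bit."""
    return reference + data_wf @ np.asarray(bits, dtype=float)

def recover(rx):
    """Matched-filter against each data waveform; orthogonality makes each
    correlation 0 or 1 in the noiseless case, so threshold at 0.5."""
    return (data_wf.T @ rx > 0.5).astype(int)
```

Because the waveforms are orthonormal, the correlation against each data waveform isolates its bit exactly, independent of how many other waveforms were transmitted.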
Computationally efficient beampattern synthesis for dual-function radar-communications
SPIE Defense + Security Pub Date: 2016-05-26 DOI: 10.1117/12.2227333
A. Hassanien, M. Amin, Yimin D. Zhang
Abstract: The essence of amplitude-modulation-based dual-function radar-communications (DFRC) is to modulate the sidelobe of the transmit beampattern while keeping the main beam, where the radar function takes place, unchanged during the entire processing interval. The number of distinct sidelobe levels (SLLs) required for information embedding grows exponentially with the number of bits being embedded. We propose a simple and computationally cheap method for transmit beampattern synthesis that requires designing and storing only two beamforming weight vectors. The method first designs a principal transmit beamforming weight vector based on the requirements dictated by the radar function of the DFRC system; a second weight vector is then obtained by enforcing a deep null toward the intended communication directions. Additional SLLs can be realized by simply taking weighted linear combinations of the two available weight vectors. The effectiveness of the proposed method for beampattern synthesis is verified using simulation examples.
Citations: 11
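The two-weight-vector idea can be sketched for a simple uniform linear array. The array size, the two directions, and the null-projection construction below are illustrative assumptions; the paper's actual design honors richer radar-side constraints.

```python
import numpy as np

M = 16  # array elements (illustrative)

def steering(theta_deg):
    """ULA steering vector for half-wavelength element spacing."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(theta_deg)))

a_main = steering(0.0)   # radar main-beam direction
a_comm = steering(40.0)  # intended communication direction

# Principal weight vector: conventional beamformer toward the main beam.
w1 = a_main / M
# Second vector: project a_comm out of w1, enforcing a deep null toward
# the communication receiver.
w2 = w1 - a_comm * (a_comm.conj() @ w1) / (a_comm.conj() @ a_comm)

def sidelobe_level(alpha):
    """Gain toward the communication direction for w = alpha*w1 + (1-alpha)*w2.

    w2 contributes zero toward a_comm, so the level varies linearly with
    alpha: a small set of alphas realizes the distinct SLLs needed for
    information embedding without redesigning the beamformer."""
    w = alpha * w1 + (1 - alpha) * w2
    return float(abs(w.conj() @ a_comm))
```

Only w1 and w2 need to be stored; every additional sidelobe level is a scalar combination, which is the computational saving the paper claims.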
Evaluation of the use of 3D printing and imaging to create working replica keys
SPIE Defense + Security Pub Date: 2016-05-26 DOI: 10.1117/12.2225187
J. Straub, Scott D. Kerlin
Abstract: This paper considers the efficacy of 3D scanning and printing technologies for producing duplicate keys. Key duplication based on remotely sensed data represents a significant security threat, as it removes pathways to determining who illicitly gained access to secured premises. Key to understanding the threat is characterizing how easily the data required for key production can be obtained, and how well keys produced by this method work. The results of an experiment characterizing this are discussed and generalized to different key types, and the effect of alternate data sources on imaging requirements is considered.
Citations: 1