2011 17th International Conference on Digital Signal Processing (DSP): Latest Publications

CUDA accelerated illumination preprocessing on GPUs
2011 17th International Conference on Digital Signal Processing (DSP) Pub Date : 2011-07-06 DOI: 10.1109/ICDSP.2011.6004977
Nicholas A. Vandal, M. Savvides
{"title":"CUDA accelerated illumination preprocessing on GPUs","authors":"Nicholas A. Vandal, M. Savvides","doi":"10.1109/ICDSP.2011.6004977","DOIUrl":"https://doi.org/10.1109/ICDSP.2011.6004977","url":null,"abstract":"In this paper we develop a parallelized implementation of the anisotropic diffusion image preprocessing algorithm for illumination invariant face recognition proposed by Gross and Brajovic. Our implementation employs Red-Black Gauss-Seidel relaxation running on inexpensive Graphics Processing Units (GPUs) programmed with Nvidia's CUDA framework. We are able to achieve a 20X speedup over a multithreaded implementation running on a quadcore CPU. Additionally a comparison to an open-source implementation of anisotropic diffusion in the Torch3vision library is performed, demonstrating a GPU speedup of greater than 900X over this commonly used machine vision library.","PeriodicalId":360702,"journal":{"name":"2011 17th International Conference on Digital Signal Processing (DSP)","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117307000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
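The relaxation scheme named in this abstract can be illustrated with a short CPU-side sketch. The following NumPy code is a minimal, hedged rendition of Red-Black Gauss-Seidel smoothing for a Gross-Brajovic-style luminance field; it is not the authors' CUDA implementation, and the contrast weight `rho` and the parameters `lam` and `iters` are illustrative placeholders.

```python
import numpy as np

def illumination_estimate(img, lam=5.0, iters=50):
    """Simplified luminance-field estimate in the spirit of Gross & Brajovic,
    solved with Red-Black Gauss-Seidel relaxation (CPU stand-in for the
    CUDA kernels described in the paper)."""
    I = img.astype(np.float64)
    L = I.copy()                       # initial guess for the luminance field
    rho = 1.0 / (np.abs(I) + 1e-3)     # crude local contrast weight (assumption)
    H, W = I.shape
    ii, jj = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    red = (ii + jj) % 2 == 0           # checkerboard masks
    for _ in range(iters):
        for mask in (red, ~red):       # red sweep, then black sweep
            P = np.pad(L, 1, mode="edge")
            nb = P[:-2, 1:-1] + P[2:, 1:-1] + P[1:-1, :-2] + P[1:-1, 2:]
            upd = (rho * I + lam * nb) / (rho + 4.0 * lam)
            L[mask] = upd[mask]
    return L
```

The illumination-normalised (reflectance-like) image would then be obtained roughly as `img / (illumination_estimate(img) + 1e-6)`.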
Edge-based predictive scanning scheme of DCT coefficients for inter-frame video coding
2011 17th International Conference on Digital Signal Processing (DSP) Pub Date : 2011-07-06 DOI: 10.1109/ICDSP.2011.6004907
Xingyu Zhang, O. Au, Feng Zou, Run Cha, Jiali Li
{"title":"Edge-based predictive scanning scheme of DCT coefficients for inter-frame video coding","authors":"Xingyu Zhang, O. Au, Feng Zou, Run Cha, Jiali Li","doi":"10.1109/ICDSP.2011.6004907","DOIUrl":"https://doi.org/10.1109/ICDSP.2011.6004907","url":null,"abstract":"Scanning of quantized transform coefficients is a very significant procedure in video coding. In H.264, it affects the coding efficiency of the following CABAC or CAVLC entropy coder directly. In this paper, we propose a novel edge-based predictive scanning scheme to improve the coding efficiency for inter-frame coding. This scheme includes three scanning pattern candidates. Besides zigzag pattern as defined in H.264, two alternative patterns are obtained by on-line training on frame level. Specifically, reference block is utilized to predict edge information in current 8×8 block. Based on predictive edge information, a suitable scanning pattern will be selected to scan the quantized coefficients of current 8×8 block. Since a similar prediction process can be done in the decoder side as well, no overhead is needed to be transmitted in the bit-stream. Experimental results show that the proposed edge-based predictive scanning scheme yields an average of 0.42% BD-bitrate reduction over the H.264 high profile.","PeriodicalId":360702,"journal":{"name":"2011 17th International Conference on Digital Signal Processing (DSP)","volume":"7 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120881363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
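For reference, the baseline zigzag pattern that the proposed scheme keeps as one of its three candidates can be generated in a few lines; the paper's two edge-adaptive candidates would simply be different (row, column) orderings trained online at the frame level. This is an illustrative sketch, not code from the paper.

```python
import numpy as np

def zigzag_scan(block):
    """Zigzag scan of an 8x8 coefficient block: traverse anti-diagonals in order,
    alternating direction, and return the coefficients as a 1-D sequence."""
    h, w = block.shape
    order = sorted(((r, c) for r in range(h) for c in range(w)),
                   key=lambda rc: (rc[0] + rc[1],
                                   rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))
    return np.array([block[r, c] for r, c in order])
```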
Sparse reconstruction techniques for SAR tomography
2011 17th International Conference on Digital Signal Processing (DSP) Pub Date : 2011-07-06 DOI: 10.1109/ICDSP.2011.6005022
Xiaoxiang Zhu, R. Bamler
{"title":"Sparse reconstrcution techniques for SAR tomography","authors":"Xiaoxiang Zhu, R. Bamler","doi":"10.1109/ICDSP.2011.6005022","DOIUrl":"https://doi.org/10.1109/ICDSP.2011.6005022","url":null,"abstract":"Tomographic SAR inversion, including SAR tomography and differential SAR tomography, is essentially a spectral analysis problem. The resolution in the elevation direction depends on the size of the elevation aperture, i.e. on the spread of orbit tracks. Since the orbits of modern meter-resolution space-borne SAR systems, like TerraSAR-X, are tightly controlled, the tomographic elevation resolution is at least an order of magnitude lower than in range and azimuth. Hence, super-resolution reconstruction algorithms are desired. The high anisotropy of the 3D tomographic resolution element renders the signals sparse in the elevation direction; only a few point-like reflections are expected per azimuth-range cell. Considering the sparsity of the signal in elevation, a compressive sensing based algorithm is proposed in this paper: “Scale-down by L1 norm Minimization, Model selection, and Estimation Reconstruction” (SL1MMER, pronounced “slimmer”). It combines the advantages of compressive sensing, e.g. super-resolution capability, with the high amplitude and phase accuracy of linear estimators, and features a model order selection step which is demonstrated with several examples using TerraSAR-X spotlight data. Moreover, we investigate the ultimate bounds of the technique on localization accuracy and super-resolution power. Finally, a practical demonstration of the super resolution of SL1MMER for SAR tomographic reconstruction is provided.","PeriodicalId":360702,"journal":{"name":"2011 17th International Conference on Digital Signal Processing (DSP)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130767347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
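The "scale-down by L1 norm minimization" stage of SL1MMER is a sparse recovery problem of the form y = A x + n with x sparse along elevation. As a hedged sketch of that stage only (the model-order selection and final linear estimation steps are omitted), a basic iterative soft-thresholding (ISTA) solver could look like the following; the matrix A stands in for the elevation steering matrix and, together with the regularisation weight, is an assumption of this example.

```python
import numpy as np

def ista(A, y, lam=0.1, iters=500):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    x = np.zeros(A.shape[1], dtype=complex)
    t = 1.0 / np.linalg.norm(A, 2) ** 2            # step size <= 1/Lipschitz constant
    for _ in range(iters):
        g = A.conj().T @ (A @ x - y)               # gradient of the data term
        z = x - t * g
        # complex soft-thresholding: shrink the magnitude, keep the phase
        x = np.exp(1j * np.angle(z)) * np.maximum(np.abs(z) - t * lam, 0.0)
    return x
```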
Resolution enhancement based on wavelet atomic functions
2011 17th International Conference on Digital Signal Processing (DSP) Pub Date : 2011-07-06 DOI: 10.1109/ICDSP.2011.6004972
V. Ponomaryov, Francisco Gomeztagle, V. Kravchenko
{"title":"Resolution enhancement based on wavelet atomic functions","authors":"V. Ponomaryov, Francisco Gomeztagle, V. Kravchenko","doi":"10.1109/ICDSP.2011.6004972","DOIUrl":"https://doi.org/10.1109/ICDSP.2011.6004972","url":null,"abstract":"This study analysed the implementation and performance of a novel procedure based on wavelet atomic functions (WAF) used in the high-resolution reconstruction of colour and greyscale video sequences of different types. The approach, which is justified based on key Wavelet properties such as cosine approximation, Reisz values, etc., permits the enhancement of video resolution by four times in comparison with that of the initial image format. Statistical simulation results method based on WAFs performs better at improving resolution than do existing frameworks, both in terms of objective criteria and based on the more subjective measure of human vision. Implementations of the proposal on a Digital Signal Processor (DSP) have demonstrated the possibility of real-time processing.","PeriodicalId":360702,"journal":{"name":"2011 17th International Conference on Digital Signal Processing (DSP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130791122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
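The wavelet-atomic-function kernels used in the paper are not reproduced here. As a loosely related illustration of wavelet-domain interpolation, the sketch below doubles resolution by treating the input frame as the approximation band of a one-level 2D DWT and zeroing the detail bands; it uses PyWavelets with an ordinary Daubechies wavelet, and all parameter choices are assumptions rather than the paper's method.

```python
import numpy as np
import pywt

def wavelet_upscale_2x(img, wavelet="db4"):
    """Roughly double image resolution by treating the input as the approximation
    band of a one-level 2D DWT and zeroing the detail bands (generic wavelet
    interpolation, not the paper's wavelet-atomic-function construction)."""
    zeros = np.zeros_like(img, dtype=float)
    coeffs = (img.astype(float), (zeros, zeros, zeros))   # (cA, (cH, cV, cD))
    up = pywt.idwt2(coeffs, wavelet)
    # compensate the roughly 2x analysis gain of an orthonormal 2D DWT (assumption)
    return 2.0 * up
```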
Mitigate high power interference noise in chirp radar systems using EMD-FrFT filtering
2011 17th International Conference on Digital Signal Processing (DSP) Pub Date : 2011-07-06 DOI: 10.1109/ICDSP.2011.6004955
S. Elgamel, J. Soraghan
{"title":"Mitigate high power interference noise in chirp radar systems using EMD-FrFT filtering","authors":"S. Elgamel, J. Soraghan","doi":"10.1109/ICDSP.2011.6004955","DOIUrl":"https://doi.org/10.1109/ICDSP.2011.6004955","url":null,"abstract":"This paper presents a new signal processing subsystem for conventional monopulse chirp tracking radars that offers an improved solution to the problem of dealing with manmade high power interference (jamming). It is based on the hybrid use of empirical mode decomposition (EMD) and fractional Fourier transform (FrFT). EMD-FrFT filtering is carried out for complex noisy radar chirp signals to decrease the signal's noisy components. An improvement in the signal-to-noise ratio (SNR) of up to 18 dB for different target SNRs is achieved using the proposed EMD-FrFT algorithm.","PeriodicalId":360702,"journal":{"name":"2011 17th International Conference on Digital Signal Processing (DSP)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127146848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
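A discrete fractional Fourier transform is too long for a short example, but the EMD half of the proposed EMD-FrFT chain, decomposing a noisy chirp into intrinsic mode functions and discarding the most noise-dominated ones before reconstruction, can be sketched as follows. This assumes the third-party PyEMD package (installed as EMD-signal) and uses a fixed IMF cut-off as a crude stand-in for the paper's FrFT-domain selection.

```python
import numpy as np
from PyEMD import EMD   # assumes the PyEMD / EMD-signal package

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
chirp = np.cos(2 * np.pi * (50 * t + 100 * t ** 2))   # linear-FM (chirp) pulse
noisy = chirp + 0.8 * np.random.randn(t.size)          # strong wideband interference

imfs = EMD().emd(noisy, t)          # intrinsic mode functions, highest frequency first
denoised = imfs[2:].sum(axis=0)     # drop the first IMFs, where wideband noise concentrates
```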
Augmenting virtual-reality environments with social-signal based music content
2011 17th International Conference on Digital Signal Processing (DSP) Pub Date : 2011-07-06 DOI: 10.1109/ICDSP.2011.6004944
Ioannis Karydis, I. Deliyannis, A. Floros
{"title":"Augmenting virtual-reality environments with social-signal based music content","authors":"Ioannis Karydis, I. Deliyannis, A. Floros","doi":"10.1109/ICDSP.2011.6004944","DOIUrl":"https://doi.org/10.1109/ICDSP.2011.6004944","url":null,"abstract":"Virtual environments and computer games incorporate music in order to enrich the audiovisual experience and further immerse users. Selecting musical content during design-time can have a controversial result based on the preferences of the users involved, while limiting the interactivity of the environment, affecting thus the effectiveness of immersion. In this work, we introduce a framework for the selection and incorporation of user preferable musical data into interactive virtual environments and games. The framework designates guidelines for both design and run-time annotation of scenes. Consequently, personal music preferences collected through local repositories or social networks can be processed, analysed, categorised and prepared for direct incorporation into virtual environments. This permits automated audio selection based on scene characteristics and scene characters' interaction, enriching or replacing the default designer choices. Proof-of-concept is given via development of a web-service that provides a video game with a dynamic interactive audio content based on predefined video game scene annotation and user musical preferences recorded in social network services.","PeriodicalId":360702,"journal":{"name":"2011 17th International Conference on Digital Signal Processing (DSP)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125577233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Blind copy move image forgery detection using dyadic undecimated wavelet transform
2011 17th International Conference on Digital Signal Processing (DSP) Pub Date : 2011-07-06 DOI: 10.1109/ICDSP.2011.6004974
G. Muhammad, M. Hussain, Khalid Khawaji, G. Bebis
{"title":"Blind copy move image forgery detection using dyadic undecimated wavelet transform","authors":"G. Muhammad, M. Hussain, Khalid Khawaji, G. Bebis","doi":"10.1109/ICDSP.2011.6004974","DOIUrl":"https://doi.org/10.1109/ICDSP.2011.6004974","url":null,"abstract":"In this paper, we propose a blind copy move image forgery detection method using dyadic wavelet transform (DyWT). DyWT is shift invariant and therefore more suitable than discrete wavelet transform (DWT) for data analysis. First we decompose the input image into approximation (LL1) and detail (HH1) subbands. Then we divide LL1 and HH1 subbands into overlapping blocks and measure the similarity between blocks. The key idea is that the similarity between the copied and moved blocks from the LL1 subband should be high, while the one from the HH1 subband should be low due to noise inconsistency in the moved block. We sort pairs of blocks based on high similarity using the LL1 subband and high dissimilarity using the HH1 subband. Using thresholding, we obtain matched pairs from the sorted list as copied and moved blocks. Experimental results show the effectiveness of the proposed method over competitive methods using DWT and the LL1 or HH1 subbands only.","PeriodicalId":360702,"journal":{"name":"2011 17th International Conference on Digital Signal Processing (DSP)","volume":"1053 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123151534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 56
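A heavily reduced sketch of the block-matching idea on a shift-invariant LL subband is shown below using PyWavelets' stationary (undecimated) wavelet transform; the HH-based dissimilarity test and the verification steps described in the abstract are omitted, and the block size, step and threshold are illustrative assumptions.

```python
import numpy as np
import pywt

def copy_move_candidates(img, block=16, step=4, thresh=1.0):
    """Flag pairs of nearly identical overlapping blocks in the undecimated LL
    subband (a reduced sketch of the matching stage only)."""
    h, w = img.shape
    img = img[: h - h % 2, : w - w % 2].astype(float)   # swt2 needs even dimensions
    (LL, _), = pywt.swt2(img, "haar", level=1)          # one undecimated level
    feats, coords = [], []
    for r in range(0, LL.shape[0] - block + 1, step):
        for c in range(0, LL.shape[1] - block + 1, step):
            feats.append(LL[r:r + block, c:c + block].ravel())
            coords.append((r, c))
    feats = np.array(feats)
    order = np.lexsort(feats.T[::-1])                   # lexicographic block sort
    pairs = []
    for a, b in zip(order[:-1], order[1:]):             # neighbours in sorted order
        far = abs(coords[a][0] - coords[b][0]) + abs(coords[a][1] - coords[b][1]) > block
        if far and np.linalg.norm(feats[a] - feats[b]) / block ** 2 < thresh:
            pairs.append((coords[a], coords[b]))
    return pairs
```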
Concatenative singing voice resynthesis
2011 17th International Conference on Digital Signal Processing (DSP) Pub Date : 2011-07-06 DOI: 10.1109/ICDSP.2011.6004986
N. Fonseca, Aníbal J. S. Ferreira, Ana Paula Rocha
{"title":"Concatenative singing voice resynthesis","authors":"N. Fonseca, Aníbal J. S. Ferreira, Ana Paula Rocha","doi":"10.1109/ICDSP.2011.6004986","DOIUrl":"https://doi.org/10.1109/ICDSP.2011.6004986","url":null,"abstract":"The concept of capturing the sound of “something” for later replication is not new, and it is used in many synthesizers. But capturing sounds and use them as an audio effect, is less common. This paper presents an approach for the resynthesis of a singing voice, based on concatenative techniques, that uses pre-recorded audio material as an high level semantic audio effect, replacing an original audio recording with the sound of a different singer, while trying to keep the same musical/phonetic performance.","PeriodicalId":360702,"journal":{"name":"2011 17th International Conference on Digital Signal Processing (DSP)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114579114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
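The splicing step behind any concatenative (re)synthesis, joining selected pre-recorded units with short crossfades, can be illustrated in a few lines; unit selection and the phonetic/pitch matching that this paper focuses on are not shown, and the sample rate and fade length are assumptions.

```python
import numpy as np

def concatenate_units(units, fs=44100, fade_ms=20):
    """Join selected audio units with linear crossfades, the basic splicing step
    in concatenative synthesis; each unit is assumed longer than the fade."""
    fade = int(fs * fade_ms / 1000)
    ramp = np.linspace(0.0, 1.0, fade)
    out = units[0].astype(float)
    for u in units[1:]:
        u = u.astype(float)
        out[-fade:] = out[-fade:] * (1 - ramp) + u[:fade] * ramp   # crossfade join
        out = np.concatenate([out, u[fade:]])
    return out
```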
Face detection in a compressed domain
2011 17th International Conference on Digital Signal Processing (DSP) Pub Date : 2011-07-06 DOI: 10.1109/ICDSP.2011.6004888
Guido Manfredi, D. Ziou, M. Auclair-Fortier
{"title":"Face detection in a compressed domain","authors":"Guido Manfredi, D. Ziou, M. Auclair-Fortier","doi":"10.1109/ICDSP.2011.6004888","DOIUrl":"https://doi.org/10.1109/ICDSP.2011.6004888","url":null,"abstract":"We focus on the Viola-Jones face detector in the Discrete Cosine Transform (DCT). In order to avoid typical pitfalls due to features in the DCT domain we propose to merge the integral image stage with the decompression process, thus saving time for both operations. The proposed method saves 128 additions and 224 multiplications on an 8×8 block for a simple integral image. We propose a similar method for fast feature contrast adjustment by computing the squared integral image using DCT and simple integral image coefficients. These methods are faster and more transmission error resilient than classical methods.","PeriodicalId":360702,"journal":{"name":"2011 17th International Conference on Digital Signal Processing (DSP)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115991009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
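The baseline that the paper improves on, fully decoding the 8×8 DCT blocks and then building the integral image with cumulative sums, can be sketched as follows with SciPy. The merged coefficient-domain computation that saves the reported additions and multiplications is not reproduced here, and `dct_blocks` is assumed to be a grid (list of lists) of 8×8 coefficient arrays.

```python
import numpy as np
from scipy.fft import idctn

def integral_image_from_dct_blocks(dct_blocks):
    """Baseline: inverse-DCT each 8x8 block, tile the blocks into an image, then
    build the integral image with cumulative sums along both axes."""
    rows = [np.hstack([idctn(b, norm="ortho") for b in row]) for row in dct_blocks]
    img = np.vstack(rows)
    return img.cumsum(axis=0).cumsum(axis=1)    # integral image S(x, y)
```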
Variational level set method with shape constraint and application to oedema cardiac magnetic resonance image
2011 17th International Conference on Digital Signal Processing (DSP) Pub Date : 2011-07-06 DOI: 10.1109/ICDSP.2011.6004960
K. Kadir, Hao Gao, A. Payne, J. Soraghan, C. Berry
{"title":"Variational level set method with shape constraint and application to oedema cardiac magnetic resonance image","authors":"K. Kadir, Hao Gao, A. Payne, J. Soraghan, C. Berry","doi":"10.1109/ICDSP.2011.6004960","DOIUrl":"https://doi.org/10.1109/ICDSP.2011.6004960","url":null,"abstract":"Quantification of oedema area after acute myocardial infarction (MI) is very important in clinical prognosis for differentiating the viable and death myocardial tissues. In order to quantify oedema region, the first step is to segment the myocardial wall accurately. This paper applies variational level set method with shape constraint to oedema cardiac magnetic resonance (CMR) images. Shape information of the myocardial wall is introduced into the variational level set formulation, and the performance of the automatic method is tested on T2 weighted CMR images from 8 patients, and compared with manual analysis from two clinical experts. Results show that the proposed automatic segmentation framework can segment left ventricle (LV) boundary with no significant difference compared to manual segmentation,","PeriodicalId":360702,"journal":{"name":"2011 17th International Conference on Digital Signal Processing (DSP)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122002355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
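As a hedged, greatly simplified illustration of region-based variational level set evolution (no shape constraint and only crude smoothing in place of a proper curvature term), a Chan-Vese-style iteration might look like the following; it shows the structure of such an update loop, not the authors' formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def region_level_set(img, iters=200, dt=0.5, mu=1.0):
    """Minimal Chan-Vese-style evolution: phi > 0 marks the segmented region.
    Shape priors and proper curvature terms from the paper are omitted."""
    img = img.astype(float)
    h, w = img.shape
    Y, X = np.ogrid[:h, :w]
    phi = (min(h, w) / 4.0) - np.sqrt((Y - h / 2) ** 2 + (X - w / 2) ** 2)  # initial circle
    for _ in range(iters):
        inside, outside = phi > 0, phi <= 0
        c1 = img[inside].mean() if inside.any() else 0.0
        c2 = img[outside].mean() if outside.any() else 0.0
        force = (img - c2) ** 2 - (img - c1) ** 2   # push phi up where the pixel fits c1 better
        phi += dt * force / (np.abs(force).max() + 1e-8)
        phi = gaussian_filter(phi, mu)              # crude smoothness regularisation
    return phi > 0
```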