2009 IEEE International Conference on Signal and Image Processing Applications: Latest Publications

A level set based predictor-corrector algorithm for vessel segmentation
2009 IEEE International Conference on Signal and Image Processing Applications Pub Date : 2009-11-01 DOI: 10.1109/ICSIPA.2009.5478611
Wei Yan, T. Zhu, Yongming Xie, Wai-Man Pang, J. Qin, Jianhuang Wu, P. Heng
{"title":"A level set based predictor-corrector algorithm for vessel segmentation","authors":"Wei Yan, T. Zhu, Yongming Xie, Wai-Man Pang, J. Qin, Jianhuang Wu, P. Heng","doi":"10.1109/ICSIPA.2009.5478611","DOIUrl":"https://doi.org/10.1109/ICSIPA.2009.5478611","url":null,"abstract":"Vessel segmentation is an essential task in many computer-aided medical systems. However, the topology complexity of vascular structures and the intensity inhomogeneity of angiogram make it a challenging problem. We propose a level set based predictor-corrector algorithm to meet these challenges. In the predictor step, the overall contour of vessel structures is delineated by piecewise constant (PC) model, which is insensitive to the initial contour and adaptive to the complex morphological variations of vessel structures. In the corrector step, the segmented results are refined by an improved local binary fitting (LBF) model, which can efficiently deal with intensity inhomogeneity in the angiogram, especially in the distal part of the vessels. Compared to original LBF model, our approach can avoid the emergence of new contour in non-vascular regions. The proposed algorithm takes both global and local information into consideration and combines the advantages of PC model and LBF model. Experimental results on MRA images demonstrate the feasibility of our algorithm.","PeriodicalId":400165,"journal":{"name":"2009 IEEE International Conference on Signal and Image Processing Applications","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129802788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
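For illustration, the predictor step's piecewise-constant idea can be sketched as a Chan-Vese-style update in Python. This is a minimal sketch of the generic PC model under simple assumptions, not the authors' implementation, and the LBF corrector step is omitted.

```python
import numpy as np
from scipy import ndimage

def piecewise_constant_step(image, phi, dt=0.5):
    """One predictor-style update of a level-set function under the
    piecewise-constant (Chan-Vese) model: the contour moves toward the
    region whose mean intensity is closer. Rough sketch only."""
    inside = phi > 0
    c1 = image[inside].mean() if inside.any() else 0.0   # mean inside the contour
    c2 = image[~inside].mean() if (~inside).any() else 0.0  # mean outside
    # Force term: pixels closer to c1 are pulled inside, pixels closer to c2 outside.
    force = (image - c2) ** 2 - (image - c1) ** 2
    force = force / (np.abs(force).max() + 1e-8)          # keep the step stable
    phi = phi + dt * force
    # Light Gaussian smoothing stands in for the curvature/regularisation term.
    return ndimage.gaussian_filter(phi, sigma=1.0)
```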
A simple approach to determine the best threshold value for automatic image thresholding
2009 IEEE International Conference on Signal and Image Processing Applications Pub Date : 2009-11-01 DOI: 10.1109/ICSIPA.2009.5478623
Abdul Halim Ismail, M. Marhaban
{"title":"A simple approach to determine the best threshold value for automatic image thresholding","authors":"Abdul Halim Ismail, M. Marhaban","doi":"10.1109/ICSIPA.2009.5478623","DOIUrl":"https://doi.org/10.1109/ICSIPA.2009.5478623","url":null,"abstract":"Image thresholding is a powerful yet simple method to highlight the subject from its background in image scene analysis. Lots of methods have been proposed around the globe while some researchers regard this matter as a non-trivial problem. This paper proposes a simple approach for fast calculation of the threshold value for automatic image thresholding based on gradient analysis of the image histogram. The method manages to successfully differentiate the subject from the background. The proposed approach is validated by illustrative examples. Satisfactory results were acquired with other methods that use more complex algorithms.","PeriodicalId":400165,"journal":{"name":"2009 IEEE International Conference on Signal and Image Processing Applications","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128670608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
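One plausible reading of a histogram-gradient threshold is sketched below: smooth the histogram, locate valleys where the gradient changes sign, and take the deepest one. The smoothing width and the valley rule are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def histogram_gradient_threshold(gray, bins=256, smooth=7):
    """Pick a threshold at the deepest valley of the smoothed histogram,
    located via sign changes of its gradient. Illustrative only."""
    hist, edges = np.histogram(gray.ravel(), bins=bins)
    kernel = np.ones(smooth) / smooth
    hist_s = np.convolve(hist, kernel, mode="same")       # moving-average smoothing
    grad = np.gradient(hist_s.astype(float))
    # Candidate valleys: gradient goes from negative to non-negative.
    valleys = [i for i in range(1, bins - 1) if grad[i - 1] < 0 <= grad[i]]
    if not valleys:
        return float(gray.mean())                          # fall back to the mean
    best = min(valleys, key=lambda i: hist_s[i])           # deepest valley
    return float(edges[best])
```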
Extraction of human gait features from enhanced human silhouette images
2009 IEEE International Conference on Signal and Image Processing Applications Pub Date : 2009-11-01 DOI: 10.1109/ICSIPA.2009.5478691
Hu Ng, W. Tan, Hau-Lee Tong, J. Abdullah, R. Komiya
{"title":"Extraction of human gait features from enhanced human silhouette images","authors":"Hu Ng, W. Tan, Hau-Lee Tong, J. Abdullah, R. Komiya","doi":"10.1109/ICSIPA.2009.5478691","DOIUrl":"https://doi.org/10.1109/ICSIPA.2009.5478691","url":null,"abstract":"In this paper, a new approach is proposed for extracting human gait features from a walking human based on the silhouette image. The approach consists of five stages: clearing the background noise of image by morphological opening; measuring the width and height of the human silhouette; dividing the enhanced human silhouette into six body segments based on anatomical knowledge; applying morphological skeleton to obtain the body skeleton; and applying Hough transform to obtain the joint angles from the body segment skeletons. The joint angles together with the height and width of the human silhouette are collected and used for gait analysis. From the experiment conducted, it can be observed that the proposed system is feasible as satisfactory results have been achieved.","PeriodicalId":400165,"journal":{"name":"2009 IEEE International Conference on Signal and Image Processing Applications","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129102914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
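A rough sketch of the silhouette pipeline using scikit-image is shown below. The six-segment anatomical split is omitted, and the Hough-line orientations only stand in for the paper's joint angles; structuring-element and Hough parameters are assumptions.

```python
import numpy as np
from skimage import morphology, transform

def gait_features(silhouette):
    """silhouette: 2-D boolean array (True = person, assumed non-empty).
    Returns a few of the measurements in the pipeline, simplified."""
    # 1) Morphological opening to clear background noise.
    clean = morphology.binary_opening(silhouette, morphology.disk(3))
    # 2) Height and width of the silhouette from its bounding box.
    rows, cols = np.nonzero(clean)
    height = rows.max() - rows.min() + 1
    width = cols.max() - cols.min() + 1
    # 3) Morphological skeleton of the whole body.
    skeleton = morphology.skeletonize(clean)
    # 4) Straight segments on the skeleton via a probabilistic Hough transform;
    #    their orientations stand in for joint/limb angles.
    lines = transform.probabilistic_hough_line(skeleton, threshold=10,
                                               line_length=15, line_gap=3)
    angles = [np.degrees(np.arctan2(p1[1] - p0[1], p1[0] - p0[0]))
              for p0, p1 in lines]
    return height, width, angles
```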
Multi-level segmentation method for serial computed tomography brain images
2009 IEEE International Conference on Signal and Image Processing Applications Pub Date : 2009-11-01 DOI: 10.1109/ICSIPA.2009.5478636
W. M. Diyana, W. Zaki, M. Faizal, A. Fauzi, R. Besar, W. Munirah, Wan Siti Halimatul Munirah Wan Ahmad
{"title":"Multi-level segmentation method for serial computed tomography brain images","authors":"W. M. Diyana, W. Zaki, M. Faizal, A. Fauzi, R. Besar, W. Munirah, Wan Siti Halimatul Munirah Wan Ahmad","doi":"10.1109/ICSIPA.2009.5478636","DOIUrl":"https://doi.org/10.1109/ICSIPA.2009.5478636","url":null,"abstract":"This paper presents an automated computed tomography brain segmentation approach used to segment intracranial into brain matters and cerebrospinal fluid in order to detect any asymmetry present. Intracranial midline is used as reference axial where left and right segmented regions are subjectively compared. Two-level Otsu multi-thresholding method has been developed and applied to 213 abnormal cases of serial computed tomography brain images of thirty one patients. Prior to that, multilevel Fuzzy C-Means is used to extract the intracranial from background and skull. The segmented regions found to be very useful in providing information regarding normal and abnormal structures in the intracranial where any asymmetry detected would indicate high probability of abnormalities. This approach proved to effectively isolate important homogenous regions of computed tomography brain images from which extracted features would provide a strong basis in the application of content-based medical image retrieval.","PeriodicalId":400165,"journal":{"name":"2009 IEEE International Conference on Signal and Image Processing Applications","volume":"129 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125223683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
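Two-level Otsu multi-thresholding corresponds to multi-Otsu with three classes. The sketch below assumes scikit-image's threshold_multiotsu and shows only that step; the Fuzzy C-Means extraction of the intracranial region and the left/right asymmetry comparison are not reproduced.

```python
import numpy as np
from skimage.filters import threshold_multiotsu

def segment_intracranial(intracranial, classes=3):
    """Two-level Otsu multi-thresholding: two thresholds split the intracranial
    intensities into three classes (e.g. CSF and two brain-matter classes).
    The class interpretation here is illustrative, not the paper's mapping."""
    thresholds = threshold_multiotsu(intracranial, classes=classes)
    labels = np.digitize(intracranial, bins=thresholds)   # label map with values 0, 1, 2
    return thresholds, labels
```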
A new robust wavelet based algorithm for baseline wandering cancellation in ECG signals
2009 IEEE International Conference on Signal and Image Processing Applications Pub Date : 2009-11-01 DOI: 10.1109/ICSIPA.2009.5478671
A. Sargolzaei, K. Faez, S. Sargolzaei
{"title":"A new robust wavelet based algorithm for baseline wandering cancellation in ECG signals","authors":"A. Sargolzaei, K. Faez, S. Sargolzaei","doi":"10.1109/ICSIPA.2009.5478671","DOIUrl":"https://doi.org/10.1109/ICSIPA.2009.5478671","url":null,"abstract":"Wavelet transform has been emerged over recent years as a powerful time-frequency analysis and signal coding tool favored for the interrogation of complex non stationary signals. Its application to bio-signal processing has been at the forefront of these developments where it has been found particularly useful in the study of these, often problematic, signals: none more so than the Electrocardiogram (ECG). In this paper, the emerging roles of the wavelet transform in the ECG preprocessing and noise removing step is discussed in detail. One of the most important noise sources, baseline wandering, which can be affected ECG signal analysis is introduced and a new method based on wavelet transform is being proposed. The proposed method construct a model of baseline wander with multiresolution analysis of the signal using discrete wavelet transform and then remove the baseline wander from the ECG signal using the constructed model. Simulations were carried out to show the performance of the algorithm using the MIT-BIH noise stress test database and PTB diagnosis database. The quality of the results by the proposed technique is found to meet or exceed that of published results using other conventional methods such as kalman filtering and conventional digital filters.","PeriodicalId":400165,"journal":{"name":"2009 IEEE International Conference on Signal and Image Processing Applications","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117054445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 27
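A generic version of a DWT baseline-wander model can be sketched with PyWavelets: reconstruct only the coarse approximation as the baseline estimate and subtract it. The wavelet, decomposition level (which should suit the sampling rate), and this exact construction are assumptions, not necessarily the authors' algorithm.

```python
import numpy as np
import pywt

def remove_baseline_wander(ecg, wavelet="db4", level=8):
    """Estimate the baseline as the coarse approximation of a multiresolution
    decomposition and subtract it, assuming baseline wander lives in the
    lowest-frequency subband."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    # Zero every detail band; keep only the approximation -> baseline model.
    baseline_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    baseline = pywt.waverec(baseline_coeffs, wavelet)[: len(ecg)]
    return ecg - baseline, baseline
```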
Efficient textile recognition via decomposition of co-occurrence matrices
2009 IEEE International Conference on Signal and Image Processing Applications Pub Date : 2009-11-01 DOI: 10.1109/ICSIPA.2009.5478606
K. Loke, M. Cheong
{"title":"Efficient textile recognition via decomposition of co-occurrence matrices","authors":"K. Loke, M. Cheong","doi":"10.1109/ICSIPA.2009.5478606","DOIUrl":"https://doi.org/10.1109/ICSIPA.2009.5478606","url":null,"abstract":"Textile motifs such as Batik and Songket are common native textile design throughout South East Asia, and are often imbued with cultural and spiritual meanings. However despite its cultural importance, automatic classification and retrieval work based on design motifs are not extensive. Previous work based on texture classification methods have proved successful but uses over 700 attributes. We show in this work that the number of attributes can be reduced down to 2% without significantly reducing the classification rate. This indicates that with the appropriate attribute reduction, fast recognition and classification of Batik and Songket textiles can be achieved.","PeriodicalId":400165,"journal":{"name":"2009 IEEE International Conference on Signal and Image Processing Applications","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121610934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
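As an illustration of co-occurrence-based attributes, the sketch below extracts standard GLCM properties with scikit-image and uses PCA as a stand-in for the paper's decomposition-based attribute reduction; the property set, distances/angles, and component count are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA

def glcm_features(gray_u8, distances=(1, 2),
                  angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Stack a few standard GLCM properties over several distances and angles.
    gray_u8: 2-D uint8 image."""
    glcm = graycomatrix(gray_u8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def reduce_attributes(feature_matrix, n_components=8):
    """Illustrative attribute reduction: PCA as a stand-in for the paper's
    co-occurrence matrix decomposition."""
    return PCA(n_components=n_components).fit_transform(feature_matrix)
```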
An elastic band model for Shape retrieval
2009 IEEE International Conference on Signal and Image Processing Applications Pub Date : 2009-11-01 DOI: 10.1109/ICSIPA.2009.5478678
Soonwook Hwang, Won-Du Chang, Jungpil Shin
{"title":"An elastic band model for Shape retrieval","authors":"Soonwook Hwang, Won-Du Chang, Jungpil Shin","doi":"10.1109/ICSIPA.2009.5478678","DOIUrl":"https://doi.org/10.1109/ICSIPA.2009.5478678","url":null,"abstract":"In this paper, we present novel model to find the contour of an image in phases. The model was designed by modelling physical motion of a real elastic band. This is applied to top of any existing shape representation methods in order to offer additional information. The model makes us deal with global and local shape characteristics at once and results in robustness to distortion and noise on the retrieval system as it does not miss the detailed data. Dynamic time warping algorithm is employed to measure the similarity between shapes. We have experimented in MPEG-7-CE-Shape-1 part B image database and the results show that our model is robust to distortion, noise and images having complexities inside of the boundary.","PeriodicalId":400165,"journal":{"name":"2009 IEEE International Conference on Signal and Image Processing Applications","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121701203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
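The similarity measure named in the abstract, dynamic time warping, can be sketched directly; the elastic-band contour representation itself is not reproduced here, and the sequences are assumed to be 1-D per-point descriptors sampled along two contours.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, and match transitions.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```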
A Neural Network based system for Persian sign language recognition
2009 IEEE International Conference on Signal and Image Processing Applications Pub Date : 2009-11-01 DOI: 10.1109/ICSIPA.2009.5478627
Azadeh Kiani Sarkaleh, F. Poorahangaryan, Bahman Zanj, A. Karami
{"title":"A Neural Network based system for Persian sign language recognition","authors":"Azadeh Kiani Sarkaleh, F. Poorahangaryan, Bahman Zanj, A. Karami","doi":"10.1109/ICSIPA.2009.5478627","DOIUrl":"https://doi.org/10.1109/ICSIPA.2009.5478627","url":null,"abstract":"This paper presents a static gesture recognition system for recognizing some selected words of Persian sign language (PSL). The required images for the selected words are obtained using a digital camera. The color images are first resized, and then converted to grayscale images. Then, the discrete wavelet transform (DWT) is applied on the selected images and some features are extracted. Finally, a multi layered Perceptron (MLP) Neural Network (NN) is trained to classify the selected images. Our recognition system does not use any gloves or visual marking systems. The system was implemented and tested using a data set of 240 samples of Persian sign images; 30 images for each sign. The experiments show that the proposed system is able to classify the selected PSL signs with a classification accuracy of 98.75% when the network is trained using MATLAB NN Toolbox.","PeriodicalId":400165,"journal":{"name":"2009 IEEE International Conference on Signal and Image Processing Applications","volume":"40 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131067580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
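A hedged sketch of the described pipeline (resize, grayscale, 2-D DWT features, MLP classifier) is given below, using PyWavelets and scikit-learn in place of the MATLAB NN Toolbox; the image size, wavelet, decomposition level, and network size are illustrative assumptions.

```python
import pywt
from skimage import color, transform
from sklearn.neural_network import MLPClassifier

def dwt_features(rgb_image, size=(64, 64), wavelet="haar", level=2):
    """Resize, convert to grayscale, and use the coarse 2-D DWT approximation
    coefficients as a feature vector (one plausible reading of the pipeline)."""
    gray = color.rgb2gray(rgb_image)
    gray = transform.resize(gray, size, anti_aliasing=True)
    coeffs = pywt.wavedec2(gray, wavelet, level=level)
    return coeffs[0].ravel()            # approximation subband as features

def train_classifier(feature_matrix, labels):
    """Illustrative MLP training; hyperparameters are assumptions."""
    return MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(feature_matrix, labels)
```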
Tapping and detecting optical signals from optical CDMA networks
2009 IEEE International Conference on Signal and Image Processing Applications Pub Date : 2009-11-01 DOI: 10.1109/ICSIPA.2009.5478617
H. Bakarman, F. Hasoon, S. Shaari, M. Ismail
{"title":"Tapping and detecting optical signals from optical CDMA networks","authors":"H. Bakarman, F. Hasoon, S. Shaari, M. Ismail","doi":"10.1109/ICSIPA.2009.5478617","DOIUrl":"https://doi.org/10.1109/ICSIPA.2009.5478617","url":null,"abstract":"This paper presents the eavesdropper performance to tap and detect encoded spectral chips from optical CDMA networks. The probability of correctly detecting encoded spectral chip pulses is investigated based on Modified Double Weight (MDW) Spectral Amplitude Optical CDMA. For probability of correct detection of 0.5, an eavesdropper receiver would need to detect SNR of 8 dB. The eavesdropper performance is investigated by simulation. Tapping efficiency of 20 % is enough to tap encoded spectral chips with reasonable sensitivity.","PeriodicalId":400165,"journal":{"name":"2009 IEEE International Conference on Signal and Image Processing Applications","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127723572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
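For orientation only, the sketch below computes the per-chip detection probability of a threshold detector in additive Gaussian noise (a textbook on-off keying model). This generic stand-in does not reproduce the paper's MDW/OCDMA-specific 8 dB result, which depends on the code structure and receiver setup.

```python
import numpy as np
from scipy.special import erfc

def chip_detection_probability(snr_db):
    """Probability of correctly deciding one on/off chip with a mid-level
    threshold in additive Gaussian noise, where SNR is the squared level
    separation over the noise variance. Textbook model, assumptions only."""
    snr = 10.0 ** (snr_db / 10.0)
    # Q(x) = 0.5 * erfc(x / sqrt(2)); per-chip error probability = Q(sqrt(SNR)/2).
    q = 0.5 * erfc(np.sqrt(snr) / (2.0 * np.sqrt(2.0)))
    return 1.0 - q

# Example: chip_detection_probability(8.0) gives a per-chip figure, not the
# paper's whole-codeword detection probability.
```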
A New approach of Rate-Quantization modeling for Intra and Inter frames in H.264 rate control
2009 IEEE International Conference on Signal and Image Processing Applications Pub Date : 2009-11-01 DOI: 10.1109/ICSIPA.2009.5478701
Miryem Hrarti, Hakim Saadane, M. Larabi, A. Tamtaoui, D. Aboutajdine
{"title":"A New approach of Rate-Quantization modeling for Intra and Inter frames in H.264 rate control","authors":"Miryem Hrarti, Hakim Saadane, M. Larabi, A. Tamtaoui, D. Aboutajdine","doi":"10.1109/ICSIPA.2009.5478701","DOIUrl":"https://doi.org/10.1109/ICSIPA.2009.5478701","url":null,"abstract":"Video encoding rate control has been the research focus in the recent years. The existing rate control algorithms use Rate-Distortion (R-D) or Rate-Quantization (R-Q) models. These latter assume that the enhancement of the bit allocation process, the quantization parameter determination and the buffer management are essentially based on the improvement of complexity measures estimation. Inaccurate estimation leads to wrong quantization parameters and affects significantly the global performance. Therefore, several improved frame complexity measures are proposed in literature. The efficiency of such measures is however limited by the linear prediction model which remains still inaccurate to encode complexity between two neighbour frames. In this paper, we propose a new approach of Rate-Quantization modeling for both Intra and Inter frame without any complexity measure estimation. This approach results from extensive experiments and proposes two Rate-Quantization models. The first one (M1) aims at determining an optimal initial quantization parameter for Intra frames based on sequence target bit-rate and frame rate. The second model (M2) determines the quantization parameter of Inter coding unit (Frame or Macroblock) according to the statistics of the previous coded ones. This model substitutes both linear and quadratic models used in H.264 rate controller. The simulations have been carried out using both JM10.2 and JM15.0 reference softwares. Compared to JM10.2, M1 alone, improves the PSNR up to 1.93dB, M2 achieves a closer output bit-rate and similar quality while the combined model (M1+M2) minimizes the computational complexity. (M1+M2) outperforms both JM10.2 and JM15.0 in terms of PSNR.","PeriodicalId":400165,"journal":{"name":"2009 IEEE International Conference on Signal and Image Processing Applications","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123816598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
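For context, the conventional quadratic R-Q model that M2 is said to replace can be sketched as follows: R = c1*MAD/Qs + c2*MAD/Qs^2, solved for the quantization step Qs given a bit budget. The coefficients below are illustrative, and the MAD prediction and coefficient-update steps are omitted.

```python
import numpy as np

def qstep_from_quadratic_rq(target_bits, mad, c1, c2):
    """Solve the conventional quadratic R-Q model
        R = c1*MAD/Qs + c2*MAD/Qs^2
    for the quantization step Qs, given a bit budget R and predicted MAD.
    c1 and c2 are illustrative; in practice they are refit per frame."""
    # Multiply through by Qs^2: R*Qs^2 - c1*MAD*Qs - c2*MAD = 0.
    a, b, c = target_bits, -c1 * mad, -c2 * mad
    disc = b * b - 4.0 * a * c
    return (-b + np.sqrt(disc)) / (2.0 * a)   # positive root of the quadratic

# Example (hypothetical numbers): qstep_from_quadratic_rq(12000, 4.0, 1200.0, 3000.0)
```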