2009 IEEE 17th Signal Processing and Communications Applications Conference: Latest Publications

Color in attention control
2009 IEEE 17th Signal Processing and Communications Applications Conference Pub Date : 2009-04-09 DOI: 10.1109/SIU.2009.5136369
O. Erkent, H. I. Bozma
Abstract: This paper addresses a simplified version of attention control, in which a robot is asked to attend to scene points of an a priori specified color. Differing from classical approaches, where generating fixations is based on explicit search, we introduce the requirement that the search strategy be accompanied by a series of saccades whose nature controls the fixation process. In the explicit search approach, the scene point whose color is most similar to the "looked-for" color is first determined, and the camera is then made to move to that point. In the artificial potential functions approach, the two stages are merged: the camera simply starts moving towards a point whose color is similar to the target color, although not necessarily the most similar. We present working implementations of both approaches, reporting actual experiments with an attentive robot and comparing the resulting search behaviors.
Citations: 0
Implementation of a VAD+ algorithm and its evaluation
2009 IEEE 17th Signal Processing and Communications Applications Conference Pub Date : 2009-04-09 DOI: 10.1109/SIU.2009.5136413
Turgay Koç, T. Çiloglu
Abstract: In this paper, METU VAD+, which was designed and implemented for the ECESS (European Center of Excellence on Speech Synthesis) VAD+ (voice activity and voicing detection) evaluation campaign, is described, and the results of the campaign are presented. The results of the campaign show that METU VAD+ performs better as a voice activity detector on speech recorded in highly noisy environments.
Citations: 0
Statistical facial feature extraction using joint distribution of location and texture information
2009 IEEE 17th Signal Processing and Communications Applications Conference Pub Date : 2009-04-09 DOI: 10.1109/SIU.2009.5136471
M. Yilmaz, Hakan Erdogan, M. Unel
Abstract: A facial feature extraction method is proposed in this work that uses the location and texture information in a given face image. Location and texture information can be learnt automatically by the system from training data. The best facial feature locations are found by maximizing the joint distribution of the location and texture information of the facial features. The performance of the method was found promising when tested on 100 test images. It is also observed that the new method outperforms active appearance models on the same test data.
Citations: 1
Fundamental frequency tracking of musical signals with correntropy
2009 IEEE 17th Signal Processing and Communications Applications Conference Pub Date : 2009-04-09 DOI: 10.1109/SIU.2009.5136357
M. E. Ozbek, F. Savacı
Abstract: In this work, the fundamental frequencies of musical signals are tracked for transcription. For this purpose, it is shown that the correntropy function can be used, like the autocorrelation function, to find the fundamental frequencies of signals. The success of the method is evaluated by comparison with the YIN algorithm on different note and melody samples. The correntropy function is shown to be as successful as the YIN algorithm.
Citations: 0
A HMM based system for fine tuning in automatic speech segmentation
2009 IEEE 17th Signal Processing and Communications Applications Conference Pub Date : 2009-04-09 DOI: 10.1109/SIU.2009.5136412
E. Akdemir, T. Çiloglu
Abstract: In this study, an HMM-based system for fine tuning the results of automatic speech segmentation is proposed. The phonetic boundaries produced by an automatic segmentation system are used as input to this system, which includes diphone and diphone-class based HMMs. Using the proposed fine tuning system, the average absolute boundary error for the /y/-/uu/ boundary is decreased by 44%, and the average absolute boundary error for the /t/-/p/ and /uu/-/o/ class boundary is decreased by 63%.
Citations: 0
Performance of space-frequency-time coded OFDM systems with transmit antenna selection in land mobile satellite channel
2009 IEEE 17th Signal Processing and Communications Applications Conference Pub Date : 2009-04-09 DOI: 10.1109/SIU.2009.5136372
Cihat Cinar, A. Kavas
Abstract: In this work, for land mobile satellite (LMS) communications, the transmit antenna selection (TAS) method is applied to space-frequency-time coded orthogonal frequency division multiplexing (SFT-coded OFDM) with Alamouti and trellis coding, over a satellite channel based on Loo's model with 2-multipath equal-power and 8-multipath exponential-power profiles under frequency-selective, quasi-static Rayleigh fading; the systems are compared in terms of frame error rate performance. For the land mobile satellite channel, high diversity gains are obtained without changing the current system complexity or the current number of transmitting antennas. As the number of transmit antennas increases, the gain increases as well.
Citations: 0
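The Alamouti code mentioned in the abstract transmits two symbols from two antennas over two time slots so that a single-antenna receiver can separate them with simple linear combining. A sketch of that textbook building block in a noiseless, flat-fading, single-receive-antenna setting (the paper's SFT-coded OFDM system over the LMS channel is considerably more elaborate):

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Rows are time slots, columns are TX antennas.
    Slot 1: (s1, s2); slot 2: (-s2*, s1*)."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(r1, r2, h1, h2):
    """Linear combining at one RX antenna, given channel gains h1, h2."""
    norm = abs(h1)**2 + abs(h2)**2
    s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / norm
    s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / norm
    return s1_hat, s2_hat

# Two QPSK symbols through a static two-path channel (no noise).
h1, h2 = 0.8 + 0.6j, 0.3 - 0.4j
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
slots = alamouti_encode(s1, s2)
r1 = h1 * slots[0, 0] + h2 * slots[0, 1]   # received in slot 1
r2 = h1 * slots[1, 0] + h2 * slots[1, 1]   # received in slot 2
s1_hat, s2_hat = alamouti_decode(r1, r2, h1, h2)
```

In the noiseless case the combiner recovers both symbols exactly; with noise, the |h1|^2 + |h2|^2 factor is what provides the two-branch diversity gain.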
Audio genre classification with Co-MRMR
2009 IEEE 17th Signal Processing and Communications Applications Conference Pub Date : 2009-04-09 DOI: 10.1109/SIU.2009.5136419
Y. Yaslan, Z. Cataltepe
Abstract: In a classification problem with multiple feature views and unlabeled examples, co-training can be used to train two separate classifiers, label the unlabeled data points iteratively, and then combine the resulting classifiers. Co-training can improve classifier performance especially when the number of labeled examples is small because labels are expensive or difficult to obtain. In this paper, the Co-MRMR algorithm, which co-trains classifiers on different feature subsets, is used for audio music genre classification. The features are selected with the MRMR (minimum redundancy maximum relevance) feature selection algorithm. Two different feature sets, obtained from the Marsyas and Music Miner software, are evaluated for co-training. Experimental results show that Co-MRMR gives better results than both the random subspace method for co-training (RASCO), suggested by Wang et al. in 2008, and the traditional co-training algorithm.
Citations: 3
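The MRMR criterion used above greedily picks the feature with the highest mutual information with the class label, penalized by its average mutual information with already-selected features. A minimal sketch for discrete features (the toy data and the plug-in MI estimator are illustrative; the paper works with Marsyas and Music Miner audio features):

```python
import numpy as np
from itertools import product

def mutual_info(a, b):
    """Plug-in mutual information (nats) between two discrete 1-D arrays."""
    mi = 0.0
    for va, vb in product(np.unique(a), np.unique(b)):
        p_ab = np.mean((a == va) & (b == vb))
        if p_ab > 0:
            mi += p_ab * np.log(p_ab / (np.mean(a == va) * np.mean(b == vb)))
    return mi

def mrmr_select(X, y, k):
    """Greedy MRMR: maximize relevance(f; y) minus mean redundancy(f; selected)."""
    n_features = X.shape[1]
    relevance = [mutual_info(X[:, j], y) for j in range(n_features)]
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info(X[:, j], X[:, s]) for s in selected])
            if relevance[j] - redundancy > best_score:
                best, best_score = j, relevance[j] - redundancy
        selected.append(best)
    return selected

# Feature 1 duplicates feature 0, so MRMR skips it for the complementary feature 2.
y = np.array([0, 1, 2, 3] * 25)
X = np.stack([y // 2, y // 2, y % 2], axis=1)
picked = mrmr_select(X, y, 2)   # [0, 2]
```

The redundancy penalty is what distinguishes MRMR from plain relevance ranking, which would happily pick both copies of the duplicated feature.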
Consistency analysis of Kalman Filter for Modal Analysis of Structures
2009 IEEE 17th Signal Processing and Communications Applications Conference Pub Date : 2009-04-09 DOI: 10.1109/SIU.2009.5136464
Ilker Tanyer, S. Ozen, C. Donmez, M. Altınkaya
Abstract: In this paper, a consistency analysis of the Kalman filter for modal analysis of structural systems is carried out. As future work, a fundamental modal analysis algorithm, the Eigensystem Realization Algorithm (ERA), will be used together with Kalman filters to estimate the modal parameters of a structural system. By applying ERA to impulse response measurements taken from the structure, a state-space representation will be obtained. The Kalman filter will be used as a state estimator in this study and will play a critical role in minimizing the measurement noise. Before using the Kalman filter with ERA, a consistency analysis of the Kalman filter is performed on artificial impulse response data of the structural system.
Citations: 0
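A standard way to check Kalman filter consistency is the normalized innovation squared (NIS): when the filter's noise models match reality, innovations normalized by their predicted covariance average to the measurement dimension. The abstract does not describe its specific test, so the scalar sketch below only illustrates that common check on synthetic data:

```python
import numpy as np

def kalman_nis(zs, x0, P0, F, Q, H, R):
    """Run a scalar Kalman filter; return the NIS of each measurement.
    For a consistent filter, nu^2 / S should average to dim(z) = 1."""
    x, P = x0, P0
    nis = []
    for z in zs:
        # predict
        x = F * x
        P = F * P * F + Q
        # innovation and its predicted variance
        nu = z - H * x
        S = H * P * H + R
        nis.append(nu * nu / S)
        # update
        K = P * H / S
        x = x + K * nu
        P = (1 - K * H) * P
    return np.array(nis)

# Synthetic random-walk state observed in noise; filter uses the true Q and R,
# so the NIS should average to about 1.
rng = np.random.default_rng(0)
Q, R = 0.01, 1.0
truth = np.cumsum(rng.normal(0, np.sqrt(Q), 2000))
zs = truth + rng.normal(0, np.sqrt(R), 2000)
nis = kalman_nis(zs, x0=0.0, P0=1.0, F=1.0, Q=Q, H=1.0, R=R)
```

If the filter's Q or R were misspecified, the average NIS would drift away from 1, which is exactly the kind of inconsistency such an analysis is meant to expose before pairing the filter with ERA.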
Voting system based robust and efficient audiocopy detection
2009 IEEE 17th Signal Processing and Communications Applications Conference Pub Date : 2009-04-09 DOI: 10.1109/SIU.2009.5136547
Banu Oskay Acar, Unal Zubari, Ezgi C. Ozan, A. Saracoglu, E. Esen, T. Çiloglu
Abstract: The audio copy detection (ACD) problem presents several difficulties due to signal distortions and the huge amount of audio data to be searched. In this paper we propose a fast audio copy detection system that is very robust against common signal distortions. The proposed method performs a vote-based search on a 15-bit representation of the audio data.
Citations: 0
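A vote-based search over compact frame hashes, like the 15-bit representation above, can be pictured as an inverted index in which each matching query frame votes for an alignment offset. The paper does not detail its index or vote threshold, so beyond the 15-bit hashing idea everything below is an illustrative assumption:

```python
from collections import defaultdict

def build_index(reference_hashes):
    """Inverted index: 15-bit frame hash -> positions in the reference stream."""
    index = defaultdict(list)
    for pos, h in enumerate(reference_hashes):
        index[h].append(pos)
    return index

def detect_copy(index, query_hashes, min_votes=5):
    """Each matching query frame votes for a candidate alignment offset;
    a sufficiently strong winning offset indicates a copy."""
    votes = defaultdict(int)
    for q_pos, h in enumerate(query_hashes):
        for r_pos in index.get(h, ()):
            votes[r_pos - q_pos] += 1
    if not votes:
        return None
    offset, count = max(votes.items(), key=lambda kv: kv[1])
    return offset if count >= min_votes else None

# A query excerpted from position 100 of the reference is located there.
ref = [(i * 2654435761) % (1 << 15) for i in range(1000)]   # stand-in frame hashes
index = build_index(ref)
offset = detect_copy(index, ref[100:120])                   # 100
```

Voting makes the search robust: distorted frames whose hashes no longer match simply fail to vote, while the surviving frames still concentrate their votes on the true offset.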
A low complex method for performance improvement of wireless multiuser cooperative coding networks
2009 IEEE 17th Signal Processing and Communications Applications Conference Pub Date : 2009-04-09 DOI: 10.1109/SIU.2009.5136411
Mahmood Mohassel Feghhi, B. Abolhassani
Abstract: In conventional incremental redundancy cooperative coding (IRCC) multiuser networks, the user cooperation gain is not achievable if the cooperating user is unable to correctly decode its partner's data. On the other hand, when relaying users cannot decode their partner's data, sending simple channel log-likelihood ratios (CLLRs) improves the bit error probability (BEP) performance of the system by achieving cooperative diversity gain, provided that the channel between the two cooperating users is reliable. Over an adverse inter-user channel, however, exploiting channel reliability information is not appropriate. Simulation results show that the CLLR-based IRCC method outperforms the conventional IRCC method at high SNR and underperforms it at low SNR. Therefore, by combining the two methods, we can achieve better performance at both low and high SNR.
Citations: 0