{"title":"Outage analysis for half-duplex interference relay channels","authors":"Ali Samed Unal, F. M. Ozcelik, M. Yuksel","doi":"10.1109/SIU.2010.5651675","DOIUrl":"https://doi.org/10.1109/SIU.2010.5651675","url":null,"abstract":"In this paper, the two-user interference channel with a half-duplex relay is studied. The relay does not hear one of the sources, and the relay's transmission is beneficial for one of the receivers, whereas it is merely interference for the other. At the receivers, the interference is either treated as noise or, if possible, decoded first and subtracted from the received signal to enhance the direct transmission quality. The probability of outage for each user is computed and compared; the channels are assumed to be fading, and no channel state information is available at the transmitters. Although the relay causes interference to one of the users, it is shown to be useful to both receivers.","PeriodicalId":152297,"journal":{"name":"2010 IEEE 18th Signal Processing and Communications Applications Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131167503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Content-Based Image Retrieval system using Visual Attention","authors":"Gulsah Tumuklu Ozyer, F. Vural","doi":"10.1109/SIU.2010.5652263","DOIUrl":"https://doi.org/10.1109/SIU.2010.5652263","url":null,"abstract":"The semantic gap, the difference between visual features and semantic annotations, is an important problem in Content-Based Image Retrieval (CBIR) systems. In this study, a new Content-Based Image Retrieval system is proposed that uses visual attention, a component of the human visual system. In the proposed method, regions of interest are extracted using the Itti-Koch visual attention model. The attention values obtained from the saliency maps are used to define a new similarity matching method. Successful results are obtained in comparison with traditional region-based retrieval systems.","PeriodicalId":152297,"journal":{"name":"2010 IEEE 18th Signal Processing and Communications Applications Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131208577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance analysis of analog to information converter under noisy conditions","authors":"Taner Ince, A. Nacaroglu, Nurdal Watsuji","doi":"10.1109/SIU.2010.5654428","DOIUrl":"https://doi.org/10.1109/SIU.2010.5654428","url":null,"abstract":"In this study, we present the performance of a random demodulation based analog to information converter in a noisy environment. Compressive sampling (compressed sensing) is a new area of signal processing that has attracted much attention in recent years. Compressive sampling states that if a signal of length N has a sparse representation in an orthonormal basis, then it is possible to recover this signal exactly from M<<N measurements. Hence, this allows reconstruction of signals sampled far below the Nyquist rate.","PeriodicalId":152297,"journal":{"name":"2010 IEEE 18th Signal Processing and Communications Applications Conference","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133633227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Propagation of firing rate in a noisy feedforward biological neural network","authors":"M. Uzuntarla, M. Ozer, E. Koklukaya","doi":"10.1109/SIU.2010.5654422","DOIUrl":"https://doi.org/10.1109/SIU.2010.5654422","url":null,"abstract":"In this study, we investigate input firing rate propagation in a feedforward biological neural network composed of multiple layers. The dynamical behavior of the neurons in the network is modeled using stochastic Hodgkin-Huxley equations, which account for the probabilistic nature of the ion channels embedded in neuronal membranes. Thus, firing rate propagation is studied in a biophysically more realistic manner by including ion channel noise, which was ignored in previous studies. Input rate information is provided to the network by varying the cell size in the first layer. We show that efficient transmission of the input firing rate through the network can be achieved via a synchronization mechanism among the neurons within layers. We also show that this synchronization arises from an increase in synaptic current variance, obtained by adjusting the cell size or the intrinsic channel noise strength in the layers to an optimal value.","PeriodicalId":152297,"journal":{"name":"2010 IEEE 18th Signal Processing and Communications Applications Conference","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130171118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Subthreshold stimulus encoding on a stochastic scale-free neuronal network","authors":"Ergin Yılmaz, M. Ozer, B. Şen","doi":"10.1109/SIU.2010.5652536","DOIUrl":"https://doi.org/10.1109/SIU.2010.5652536","url":null,"abstract":"Random networks with complex topology arise in many different fields of science. Recently, it has been shown that existing network models fail to incorporate two common features of real networks in nature: first, real networks are open and grow continuously by the addition of new elements, and second, a new element connects preferentially to an element that already has a large number of connections. Therefore, a new network model, called a scale-free (SF) network, has been proposed based on these two features. In this study, we investigate the encoding of a subthreshold periodic stimulus on a stochastic SF neuronal network in terms of the collective firing regularity. The network consists of identical Hodgkin-Huxley (HH) neurons. We show that the collective firing (spiking) regularity becomes maximal at a particular stimulus frequency, corresponding to the frequency of the subthreshold oscillations of HH neurons. We also show that this best regularity is obtained when the coupling strength and the average degree of connectivity take their optimal values.","PeriodicalId":152297,"journal":{"name":"2010 IEEE 18th Signal Processing and Communications Applications Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130232643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reconstructing the complete image from randomly scattered image pieces","authors":"C. Kement, Fatih Kazdal, Sahin Yanlik, Muhammet Unal, M. Onat","doi":"10.1109/SIU.2010.5651389","DOIUrl":"https://doi.org/10.1109/SIU.2010.5651389","url":null,"abstract":"Assembling randomly scattered image pieces to reconstruct the initial (original) image is very time consuming. For instance, combining all the image pieces used in tiling is a tedious and time-consuming task. The difficulty of the task can be reduced and its duration shortened by using image processing and optimization algorithms. In this study, the reconstruction of the initial image from randomly scattered image pieces is implemented using image processing methods. In the first stage, edge enhancement is performed to recover the initial image position from the randomly scattered image pieces. Then, the rotation angles of the image pieces are determined with the aid of a corner-finding algorithm. The original positions of the pieces are determined, and the pieces are returned to their original places by individually comparing the given image pieces with the initial image pieces.","PeriodicalId":152297,"journal":{"name":"2010 IEEE 18th Signal Processing and Communications Applications Conference","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128387482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting action units on 3D faces","authors":"A. Savran, B. Sankur","doi":"10.1109/SIU.2010.5651330","DOIUrl":"https://doi.org/10.1109/SIU.2010.5651330","url":null,"abstract":"Automatic facial action unit (AU) detection is a research topic with many applications in behavioral science and human-computer interaction. AU detection performance in 2D images is maturing but is not yet adequate. In this study, we develop a method to detect AUs in 3D images and show its superiority over 2D. The data modality is a 2D curvature map, obtained by conformal mapping of 3D surface data. Since the performance comparisons are run on 2D data with the same algorithms, any bias that could be induced by the 3D modality is precluded. We also address the choice of generative versus discriminative classifiers, and consider 2D-3D fusion.","PeriodicalId":152297,"journal":{"name":"2010 IEEE 18th Signal Processing and Communications Applications Conference","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134452020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cooperative space-time code for amplify-and-forward relay networks","authors":"A. Al-nahari, F. El-Samie, M. Dessouky","doi":"10.1109/SIU.2010.5651412","DOIUrl":"https://doi.org/10.1109/SIU.2010.5651412","url":null,"abstract":"In this paper, we propose a distributed space-time coded cooperative protocol with amplify-and-forward relaying. Motivated by protocol (III) presented in [1], we propose a distributed space-time coding scheme for an arbitrary number of relay nodes. The pairwise error probability (PEP) is derived, and the theoretical analysis demonstrates that our protocol achieves a diversity order of N + 1, where N is the number of relay nodes. Quasi-orthogonal space-time codes are used, as they give much better performance than random linear-dispersion codes. Since the transmission power of the source node is a critical parameter in this protocol, because the source transmits in both phases, the optimal power allocation is derived using numerical and theoretical analysis. Simulation results demonstrate an improvement over the existing orthogonal protocols for different source-destination channel conditions.","PeriodicalId":152297,"journal":{"name":"2010 IEEE 18th Signal Processing and Communications Applications Conference","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133466518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Audiovisual articulatory inversion based on Gaussian Mixture Model (GMM)","authors":"I. Ozbek, M. Demirekler","doi":"10.1109/SIU.2010.5653987","DOIUrl":"https://doi.org/10.1109/SIU.2010.5653987","url":null,"abstract":"In this study, we examine articulatory inversion using audiovisual information based on a Gaussian Mixture Model (GMM). In this method, the joint distribution of the articulatory movement and audio (and/or visual) data is modelled via a mixture of Gaussians. The conditional expected value of the GMM is used as the regression function between the audio (and/or visual) and articulatory spaces. We also examine various fusion methods for combining acoustic and visual information in articulatory inversion. The fusion methods improve the performance of articulatory inversion.","PeriodicalId":152297,"journal":{"name":"2010 IEEE 18th Signal Processing and Communications Applications Conference","volume":"18 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132393629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"INTERSPEECH 2009 Emotion Recognition Challenge evaluation","authors":"E. Bozkurt, E. Erzin, Ç. Erdem, A. Erdem","doi":"10.1109/SIU.2010.5649919","DOIUrl":"https://doi.org/10.1109/SIU.2010.5649919","url":null,"abstract":"In this paper, we evaluate INTERSPEECH 2009 Emotion Recognition Challenge results. The challenge presents the problem of accurately classifying natural and emotionally rich FAU Aibo recordings into five and two emotion classes. We evaluate prosody-related, spectral, and HMM-based features with Gaussian mixture model (GMM) classifiers to attack this problem. The spectral features consist of mel-scale cepstral coefficients (MFCC), line spectral frequency (LSF) features, and their derivatives, whereas the prosody-related features consist of pitch, the first derivative of pitch, and intensity. We employ unsupervised training of HMM structures with prosody-related temporal features to define HMM-based features. We also investigate data fusion of different features and decision fusion of different classifiers to improve emotion recognition results. Our two-stage decision fusion method achieves 41.59% and 67.90% recall rates for the five-class and two-class problems, respectively, taking second and fourth place among the overall challenge results.","PeriodicalId":152297,"journal":{"name":"2010 IEEE 18th Signal Processing and Communications Applications Conference","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116235857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}