{"title":"A biometric verification system based on the fusion of palmprint and face features","authors":"S. Ribaric, I. Fratric, K. Kis","doi":"10.1109/ISPA.2005.195376","DOIUrl":"https://doi.org/10.1109/ISPA.2005.195376","url":null,"abstract":"This paper presents a bimodal biometric verification system for physical access control based on the features of the palmprint and the face. The system tries to improve the verification results of unimodal biometric systems based on palmprint or facial features by integrating them using fusion at the matching-score level. The verification process consists of image acquisition using a scanner and a camera, palmprint recognition based on the principal lines, face recognition with eigenfaces, fusion of the unimodal results at the matching-score level, and finally, a decision based on thresholding. The experimental results show that fusion improves the equal error rate by 0.74% and the minimum total error rate by 1.72%.","PeriodicalId":238993,"journal":{"name":"ISPA 2005. Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis, 2005.","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122207177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Application of the chirp-z transform to fractional interpolation in DMT modems","authors":"F. Pisoni","doi":"10.1109/ISPA.2005.195403","DOIUrl":"https://doi.org/10.1109/ISPA.2005.195403","url":null,"abstract":"Sampling clock synchronization in multi carrier systems, such as discrete multitone (DMT) digital subscriber line modems, can be done with a phase locked loop. This requires expensive voltage-controlled oscillators (VCO). Alternatively, clock-offset compensation can be done completely in the digital domain, replacing the VCO with a cheaper free-running oscillator. This solution requires the signal of interest to be fractionally interpolated in the digital domain. In DMT, the digital interpolator can be combined with the modulating DFT into one single fractional operator: the chirp-z transform. In this paper, we explore interpolation algorithms based on the Chirp-z transform and tailor them at clock-offset recovery for DMT modems.","PeriodicalId":238993,"journal":{"name":"ISPA 2005. Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis, 2005.","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127280904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Appearance based lip tracking and cloning on speaking faces","authors":"Bouchra Abboud, G. Chollet","doi":"10.1109/ISPA.2005.195427","DOIUrl":"https://doi.org/10.1109/ISPA.2005.195427","url":null,"abstract":"This paper addresses the issue of lip tracking and cloning using appearance models. In this perspective, a statistical color-based lip detector is first used to classify the pixels of an image into lip and non-lip pixels. This rough estimation of the lip position is then used to initialize an appearance model. This models convergence allows to refine the positions of the MPEG-4 compatible feature points placed around the lip contours. The optimal position of the feature points is then used as a first estimate to compute the position at the next frame of an image sequence to perform speaking lip tracking. To animate an unknown face image in such a way that it reproduces the lip motion of the driving sequence a piecewise affine transform is applied forcing the target lips feature points to match the automatically detected feature points of each frame of the training sequence to perform lip motion cloning. Preliminary results show that the synthetic talking faces obtained are photorealistic and reproduce accurately the movements of the tracked mouth.","PeriodicalId":238993,"journal":{"name":"ISPA 2005. Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis, 2005.","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126982052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic transcription of piano polyphonic music","authors":"A. Kobzantsev, D. Chazan, Y. Zeevi","doi":"10.1109/ISPA.2005.195447","DOIUrl":"https://doi.org/10.1109/ISPA.2005.195447","url":null,"abstract":"A novel algorithm for automatic transcription of piano polyphonic music is proposed. It is based on a processing scheme that incorporates the following subtasks: segmentation of notes in time domain, estimation of frequency components based on the structure of time segments, extraction of pitches of underlying notes, and tracking of notes to obtain the final music score. A combination of multiresolution techniques, such as multiresolution Fourier transform and maximum likelihood frequency estimator, enables the user to successfully cope with the problems of constant time-frequency resolution and frequency masking. The algorithm demonstrates a better performance then results obtained by means of existing commercial software.","PeriodicalId":238993,"journal":{"name":"ISPA 2005. Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis, 2005.","volume":"201 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116898315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimating perfusion using X-ray angiography","authors":"H. Bogunović, S. Lončarić","doi":"10.1109/ISPA.2005.195400","DOIUrl":"https://doi.org/10.1109/ISPA.2005.195400","url":null,"abstract":"In this paper we present a method for extraction of functional information from a time-sequence of X-ray angiographic images. By observing contrast agent propagation profile in a region of the angiogram one can calculate a number of parameters of that profile. Each parameter can be used to construct a parametric image of the imaged area. Such parametric images present a functional rather than morphological aspect of the tissue. The most important functional parameter is perfusion. Perfusion is defined as a blood flow at the capillary level and is commonly used to detect ischemic areas. Perfusion CT and perfusion MRI (pMRI) modalities have commonly been used to extract perfusion data. In this paper, a new method for calculation of perfusion from the contrast agent profile of a sequence of X-ray angiograms is presented. The method utilizes Wiener filtering for denoising of time signals. The experimental results are computed on a sequence of cerebral angiograms.","PeriodicalId":238993,"journal":{"name":"ISPA 2005. Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis, 2005.","volume":"51 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114018199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gradient-descent methods for parameter estimation in chaotic systems","authors":"I. P. Mariño, J. Miquez","doi":"10.1109/ISPA.2005.195452","DOIUrl":"https://doi.org/10.1109/ISPA.2005.195452","url":null,"abstract":"The rich nonlinear dynamics of chaos allows to model a broad variety of systems, including complex biological ones. The system of interest is usually observed through some time series and the modelization problem consists of adjusting the parameters of a model chaotic system until its dynamics is matched to the reference time series. In this paper, we describe a general methodology to adaptively select the values of the model parameters. Specifically, we assume that the observed time series are originated by a primary chaotic system with unknown parameters and we use it to drive a secondary chaotic system, so that both systems be coupled. The parameters of the secondary system are adaptively optimized (by a gradient-descent optimization of a suitable cost function) to make it follow the dynamics of the primary system. In this way, the secondary parameters are interpreted as estimates of the primary ones. We illustrate the application of the method by jointly estimating the complete parameter vector of a Lorenz system.","PeriodicalId":238993,"journal":{"name":"ISPA 2005. Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis, 2005.","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133150399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Camera parameter initialization for 3D kinematic systems","authors":"T. Pribanić, P. Sturm, M. Cifrek","doi":"10.1109/ISPA.2005.195462","DOIUrl":"https://doi.org/10.1109/ISPA.2005.195462","url":null,"abstract":"3D scene information can be extracted from images acquired by cameras. Before doing the actual reconstruction, camera calibration has to be done. Reconstruction accuracy is highly dictated by the calibration. Two typical demands, which are not easily simultaneously satisfied, are: calibration has to be done in a fast and convenient manner and yet assure a high degree of reconstruction accuracy. The computational part of calibration usually includes the initialization of camera parameters and refinement based on an initial set of values. The goodness of the initial set greatly affects the refinement procedure in terms of convergence speed and ultimately reconstruction accuracy. This work proposes a new calibration method for 3D kinematic systems. It shortens the commonly used calibration procedure, gives better initial parameter values for the refinement procedure which in turn is supposed to assure faster and safer convergence of the iterative minimization algorithm. Additionally, it is shown that even without parameter refinement the proposed method gives more accurate 3D reconstruction output.","PeriodicalId":238993,"journal":{"name":"ISPA 2005. Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis, 2005.","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114330090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison of resampling schemes for particle filtering","authors":"R. Douc, O. Cappé, É. Moulines","doi":"10.1109/ISPA.2005.195385","DOIUrl":"https://doi.org/10.1109/ISPA.2005.195385","url":null,"abstract":"This contribution is devoted to the comparison of various resampling approaches that have been proposed in the literature on particle filtering. It is first shown using simple arguments that the so-called residual and stratified methods do yield an improvement over the basic multinomial resampling approach. A simple counter-example showing that this property does not hold true for systematic resampling is given. Finally, some results on the large-sample behavior of the simple bootstrap filter algorithm are given. In particular, a central limit theorem is established for the case where resampling is performed using the residual approach.","PeriodicalId":238993,"journal":{"name":"ISPA 2005. Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis, 2005.","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127677281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Co-inertia analysis for \"liveness\" test in audio-visual biometrics","authors":"Nicolas Eveno, Laurent Besacier","doi":"10.1109/ISPA.2005.195419","DOIUrl":"https://doi.org/10.1109/ISPA.2005.195419","url":null,"abstract":"In biometrics, it is crucial to detect impostors and thwart replay attacks. However, few researches have focused yet on the \"liveness\" verification. This test ensures that biometric cues being acquired are actual measurements from a live person who is present at the time of capture. Here, we propose a speaker independent \"liveness\" verification method for audio-video identification systems. It uses the correlation that exists between the lip movements and the speech produced. Two data analysis methods are considered to model this statistical link. Finally, according to tests carried out on the XM2VTS database, the best liveness verification EER achieved is 12.5%.","PeriodicalId":238993,"journal":{"name":"ISPA 2005. Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis, 2005.","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123086673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}