{"title":"OUHANDS database for hand detection and pose recognition","authors":"M. Matilainen, Pekka Sangi, J. Holappa, O. Silvén","doi":"10.1109/IPTA.2016.7821025","DOIUrl":"https://doi.org/10.1109/IPTA.2016.7821025","url":null,"abstract":"In this paper we propose a publicly available static hand pose database called OUHANDS and protocols for training and evaluating hand pose classification and hand detection methods. A comparison between the OUHANDS database and existing databases is given. Baseline results for both of the protocols are presented.","PeriodicalId":123429,"journal":{"name":"2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116840489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison of 2D and 3D region-based deformable models and random walker methods for PET segmentation","authors":"Kevin Gosse, S. Jehan-Besson, F. Lecellier, S. Ruan","doi":"10.1109/IPTA.2016.7820959","DOIUrl":"https://doi.org/10.1109/IPTA.2016.7820959","url":null,"abstract":"In this paper, we propose to compare different methods for tumor segmentation in positron emission tomography (PET) images. We first propose to tackle this problem under the umbrella of shape optimization and 3D deformable models. Indeed, 2D active contours have been widely investigated in the literature, but these techniques do not take advantage of 3D information. On the one hand, we use the well-known model of Chan and Vese. On the other hand, we use a criterion based on parametric probabilities, which allows us to test the assumption of a Poisson distribution of the intensity in such images. Both will be compared to their 2D equivalents and to an improved random-walker algorithm. For this comparison, we use a set of simulated, phantom and real sequences with a known ground truth and compute the corresponding Dice coefficients. We also give some examples of 2D and 3D segmentation results.","PeriodicalId":123429,"journal":{"name":"2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133267785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From active appearance models and mnemonic descent to 3d morphable models: A brief history of statistical deformable models with examples in menpo","authors":"S. Zafeiriou, Jiri Matas","doi":"10.1109/IPTA.2016.7821042","DOIUrl":"https://doi.org/10.1109/IPTA.2016.7821042","url":null,"abstract":"The construction and fitting of Statistical Deformable Models (SDMs) is at the core of the computer vision and image analysis disciplines. SDMs can be used to estimate an object's shape, pose, parts and landmarks using only static imagery captured from monocular cameras. One of the first and most popular families of SDMs is that of Active Appearance Models (AAMs). AAMs use a generative parameterization of object appearance and shape, and their fitting process is usually conducted by solving a non-linear optimization problem. In this talk, I will start with a brief introduction to AAMs and continue by describing supervised methods for AAM fitting. Subsequently, under this framework, I will motivate current techniques developed in my group that capitalize on the combined power of Deep Convolutional Neural Networks (DCNNs) and Recurrent Neural Networks (RNNs) for optimal deformable object modeling and fitting. Finally, I will show how we can extract the dense shape of objects by building and fitting 3D Morphable Models. Examples will be given in the publicly available toolbox of my group called Menpo (http://www.menpo.org/).","PeriodicalId":123429,"journal":{"name":"2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132053121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Acquiring multispectral light transport using multi-primary DLP projector","authors":"Kayano Maeda, Takahiro Okabe","doi":"10.1109/IPTA.2016.7820966","DOIUrl":"https://doi.org/10.1109/IPTA.2016.7820966","url":null,"abstract":"Acquiring the light transport (LT) of a scene is important for various applications such as radiometric analysis, image-based relighting, and controlling the appearance of the scene. The multispectral LT, i.e., the LT in multiple primary colors, enables us not only to enhance the color gamut but also to investigate wavelength-dependent interactions between light and a scene. In this paper, we propose a method for acquiring the multispectral LT by using a single off-the-shelf multi-primary DLP (Digital Light Processing) projector; it does not require any self-built equipment, geometric registration, or temporal synchronization. Specifically, based on the rapid color switch due to a rotating color wheel in the projector, we present a method for estimating the spectral properties of the projector in a non-destructive manner, and a method for acquiring the images of a scene illuminated by only one of the primary colors. We conducted a number of experiments using real images and confirmed that our method works well and that the acquired multispectral LT is effective for radiometric analysis and image-based relighting.","PeriodicalId":123429,"journal":{"name":"2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132324446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the use of image quality measures for image restoration","authors":"Fouad Boudjenouia, K. Abed-Meraim, A. Chetouani, R. Jennane","doi":"10.1109/IPTA.2016.7821030","DOIUrl":"https://doi.org/10.1109/IPTA.2016.7821030","url":null,"abstract":"Image quality measures are valuable tools, crucial for most image processing applications, and are used in particular to assess and compare image restoration (IR) quality. The objective of this work is to investigate the potential of such measures when used as cost functions (integrated in the global criterion) to enhance restoration performance. In this paper, the proposed approach uses the Structural SIMilarity (SSIM) index, which is one of the most appropriate measures as it is inspired by the human visual system (HVS) and relatively simple to compute. For the composite criterion optimization, after initializing the algorithm by the alternating direction method of multipliers (ADMM), a gradient descent (GD) technique is used to minimize the global cost function. Finally, simulations are conducted to investigate the contexts in which such quality measures might lead to the desired IR improvement.","PeriodicalId":123429,"journal":{"name":"2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130297888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Color correction in image stitching using histogram specification and global mapping","authors":"Qi-Chong Tian, L. Cohen","doi":"10.1109/IPTA.2016.7821034","DOIUrl":"https://doi.org/10.1109/IPTA.2016.7821034","url":null,"abstract":"Color correction is an important problem in image stitching, since there is a color inconsistency issue between the images to be stitched (a good-quality reference image and a lower-quality test image). This paper presents a color correction approach based on histogram specification and global mapping. The proposed algorithm makes the images share the same color style and achieves color consistency. The algorithm consists of four main steps. Firstly, the overlapping regions between a reference image and a test image are obtained. Secondly, an exact histogram specification is conducted for the overlapping region in the test image using the histogram of the overlapping region in the reference image. Thirdly, a global mapping function is obtained by minimizing color differences with an iterative method. Lastly, the global mapping function is applied to the whole test image to produce a color-corrected image. Both synthetic and real datasets are tested. The experiments demonstrate that the proposed algorithm outperforms other methods both quantitatively and qualitatively.","PeriodicalId":123429,"journal":{"name":"2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116508575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"De-convolutional auto-encoder for enhancement of fingerprint samples","authors":"Patrick Schuch, Simon-Daniel Schulz, C. Busch","doi":"10.1109/IPTA.2016.7821036","DOIUrl":"https://doi.org/10.1109/IPTA.2016.7821036","url":null,"abstract":"The reliability and accuracy of the features extracted from fingerprints are essential for the performance of any fingerprint comparison algorithm. Image enhancement as a pre-processing step makes it possible to extract features more accurately by enhancing the quality of the fingerprint signal. This work proposes the use of de-convolutional auto-encoders for fingerprint image enhancement. Its performance is compared to seven state-of-the-art methods with respect to their improvement of the recognition performance of the biometric system. Biometric performance is tested with MINDTCT and FingerJetFX for feature extraction and BOZORTH3 for biometric comparison. Critical comparisons are determined from 14 datasets, which are used for the evaluation of the methods. The impact of a method on biometric performance varies significantly, and no single image enhancement method works best for all combinations. However, the proposed method, ConvEnhance, achieves the highest count of best improvements among the evaluated methods.","PeriodicalId":123429,"journal":{"name":"2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116623879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fusion system based on belief functions theory and approximated belief functions for tree species recognition","authors":"R. Ameur, L. Valet, D. Coquin","doi":"10.1109/IPTA.2016.7820955","DOIUrl":"https://doi.org/10.1109/IPTA.2016.7820955","url":null,"abstract":"In this paper, an information fusion system for tree species recognition from leaves is proposed. This approach consists of training sub-classifiers (random forests) with attributes extracted from leaf photos. The database is incomplete and partial, and some data are conflicting. A hierarchical fusion system based on belief functions theory allows the fusion of the data provided by the different sub-classifiers. Different procedures for reducing the computational complexity are tested.","PeriodicalId":123429,"journal":{"name":"2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132046476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MedianStruck for long-term tracking applications","authors":"Florian Baumann, E. Dayangac, J. Aulinas, M. Zobel","doi":"10.1109/IPTA.2016.7821029","DOIUrl":"https://doi.org/10.1109/IPTA.2016.7821029","url":null,"abstract":"In this paper, we propose a mutual framework that combines two state-of-the-art visual object tracking algorithms. Both trackers benefit from each other's advantages, leading to an efficient visual tracking approach. Many state-of-the-art trackers perform poorly under rain, fog or occlusion in real-world scenarios; often, objects are lost after several frames, providing only a short-term tracking capability. In this paper, we focus on long-term tracking while preserving real-time capability and very accurate positioning of tracked objects. The proposed framework is capable of tracking arbitrary objects, leading to decreased labeling effort and improved positioning of bounding boxes. This is especially interesting for applications such as semi-automatic labeling. The benefit of our proposed framework is demonstrated by comparing it with the related algorithms using our own sequences as well as a well-known and publicly available dataset.","PeriodicalId":123429,"journal":{"name":"2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"453 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116078222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient BSIF-based near-infrared iris recognition","authors":"C. Rathgeb, F. Struck, C. Busch","doi":"10.1109/IPTA.2016.7820932","DOIUrl":"https://doi.org/10.1109/IPTA.2016.7820932","url":null,"abstract":"Binarized statistical image features (BSIF) is a general-purpose texture descriptor originally designed for texture description and classification, like local binary patterns (LBP) and local phase quantisation (LPQ). Recently, BSIF has been extensively applied for the purpose of biometric recognition, for instance based on face or palmprint images. While the recognition accuracy reported for different biometric characteristics indicates its applicability to iris recognition, until now BSIF has primarily been employed for iris spoofing detection, in particular fake contact lens detection. In this work, we present an adaptation of BSIF to near-infrared iris recognition. In accordance with generic iris recognition schemes, a specific alignment procedure is introduced in order to achieve robustness against head tilts. Further, we propose a binarization method for BSIF-based feature histograms to obtain a compact feature representation, which allows for rapid comparison. On the CASIAv4-Interval iris database, the proposed system achieves competitive biometric performance, obtaining EERs below 0.6%, compared to EERs of approximately 0.4% for traditional schemes based on Log-Gabor and quadratic spline wavelets. Moreover, we show that BSIF-based feature vectors complement those extracted by traditional systems, yielding a significant performance gain in a multi-algorithm fusion scenario with an EER below 0.2%, which further underlines the usefulness of the presented approach.","PeriodicalId":123429,"journal":{"name":"2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116569018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}