{"title":"Low-rank and sparse matrix decomposition based on S1/2 and L1/2 regularizations in dynamic MRI","authors":"Xu-Xin Lin, Liang-Yong Xia, Yong Liang, Hai-Hui Huang, Hua Chai, Kuok-Fan Chan","doi":"10.1109/IPTA.2016.7820983","DOIUrl":"https://doi.org/10.1109/IPTA.2016.7820983","url":null,"abstract":"In recent years, compressed sensing (CS) has been proposed and successfully applied to speed up the acquisition in dynamic MRI. However, improving the quality of dynamic MRI remains a worthwhile question. Recently, a low-rank plus sparse (L+S) matrix decomposition model with S1 and L1 regularizations was proposed for the reconstruction of under-sampled dynamic MRI with separation of background and dynamic components. It can effectively detect dynamic information in the process of imaging. In our work, we propose an improved L+S matrix decomposition model with S1/2 and L1/2 regularizations in order to improve the quality of the original separation. To solve the model, we use an iterative half-thresholding decomposition algorithm. Finally, empirical results show that the new model can produce better performance and capture more complete dynamic information than the existing model.","PeriodicalId":123429,"journal":{"name":"2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"1 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116598052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Structure-based image inpainting","authors":"A. Akl, Edgard Saad, C. Yaacoub","doi":"10.1109/IPTA.2016.7820976","DOIUrl":"https://doi.org/10.1109/IPTA.2016.7820976","url":null,"abstract":"Image inpainting is a dynamic field with different image processing and computer graphics applications. Most of the existing image inpainting methods lead to significant results in different applications but fail in difficult situations with high local structural variations. In this paper, a structure-based image inpainting algorithm is proposed, where the image's structure layer is represented and analyzed using the structure tensor field. The structure layer of the image is first inpainted by adapting the Efros and Leung algorithm to the specificities of the structure tensor; the obtained tensor field is then used to guide the image inpainting process. Results show that, using the proposed method, relevant local information can be better inpainted compared to the initial intensity-based approach that does not consider structural information during the inpainting process.","PeriodicalId":123429,"journal":{"name":"2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127946495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast growing hough forest as a stable model for object detection","authors":"Antoine Tran, A. Manzanera","doi":"10.1109/IPTA.2016.7820960","DOIUrl":"https://doi.org/10.1109/IPTA.2016.7820960","url":null,"abstract":"Hough Forest is a framework combining the Hough Transform and Random Forest for object detection. The purpose of the present paper is to improve the efficiency and reliability of the original framework by means of two contributions. First, instead of generating the image samples by drawing patches randomly from the training set, we bias this step toward the most relevant image content by selecting a proportion of patches according to a geometrical criterion. Second, during the creation of the non-leaf nodes of the trees, instead of uniformly sampling the parameter space for choosing the binary tests aimed at splitting the set of image samples, we choose them according to a probability map constructed from the sample set. We aim to drastically reduce the training time without impacting the accuracy, while decreasing the variability of the produced detectors. The interest of this improved model is shown in the context of car and pedestrian detection by evaluating it on academic datasets.","PeriodicalId":123429,"journal":{"name":"2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129236476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A vector quantization based k-NN approach for large-scale image classification","authors":"Ezgi C. Ozan, Ekaterina Riabchenko, S. Kiranyaz, M. Gabbouj","doi":"10.1109/IPTA.2016.7821010","DOIUrl":"https://doi.org/10.1109/IPTA.2016.7821010","url":null,"abstract":"The k-nearest-neighbour (k-NN) classifiers have been one of the simplest yet most effective approaches to the instance-based learning problem for image classification. However, with the growth of the size of image datasets and the number of dimensions of image descriptors, the popularity of k-NNs has decreased due to their significant storage requirements and computational costs. In this paper we propose a vector quantization (VQ) based k-NN classifier, which has improved efficiency in terms of both storage requirements and computational costs. We test the proposed method on publicly available large-scale image datasets and show that it performs comparably to the traditional k-NN with significantly better complexity and storage requirements.","PeriodicalId":123429,"journal":{"name":"2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"9 44","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113931699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-biometric template protection — A security analysis of binarized statistical features for bloom filters on smartphones","authors":"Martin Stokkenes, Ramachandra Raghavendra, Morten K. Sigaard, K. Raja, M. Gomez-Barrero, C. Busch","doi":"10.1109/IPTA.2016.7820972","DOIUrl":"https://doi.org/10.1109/IPTA.2016.7820972","url":null,"abstract":"In recent years, we have seen a huge growth of biometric systems incorporated in devices such as smartphones, and security is one of the major concerns. In this work a multi-biometric template protection system is proposed, based on Bloom filters and binarized statistical image features (BSIF). Features are extracted from the face and both periocular regions, and the templates are protected using Bloom filters. Score-level fusion is applied to increase recognition accuracy. The system is tested on a database of images collected with smartphones, consisting of 94 subjects. A comparison between unprotected and protected templates in the system shows the feasibility of the template protection method, with an observed Genuine-Match-Rate (GMR) of 95.95% for unprotected templates and 91.61% for protected templates at a False-Match-Rate (FMR) of 0.01%. Irreversibility and unlinkability of the system are analysed based on a recently published security evaluation framework.","PeriodicalId":123429,"journal":{"name":"2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132616092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection and spatial analysis of hepatic steatosis in histopathology images using sparse linear models","authors":"Nazre Batool","doi":"10.1109/IPTA.2016.7820969","DOIUrl":"https://doi.org/10.1109/IPTA.2016.7820969","url":null,"abstract":"Hepatic steatosis is a defining feature of nonalcoholic fatty liver disease, emerging with the increasing incidence of obesity and metabolic syndrome. The research in image-based analysis of hepatic steatosis mostly focuses on the quantification of fat in biopsy images. This work furthers the image-based analysis of hepatic steatosis by exploring the spatial characteristics of fat globules in whole slide biopsy images after performing fat detection. An algorithm based on morphological filtering and sparse linear models is presented for fat detection. Then the spatial properties of detected fat globules in relation to the hepatic anatomical structures of central veins and portal tracts are explored. The test dataset consists of 38 high resolution images from 21 patients. The experimental results provide an insight into the size distributions of fat globules and their location with respect to the anatomical structures.","PeriodicalId":123429,"journal":{"name":"2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131320392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High-capacity data hiding in encrypted images using MSB prediction","authors":"Pauline Puteaux, D. Trinel, W. Puech","doi":"10.1109/IPTA.2016.7820991","DOIUrl":"https://doi.org/10.1109/IPTA.2016.7820991","url":null,"abstract":"In the last few years, visual privacy has become a major problem. Because of this, encrypted image processing has received a lot of attention within the scientific and business communities. Data hiding in encrypted images (DHEI) is an effective technique to embed data in the encrypted domain. The owner of an image encrypts it with a secret key, and it is still possible to embed additional data without knowing the original content or the secret key. This secret message can be extracted and the initial image can be recovered in the decoding phase. Recently, DHEI has become an active field of investigation, but the proposed methods do not offer a large embedding capacity. In this paper, we present a new method based on MSB (most significant bit) prediction. We hide one bit per pixel, pre-processing the image to avoid prediction errors and thereby improve the quality of the reconstructed image. We have applied our method to various images and, in every case, the obtained image is very similar to the original one in terms of PSNR and SSIM.","PeriodicalId":123429,"journal":{"name":"2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"253 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114661660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using web images as additional training resource for the discriminative generalized hough transform","authors":"Alexander Oliver Mader, H. Schramm, C. Meyer","doi":"10.1109/IPTA.2016.7821012","DOIUrl":"https://doi.org/10.1109/IPTA.2016.7821012","url":null,"abstract":"Many algorithms in computer vision, e.g., for object localization, are supervised and need annotated training data. One approach for object localization is the Discriminative Generalized Hough Transform (DGHT). It achieves state-of-the-art performance in applications like iris and epiphysis localization, if the amount and quality of training data are sufficient. This motivates techniques for extending the training corpus with limited manual effort. In this paper, we propose an active learning scheme to extend the training corpus by automatically and efficiently harvesting and selecting suitable Web images. We aim at improving localization performance while reducing manual supervision to a minimum. Our key idea is to estimate the benefit of a particular candidate Web image by analyzing its Hough space generated using an initial DGHT model. We show that our method performs similarly to a manual selection of Web images as well as a computationally intensive state-of-the-art approach.","PeriodicalId":123429,"journal":{"name":"2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117346669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An analysis of 3D point cloud reconstruction from light field images","authors":"C. Perra, F. Murgia, D. Giusto","doi":"10.1109/IPTA.2016.7821011","DOIUrl":"https://doi.org/10.1109/IPTA.2016.7821011","url":null,"abstract":"Current methodologies for the generation of 3D point clouds from real-world scenes rely upon a set of 2D images capturing the scene from several points of view. Novel plenoptic cameras sample the light field crossing the main camera lens, creating a light field image. The information available in a plenoptic image must be processed in order to render a view or create the depth map of the scene. This paper analyses a method for the reconstruction of 3D models. The reconstruction of the model is obtained from a single image shot. Exploiting the properties of plenoptic images, a point cloud is generated and compared with a point cloud of the same object generated with a different plenoptic camera.","PeriodicalId":123429,"journal":{"name":"2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"36 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121159410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Face presentation attack detection across spectrum using time-frequency descriptors of maximal response in Laplacian scale-space","authors":"Ramachandra Raghavendra, K. Raja, S. Marcel, C. Busch","doi":"10.1109/IPTA.2016.7820961","DOIUrl":"https://doi.org/10.1109/IPTA.2016.7820961","url":null,"abstract":"Multi-spectral face recognition has been an active area of research over the past few decades. However, the vulnerability of multi-spectral face recognition systems is a growing concern that underlines the need for Presentation Attack Detection (PAD) (or countermeasure, or anti-spoofing) schemes to successfully detect targeted attacks. In this work, we present a novel feature descriptor, LαMTiF, that can effectively capture time-frequency features from the maximal response on the high-pass band image obtained from the scale-space decomposition of the presented image. The proposed descriptor captures the micro-texture patterns that can be used to describe the variation in the presented image. We then propose a new framework using the proposed LαMTiF features that processes the input multi-spectral face images independently. The extracted features are classified using a linear Support Vector Machine (SVM) to obtain a binary decision. Finally, we carry out a decision fusion using the AND rule to obtain the final decision. Extensive experiments carried out on publicly available multi-spectral face datasets indicate the efficacy of the proposed scheme.","PeriodicalId":123429,"journal":{"name":"2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128404740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}