{"title":"Novel approach based on topological simplification algorithm optimized with Particle Swarm Optimization","authors":"Zukuan Wei, Zhao-gang Wang, Hong-Yeon Kim, Youngkyun Kim, Jae-Hong Kim","doi":"10.22630/mgv.2014.23.1.7","DOIUrl":"https://doi.org/10.22630/mgv.2014.23.1.7","url":null,"abstract":"The movement of people can be considered as the flow of liquid, so we can use the methods employed for the flow of liquid to understand the motion of a crowd. Based on this, we present a novel framework for abnormal behavior detection in crowded scenes. We extract a topological structure from the crowd with the topology simplification algorithm. However, a conventional topology simplification algorithm can not work well if we apply it to the crowd directly because there is too much noises produced by the random motion of the people in the original image. To overcome this, we make a step forward by optimizing this model using Particle Swarm Optimization (PSO) to perform the advection of particle population spread randomly over the image frames. Then we propose two new methods for analyzing the boundary point structure and extraction of a critical point from the particle motion field; both methods can be used to describe the global topological structure of the crowd motion. The advantage of our approach is that each kind of abnormal event can be described as a specific change in the topological structure, so we do not need construct a complex classifier, but can classify the crowd anomalies dynamically and directly. Moreover, the approach monitors the crowd motion macroscopically, making it insensitive to the motion of an individual, disregarding the global movement. The result of an experiment conducted on a common data set shows that our method is both precise and stable.","PeriodicalId":39750,"journal":{"name":"Machine Graphics and Vision","volume":"06 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2012-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85975328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Technique to generate face and palm vein-based fuzzy vault for multi-biometric cryptosystem","authors":"N. Lalithamani, M. Sabrigiriraj","doi":"10.22630/mgv.2014.23.1.6","DOIUrl":"https://doi.org/10.22630/mgv.2014.23.1.6","url":null,"abstract":"Template security of biometric systems is a vital issue and needs critical focus. The importance lies in the fact that unlike passwords, stolen biometric templates cannot be revoked. Hence, the biometric templates cannot be stored in plain format and needs strong protection against any forgery. In this paper, we present a technique to generate face and palm vein-based fuzzy vault for multi-biometric cryptosystem. Here, initially the input images are pre-processed using various processes to make images fit for further processing. In our proposed method, the features are extracted from the processed face and palm vein images by finding out unique common points. The chaff points are added to the already extracted points to obtain the combined feature vector. The secret key points which are generated based on the user key input (by using proposed method) are added to the combined feature vector to have the fuzzy vault. For decoding, the multi-modal biometric template from palm vein and face image is constructed and is combined with the stored fuzzy vault to generate the final key. Finally, the experimentation is conducted using the palm vein and face database available in the CASIA and JAFFE database. The evaluation metrics employed are FMR (False Match Ratio) and GMR (Genuine Match Ratio). From the metric values obtained for the proposed system, we can infer that the system has performed well.","PeriodicalId":39750,"journal":{"name":"Machine Graphics and Vision","volume":"4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2012-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87905861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bag of Words - Quality Issues of Near-Duplicate Image Retrieval","authors":"M. Paradowski, Mariusz Durak, Bartosz Broda","doi":"10.22630/mgv.2014.23.1.5","DOIUrl":"https://doi.org/10.22630/mgv.2014.23.1.5","url":null,"abstract":"This paper addresses the problem of large scale near-duplicate image retrieval. Issues related to visual words dictionary generation are discussed. A new spatial verification routine is proposed. It incorporates neighborhood consistency, term weighting and it is integrated into the Bhattacharyya coefficient. The proposed approach reaches almost 10% higher retrieval quality, comparing to other recently reported state-of-the-art methods.","PeriodicalId":39750,"journal":{"name":"Machine Graphics and Vision","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2012-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90110263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EBMBDT: Effective Block Matching Based Denoising Technique using dual tree complex wavelet transform","authors":"M. Selvi","doi":"10.22630/mgv.2014.23.3.3","DOIUrl":"https://doi.org/10.22630/mgv.2014.23.3.3","url":null,"abstract":"In processing and investigation of digital image denoising of images is hence very important. In this paper, we propose a Hybrid de-noising technique by using Dual Tree Complex Wavelet Transform (DTCWT) and Block Matching Algorithm (BMA). DTCWT and BMA is a method to identify the noisy pixel information and remove the noise in the image. The noisy image is given as input at first. Then, bring together the comparable image blocks into the load. Afterwards Complex Wavelet Transform (CWT) is applied to each block in the group. The analytic filters are made use of by CWT, i.e. their real and imaginary parts from the Hilbert Transform (HT) pair, defending magnitude-phase representation, shift invariance, and no aliasing. After that, adaptive thresholding is applied to enhance the image in which the denoising result is visually far superior. The proposed method has been compared with our previous de-noising technique with Gaussian and salt-pepper noise. From the results, we can conclude that the proposed de-noising technique have shown better values in the performance analysis.","PeriodicalId":39750,"journal":{"name":"Machine Graphics and Vision","volume":"48 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2012-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78607760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust method for the text line detection and splitting of overlapping text in the Latin manuscripts","authors":"J. Pach, P. Bilski","doi":"10.22630/mgv.2014.23.3.2","DOIUrl":"https://doi.org/10.22630/mgv.2014.23.3.2","url":null,"abstract":"The paper presents the modified method of the text lines separation in the handwritten manuscripts. Such an approach is required for the medieval text analysis, where multiple text lines overlap and are written at different angles. The proposed approach consists in dividing the bounding boxes into smaller components based on the points of the character curves intersection. The method considers the askew text lines, producing non-rectangular zones between the neighboring lines.","PeriodicalId":39750,"journal":{"name":"Machine Graphics and Vision","volume":"82 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2012-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88501843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Curvature-Tensor-Based Perceptual Quality Metric for 3D Triangular Meshes","authors":"Fakhri Torkhani, K. Wang, J. Chassery","doi":"10.22630/mgv.2014.23.1.4","DOIUrl":"https://doi.org/10.22630/mgv.2014.23.1.4","url":null,"abstract":"Perceptual quality assessment of 3D triangular meshes is crucial for a variety of applications. In this paper, we present a new objective metric for assessing the visual difference between a reference triangular mesh and its distorted version produced by lossy operations, such as noise addition, simplification, compression and watermarking. The proposed metric is based on the measurement of the distance between curvature tensors of the two meshes under comparison. Our algorithm uses not only tensor eigenvalues (i.e., curvature amplitudes) but also tensor eigenvectors (i.e., principal curvature directions) to derive a perceptually-oriented tensor distance. The proposed metric also accounts for the visual masking effect of the human visual system, through a roughness-based weighting of the local tensor distance. A final score that reflects the visual difference between two meshes is obtained via a Minkowski pooling of the weighted local tensor distances over the mesh surface. We validate the performance of our algorithm on four subjectively-rated visual mesh quality databases, and compare the proposed method with state-of-the-art objective metrics. Experimental results show that our approach achieves high correlation between objective scores and subjective assessments.","PeriodicalId":39750,"journal":{"name":"Machine Graphics and Vision","volume":"137 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2012-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78128076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A method of constructing phyllotaxically arranged modular models by partitioning the interior of a cylinder or a cone","authors":"C. Stepien","doi":"10.22630/mgv.2014.23.1.2","DOIUrl":"https://doi.org/10.22630/mgv.2014.23.1.2","url":null,"abstract":"Illumination correction is a method used for removing the influence of light coming from the environment and of other distorting factors in the image capturing process. An algorithm based on the luminance mapping is proposed that can be used to remove low frequency variations in the intensity, and to increase the contrast in low contrast areas when necessary. Moreover, the algorithm can be employed to preserve the intensity of medium-sized objects with different intensity or colour than their surroundings, which otherwise would tend to be washed out. Furthermore, examples are given showing how the method can be used for both greyscale images and colour photos.","PeriodicalId":39750,"journal":{"name":"Machine Graphics and Vision","volume":"79 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2012-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75293096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Survey of Passive 3D Reconstruction Methods on the Basis of More than One Image","authors":"M. Siudak, P. Rokita","doi":"10.22630/mgv.2014.23.3.5","DOIUrl":"https://doi.org/10.22630/mgv.2014.23.3.5","url":null,"abstract":"The research on the 3D scene reconstruction on the basis of its images and video recordings has been in progress for many years. As a~result there is a~number of methods concerning how to manage the reconstruction problem. This article's goal is to present the most important methods of reconstruction including stereo vision, shape from motion, shape from defocus, shape form silhouettes. shape from photo-consistency. All the algorithms explained in this article can be used on images taken with casual cameras in an ordinary illuminated scene (passive methods).","PeriodicalId":39750,"journal":{"name":"Machine Graphics and Vision","volume":"300 9","pages":""},"PeriodicalIF":0.0,"publicationDate":"2012-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72506997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improved Illumination Correction that Preserves Medium Sized Objects","authors":"A. Hast, Andrea Marchetti","doi":"10.22630/mgv.2014.23.1.1","DOIUrl":"https://doi.org/10.22630/mgv.2014.23.1.1","url":null,"abstract":"Illumination correction is a method used for removing the influence of light coming from the environment and of other distorting factors in the image capturing process. An algorithm based on the luminance mapping is proposed that can be used to remove low frequency variations in the intensity, and to increase the contrast in low contrast areas when necessary. Moreover, the algorithm can be employed to preserve the intensity of medium-sized objects with different intensity or colour than their surroundings, which otherwise would tend to be washed out. Furthermore, examples are given showing how the method can be used for both greyscale images and colour photos.","PeriodicalId":39750,"journal":{"name":"Machine Graphics and Vision","volume":"14 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2012-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81935762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Color transformation method that preserves the impression of texture in virtual makeover system","authors":"Joanna Kaczmarczyk, Maciej Pankiewicz","doi":"10.22630/mgv.2013.22.1.3","DOIUrl":"https://doi.org/10.22630/mgv.2013.22.1.3","url":null,"abstract":"The algorithms of color transformation that preserves the impression of texture are used in virtual makeover systems, where maintaining the impression of unaltered texture is important in the process of transforming the color. The content of this paper covers the process of implementing the algorithm of digital picture color transformation with its main objective - minimizing its influence on the texture structure. The main idea of the presented algorithm is to determine the area in HSV space that consists of the original picture pixels and then, to move it towards the target color in such a way that every color is moved by the same vector, limited only by the fact that the transformation is not always possible. The analysis of the algorithm was conducted based on fragments of real face photographs. Its results were compared on the basis of measures estimated on the run length matrix and the co-occurrence matrix.","PeriodicalId":39750,"journal":{"name":"Machine Graphics and Vision","volume":"25 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2012-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74319609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}