{"title":"Two Co-occurrence Histogram Features Using Gradient Orientations and Local Binary Patterns for Pedestrian Detection","authors":"Tomoki Watanabe, S. Ito","doi":"10.1109/ACPR.2013.117","DOIUrl":"https://doi.org/10.1109/ACPR.2013.117","url":null,"abstract":"Pedestrian detection plays important roles in various applications such as automobile driving assistance and surveillance camera system. The co-occurrence histograms of oriented gradients (CoHOG) feature descriptor showed good performance since thirty co-occurrences at each pixel position represent various spatial characteristics of object shapes. Though extraction of co-occurrence histogram features is computationally demanding, there is an application-specific integrated circuit (ASIC) to accelerate the computation. The hardware accelerator enables CoHOG to be used in real-time applications. In this paper, we propose the use of two co-occurrence histogram features describing different aspects of object shapes to improve accuracy of pedestrian detection. One feature is CoHOG and the other is co-occurrence histograms of local binary patterns (CoHLBP). CoHLBP assigns each pixel into eight categories by comparing a center pixel's value and its three neighbors' values, and then co-occurrence histograms are calculated in the same way as for CoHOG. Since the number of local binary patterns is the same as the number of quantized orientations used in CoHOG, the CoHOG hardware accelerator can be used for CoHLBP calculation. The experimental results using the benchmark NICTA pedestrian dataset show that the proposed method reduces the false positive rate to less than one-quarter of that of CoHOG.","PeriodicalId":365633,"journal":{"name":"2013 2nd IAPR Asian Conference on Pattern Recognition","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131261512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Digital Watermarking Method to Extract Watermarks from Printed Matters with Cell Phone by Using Finder Patterns and Alignment Pattern of QR Code","authors":"M. Iwata, Kenji Yamanaka, K. Kise","doi":"10.1109/ACPR.2013.79","DOIUrl":"https://doi.org/10.1109/ACPR.2013.79","url":null,"abstract":"There are some watermarking methods to be ableto extract a watermark even from a printed watermarked image with cell phones. In such a use, the speed of extracting watermark should be fast for users' convenience, where QR codes are used for the similar purpose. However, QR codes require its own space in addition to contents and its appearance of QR codes is unattractive. On the other hand, digital watermarking requires no additional space, and its appearance is visually attractive. Nakamura's method used a black frame for identifying a watermarked area. The problem of Nakamura's method is a large constraint on original images and inconvenient extraction way. Therefore, we employ the finder pattern and the alignment pattern of QR codes for position detection. The advantage of the proposed method is a small constraint on design and convenient way of real time extraction. In the experiment, we implement the proposed method as an Android application and evaluate its performance including its real extraction time on a cell phone. The experimental results confirm that the proposed method can extract a correct watermark in 2 seconds, where it is enough fast in practice.","PeriodicalId":365633,"journal":{"name":"2013 2nd IAPR Asian Conference on Pattern Recognition","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133314351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Object Recognition Method Using RGB-D Sensor","authors":"Daisuke Maeda, M. Morimoto","doi":"10.1109/ACPR.2013.156","DOIUrl":"https://doi.org/10.1109/ACPR.2013.156","url":null,"abstract":"To recognize objects within narrow categories, it is important to extract effective features from small number of training samples. In this paper, first we discuss several depth features to improve object recognition accuracy. After that, we also discuss feature dimension reduction when we have insufficient training samples.","PeriodicalId":365633,"journal":{"name":"2013 2nd IAPR Asian Conference on Pattern Recognition","volume":"804 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133321795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pixel-Pair Features Selection for Vehicle Tracking","authors":"Zhibin Zhang, Xuezhen Li, Takio Kurita, Shinya Tanaka","doi":"10.1109/ACPR.2013.95","DOIUrl":"https://doi.org/10.1109/ACPR.2013.95","url":null,"abstract":"This paper proposes a novel tracking algorithm to cope with the appearance variations of vehicle in the natural environments. The algorithm utilizes the discriminative features named pixel-pair features for estimating the similarity between the template image and candidate matching images. Pixel-pair features have been proved to be robust for illumination changes and partial occlusions of the training object. This paper improves the original feature selection algorithm to increase the tracking performance in other appearance changes (such as shape deformation, drifting and view angle change). The new feature selection algorithm incrementally selects the discriminative pixel-pair feature whose matching error between the target and the background is lower than a given threshold. Also the roulette selection method based on the edge values is utilized to increase the possibility to select more informative feature points. The selected features therefore are considered to be robust for shape deformation and view angle changes. Compared with the original feature selection algorithm, our algorithm shows excellent robustness in a variety of videos which include illumination changes, shape deformation, drifting and partial occlusion.","PeriodicalId":365633,"journal":{"name":"2013 2nd IAPR Asian Conference on Pattern Recognition","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132039485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recognition of Layout-Free Characters on Complex Background","authors":"M. Iwamura, Takuya Kobayashi, Takahiro Matsuda, K. Kise","doi":"10.1109/ACPR.2013.191","DOIUrl":"https://doi.org/10.1109/ACPR.2013.191","url":null,"abstract":"Recognizing characters in a scene is a challenging and unsolved problem. In this demonstration, we show an effective approach to cope with the problems: recognizing Japanese characters including complex characters such as Kanji (Chinese characters), which may not be aligned on a straight line and may be printed on a complex background. In the demo, our recognition method is applied to image sequences captured with a web camera. The recognition method is based on local features and their alignment. In addition, using a tracking method, recognition results and extracted features are accumulated so as to increase recognition accuracy as time goes on. The demo runs about 1 fps on a standard laptop computer.","PeriodicalId":365633,"journal":{"name":"2013 2nd IAPR Asian Conference on Pattern Recognition","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133264948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shape Reconstruction from Single Relief Image","authors":"Harshit Agrawal, A. Namboodiri","doi":"10.1109/ACPR.2013.61","DOIUrl":"https://doi.org/10.1109/ACPR.2013.61","url":null,"abstract":"Reconstructing geometric models of relief carvings are of great importance in preserving cultural heritages digitally. In case of reliefs, using laser scanners and structured lighting techniques is not always feasible or are very expensive given the uncontrolled environment. Single image shape from shading is an under-constrained problem that tries to solve for the surface normals given the intensity image. Various constraints are used to make the problem tractable. To avoid the uncontrolled lighting, we use a pair of images with and without the flash and compute an image under a known illumination. This image is used as an input to the shape reconstruction algorithms. We present techniques that try to reconstruct the shape from relief images using the prior information learned from examples. We learn the variations in geometric shape corresponding to image appearances under different lighting conditions using sparse representations. Given a new image, we estimate the most appropriate shape that will result in the given appearance under the specified lighting conditions. We integrate the prior with the normals computed from reflectance equation in a MAP framework. We test our approach on relief images and compare them with the state-of-the-art shape from shading algorithms.","PeriodicalId":365633,"journal":{"name":"2013 2nd IAPR Asian Conference on Pattern Recognition","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130166011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection of Driver's Drowsy Facial Expression","authors":"Taro Nakamura, Akinobu Maejima, S. Morishima","doi":"10.1109/ACPR.2013.176","DOIUrl":"https://doi.org/10.1109/ACPR.2013.176","url":null,"abstract":"We propose a method for the estimation of the degree of a driver's drowsiness on basis of changes in facial expressions captured by an IR camera. Typically, drowsiness is accompanied by falling of eyelids. Therefore, most of the related studies have focused on tracking eyelid movement by monitoring facial feature points. However, textural changes that arise from frowning are also very important and sensitive features in the initial stage of drowsiness, and it is difficult to detect such changes solely using facial feature points. In this paper, we propose a more precise drowsiness-degree estimation method considering wrinkles change by calculating local edge intensity on faces that expresses drowsiness more directly in the initial stage.","PeriodicalId":365633,"journal":{"name":"2013 2nd IAPR Asian Conference on Pattern Recognition","volume":"155 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130168767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fusing Local Patterns of Gabor and Non-subsampled Contourlet Transform for Face Recognition","authors":"Yao Deng, Zhenhua Guo, Youbin Chen","doi":"10.1109/ACPR.2013.58","DOIUrl":"https://doi.org/10.1109/ACPR.2013.58","url":null,"abstract":"Gabor features have been demonstrated to be very effective for face representation. Recently, non-sub sampled contour let transform (NSCT), which is a newly developed multi-resolution analysis tool based on contour let transform, is also used in facial image processing. In fact, the two image decomposition methods are performed from two different angles. To exploit complementarity of these features, in this paper, we propose a new face representation based on fusing local patterns of Gabor and NSCT. Firstly, we decompose face images using Gabor and NSCT respectively. Then all decomposition images are encoded by local texture descriptors to combine. In order to extract efficient features for feature fusion, we propose local Gabor difference features (LGDF) and local contour let difference features (LCDF) to represent the texture of decomposition images. Thirdly, after fusing LGDF and LCDF, block-based Fisher's linear discriminant (BFLD) is utilized to further reduce the dimensionality and improve discriminative power of the proposed method. Experiments on public databases demonstrate that the proposed LGDF and LCDF are very effective and our approach outperforms many state-of-the-art methods.","PeriodicalId":365633,"journal":{"name":"2013 2nd IAPR Asian Conference on Pattern Recognition","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115148673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Approach for Local Comparison of Deformable 3D Models","authors":"Sarah Ali, Trung-Thien Tran, Van-Toan Cao, D. Laurendeau","doi":"10.1109/ACPR.2013.155","DOIUrl":"https://doi.org/10.1109/ACPR.2013.155","url":null,"abstract":"In this work we present a local comparison approach of deformable 3D models. Two types of deformation are discussed. The first is volume change which is caused by either an increase or a decrease of the body volume. In the second type of deformation, the volume of the model is not altered but the change in shape results from either an articulation or a bending of parts of the object. Our strategy begins with a global alignment of the two shapes such that only their common unchanged parts will be aligned perfectly, revealing minimal distances between the parts. However, the deformed parts after alignment will be distinct and observable. In addition we provide an accurate and simple algorithm to measure the difference between the deformed parts of the models after the alignment. Experiments were conducted with real scanned 3D data in order to verify the performance of the proposed method.","PeriodicalId":365633,"journal":{"name":"2013 2nd IAPR Asian Conference on Pattern Recognition","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114641629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Linear Estimation of 4-D Illumination Light Field from Diffuse Reflections","authors":"Takahito Aoto, Tomokazu Sato, Y. Mukaigawa, N. Yokoya","doi":"10.1109/ACPR.2013.141","DOIUrl":"https://doi.org/10.1109/ACPR.2013.141","url":null,"abstract":"This paper proposes a linear estimation method for 4-D illumination light field using an inverse lighting framework. Although lighting environments have conventionally been modeled by the combination of point light sources or a spherical light field map, complex light sources such as an ordinary flashlight with lenses and reflectors, can not be correctly modeled by these approaches. In order to correctly model these complex light sources, we employ 4-D illumination light field. Unlike conventional work, we decompose observed intensities on a diffuse-reflection board into intensities of 4-D light rays by solving a linear system. We validate the effectiveness of the proposed method through the experiments on real and virtual environments.","PeriodicalId":365633,"journal":{"name":"2013 2nd IAPR Asian Conference on Pattern Recognition","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130582436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}