{"title":"Pornographic video classification using fast motion Features","authors":"Jung-Jae Yu, S. Han","doi":"10.1109/FCV.2015.7103714","DOIUrl":"https://doi.org/10.1109/FCV.2015.7103714","url":null,"abstract":"In this paper, a novel method to automatically classify an pornographic video using fast motion features is proposed. In proposed method, predefined number of clips without a shot change are extracted from an input video and new, fast motion features are computed for each clip. Each clip is given a pornographic possibility based on motion distribution information. If the possibility is bigger than a threshold value, the clip is regarded as an pornographic clip. Finally, the ratio of pornographic clips is computed and the input video is classified as an pornographic video if the ratio is bigger than a threshold ratio.","PeriodicalId":424974,"journal":{"name":"2015 21st Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128224240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Autonomous flight system using marker recognition on drone","authors":"Jin Kim, Yoon Suk Lee, Sang Su Han, Sangkwon Kim, G. Lee, Ho Jun Ji, Hye Ji Choi, K. Choi","doi":"10.1109/FCV.2015.7103712","DOIUrl":"https://doi.org/10.1109/FCV.2015.7103712","url":null,"abstract":"The DHL began test and research on delivery system by drone. Because of growing demand about comercialized and personal-use drone business, we needed a real-time unmanned drone control system that individuals can afford. In this paper, we propose an autonomous flight system for drone. We used marker recognition technique on that system because it doesn't require high-spec device which cannot be afforded by ordinary people. The proposed system maintains distance between drone and marker in flight. The system estimates the distance between marker and drone by calculating area of recognized marker image. Performance is validated by result of the proposed system experiment with Parrot's aerial vehicle AR.drone 2.0 and Android device. The proposed system is expected to be used in promising areas like chase camera, unmanned transport and etc.","PeriodicalId":424974,"journal":{"name":"2015 21st Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133568464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rigid and non-rigid object image matching using deformable object image discrimination","authors":"J. Feng, In-su Won, Jae-hyup Jeong, D. Jeong","doi":"10.1109/FCV.2015.7103704","DOIUrl":"https://doi.org/10.1109/FCV.2015.7103704","url":null,"abstract":"This paper proposes the image matching method that can match rigid object image and non-rigid object image by utilizing the same feature. To this end, first determines the matching of a rigid object image through geometric verification and then discriminate the non-rigid deformable image from the verified result by using supervised learning. Lastly, this paper proposes the method to match a non-rigid object image through clustering of feature matching-pairs in relation to the discriminated result. This paper confirmed that the proposed method had a lower time complexity and a higher matching success rate and accuracy than the conventional method.","PeriodicalId":424974,"journal":{"name":"2015 21st Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130498580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Depth extended online RPCA with spatiotemporal constraints for robust background subtraction","authors":"S. Javed, T. Bouwmans, Soon Ki Jung","doi":"10.1109/FCV.2015.7103745","DOIUrl":"https://doi.org/10.1109/FCV.2015.7103745","url":null,"abstract":"The detection of moving objects is the first step in video surveillance systems. But due to the challenging backgrounds such as illumination conditions, color saturation, and shadows, etc., the state of the art methods do not provide accurate segmentation using only a single camera. Recently, subspace learning model such as Robust Principal Component analysis (RPCA) shows a very nice framework towards object detection. But, RPCA presents the limitations of computational and memory issues due to the batch optimization methods, and hence it cannot process high dimensional data. Recent research on RPCA methods such as Online RPCA (OR-PCA) alleviates the traditional RPCA limitations. However, OR-PCA using only color or intensity features shows a weak performance specially when the background and foreground objects have a similar color or shadows appear in the background scene. To handle these challenges, this paper presents an extension of OR-PCA with the integration of depth and color information for robust background subtraction. Depth is less affected by shadows or background/foreground color saturation issues. However, the foreground object may not be detected when it is far from the camera field as depth is less useful without color information. We show that the OR-PCA including spatiotemporal constraints provides accurate segmentation with the utilization of both color and depth features. Experimental evaluations on a well-defined benchmark dataset with other methods demonstrate that our proposed technique is a top performer using color and range information.","PeriodicalId":424974,"journal":{"name":"2015 21st Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134291543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of foreground detection methodology for a moving camera","authors":"T. Minematsu, Hideaki Uchiyama, Atsushi Shimada, H. Nagahara, R. Taniguchi","doi":"10.1109/FCV.2015.7103752","DOIUrl":"https://doi.org/10.1109/FCV.2015.7103752","url":null,"abstract":"Detection of moving objects is one of the key steps for vision based applications. Many previous works leverage background subtraction using background models and assume that image sequences are captured from a stationary camera. These methods are not directly applied to image sequences from a moving camera because both foreground and background objects move with respect to the camera. One of the approaches to tackle this problem is to estimate background movement by computing pixel correspondences between frames such as homography. With this approach, moving objects can be detected by using existing background subtraction. In this paper, we evaluate detection of foreground objects for image sequences from a moving camera. Especially, we focus on homography as a camera motion. In our evaluation we change the following parameters: changing feature points, the number of them and estimation methods of homography. We analyze its effect on detection of moving objects in regard to detection accuracy, processing time. Through experiments, we show requirement of background models in image sequences form a moving camera.","PeriodicalId":424974,"journal":{"name":"2015 21st Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129918274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Layered optical tomography of multiple scattering media with combined constraint optimization","authors":"Bingzhi Yuan, Toru Tamaki, Takahiro Kushida, B. Raytchev, K. Kaneda, Y. Mukaigawa, Hiroyuki Kubo","doi":"10.1109/FCV.2015.7103735","DOIUrl":"https://doi.org/10.1109/FCV.2015.7103735","url":null,"abstract":"In this paper, we proposed an improved optical scattering tomography for optically dense media. We model a material by many layers with voxels, and light scattering by a distribution from a voxel in one layer to other voxels in the next layer. Then we write attenuation of light along a light path by an inner product of vectors, and formulate the scattering tomography as an inequality constraint optimization problem solved by an interior point method. To improve the accuracy, we solve simultaneously four configurations of a multiple-scattering tomography, however, this would increase the computational cost by a factor of four if we simply solved the problem four times. To reduce the computation cost, we introduce a quasi-Newton method to update the inverse of a Hessian matrix used in the iteration of the interior point method. We show experimental results with numerical simulation for evaluating the proposed method and comparisons with our previous work.","PeriodicalId":424974,"journal":{"name":"2015 21st Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115597599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Determination of 3D object pose in point cloud with CAD model","authors":"D. Nguyen, J. P. Ko, J. Jeon","doi":"10.1109/FCV.2015.7103725","DOIUrl":"https://doi.org/10.1109/FCV.2015.7103725","url":null,"abstract":"This paper introduces improvements to estimate 3D object pose from point clouds. We use point-pair feature for matching instead of traditional approaches using local feature descriptors. In order to obtain high accuracy estimation, a discriminative descriptor is introduced for point-pair features. The object model is a set of point pair descriptors computed from CAD model. The voting process is performed on a local area of each key-point to boost the performance. Due to the simplicity of descriptor, a matching threshold is defined to enable the robustness of the algorithm. A clustering algorithm is defined for grouping similar poses together. Best pose candidates will be selected for refining and final verification will be performed. The robustness and accuracy of our approach are demonstrated through experiments. Our approach can be compared to state-of-the-art algorithms in terms of recognition rates. These high accurate poses especially useful for robot in manipulating objects in the factory. Since our approach does not use color feature, it is independent to light conditions. The system give accurate pose estimation even when there is no light in the area.","PeriodicalId":424974,"journal":{"name":"2015 21st Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130817076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nonlinear discriminant analysis using K nearest neighbor estimation","authors":"Xuezhen Li, Takio Kurita","doi":"10.1109/FCV.2015.7103744","DOIUrl":"https://doi.org/10.1109/FCV.2015.7103744","url":null,"abstract":"Fishers linear discriminant analysis (FLDA) is one of the well-known methods to extract the best features for multi-class discrimination. Recently Kernel discriminant analysis (KDA) has been successfully applied in many applications. KDA is one of the nonlinear extensions of FLDA and construct nonlinear discriminant mapping by using kernel functions. Otsu derived the optimum nonlinear discriminant analysis (ONDA) by assuming the underlying probabilities similar with the Bayesian decision theory. In this paper, we propose to construct an approximation of the optimum nonlinear discriminant mapping based on Otsu's theory of the nonlinear discriminant analysis. We use k nearest neighbor(k-NN) to estimate Bayesian posterior probabilities. In experiment, we show classification performance of the proposed nonlinear discriminant analysis for several modified k-NN.","PeriodicalId":424974,"journal":{"name":"2015 21st Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128810042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of background subtraction algorithms for video surveillance","authors":"Ajmal Shahbaz, Joko Hariyono, K. Jo","doi":"10.1109/FCV.2015.7103699","DOIUrl":"https://doi.org/10.1109/FCV.2015.7103699","url":null,"abstract":"This paper presents a comparative study of several state of the art background subtraction (BS) algorithms. The goal is to provide brief solid overview of the strengths and weaknesses of the most widely applied BS methods. Approaches ranging from simple background subtraction with global thresholding to more sophisticated statistical methods have been implemented and tested with ground truth. The interframe difference, approximate median filtering and Gaussian mixture models (GMM) methods are compared relative to their robustness, computational time, and memory requirement. The performance of the algorithms is tested in public datasets. Interframe difference and approximate median filtering are pretty fast, almost five times faster than GMM. Moreover, GMM occupies five times more memory than simpler methods. However, experimental results of GMM are more accurate than simple methods.","PeriodicalId":424974,"journal":{"name":"2015 21st Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122511721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tumor detection on brain MR images using regional features: Method and preliminary results","authors":"K. Oh, Soohyung Kim, Myungeun Lee","doi":"10.1109/FCV.2015.7103705","DOIUrl":"https://doi.org/10.1109/FCV.2015.7103705","url":null,"abstract":"This paper presents a novel approach to detecting tumor in the brain magnetic resonance images using regional features. First, the proposed algorithm segments head area and skull area using average of brain magnetic resonance images and local adaptive threshold technique. Next, super-pixel segmentation algorithm is applied in order to generate categorized regions on the segmented brain image. Second, we extract regional features, which are texture feature and intensity. Finally, the support vector machine classifier detects the tumor regions by integrating candidates of tumor, which are computed from categorized regions according to different super-pixel parameters. The scheme successfully detects tumor region on the 60 brain magnetic resonance dataset.","PeriodicalId":424974,"journal":{"name":"2015 21st Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121386326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}