{"title":"Active Contour Model for Boundary Detection of Multiple Objects","authors":"Jong-Whan Jang","doi":"10.3745/KIPSTB.2010.17B.5.375","DOIUrl":"https://doi.org/10.3745/KIPSTB.2010.17B.5.375","url":null,"abstract":"ABSTRACT Most previous object boundary extraction algorithms have been studied for extracting the boundary of a single object. However, multiple objects are much more common in real images. The proposed algorithm for extracting the boundary of each of multiple objects has two steps. In the first step, we propose a fast method using the outer and inner products: the initial contour enclosing multiple objects is split and reconnected so that each new contour encloses only one object. In the second step, an improved active contour model is used to extract the boundary of the object enclosed by each contour. Experimental results with various test images show that our algorithm produces much better results than previous algorithms. Key words: Snake, Active Contour Model, Multiple Objects, Boundary Extraction, Highly Irregular Boundary, Splitting and Connecting of Snake Points. 1. Introduction: Object boundary extraction is very important in content-based retrieval systems and interactive multimedia systems [1, 2]. To provide services successfully with such systems, object shape is used as basic information for image queries. In real images, multiple objects are more common than a single object, and efficient extraction of the boundaries of multiple objects is expected to broaden the range of applications. The general approach to multiple-object boundary extraction is to first separate the multiple objects into single objects and then extract the shape, i.e., the boundary, of each single object. Several methods have been proposed for multiple-object boundary extraction; for example, region segmentation and watershed algorithms have been proposed","PeriodicalId":122700,"journal":{"name":"The Kips Transactions:partb","volume":"11 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120857027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gradient Descent Approach for Value-Based Weighting","authors":"Chang-Hwan Lee, Joohyun Bae","doi":"10.3745/KIPSTB.2010.17B.5.381","DOIUrl":"https://doi.org/10.3745/KIPSTB.2010.17B.5.381","url":null,"abstract":"Naive Bayesian learning has been widely used in many data mining applications and performs surprisingly well in practice. However, due to the assumption that all attributes are equally important, the posterior probabilities estimated by naive Bayesian learning are sometimes poor. In this paper, we propose a more fine-grained weighting method, called value weighting, in the context of naive Bayesian learning. While current weighting methods assign a weight to each attribute, we assign a weight to each attribute value. We investigate how the proposed value weighting affects the performance of naive Bayesian learning. We develop new methods, using gradient descent, for both value weighting and feature weighting in the context of naive Bayesian learning. The performance of the proposed methods was compared with that of the attribute weighting method and standard naive Bayesian learning, and the value weighting method performed better in most cases.","PeriodicalId":122700,"journal":{"name":"The Kips Transactions:partb","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114071320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust AAM-based Face Tracking with Occlusion Using SIFT Features","authors":"Sungeun Eom, Jun-Su Jang","doi":"10.3745/KIPSTB.2010.17B.5.355","DOIUrl":"https://doi.org/10.3745/KIPSTB.2010.17B.5.355","url":null,"abstract":"Face tracking estimates the motion of a non-rigid face together with a rigid head in 3D, and plays an important role in higher-level tasks such as face, facial expression, and emotion recognition. In this paper, we propose an AAM-based face tracking algorithm. AAMs have been widely used to segment and track deformable objects, but many difficulties remain. In particular, they often tend to diverge or converge to local minima when a target object is self-occluded, partially occluded, or completely occluded. To address this problem, we utilize the scale-invariant feature transform (SIFT). SIFT is effective against self-occlusion and partial occlusion because it can find correspondences between feature points under partial loss, and its good global matching performance enables an AAM to continue tracking without re-initialization through complete occlusions. We also register the SIFT features extracted from multi-view face images during tracking and use them to track a face effectively across large pose changes. Our proposed algorithm is validated by comparison with other algorithms under the three kinds of occlusion above.","PeriodicalId":122700,"journal":{"name":"The Kips Transactions:partb","volume":"419 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116402604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Panoramic Image Composition Algorithm through Scaling and Rotation Invariant Features","authors":"Ki-Won Kwon, Hae-Yeoun Lee, Duk-Hwan Oh","doi":"10.3745/KIPSTB.2010.17B.5.333","DOIUrl":"https://doi.org/10.3745/KIPSTB.2010.17B.5.333","url":null,"abstract":"This paper addresses how to compose panoramic images from images taken of the same objects. With the spread of digital cameras, panoramic image generation has attracted increasing research interest. In this paper, we propose a panoramic image generation method using scaling- and rotation-invariant features. First, feature points are extracted from input images and matched with a RANSAC algorithm. Then, after the perspective model is estimated, the input image is registered with this model. Since the SURF feature extraction algorithm is adopted, the proposed method is robust against geometric distortions such as scaling and rotation, and it also reduces computational cost. In the experiments, the SURF features in the proposed method are compared with features from the Harris corner detector and the SIFT algorithm, and the proposed method is tested by generating panoramic images. Results show that it takes 0.4 seconds on average and is more efficient than other schemes.","PeriodicalId":122700,"journal":{"name":"The Kips Transactions:partb","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121072132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Ant Colony Optimization Heuristic to solve the VRP with Time Window","authors":"Myung-Duk Hong, Young-Hoon Yu, Geun-Sik Jo","doi":"10.3745/KIPSTB.2010.17B.5.389","DOIUrl":"https://doi.org/10.3745/KIPSTB.2010.17B.5.389","url":null,"abstract":"The Vehicle Routing and Scheduling Problem with Time Windows (VRSPTW) seeks a minimum-cost delivery route satisfying the time constraints and capacity demands of many customers. The VRSPTW takes a long time to solve because it is an NP-hard problem. To generate a near-optimal solution within a reasonable time, we propose a heuristic using ant colony optimization (ACO) with multiple cost functions. The multiple cost functions can generate a feasible initial route by applying various weight values, such as distance, demand, angle, and time window, to the cost factors when each ant evaluates the cost of moving to the next customer node. Our experimental results show that our heuristic can generate near-optimal solutions more efficiently than the Solomon I1 heuristic or the hybrid heuristic based on opportunity time.","PeriodicalId":122700,"journal":{"name":"The Kips Transactions:partb","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127239921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Method for Precision Improvement Based on Core Query Clusters and Term Proximity","authors":"Kye-Hun Jang, Kyung-Soon Lee","doi":"10.3745/KIPSTB.2010.17B.5.399","DOIUrl":"https://doi.org/10.3745/KIPSTB.2010.17B.5.399","url":null,"abstract":"In this paper, we propose a method for precision improvement based on core query clusters and term proximity. The method consists of three steps. First, the initially retrieved documents are clustered based on the combinations of query terms that occur in each document. Next, core clusters are selected using the proximity between query terms. Finally, the documents in the core clusters are reranked based on the context information of the query. On the TREC AP test collection, experimental results in precision at the top documents (P@100) show that the proposed method achieved an 11.2% improvement over the language model.","PeriodicalId":122700,"journal":{"name":"The Kips Transactions:partb","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130069920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast Natural Feature Tracking Using Optical Flow","authors":"Byung-Jo Bae, Jong-Seung Park","doi":"10.3745/KIPSTB.2010.17B.5.345","DOIUrl":"https://doi.org/10.3745/KIPSTB.2010.17B.5.345","url":null,"abstract":"Visual tracking techniques for augmented reality are classified as either marker tracking approaches or natural feature tracking approaches. Marker-based tracking algorithms can be implemented efficiently enough to work in real time on mobile devices. On the other hand, natural feature tracking methods require many computationally expensive procedures. Most previous natural feature tracking methods include heavy feature extraction and pattern matching procedures for each input image frame, which makes it difficult to implement real-time augmented reality applications with natural feature tracking on low-performance devices. The required computation time is also proportional to the number of patterns to be matched. To speed up the natural feature tracking process, we propose a novel fast tracking method based on optical flow. We implemented the proposed method on mobile devices to run in real time so that it can be used with mobile augmented reality applications. Moreover, during tracking, we maintain the total number of feature points by inserting new feature points in proportion to the number of vanished feature points. Experimental results showed that the proposed method reduces the computational cost and also stabilizes the camera pose estimation results.","PeriodicalId":122700,"journal":{"name":"The Kips Transactions:partb","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128415417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic Method for Extracting Homogeneity Threshold and Segmenting Homogeneous Regions in Image","authors":"Gi-Tae Han","doi":"10.3745/KIPSTB.2010.17B.5.363","DOIUrl":"https://doi.org/10.3745/KIPSTB.2010.17B.5.363","url":null,"abstract":"In this paper, we propose a method for extracting a homogeneity threshold and for segmenting homogeneous regions by unseeded region growing (USRG) with this threshold. The homogeneity threshold is a criterion for distinguishing homogeneity between neighboring pixels and is computed automatically from the original image by the proposed method. The theoretical background of the proposed method is based on Otsu's single-level threshold method, which is used to divide a small local part of the original image into two classes; the sum of the standard deviations of the two classes that satisfies special conditions for distinguishing them as different regions is used to compute the homogeneity threshold. To validate the proposed method, we compare the original image with an image regenerated using only the segmented homogeneous regions and show that there is no visually perceptible difference between the two images. We also present the steps of regenerating the image in order of the size of the segmented homogeneous regions and in order of the intensity of the pixels they include. In addition, we demonstrate the validity of the proposed method with various segmentation results obtained using homogeneity thresholds scaled by a coefficient that adjusts their scope. We expect that the proposed method can be applied in various fields, such as visualization and animation of natural images, anatomy, and biology.","PeriodicalId":122700,"journal":{"name":"The Kips Transactions:partb","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126510670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Model based Facial Expression Recognition using New Feature Space","authors":"Jin-Ok Kim","doi":"10.3745/KIPSTB.2010.17B.4.309","DOIUrl":"https://doi.org/10.3745/KIPSTB.2010.17B.4.309","url":null,"abstract":"ABSTRACT This paper introduces a new model-based method for facial expression recognition that uses facial grid angles as its feature space. In order to recognize the six main facial expressions, the proposed method uses a grid approach and establishes a new feature space based on the angles formed by each grid's edges and vertices. The approach taken in this paper is robust against several affine transformations such as translation, rotation, and scaling, which in other approaches severely degrade the overall accuracy of a facial expression recognition algorithm. This paper also demonstrates how the feature space is created from angles and how a feature-subset selection process is applied within this space using a wrapper approach. Selected features are classified by SVM and 3-NN classifiers, and classification results are validated with two-tier cross validation. The proposed method achieves a 94% classification rate, and the feature selection algorithm improves results by up to 10% over the full feature set. Keywords: Facial Expression Recognition, Feature Space Generation, Wrapper Approach, Multi-Tier Cross Validation","PeriodicalId":122700,"journal":{"name":"The Kips Transactions:partb","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114695777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Example-based Super Resolution Text Image Reconstruction Using Image Observation Model","authors":"Gyu-Ro Park, In-Jung Kim","doi":"10.3745/KIPSTB.2010.17B.4.295","DOIUrl":"https://doi.org/10.3745/KIPSTB.2010.17B.4.295","url":null,"abstract":"Example-based super-resolution (EBSR) is a method that reconstructs high-resolution images by learning patch-wise correspondences between high-resolution and low-resolution images. It can reconstruct a high-resolution image from just a single low-resolution image. However, when it is applied to a text image whose font type and size differ from those of the training images, it often produces a lot of noise. The primary reason is that, in the patch matching step of the reconstruction process, input patches can be inappropriately matched to high-resolution patches in the patch dictionary. In this paper, we propose a new patch matching method to overcome this problem. Using an image observation model, it preserves the correlation between the input and output images and therefore effectively suppresses spurious noise caused by inappropriately matched patches. This not only improves the quality of the output image but also allows the system to use a huge dictionary containing a variety of font types and sizes, which significantly improves adaptability to variation in font type and size. In experiments, the proposed method outperformed conventional methods in the reconstruction of multi-font and multi-size images. Moreover, it improved recognition performance from 88.58% to 93.54%, which confirms the practical effect of the proposed method on recognition performance.","PeriodicalId":122700,"journal":{"name":"The Kips Transactions:partb","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127295052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}