{"title":"Edge Saliency Map Detection with Texture Suppression","authors":"H. Ao, Nenghai Yu","doi":"10.1109/ICIG.2011.46","DOIUrl":"https://doi.org/10.1109/ICIG.2011.46","url":null,"abstract":"Edge is a basic feature in the field of computing vision. So to find edge saliency map is an indispensable operation for many applications on image processing. In this paper we present a fast algorithm to find edge saliency map for a natural image. The approach integrates three basic edge features: edge gradient value, edge segment length and edge density, and it works well to detect salient region boundaries and to suppress ill edges from texture. An edge saliency map can be used to image segmentation and boundaries detection. Experimental results demonstrate that our algorithm outperforms other edge saliency detection methods. Finally, our algorithm is applied on salient objects segmentation, compared with several state of-the-art salient region detection methods and the results show our work is valuable.","PeriodicalId":277974,"journal":{"name":"2011 Sixth International Conference on Image and Graphics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131834502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exemplar-Based Image Inpainting with Collaborative Filtering","authors":"Xinran Wu, Wei Zeng, Zhenzhou Li","doi":"10.1109/ICIG.2011.53","DOIUrl":"https://doi.org/10.1109/ICIG.2011.53","url":null,"abstract":"This paper proposes a novel patch synthesis approach for exemplar-based propagation in image in painting. Currently, plural non-local exemplar patches synthesis is widely adopted to fill missing pixels. It generally provides good results, but sometimes shows poor visual quality due to dissimilarity between exemplars and targets. In this paper, a collaborative filtering approach is used to enhance the exemplar-based propagation to obtain ideal in painting results. The approach works on pixel level information, while many exemplar-based propagation algorithms focus on patch level information. Object removal and stain image recovering are carried out to evaluate the proposed approach. Experiments show that our approach provides good visual quality in object removal and high PSNR in stain image recovering.","PeriodicalId":277974,"journal":{"name":"2011 Sixth International Conference on Image and Graphics","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130897067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Saliency Modulated High Dynamic Range Image Tone Mapping","authors":"Yujie Mei, G. Qiu, K. Lam","doi":"10.1109/ICIG.2011.52","DOIUrl":"https://doi.org/10.1109/ICIG.2011.52","url":null,"abstract":"This paper presents a new high dynamic range image tone mapping technique - saliency modulated tone mapping (SMTM). The HDR image is not directly viewable and dynamic range compression will unavoidably loose information. A saliency map analyzes the visual importance of the regions and can therefore direct the tone mapping operators to preserve the visual conspicuity of the regions that should more likely attract visual attention. In SMTM, we have developed a very fast algorithm to first compute the visual saliency map of the high dynamic range radiance map and then directly use the saliency of the local regions to control the local tone mapping curve such that highly salient regions will have their details and contrast better protected so as to remain salient and attract visual attention in the tone mapped display. We present experimental results to show that SMTM provides competitive performances to state of the art tone mapping techniques in rending visually pleasing low dynamic range displays. 
We also show that SMTM is better able to preserve the visual saliency of the HDR image and that SMTM renders high saliency regions to stand out to attract observers attention.","PeriodicalId":277974,"journal":{"name":"2011 Sixth International Conference on Image and Graphics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131211135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Query by Virtual Example: Video Retrieval Using Example Shots Created by Virtual Reality Techniques","authors":"Kimiaki Shirahama, K. Uehara","doi":"10.1109/ICIG.2011.158","DOIUrl":"https://doi.org/10.1109/ICIG.2011.158","url":null,"abstract":"In this paper, we extend the traditional `Query-By-Example' (QBE) approach where example shots are provided to represent a query, and then shots similar to them are retrieved. One crucial problem of QBE is that when example shots for the query are unavailable, the retrieval cannot be performed. To overcome this, we propose an innovative approach, named `Query-By-Virtual-Example' (QBVE), where example shots for any arbitrary query can be created by using virtual reality techniques. We call such example shots `virtual example shots'. In our system, virtual example shots are created by synthesizing user's gesture in front of a video camera, 3D object models (3DCGs) and background images. Experimental results on TRECVID 2009 video data show the validity of substituting virtual example shots with real example shots used in QBE.","PeriodicalId":277974,"journal":{"name":"2011 Sixth International Conference on Image and Graphics","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130905111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Ship Detection Method Based on Sea State Analysis from Optical Imagery","authors":"Guang Yang, Qichao Lu, F. Gao","doi":"10.1109/ICIG.2011.19","DOIUrl":"https://doi.org/10.1109/ICIG.2011.19","url":null,"abstract":"This paper proposes a novel ship detection method based on analyzing the sea state in optical images. This method is composed of three phases. First, the image is segmented with the improved region splitting and merging method, which divides the sea into separated regions. Then, the sea state of each divided region of sea is analyzed by extracting texture roughness and ripple density of a modified differential box counting (DBC) method. Finally, an appropriate algorithm is applied to detect ships for each region of sea. Experimental results test on 36 real remote sensing images and 133 images obtained from Google earth demonstrate that the method is free of image resolution and has little limitation of sea conditions.","PeriodicalId":277974,"journal":{"name":"2011 Sixth International Conference on Image and Graphics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130937270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A New Two-Dimensional Direction Histogram Based Entropic Thresholding","authors":"Adiljan Yimit, Yukari Hagihara, T. Miyoshi, Y. Hagihara","doi":"10.1109/ICIG.2011.70","DOIUrl":"https://doi.org/10.1109/ICIG.2011.70","url":null,"abstract":"Since the entropic concept was introduced in image segmentation by Pun, many entropic methods have been developed in rapid succession. Consequently, the entropic approach has become a major category of segmentation. Abutaleb et al. firstly introduced the two-dimensional entropies, taking the spatial correlation into account. However, their method takes up too much computational time to give better results. In this paper, a new entropic thresholding method is proposed based on the two-dimensional Shannon entropies by using the orientation histogram. Utilizing several images, the new method is compared with Abutaleb's method and the two-dimensional entropic thresholding method proposed by Xiao et al. The experimental results demonstrate the effectiveness of the proposed method in aspects of two-dimensional entropic thresholding.","PeriodicalId":277974,"journal":{"name":"2011 Sixth International Conference on Image and Graphics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130976613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lattice Boltzmann Method of Active Contour for Image Segmentation","authors":"Zhiqiang Wang, Zhuangzhi Yan, George Chen","doi":"10.1109/ICIG.2011.138","DOIUrl":"https://doi.org/10.1109/ICIG.2011.138","url":null,"abstract":"In this paper, Lattice Boltzmann Method (LBM) has been proposed to simulate the well known active contour model (the CV model) for image segmentation. The proposed method provides a new numerical solution for solving the level set equation of the active contour model. As a local and explicit scheme, the algorithm based on LBM is not only stable with large steps, but also overcomes the difficulty in parallel computing of most implicit difference approaches. Experimental results demonstrate that LBM is computationally more efficient than the semi-implicit discrete method of CV model.","PeriodicalId":277974,"journal":{"name":"2011 Sixth International Conference on Image and Graphics","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132100507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RSEM: An Accelerated Algorithm on Repeated EM","authors":"Qinpei Zhao, Ville Hautamäki, P. Fränti","doi":"10.1109/ICIG.2011.110","DOIUrl":"https://doi.org/10.1109/ICIG.2011.110","url":null,"abstract":"Expectation maximization (EM) algorithm, being a gradient ascent algorithm depends highly on the initialization. Repeating EM multiple times with different initial solutions and taking the best result is used to attack this problem. However, the solution space is searched inefficiently in Repeated EM, because after each restart it can take a long time to converge without any guarantee that it leads to an improved solution. A random swap EM algorithm utilizes random swap strategy to improve the problem in a more efficient way. In this paper, a theoretical and experimental comparison between RSEM and REM is conducted. Based on GMM estimation theory, it is proved that RSEM reaches the optimal result faster than REM with high probability. It is also shown experimentally that RSEM speeds up REM from 9% to 63%. A study in color-texture images demonstrates an application of EM algorithms in a segmentation task.","PeriodicalId":277974,"journal":{"name":"2011 Sixth International Conference on Image and Graphics","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132697376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A New Super-resolution Reconstruction Method Combining Narrow Quantization Constraint Set and Motion Estimation for H.264 Compressed Video","authors":"D. Hu, Yingxue Zhao, P. Xiao","doi":"10.1109/ICIG.2011.169","DOIUrl":"https://doi.org/10.1109/ICIG.2011.169","url":null,"abstract":"Super-resolution (SR) reconstruction technique is mainly a task of reconstructing high resolution images from a sequence of low resolution images. The super-resolution technique for H.264 compressed video has been focused by many researchers recently. This paper briefly analyzes the narrow quantization constraint set method (NQCS), and then, in consideration of motion characteristic information of H.264 compressed video, proposes a new SR reconstruction method which combines NQCS with motion characteristic information, which are spatial domain motion estimation and frequency domain motion noise respectively. Experimental results of different standard test sequences compressed by H.264 are given. The simulation shows that both the NQCS+Mnoise method which combines NQCS with frequency domain motion noise, and the NQCS+M method which combines NQCS with spatial domain motion estimation, can get higher PNSR value than NQCS. 
Moreover, the NQCS+M method has better performance than NQCS+Mnoise method, and our proposed method is suitable for the SR reconstruction of H.264 compressed video.","PeriodicalId":277974,"journal":{"name":"2011 Sixth International Conference on Image and Graphics","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127661446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Fast Exact Euclidean Distance Transform Algorithm","authors":"Shuang Chen, Junli Li, Xiuying Wang","doi":"10.1109/ICIG.2011.34","DOIUrl":"https://doi.org/10.1109/ICIG.2011.34","url":null,"abstract":"Euclidean distance transform is widely used in many applications of image analysis and processing. Traditional algorithms are time-consuming and difficult to realize. This paper proposes a novel fast distance transform algorithm. Firstly, mark each foreground's nearest background pixel's position in the row and column, and then use the marks scan the foreground area and figure out the first foreground pixel distance transform information, According to the first pixel' information, design four small regions for its 4-adjacent foreground pixel and also based on the marks search out each adjacent foreground pixel's nearest background pixel. As the region growing, iteratively process each adjacent pixel until all the foreground pixels been resolved. Our algorithm has high efficiency and is simple to implement. Experiments show that comparing to the existing boundary striping and contour tracking algorithm, our algorithm demonstrates a significant improvement in time and space consumption.","PeriodicalId":277974,"journal":{"name":"2011 Sixth International Conference on Image and Graphics","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134482322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}