{"title":"Line-Based Region Growing Image Segmentation for Mobile Device Applications","authors":"Bo Yu, L. Diago, M. Savchenko, I. Hagiwara","doi":"10.11371/IIEEJ.41.7","DOIUrl":"https://doi.org/10.11371/IIEEJ.41.7","url":null,"abstract":"","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121204729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Eye Contour Based Face Hallucination Method","authors":"Kaori Kataoka, S. Ando, Akira Suzuki, H. Koike, M. Morimoto","doi":"10.11371/IIEEJ.40.909","DOIUrl":"https://doi.org/10.11371/IIEEJ.40.909","url":null,"abstract":"〈Summary〉 Face hallucination produces high-resolution facial images from low-resolution inputs. In this paper, we propose a contour-based face hallucination method. Since our goal is face recognition rather than visual effects, the contour information of facial parts (such as eyes) is important. We focus on the eye parts and reconstruct eye contours instead of dividing them into small blocks. We obtain the eye contours by using the Active Appearance Model (AAM), and transform training images based on the contours. We confirm that the proposed method significantly enhances face recognition performance.","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117025907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust MLS Projection Operator for Point Clouds","authors":"H. Kawata, T. Kanai","doi":"10.11371/IIEEJ.40.558","DOIUrl":"https://doi.org/10.11371/IIEEJ.40.558","url":null,"abstract":"","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121562581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparative Study on Color Components for PCA-Based Face Recognition","authors":"Dongzhu Yin, Yoshihiro Sugaya, S. Omachi, H. Aso","doi":"10.11371/IIEEJ.40.671","DOIUrl":"https://doi.org/10.11371/IIEEJ.40.671","url":null,"abstract":"〈Summary〉 Using color information instead of the grayscale luminance image can significantly improve the face recognition rate. However, few works have compared color space models for face recognition. In this paper, we investigate thirty different color space models for face recognition using classical principal component analysis (PCA). Through extensive experiments we find that, after successfully diminishing the influence of illumination, the recognition accuracy can be improved by 4.6∼5.5 percentage points.","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131152839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual Text Entry based on Morse code Generated with Tongue Gestures","authors":"Luis Ricardo Sapaico, M. Nakajima, Makoto Sato","doi":"10.11371/IIEEJ.40.597","DOIUrl":"https://doi.org/10.11371/IIEEJ.40.597","url":null,"abstract":"〈Summary〉 We propose a vision-based human-computer interface for text entry. The system uses a web camera to detect tongue protrusion gestures, which are interpreted as the signals of International Morse Code. These gestures can be generated independently; hence, the traditional 3:1 ratio between dashes and dots can be disregarded. Employing Morse code for text entry normally requires memorizing it in advance. In this paper, users are provided with a Visual Chart in which input characters are displayed on the screen. Navigating the chart to select a character perceptually matches the position of the tongue gestures. Thus, without prior knowledge of Morse code, users are able to start typing just by looking at the screen. Furthermore, the proposed interface allows users to learn the code by associating it with the tongue actions, while already obtaining tangible results. The text entry protocol consists of timers that can be adjusted according to the user's level of expertise. The best text entry rate obtained was 2.54 WPM. We also provide a method to calculate theoretical speeds, which indicate the lower bound for the speeds obtained in practice. Finally, the Visual Chart contains 30 characters; however, it can be expanded to encode more information while maintaining the same text entry protocol.","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121794217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improved Exemplar-Based Inpainting with Reduced Search Space Computation from Image Wavelet Decomposition","authors":"Ladys Rodriguez, L. Diago, I. Hagiwara","doi":"10.11371/IIEEJ.40.428","DOIUrl":"https://doi.org/10.11371/IIEEJ.40.428","url":null,"abstract":"〈Summary〉 Image inpainting is very useful for restoring images or removing objects from digital images. Among the several techniques for restoring an image, exemplar-based inpainting is one of the most widely used. In instances in which large objects are removed, it dramatically outperforms earlier works in terms of both perceptual quality and computational efficiency. However, the exemplar-based approach has certain weaknesses, such as a high time cost and visual inconsistency in some cases with depth ambiguities. In this paper, we improve the exemplar-based approach by reducing the search space of exemplars. We obtain this reduction by combining the wavelet transform of the image with an automatic computation of the search window. Numerical simulations show that the proposed approach considerably reduces the computational cost of original exemplar-based inpainting while keeping the quality of the resulting images as good as that of the previous technique.","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117251685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Representative Graph Generation for Graph-Based Character Recognition","authors":"Tomo Miyazaki, S. Omachi","doi":"10.11371/IIEEJ.40.439","DOIUrl":"https://doi.org/10.11371/IIEEJ.40.439","url":null,"abstract":"〈Summary〉 In graph-based pattern recognition, the representative graph influences recognition and clustering performance. In this paper, we propose a learning method for generating a representative graph of a set of graphs by constructing graph unions that merge corresponding vertices and edges. These corresponding vertices and edges are obtained using the common vertices of the set. The proposed method includes extracting common vertices and correspondences of vertices. To show the validity of the proposed method, we applied it to pattern recognition problems with a character graph database and graphs obtained from decorative character images.","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"10 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114035069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ROI based Computational Complexity Reduction Scheme for H.264/AVC Encoder","authors":"T. Zhang, Xin Jin, Chen Liu, Minghui Wang, S. Goto","doi":"10.11371/IIEEJ.40.333","DOIUrl":"https://doi.org/10.11371/IIEEJ.40.333","url":null,"abstract":"〈Summary〉 This paper proposes a computational complexity reduction scheme for region-of-interest (ROI) based H.264/AVC encoding for videophone, video conferencing, and surveillance systems. A fast ROI detection algorithm that detects face-like regions as the ROI is applied to obtain an accurate and small ROI, reducing the necessary coding effort of the encoder. The complexity reduction algorithm comprises three methods: (1) inter prediction mode selection based on quality difference, (2) unequal performance degradation based on unequal bit allocation, and (3) ROI boundary enhancement to reduce the coding complexity of the ROI. Experimental results show that the proposed ROI detection and complexity reduction scheme reduces simulation time by 77.18% when the QP difference between the ROI and non-ROI is 20, with a 1.02% bit-rate increase and a 0.04 dB PSNR decrease. Encoding time is further reduced by 18.17% compared with previous work with similar performance degradation.","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116982068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Post-processing of YCbCr4:2:0 Compressed Image for Color Printer","authors":"Yuji Itoh","doi":"10.11371/IIEEJ.40.324","DOIUrl":"https://doi.org/10.11371/IIEEJ.40.324","url":null,"abstract":"","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115798266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection and Retrieval of Nucleated Red Blood Cells Using Linear Subspaces","authors":"Yosuke Shimizu, S. Hotta","doi":"10.11371/IIEEJ.40.67","DOIUrl":"https://doi.org/10.11371/IIEEJ.40.67","url":null,"abstract":"","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"151 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132394019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}