{"title":"PROTEIN: A Visual Interface for Classification of Partial Reliefs of Protein Molecular Surfaces","authors":"Keiko Nishiyama, T. Itoh","doi":"10.11371/IIEEJ.37.181","DOIUrl":"https://doi.org/10.11371/IIEEJ.37.181","url":null,"abstract":"3D structure of proteins deeply takes part in the expression of the protein. Molecular surfaces of the protein generally have very complex and bumpy shapes. It is well-known that functions of proteins strongly appear in the bumpy parts of the molecular surfaces. We propose a visual interface to effectively visualize the partial reliefs of molecular surfaces of proteins. This technique assumes that molecule surfaces are approximated as triangular meshes. It first extracts groups of triangles forming partial reliefs, and calculates their feature values as histograms. It finally clusters the partial reliefs according to the histogram. The presented technique then visualizes the clustering results applying a hierarchical data visualization technique \"HeiankyoView\", as a visual interface to explore the clustered partial reliefs.","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131542989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic Face Image Capture from Color Video in Conjunction with Planar Laser Scanning","authors":"M. Hild","doi":"10.11371/IIEEJ.37.293","DOIUrl":"https://doi.org/10.11371/IIEEJ.37.293","url":null,"abstract":"〈Summary〉 We propose a system for automatic acquisition of walking persons’ faces using a color video camera in conjunction with a planar laser scanner. The system reconstructs the person’s body surface, estimates the location of the neck reference point in 3D space, and determines the location of this reference point on the image plane. Face images are then cut out with respect to this reference point. A method for estimating the motion velocity vector of the walking person, which is necessary for body surface reconstruction, is also proposed. A prototype system was built, and its evaluation showed promising results.","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133661329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generalized Learning Local Averaging Classifier","authors":"S. Hotta","doi":"10.11371/IIEEJ.37.206","DOIUrl":"https://doi.org/10.11371/IIEEJ.37.206","url":null,"abstract":"〈Summary〉 In this paper, a classifier called Generalized Learning Local Averaging Classifier (GLLAC) is proposed for image classification. GLLAC is regarded as a combination of Local Averaging Classifier (LAC) and Generalized Learning Vector Quantization (GLVQ) for achieving low error rates with small amount of reference vectors. In GLLAC, all k-near reference vectors of the nearest mean vector belonging to the same class to an input vector are moved toward an input vector, whereas those of the nearest mean vector from a different class are moved away from an input vector. The performance of GLLAC is verified with experiments on handwritten digit and color image classification. Experimental results show that GLLAC can achieve lower error rates than conventional classifiers such as GLVQ or Support Vector Machine (SVM).","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115297697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Data Hiding Method Using Mquant in MPEG Domain","authors":"Koksheik Wong, Kiyoshi Tanaka","doi":"10.11371/IIEEJ.37.256","DOIUrl":"https://doi.org/10.11371/IIEEJ.37.256","url":null,"abstract":"〈Summary〉 This paper proposes a novel data hiding method utilizing Mquant as the data carrier in MPEG domain. To the best of our knowledge, Mquant is never used as the data carrier for information hiding. Mquant is the principle part of rate controller in MPEG, and it is one level higher than the existing data carriers such as quantized DCT coefficients and motion vectors in the MPEG coding hierarchy. In our method, matrix encoding is utilized as the data representation scheme for reducing the number of modification whenever possible. A modification scheme is proposed to sub-optimally preserve the original distribution of Mquant during data embedding. Our data hiding method is applicable not only to MPEG1/2/4 encoded video but also to the encoding process of MPEG video from a sequence of raw pictures. Carrier capacity, histogram distance, image quality, and filesize change are considered to verify the basic performance of the proposed method using various videos encoded by MPEG1. Comparisons among the proposed and existing data carriers are carried out using the same evaluation criterion. 
The influence of video bitrate on the performance of our method is also investigated.","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125321771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Object Detection under Illumination Variation with Privacy Protection","authors":"Hidenori Tanaka, Hiroyuki Arai, Hitoshi Nakazawa, T. Yasuno, H. Koike","doi":"10.11371/IIEEJ.37.250","DOIUrl":"https://doi.org/10.11371/IIEEJ.37.250","url":null,"abstract":"〈Summary〉 We propose an object detection method that aims to protect privacy while providing video surveillance even when the illumination changes. Recently, many surveillance cameras have been installed in public spaces for monitoring activities. Unfortunately, there are too few authorized people to allow all of the resulting video streams to be perused continuously. Therefore, it is necessary to allow concerned citizens to watch the video streams. To permit this, we must address the privacy issue. In the proposed method, we first extract the object regions (which include private information) using the background estimated by taking account of illumination variations. Next, we filter the extracted regions to protect privacy. Experiments reveal that our method can successfully catch objects in surveillance videos, even when the objects stop for a long time under varying illumination conditions, while concealing the private information.","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129424872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic Image-Map Alignment Using Edge-Based Code Mutual Information and 3-D Hilbert Scan","authors":"Li Tian, S. Kamata","doi":"10.11371/IIEEJ.37.223","DOIUrl":"https://doi.org/10.11371/IIEEJ.37.223","url":null,"abstract":"〈Summary〉 This study presents a new algorithm for automatic image-map alignment problem using a new similarity measure named Edge-Based Code Mutual Information (EBCMI) and 3-D Hilbert scan. In general, each image-map pair can be viewed as two special multimodal images, however, are very different in their representations such as the intensity. Therefore, the normal Mutual Information (MI) using the intensity in traditional alignment method may result in misalignment. To solve the problem, codes based on the edges of the image-map pairs are constructed and Mutual Information of the codes is computed as the similarity measure for the alignment in our method. Since Edge-Based Code (EBC) is robust to the differences between the image-map pairs in their representations, EBCMI also can overcome the differences. On the other hand, the 3-D search space in alignment can be converted to a 1-D search space sequence by 3-D Hilbert Scan and a new search strategy is proposed on the 1-D search space sequence. 
The experimental results show that the proposed EBCMI performed better than the normal MI and some other similarity measures and the proposed search strategy gives flexibility between efficiency and accuracy for automatic image-map alignment task.","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121966448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive Composition of Seamless and Color-matched Images of Overlapping Objects","authors":"Takuya Saito, Yosuke Bando, T. Nishita","doi":"10.11371/IIEEJ.37.278","DOIUrl":"https://doi.org/10.11371/IIEEJ.37.278","url":null,"abstract":"〈Summary〉 We propose an image composition method which seamlessly matches the color of a source image region to that of a target image region that is partially occluded by foreground objects. Previous methods assume that a target image region has small color variation, and therefore it is difficult to paste source image regions so that they overlap foreground objects in a target image, as this induces color bleeding from the foreground objects. To overcome this difficulty, we propose to perform color matching only from the background region by excluding the foreground objects. We show how we compose objects from a source image both behind and in front of objects in a target image, and we demonstrate that visually pleasing seamless composition can be achieved.","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121213913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multipose Face Recognition Based on Frequency Analysis and Modified LDA","authors":"I. Wijaya, K. Uchimura, Zhencheng Hu","doi":"10.11371/IIEEJ.37.231","DOIUrl":"https://doi.org/10.11371/IIEEJ.37.231","url":null,"abstract":"〈Summary〉 A multipose human face recognition approach is presented. The proposed scheme is based on frequency analysis (i.e. DCT or wavelet transforms) to obtain facial features which represent global information of face image and modified LDA (M-LDA) to classify the facial features to the person’s class. The facial features are built by selecting a small number of frequency domain coefficients that have large magnitude values. Next, from the facial features, the mean of each face class and the global covariance are determined. Finally, by assuming that each class has multivariate normal distribution and all classes have the same covariance matrix, M-LDA is used to classify the facial features to the person’s class. The aims of proposed system are to reduce the high memory space requirement and to overcome retraining problem of classical LDA and PCA. The system is tested using several face databases and the experimental results are compared to well-known classical PCA, LDA, and other established LDA (i.e. DLDA, RLDA, and SLDA).","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128437150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-Time Reflection and Refraction on a Per-Vertex Basis","authors":"Masanori Kakimoto, T. Tatsukawa, G. Chun, T. Nishita","doi":"10.11371/IIEEJ.37.196","DOIUrl":"https://doi.org/10.11371/IIEEJ.37.196","url":null,"abstract":"This paper proposes a novel method for real-time rendering of polygon mesh surfaces with reflection or refraction. The basic process is similar to dynamic environment mapping or cube mapping. Our proposed method is superior to those in that the accurate ray direction is reflected in the resulted image at every vertex on the mesh. Existing real-time techniques suffer from the differences between the viewpoint for the environment map and each reflection point. The proposed method minimizes this by finding an optimal viewpoint for the reflective or refractive mesh. With a sufficient number of vertices and map image resolutions, the users can render reflected images as accurate as ray tracing for all practical purposes, except for reflected objects around ray converging points of reflection on concave surfaces or refraction through convex lenses. The method can be applied to areas which require accuracy such as industrial design. Experiments with a CAD model of a car rear-view mirror and spectacle lenses exhibited results of sufficient quality for design verification.","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121516137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Text Locating and Verification Algorithm for Scene Images Based on Gray-Scale Connected-Component Analysis","authors":"Zhaoyang Lu, S. Ando, Kaori Kataoka, Y. Kusachi, Akira Suzuki, Yasuko Takahashi, T. Yasuno","doi":"10.11371/IIEEJ.36.509","DOIUrl":"https://doi.org/10.11371/IIEEJ.36.509","url":null,"abstract":"〈Summary〉 Text information in natural images is formally different from that in the images captured by traditional scanners. A text locating algorithm for natural color images is presented. The apparent non-text regions are removed by two-step processing of the grayscale version of the original image, leaving the possible text regions. The first deletion is based on connected-component (CC) analysis of strokes. The second step uses a rulesoriented verification process for improving the performance.","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132774470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}