"Guidelines for LoD Control in Volume Visualization"
Hiroko Nakamura, I. Fujishiro, Yuriko Takeshima, Shigeo Takahashi, Takafumi Saito
The Journal of the Institute of Image Electronics Engineers of Japan, published 2008-07-25. DOI: 10.11371/IIEEJ.37.461 (https://doi.org/10.11371/IIEEJ.37.461)

{"title":"“CAT: A Graphical User Interface for Visualization and Level-of-Detail Control for Large Scale Image Collections”","authors":"Ai Gomi, R. Miyazaki, T. Itoh, Jia Li","doi":"10.11371/IIEEJ.37.436","DOIUrl":"https://doi.org/10.11371/IIEEJ.37.436","url":null,"abstract":"CAT Summary This thesis proposes CAT (Clustered Album Thumbnail), a technique for clus-teringand browsing large number of images. It also provides a user interface for controlling the level of details. As a preprocessing, CAT first hierarchically clusters images and selects representative images for each cluster. And then, CAT visualizes the tree structure applying a hierarchical data visualization technique HeiankyoView. As a characteristic of CAT, it provides a graphical user interface for the zooming operation to effectively browse images. It selectively displays representative images while zooming out, or individual images while zooming in, by the mouse operation. This feature realizes high frame rates, and display of adequate number of images.","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124035843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Non-rigid Image Registration for Medical Diagnosis Using Free-form Deformation with Multiple Grids"
T. Higaki, K. Kaneda, Toru Tamaki, Nobutada Date, S. Azemoto
The Journal of the Institute of Image Electronics Engineers of Japan, published 2008-05-25. DOI: 10.11371/IIEEJ.37.286 (https://doi.org/10.11371/IIEEJ.37.286)
〈Summary〉 Advanced medical imaging devices such as CT, MRI, and PET now provide highly accurate cross-sectional images that are often vital to medical procedures. However, images taken at different times are deformed by visceral movement, and these deformations are non-rigid, so non-rigid image registration is needed for diagnosis based on such images. In this research, we register two images acquired at different times. The developed method uses a free-form deformation for image alignment, the sum of squared differences as the similarity measure, and a steepest descent method for optimization. It improves processing speed by letting the user interactively specify the deformation areas.

{"title":"Building a 3D Model of the Face that Transmits an Arbitrary Combination of Identity, Facial Expression, and Gaze","authors":"S. Kikuchi, M. Kamachi, S. Akamatsu","doi":"10.11371/IIEEJ.37.189","DOIUrl":"https://doi.org/10.11371/IIEEJ.37.189","url":null,"abstract":"〈Summary〉 This paper describes an attempt to build a 3D face model for an anthropomorphic interface that transmits an arbitrary combination of identity, various facial expressions, and gaze. It was done by combining a 3D morphable face model by which variations of 3D shape are represented in a small number of parameters and the Galatea face model by which facial expressions are generated. We also developed an eyeball model for visualizing eye movements to turn the gaze.","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116123282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Segmentation of Dancing Movement by Extracting Features from Motion Capture Data"
M. Sonoda, Seiya Tsuruta, M. Yoshimura, K. Hachimura
The Journal of the Institute of Image Electronics Engineers of Japan, published 2008-05-25. DOI: 10.11371/IIEEJ.37.303 (https://doi.org/10.11371/IIEEJ.37.303)

"Quantitative Realization of Spiral Motions Observed in Principal Components of "Jiuta-Mai" Japanese Classical Dance"
M. Yoshimura, K. Hachimura, Takako Kunieda, Wakasaki Yamamura, K. Yokoyama
The Journal of the Institute of Image Electronics Engineers of Japan, published 2008-05-25. DOI: 10.11371/IIEEJ.37.312 (https://doi.org/10.11371/IIEEJ.37.312)
〈Summary〉 A unique feature of Jiuta-Mai, a form of Japanese classical dance, is a set of spiral movements that propagate gradually from one part of the body to an adjacent part. To verify the existence of these spiral movements quantitatively, we analyzed three-dimensional (3D) time series of dance motions performed by two master Japanese classical dancers. The analysis showed that transforming the original 3D coordinates into a local coordinate system of principal components was effective: the first principal component represented the overall direction of body motion, and the spiral motions were clearly visualized as cyclic curves in the second and third components, perpendicular to the first.

{"title":"JPM-Based Differential Image Storage Method for Image Revision Management System","authors":"Junichi Hara, Y. Manabe, T. Onoye","doi":"10.11371/IIEEJ.37.268","DOIUrl":"https://doi.org/10.11371/IIEEJ.37.268","url":null,"abstract":"〈Summary〉 An efficient partial image storage method is required for an image revision system to store a huge amount of retouched image data. This paper proposes and evaluates a framework for an image revision management system, which uses the JPM file format to store differential elements between two versions of the image. Two data storing approaches are considered, one stores replaced elements after the edition, and the other stores replaced elements or differential elements adaptively. These approaches are evaluated with four different kinds of edit operations: blurring, edge emphasis, character writing, and adding noise. Since the combination of JPM’s image object organization and JPEG 2000’s efficient region compression is effective for image revision management, our proposed system can save storage capacity and network bandwidth in the image revision management system.","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129514494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computational Cost Reduction of Improved Super-Resolution Method Using Overlapped Block Matching","authors":"Yasuo Takehisa, Kiyoshi Tanaka","doi":"10.11371/IIEEJ.37.214","DOIUrl":"https://doi.org/10.11371/IIEEJ.37.214","url":null,"abstract":"〈Summary〉 The improved super-resolution method that achieves dense motion estimation (DME) using overlapped block matching (OBM) remarkably improves the quality of reconstructed images for a given video sequence. However, this method has a drawback to increase the computational cost almost linearly to the number of overlapped blocks because DME using OBM allocates multiple motion vectors to a local region in the image restoration process. To solve this problem, in this paper we propose a method to reduce computational cost of the improved super-resolution method by considering the statistics of motion vectors obtained by DME using OBM. This method can reduce the entire computational cost up to 29.9 ∼ 49.1% depending on a given video sequence while completely maintaining the original performance of the improved super-resolution method using OBM. Also, we try to further reduce computational cost by relaxing the complete original performance preservation requirement. With this additional attempt, we can further reduce computational cost up to 16.9 ∼ 20.8% without serious deterioration of the quality of reconstructed images.","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128612358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Application of Character Structure Information in Online Handwriting Uyghur Character Recognition","authors":"Yidayet Zaydun, Tsuyoshi Saitoh","doi":"10.11371/IIEEJ.37.244","DOIUrl":"https://doi.org/10.11371/IIEEJ.37.244","url":null,"abstract":"〈Summary〉 This paper discusses the use of structure information of Uyghur characters as a feature in online handwriting recognition on portable digital devices. Based on the position of secondary stroke, Uyghur characters can be separating into 4 groups. In this case, the unknown input character compares to other characters in its corresponding group only. This will be shortening the comparison time. The experiment result of freely-written 10 dataset showed that, the comparison time is reduced by 64.32% while the recognition rate improved 5.87%. The Approximate Stroke Sequence String Matching method is applied to Uyghur handwriting character recognition and an average recognition rate of 93.95% is obtained. It is improved to 96.6% while using the structure information of handwritten characters. Based on these results we discuss that, the recognition rate will be improved by the using of some other features like character frequency, the number of secondary strokes, and other.","PeriodicalId":153591,"journal":{"name":"The Journal of the Institute of Image Electronics Engineers of Japan","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121266865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"No-Reference Image Quality Evaluation Based on Local Features and Segmentation"
Z. M. P. Sazzad, Masaharu Sato, Yoshikazu Kawayoke, Y. Horita
The Journal of the Institute of Image Electronics Engineers of Japan, published 2008-05-25. DOI: 10.11371/IIEEJ.37.335 (https://doi.org/10.11371/IIEEJ.37.335)
〈Summary〉 The perceived distortion of an image depends strongly on local features such as edge, flat, and texture regions. This paper presents a new objective no-reference (NR) image quality evaluation model for JPEG coded images based on local features and segmentation. The model evaluates local feature information of the image, namely edge, flat, and texture areas, together with the blockiness, activity measures, and zero-crossing rate within each block. Results on two different image databases indicate that the model performs well over a wide range of image content and distortion levels.