{"title":"Towards Flexibility of Synchronous e-learning Systems","authors":"Matthias Jahn, Claudia Piesche, S. Jablonski","doi":"10.1109/ISM.2011.100","DOIUrl":"https://doi.org/10.1109/ISM.2011.100","url":null,"abstract":"In this paper we present principles and the architecture of the Meeting Room Platform (MRP) as an example implementation of a synchronous communication and collaboration system. Our main goal is to achieve a highly flexible system as follows: First of all, the system itself should be easily capable of being integrated into other systems like Learning Management Systems (LMS). Secondly the approach allows integrating new components, respectively existing resources without the need to adapt the whole system. Finally, the system is configurable, so the user can choose a set of services that he wants to provide in his online meetings. With these three aspects of flexibility the concept of the MRP system differs from existing systems and constitutes therefore a new approach in designing synchronous e-learning environments. Furthermore, various use cases (of the system) as described in this paper show the benefit of this approach more detailed.","PeriodicalId":339410,"journal":{"name":"2011 IEEE International Symposium on Multimedia","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116821093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Providing Support for Sign Languages in Middlewares Compliant with ITU J.202","authors":"Felipe Lacet S. Ferreira, Felipe Hermínio Lemos, Gutenberg Pessoa Botelho Neto, T. Araújo, Guido Lemos de Souza Filho","doi":"10.1109/ISM.2011.32","DOIUrl":"https://doi.org/10.1109/ISM.2011.32","url":null,"abstract":"Sign languages are natural languages used by deaf to communicate. Currently, the use of sign language on TV is still limited to manual devices, where a window with a sign language interpreter is shown into the original video program. Some related works, such as Amorim et al. [13] and Araujo et al [14], proposed solutions for this problem, but there are some gaps to be addressed. This paper proposes a solution to provide support for sign language in middlewares compatible with ITU J.202 specification [18]. An important feature of this solution is that it is not necessary to adapt or create new APIs (Application Programming Interface) to provide support for sign languages. A case study was developed to validate this solution, implemented using Ginga-J (procedural part of Ginga middleware), a middleware compliant with ITU J.202. Tests with deaf people confirm the feasibility of the proposed solution.","PeriodicalId":339410,"journal":{"name":"2011 IEEE International Symposium on Multimedia","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116971945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Industry Track: An Image Classification and Browsing System for Farm Inspection","authors":"Masaki Ishihara, Shugo Nakamura, Takayuki Baba, Masahiko Sugimura, S. Endo, Y. Uehara, D. Masumoto","doi":"10.1109/ISM.2011.65","DOIUrl":"https://doi.org/10.1109/ISM.2011.65","url":null,"abstract":"When a farmer shares information about farm situation or events with other farmers, he often uses pictures (images) taken during farm inspection for visual communication. However, it may take much time for the farmers to find the desired images from a large amount of images accumulated every day. Furthermore, the contents of the images are diverse (e.g. crop images, soil images, and field images), and the content of the desired images are dependent on the usage scene. Therefore, we develop an image classification and browsing system to suitable for the usage scene. We adopt a typical SVM image classification method using color histogram or layout of brightness as image features. The effectiveness of our system is verified by feasibility study in checking the growth situation of crops. The contribution of this work is the first attempt to apply the image classification and browsing technique to the farm inspection support in agricultural fields.","PeriodicalId":339410,"journal":{"name":"2011 IEEE International Symposium on Multimedia","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125257975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measures of Early Phonetic Development: A Longitudinal Analysis","authors":"Li-Mei Chen, Chia-Cheng Lee, T.-W. Kuo","doi":"10.1109/ISM.2011.92","DOIUrl":"https://doi.org/10.1109/ISM.2011.92","url":null,"abstract":"Four measures were used to observe a hearing-impaired child¡¦s volubility, canonical babbling, consonantal development, and syllable complexity. Child A is a seriously hearing-impaired child identified early and provided with a cochlear implant at the age of 14 months. Child B with normal hearing was recruited for the same age range comparison. The recording place was in their homes and the natural interaction between caretakers and children was recorded. The data were collected at the age of 9-30 months. First fifty identifiable utterances in each session were selected for analysis. The indicated that 1) Two children had no significant difference in volubility at pre-linguistic stage. After 21 months, Child A produced more than Child B. 2) The onset of canonical babbling was at as early as 9 months of age in Child B, while Child A had the onset at the age of 18 months. From 27 months, Child A had similar amount of canonical babbling as Child B. 3) Child A had smaller consonantal variability than Child B. 4) Child B had a smooth and gradual growth from using simple syllable structures to more complex forms, whereas Child A showed unstable and sudden shift. These results indicated that Child A had approximately six months phonetic delay. Child B experienced rapid phonetic growth at 18 months, while Child A didn¡¦t show obvious growth until 27 months. 
Child A seemed to make up the delay and narrowed the gap with Child B after the cochlear implantation for one year.","PeriodicalId":339410,"journal":{"name":"2011 IEEE International Symposium on Multimedia","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128817308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Study on the Evaluation of Relevance Feedback in Multi-tagged Image Datasets","authors":"Roberto Tronci, Luisa Falqui, Luca Piras, G. Giacinto","doi":"10.1109/ISM.2011.80","DOIUrl":"https://doi.org/10.1109/ISM.2011.80","url":null,"abstract":"This paper proposes a study on the evaluation of relevance feedback approaches when a multi-tagged dataset is available. The aim of this study is to verify how the relevance feedback works in a real-word scenario, i.e. by taking into account the multiple concepts represented by the query image. To this end, we first assessed how relevance feedback mechanisms adapt the search when the same image is used for retrieving different concepts. Then, we investigated the scenarios in which the same image is used for retrieving multiple concepts. The experimental results shows that relevance feedback can effectively focus the search according to the user's feedback even if the query image provides a rough example of the target concept. We also propose two performance measures aimed at comparing the accuracy of retrieval results when the same image is used as a prototype for a number of different concepts.","PeriodicalId":339410,"journal":{"name":"2011 IEEE International Symposium on Multimedia","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129981042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving Global Motion Estimation Using Texture Masks","authors":"Yi Chen, R. S. Aygün","doi":"10.1109/ISM.2011.103","DOIUrl":"https://doi.org/10.1109/ISM.2011.103","url":null,"abstract":"Global motion estimation (GME) is a critical step for image alignment, image registration, and sprite generation. Direct methods use all pixels to estimate the motion. Eliminating pixels for GME is important since it may reduce the processing time and may also help to obtain correct motion parameters. In this paper, we firstly consider using fixed masks to observe the performance of GME. Then, we generate and use texture masks to eliminate texture regions to improve the performance of GME. The texture regions may include water, grass, ground, sky, etc. Our results indicate that adapting suitable masks reduces the processing time and improves the correctness of GME.","PeriodicalId":339410,"journal":{"name":"2011 IEEE International Symposium on Multimedia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130037469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High Quality Free Viewpoint Synthesis Using Multi-view Images with Depth Information","authors":"I. Tsuchida, Fan Chen, J. Izawa, K. Kotani","doi":"10.1109/ISM.2011.12","DOIUrl":"https://doi.org/10.1109/ISM.2011.12","url":null,"abstract":"Compared to conventional synthesis methods of free viewpoint images that use only multi-view images, our research synthesizes high quality free viewpoint images by using multi-view depth information as well as the images. By recovering high resolution and high precision 3D shapes from multi-view information, high quality free viewpoint images are synthesizable. Our research captures the scene by acquiring its multi-view depth appearance information, using a laser range finder. By performing camera parameter estimation, multi-view 3D shape integration and depth estimation, high quality free viewpoint images for any scene are synthesized. The performance of our method has been investigated by experimental results.","PeriodicalId":339410,"journal":{"name":"2011 IEEE International Symposium on Multimedia","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116343913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quality Improvement of Video Codec by Rate-Distortion Optimized Quantization","authors":"Tsung-Yau Huang, Po-Yen Su, Chieh-Kai Kao, Tao-Sheng Ou, Homer H. Chen","doi":"10.1109/ISM.2011.85","DOIUrl":"https://doi.org/10.1109/ISM.2011.85","url":null,"abstract":"Conventional quantization methods consider only the distortion between original and reconstructed video as the cost of compression. Considering the time-varying nature of network bandwidth for multimedia services, we believe a video coding system can provide a better quality of experience if it takes the bit rate of the compressed bit stream into consideration as well when optimizing the quantization. In this paper we present a rate-distortion optimization approach to the quantization of video coding. This approach is able to balance between rate and distortion for quantization and enhance the overall quality of the entire coding system, with only a slight increase in computational overhead. We implement this method in H.264/AVC, and the extensive experimental data obtained under various test conditions show that the performance of the R-D optimized quantization is indeed better than the H.264 reference software.","PeriodicalId":339410,"journal":{"name":"2011 IEEE International Symposium on Multimedia","volume":"160 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122469360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image-based Calorie Content Estimation for Dietary Assessment","authors":"Tatsuya Miyazaki, G. C. D. Silva, K. Aizawa","doi":"10.1109/ISM.2011.66","DOIUrl":"https://doi.org/10.1109/ISM.2011.66","url":null,"abstract":"In this paper, we present an image-analysis based approach to calorie content estimation for dietary assessment. We make use of daily food images captured and stored by multiple users in a public Web service called Food Log. The images are taken without any control or markers. We build a dictionary dataset of 6512 images contained in Food Log the calorie content of which have been estimated by experts in nutrition. An image is compared to the ground truth data from the point of views of multiple image features such as color histograms, color correlograms and SURF fetures, and the ground truth images are ranked by similarities. Finally, calorie content of the input food image is computed by linear estimation using the top n ranked calories in multiple features. The distribution of the estimation shows that 79% of the estimations are correct within ±40% error and 35% correct within ±20% error.","PeriodicalId":339410,"journal":{"name":"2011 IEEE International Symposium on Multimedia","volume":"229 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116439540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Developing the Concept of Money by Interactive Computer Games for Autistic Children","authors":"Arshia Zernab Hassan, Bushra Tasnim Zahed, F. Zohora, Johra Muhammad Moosa, Tasmiha Salam, Md. Mustafizur Rahman, H. Ferdous, Syed Ishtiaque Ahmed","doi":"10.1109/ISM.2011.99","DOIUrl":"https://doi.org/10.1109/ISM.2011.99","url":null,"abstract":"Autism is a general term used to describe a group of complex developmental brain disorders known as Pervasive Developmental Disorders (PDD). It is a life-long disability that prevents people from understanding what they see, hear, and sense. This results in severe problems with social relationships, communications, and behavior. Autism is typically diagnosed between the ages of two and six, although variations of ASD (Autism Spectrum Disorders) can sometimes be diagnosed earlier or later [1]. Children with learning disability such as autism who have serious impairments with social, emotional and communication skills require high degree of personalization in using the educational software developed for them. In this paper we present a personalized game based on digital story-telling concept that helps the children of age ranging from 9 to 14 years old with autism to understand the use of money. It also teaches the autistic children the social behavior appropriate while shopping. 
The game is developed on BYOB (Build Your Own Block, an advanced offshoot of the game engine Scratch).","PeriodicalId":339410,"journal":{"name":"2011 IEEE International Symposium on Multimedia","volume":"158 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126902718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}