{"title":"Super Resolution to Identify License Plate Numbers in Low Resolution Videos","authors":"Lakkhana Mallikarachchi, A. Dharmarathne","doi":"10.1145/2636240.2636867","DOIUrl":"https://doi.org/10.1145/2636240.2636867","url":null,"abstract":"Surveillance videos are useful as a source for extracting information in many areas, such as crime investigations and monitoring offences on public roads. On many occasions, the required information cannot be extracted from these videos due to the poor quality of the video or the large distance from the camera to the object of interest. A multiple-image super-resolution-based technique to improve license plate regions in low-quality videos is proposed in this paper. Moreover, a two-step image registration technique based on the phase correlation method is proposed to align multiple license plate image regions. The method was evaluated using an automatic number plate recognition system, measuring the impact of the proposed method on its recognition rates. Compressed, low-quality, real traffic videos were used as the dataset. According to the results, a significant improvement in recognition rates was achieved with the proposed method.","PeriodicalId":360638,"journal":{"name":"International Symposiu on Visual Information Communication and Interaction","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127928332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Feature Point Based Approach for Pose Variant Face Recognition","authors":"Madhawa Gunasekara, A. Dharmarathne, D. Sandaruwan","doi":"10.1145/2636240.2636875","DOIUrl":"https://doi.org/10.1145/2636240.2636875","url":null,"abstract":"The pose-variation challenge in computerized face recognition, with respect to a missing-people database scenario, is addressed in this study. Moreover, relationships among 2D face images of the same person at pose angles of 0°, 45° and 90° are obtained. A feature-point-based approach using geometric distances over half of the face is applied. Moreover, a mathematical model and an Artificial Neural Network model are implemented using a curve-fitting technique to predict the face images. The face recognition accuracy is tested mainly using the face hit ratio, with Sri Lankan test subjects.","PeriodicalId":360638,"journal":{"name":"International Symposiu on Visual Information Communication and Interaction","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121055903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Vision Based Laser Controlled Keyboard System for the Disabled","authors":"Hiba Ahsan, A. Prabhu, S. Deeksha, Shridhar G. Domanal, T. Ashwin, G. R. M. Reddy","doi":"10.1145/2636240.2636863","DOIUrl":"https://doi.org/10.1145/2636240.2636863","url":null,"abstract":"In this paper, we propose a novel design for a vision-based unistroke keyboard system for the disabled. The keyboard layout considers commonly used character patterns, which makes it convenient for the user to type. In addition, Shift functionality is provided to accommodate a larger set of characters. A webcam is positioned to monitor the keyboard, and characters are identified based on a laser pointer which the user can control with minor head movements. Experimental results demonstrate that the design achieves very promising results, thus establishing a baseline for such models in this domain.","PeriodicalId":360638,"journal":{"name":"International Symposiu on Visual Information Communication and Interaction","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121291581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Developing useful Visualizations of Domestic Energy Usage","authors":"Joris Suppers, M. Apperley","doi":"10.1145/2636240.2636853","DOIUrl":"https://doi.org/10.1145/2636240.2636853","url":null,"abstract":"The need and desire to promote energy awareness within households is steadily growing, and with this, many different approaches for visualising energy use beyond the usual monthly bill, many in near real time, have emerged. These approaches are a positive step towards households becoming more environmentally conscious and in control of their energy usage, which is vitally important if greater energy use efficiency is to be achieved. This paper reviews the current state of research in domestic energy use visualisation from four perspectives: personal characteristics influencing and motivating the behaviour of individuals; correctly informing the individual; the reality and effectiveness of feedback; and the utility and impact of social effects. An analysis of these four perspectives will further our understanding of how to create successful domestic energy use visualisations.","PeriodicalId":360638,"journal":{"name":"International Symposiu on Visual Information Communication and Interaction","volume":"92 11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116299753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Art and Chartjunk: a Guide for NEUVis","authors":"Phillip Gough, Katelan Dunn, T. Bednarz, Xavier Ho","doi":"10.1145/2636240.2636852","DOIUrl":"https://doi.org/10.1145/2636240.2636852","url":null,"abstract":"In the fast-changing, hybrid and multi-disciplinary practices of artful information visualisation (artful infoVis) and artists using data to inform artworks, the act of translating data into an image can be fraught with peril. There is considerable debate around modes of visualisation and their relationship with the underlying data. This paper outlines the debate between the opposing ideologies and, through an assessment of design considerations and a comparison of creative practice and visual analytics, formulates a set of guidelines for creative practitioners developing visualisations for Non-Expert Users (NEUVis).","PeriodicalId":360638,"journal":{"name":"International Symposiu on Visual Information Communication and Interaction","volume":"209 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134000795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CoVE: A Colony Visualization System for Animal Pedigrees","authors":"Brady Cannon, M. Hiremath, C. Jorcyk, Alark Joshi","doi":"10.1145/2636240.2636850","DOIUrl":"https://doi.org/10.1145/2636240.2636850","url":null,"abstract":"CoVE is a novel, scalable, interactive tool that can be used to visualize and manage large colonies of laboratory animals. Effective management of large colonies of animals with multiple individual attributes and complicated breeding schemes represents a significant data management challenge in the biological sciences. Currently available software either provides databases for record keeping or generates basic pedigrees, but not both. Thus, there is a pressing need for an integrated colony management system that provides a repository for the data and addresses the visualization challenge presented by complex genealogical data. We present CoVE, a colony visualization tool that provides an overview of the entire colony, clusters individuals based on Gender, Litter or Genotype, and provides an individual view of any animal for detailed examination. We demonstrate that CoVE provides an efficient way to manage, generate and view complex pedigrees of real-world genealogical data from animal colonies, annotated with details of individual attributes. It enables interactive tracing of lineages and identification of censored subjects in tumor studies.","PeriodicalId":360638,"journal":{"name":"International Symposiu on Visual Information Communication and Interaction","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131170371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Participatory Data Analytics: Collaborative Interfaces for Data Composition and Visualisation","authors":"Daniel Filonik, Markus Rittenbruch, M. Foth","doi":"10.1145/2636240.2636873","DOIUrl":"https://doi.org/10.1145/2636240.2636873","url":null,"abstract":"This research proposes the development of interfaces to support collaborative, community-driven inquiry into data, which we refer to as Participatory Data Analytics. Since the investigation is led by local communities, it is not possible to anticipate which data will be relevant and what questions are going to be asked. Therefore, users have to be able to construct and tailor visualisations to their own needs. The poster presents early work towards defining a suitable compositional model, which will allow users to mix, match, and manipulate data sets to obtain visual representations with little-to-no programming knowledge. Following a user-centred design process, we are subsequently planning to identify appropriate interaction techniques and metaphors for generating such visual specifications on wall-sized, multi-touch displays.","PeriodicalId":360638,"journal":{"name":"International Symposiu on Visual Information Communication and Interaction","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134155304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NEI: A Framework for Dynamic News Event Exploration and Visualization","authors":"Xiaofei Guo, Juan-Zi Li, Ruibing Yang, Xiaoli Ma","doi":"10.1145/2636240.2636845","DOIUrl":"https://doi.org/10.1145/2636240.2636845","url":null,"abstract":"Nowadays, many events are reported by the news media every day, generating a massive volume of news articles. People are increasingly interested in understanding how an event evolves after it happens. News related to the same or similar events usually shares more common entities and stronger topic correlations, which offers a new perspective for studying news events. Due to the complexity of the event evolution process, event visualization has long been a major challenge. In this paper, we design a novel four-phase framework NEI (News Event Insight) that focuses on visualizing a news event properly and clearly, namely: (1) Entity Topic Modeling: we extract topics and entities along the timeline. (2) Temporal Topic Correlation Analysis: based on the topic modeling results, we design two methods to select hot topics and build links between them. (3) Keyword Extraction: we combine string frequency with syntactic features and use language models to acquire candidate keywords for representing topics. (4) Visualization: the visualization demonstrates the quantitative properties of topics related to a given event. A case study shows our framework achieves promising results on both single events and similar events.","PeriodicalId":360638,"journal":{"name":"International Symposiu on Visual Information Communication and Interaction","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134405902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visualizing Bag-of-Features Image Categorization Using Anchored Maps","authors":"Gao Yi, Hsiang-Yun Wu, Kazuo Misue, Kazuyo Mizuno, Shigeo Takahashi","doi":"10.1145/2636240.2636858","DOIUrl":"https://doi.org/10.1145/2636240.2636858","url":null,"abstract":"The bag-of-features model is one of the most popular and promising approaches for extracting the underlying semantics from image databases. However, the associated image categorization based on machine learning techniques may not convince us of its validity, since we cannot visually verify how the images have been classified in the high-dimensional image feature space. This paper aims at visually rearranging the images in the projected feature space by taking advantage of a set of representative features, called visual words, obtained using the bag-of-features model. Our main idea is to associate each image with a specific number of visual words to compose a bipartite graph, and then lay out the overall set of images using an anchored map representation in which the ordering of anchor nodes is optimized through a genetic algorithm. For handling relatively large image datasets, we adaptively merge the most similar pairs of images one by one to conduct hierarchical clustering, using a similarity measure based on the weighted Jaccard coefficient. Voronoi partitioning has also been incorporated into our approach so that we can visually identify the image categorization produced by a support vector machine. Experimental results are finally presented to demonstrate that our visualization framework can effectively elucidate the underlying relationships between images and visual words through the anchored map representation.","PeriodicalId":360638,"journal":{"name":"International Symposiu on Visual Information Communication and Interaction","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132565112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mobile Visualization Supporting Awareness in Collaborative Software Development","authors":"Meng-Yao Chen, Cong Chen, Shu-Qing Liu, Kang Zhang","doi":"10.1145/2636240.2636857","DOIUrl":"https://doi.org/10.1145/2636240.2636857","url":null,"abstract":"To foster innovation and competition, an increasing number of software teams are becoming distributed. Such distribution makes continuous collaboration and continuous awareness support a necessity and also a great challenge. Traditional desktop-based approaches are insufficient for the requirements of continuous awareness. In the practical process of software development, an awareness tool on mobile devices is also desirable for team members to obtain awareness information continuously. This paper addresses how to effectively present collaborative development activities using aesthetic visualization on mobile screens. Our approach supports multiple views suitable for software developers as well as team leaders. A small-scale usability experiment has been conducted and is reported.","PeriodicalId":360638,"journal":{"name":"International Symposiu on Visual Information Communication and Interaction","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133667939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}