{"title":"Chart decoder: Generating textual and numeric information from chart images automatically","authors":"Wenjing Dai, Meng Wang, Zhibin Niu, Jiawan Zhang","doi":"10.1016/j.jvlc.2018.08.005","DOIUrl":"https://doi.org/10.1016/j.jvlc.2018.08.005","url":null,"abstract":"<div><p><span>Charts are commonly used as a graphical representation for visualizing numerical data in digital documents. For many legacy charts or scientific charts, however, underlying data is not available, which hinders the process of redesigning more effective visualizations and further analysis of charts. In response, we present Chart Decoder, a system that implements decoding of visual features and recovers data from chart images. Chart Decoder takes a chart image as input and generates the textual and numeric information of that chart image as output through applying deep learning, computer vision and text recognition techniques. We train a deep learning based classifier to identify chart types of five categories (bar chart, pie chart, line chart, scatter plot and radar chart), which achieves a </span>classification accuracy<span> over 99%. We also complement a textual information extraction pipeline which detects text regions in a chart, recognizes text content and distinguishes their roles. For generating textual and graphical information, we implement automated data recovery from bar charts, one of the most popular chart types. To evaluate the effectiveness of our algorithms, we evaluate our system on two corpora: 1) bar charts collected from the web, 2) charts randomly made by a script. The results demonstrate that our system is able to recover data from bar charts with a high rate of accuracy.</span></p></div>","PeriodicalId":54754,"journal":{"name":"Journal of Visual Languages and Computing","volume":"48 ","pages":"Pages 101-109"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jvlc.2018.08.005","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72036140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual exploration and comparison of word embeddings","authors":"Juntian Chen, Yubo Tao, Hai Lin","doi":"10.1016/j.jvlc.2018.08.008","DOIUrl":"https://doi.org/10.1016/j.jvlc.2018.08.008","url":null,"abstract":"<div><p><span>Word embeddings are distributed representations for natural language words, and have been wildly used in many natural language processing tasks. The </span>word embedding space contains local clusters with semantically similar words and meaningful directions, such as the analogy. However, there are different training algorithms and text corpora, which both have a different impact on the generated word embeddings. In this paper, we propose a visual analytics system to visually explore and compare word embeddings trained by different algorithms and corpora. The word embedding spaces are compared from three aspects, i.e., local clusters, semantic directions and diachronic changes, to understand the similarity and differences between word embeddings.</p></div>","PeriodicalId":54754,"journal":{"name":"Journal of Visual Languages and Computing","volume":"48 ","pages":"Pages 178-186"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jvlc.2018.08.008","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72036419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual ranking of academic influence via paper citation","authors":"Zhiguang Zhou, Chen Shi, Miaoxin Hu, Yuhua Liu","doi":"10.1016/j.jvlc.2018.08.007","DOIUrl":"https://doi.org/10.1016/j.jvlc.2018.08.007","url":null,"abstract":"<div><p>With rapid growth of digital publishing, a great deal of document datum has been published online for a widely spread of knowledge innovations, which is an important resource for human survival and social development. However, it is a time-consuming and difficult task to conduct a high-efficiency access of valuable papers from an extremely large document database. A set of ranking techniques have been proposed to evaluate the influence of articles by counting the number and quality of citations, such as PageRank. In fact, the influence of an article does not merely depend on the account of citations, which is also highly related to the citation network. In this paper, we propose a visual analytics system for visual ranking of academic influence of articles, based on an insightful analysis of citation network. Firstly, a characterization of articles is established through word2vec model, based on an analogy between the articles in citation network and natural language processing (NPL) terms. Then, the difference between articles in the vectorized space is employed to optimize the PageRank model and achieve desired influence ranking results. A set of meaningful visual encodings are also designed to present the relationships among articles, such as the visualization of high-dimensional vectors and time-varying citation networks. At last, a visualization framework is implemented for visual ranking of academic influence of articles, with the ranking models and visual designs integrated. Case studies based on real-world datasets and interviews with domain experts have demonstrated the effectiveness of our system in the evaluation of academic influence of articles.</p></div>","PeriodicalId":54754,"journal":{"name":"Journal of Visual Languages and Computing","volume":"48 ","pages":"Pages 134-143"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jvlc.2018.08.007","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72081873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring high-dimensional data through locally enhanced projections","authors":"Chufan Lai , Ying Zhao , Xiaoru Yuan","doi":"10.1016/j.jvlc.2018.08.006","DOIUrl":"https://doi.org/10.1016/j.jvlc.2018.08.006","url":null,"abstract":"<div><p>Dimension reduced projections approximate the high-dimensional distribution by accommodating data in a low-dimensional space. They generate good overviews, but can hardly meet the needs of local relational/dimensional data analyses. On the one hand, layout distortions in linear projections largely harm the perception of local data relationships. On the other hand, non-linear projections seek to preserve local neighborhoods but at the expense of losing dimensional contexts. A sole projection is hardly enough for local analyses with different focuses and tasks. In this paper, we propose an interactive exploration scheme to help users customize a linear projection based on their point of interests (POIs) and analytic tasks. First, users specify their POI data interactively. Then regarding different tasks, various projections and subspaces are recommended to enhance certain features of the POI. Furthermore, users can save and compare multiple POIs and navigate their explorations with a POI map. Via case studies with real-world datasets, we demonstrate the effectiveness of our method to support high-dimensional local data analyses.</p></div>","PeriodicalId":54754,"journal":{"name":"Journal of Visual Languages and Computing","volume":"48 ","pages":"Pages 144-156"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jvlc.2018.08.006","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72081874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analyzing data flow diagrams by combination of formal methods and visualization techniques","authors":"Haocheng Zhang, Wei Liu, Hao Xiong, Xiaoju Dong","doi":"10.1016/j.jvlc.2018.08.001","DOIUrl":"https://doi.org/10.1016/j.jvlc.2018.08.001","url":null,"abstract":"<div><p><span>Data flow diagram (DFD) is an indispensable method to model data processing in software engineering. To analyze DFD rigorously, a </span>formal semantics is demanded. Formal interpretation of DFD and its formal semantics lead to an accurate and non-ambiguous analysis. Calculus of Communicating System (CCS), a formal approach in concurrent system modeling, could be utilized to describe DFD. Given its CCS description, automation tools generate the state space of the system depicted by DFD, which reflects all the behaviors of the system. However, analyzing the state space only with character expressions is hard for software developers. In this paper, a visual system is introduced to assist developers to analyze and compare the systems by combination of formal methods and visualization techniques.</p></div>","PeriodicalId":54754,"journal":{"name":"Journal of Visual Languages and Computing","volume":"48 ","pages":"Pages 41-51"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jvlc.2018.08.001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72036449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic human body feature extraction and personal size measurement","authors":"Tan Xiaohui , Peng Xiaoyu , Liu Liwen , Xia Qing","doi":"10.1016/j.jvlc.2018.05.002","DOIUrl":"https://doi.org/10.1016/j.jvlc.2018.05.002","url":null,"abstract":"<div><p>It is a pervasive problem to automatically obtain the size of a human body without contacting for applications like virtual try-on. In this paper, we propose a novel approach to calculate human body size, such as width of shoulder, girths of bust, hips and waist. First, a depth camera as the 3D model acquisition device is used to get the 3D human body model. Then an automatic extraction method of focal features on 3D human body via random forest regression analysis of geodesic distances is used to extract the predefined feature points and lines. Finally, the individual human body size is calculated according to these feature points and lines. The scale-invariant heat kernel signature is exploited to serve as feature proximity. So our method is insensitive to postures and different shapes of 3D human body. These main advantages of our method lead to robust and accurate feature extraction and size measurement for 3D human bodies in various postures and shapes. The experiment results show that the average error of feature points extraction is 0.0617cm, the average errors of shoulder width and girth are 1.332 cm and 0.7635 cm, respectively. Overall, our algorithm has a better detection effect for 3D human body size, and it is stable with better robustness than existing methods.</p></div>","PeriodicalId":54754,"journal":{"name":"Journal of Visual Languages and Computing","volume":"47 ","pages":"Pages 9-18"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jvlc.2018.05.002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72100905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A technique for improving text editing on touchscreen devices","authors":"Gennaro Costagliola, Mattia De Rosa, Vittorio Fuccella","doi":"10.1016/j.jvlc.2018.04.002","DOIUrl":"https://doi.org/10.1016/j.jvlc.2018.04.002","url":null,"abstract":"<div><p>We present a gesture-based technique for the efficient text editing on touchscreen devices. In order to define our technique, we ran a preliminary experiment and detected the most natural gestures that users choose when unconstrained. Users can perform the main operations such as select, move, copy, delete and paste directly on the text, thus making the editing technique independent from the text entry method (e.g. a soft keyboard). As an evaluation, we compared our gestural editing technique to the one available on most Android devices. The experimental results show that the gestural editing technique is more efficient when the text font size increases.</p></div>","PeriodicalId":54754,"journal":{"name":"Journal of Visual Languages and Computing","volume":"47 ","pages":"Pages 1-8"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jvlc.2018.04.002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72100907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Multi-comparable visual analytic approach for complex hierarchical data","authors":"Chen Yi , Dong Yu , Sun Yuehong , Liang Jie","doi":"10.1016/j.jvlc.2018.02.003","DOIUrl":"https://doi.org/10.1016/j.jvlc.2018.02.003","url":null,"abstract":"<div><p>Maximum residue limit (MRL) standard which specifies the highest level of every pesticide residue in different agricultural products plays a critical role in food safety. However, such standards which related to the characteristics of pesticides and the classification of agricultural products which organized into a hierarchical structure are complex and vary widely across different regions or countries. So it is a big challenge to compare multi-regional MRL standard data comprehensively. In this paper, we present a multi-comparable visual analytic approach for complex hierarchical data and a visual analytics system (McVA) to support multiple comparison and evaluation of MRL standard. With a cooperative multi-view visual design, our proposed approach links the hierarchies of MRL datasets and provides the capacity for comparison at different levels and dimensions. We also introduce a metric model for evaluating the completeness and strictness of MRL standards quantitatively. The case study of real problems and the positive feedback from domain experts demonstrate the effectiveness of this approach.</p></div>","PeriodicalId":54754,"journal":{"name":"Journal of Visual Languages and Computing","volume":"47 ","pages":"Pages 19-30"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jvlc.2018.02.003","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72100909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"BPMN extensions for automating cloud environments using a two-layer orchestration approach","authors":"Robert Dukaric, Matjaz B. Juric","doi":"10.1016/j.jvlc.2018.06.002","DOIUrl":"https://doi.org/10.1016/j.jvlc.2018.06.002","url":null,"abstract":"<div><p>Cloud orchestration describes the automated arrangement, coordination, and management of complex cloud systems, middleware and services, and is realized by orchestrating workflows. To achieve an end-to-end cloud orchestration, workflow designers usually have to cope with integration challenges between two different technologies – one that entails technical cloud orchestration and another comprising business-level orchestration. This however presents a complex undertaking for workflow designers, as they have to gain sufficient knowledge and expertise of two diverse technologies in order to automate cloud-specific tasks across two different domains. Introduction of a unified orchestration platform would solve these issues, as it would deliver a common vocabulary for different types of workflow designers and would provide them with a single platform for orchestrating both business and technical activities, without having to face the integration complexities. The main objective of this paper is to provide support for cloud-specific workflows in BPMN business process engines. To achieve this objective we (1) define a meta-model for modeling cloud workflows, (2) extend BPMN 2.0.2 specification to orchestrate cloud-specific workflow activities, and (3) implement a meta-model with BPMN extensions by showing how cloud orchestration workflow elements (i.e. activities and workflow control) map onto extended BPMN elements. As a part of the evaluation we measure process size and complexity of two process models using various process metrics. The results have shown that when using our proposed BPMN extensions, the overall size and complexity of the use case process under test has been reduced by more than half on an average. We also improve the readability of BPMN process.</p></div>","PeriodicalId":54754,"journal":{"name":"Journal of Visual Languages and Computing","volume":"47 ","pages":"Pages 31-43"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jvlc.2018.06.002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72106235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enabling effective tree exploration using visual cues","authors":"Quang Vinh Nguyen , David Arness , Carrissa J. Sanderson , Simeon Simoff , Mao Lin Huang","doi":"10.1016/j.jvlc.2018.06.001","DOIUrl":"https://doi.org/10.1016/j.jvlc.2018.06.001","url":null,"abstract":"<div><p>This article presents a new interactive visualization for exploring large hierarchical structures by providing visual cues on a node link tree visualization. Our technique provides topological previews of hidden substructures with three types of visual cues including <em>simple cues, tree cues</em> and <em>treemap cues</em><span>. We demonstrate the visual cues on Degree-of-Interest Tree (DOITree) due to its familiar mapping, its capability of providing multiple focused nodes, and its dynamic rescaling of substructures to fit the available space. We conducted a usability study with 28 participants that measured completion time and accuracy across five different topology search tasks. The </span><em>simple cues</em> had the fastest completion time across three of the node identification tasks. The <em>treemap cues</em> had the highest rate of correct answers on four of the five tasks, although only reaching statistical significance for two of these. As predicted, user ratings demonstrated a preference for the easy to understand <em>tree cues</em> followed by the <em>simple cue</em>, despite this not consistently reflected in performance results.</p></div>","PeriodicalId":54754,"journal":{"name":"Journal of Visual Languages and Computing","volume":"47 ","pages":"Pages 44-61"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.jvlc.2018.06.001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72100908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}