Visualization Resources: A Survey
Xiaoxiao Liu, Mohammad Alharbi, Jing Chen, A. Diehl, Dylan Rees, Elif E. Firat, Qiru Wang, R. Laramee
Information Visualization, 2023-01-01. DOI: 10.1177/14738716221126992
Abstract: Visualization, a vibrant field for researchers, practitioners, and higher-education institutions, is growing and evolving rapidly. Tremendous progress has been made since 1987, the year often cited as the beginning of data visualization as a distinct field, and both the number of visualization resources and the demand for them are increasing at a rapid pace. Following a search process spanning the equivalent of decades, we present a survey of open visualization resources for all those with an interest in interactive data visualization and visual analytics. Because the number of individual resources is so large, we focus on collections of resources, of which there are already many, ranging from literature collections to collections of practitioner resources. On this basis, we develop a classification of visualization resource collections focused on resource type, e.g. literature-based, web-based, developer-focused, and special topics. The result is an overview, with details on demand, of many useful resources. The collection offers a valuable jump-start for those seeking out data visualization resources from all backgrounds, spanning from beginners such as students to teachers, practitioners, developers, and researchers wishing to create their own advanced or novel visual designs. This paper is a response to students and others who frequently ask for the visualization resources available to them.
Large scale medical image online three-dimensional reconstruction based on WebGL using four tier client server architecture
Wei Li, Shanshan Wang, Weidong Xie, Kun Yu, Chaolu Feng
Information Visualization, 2022-12-09. DOI: 10.1177/14738716221138090
Abstract: The development of medical device technology has led to rapid growth in medical imaging data. Reconstruction from two-dimensional images to three-dimensional volume visualization not only shows the location and shape of lesions from multiple views but also provides intuitive simulation for surgical treatment. However, the three-dimensional reconstruction process requires high-performance execution of image data acquisition and reconstruction algorithms, which limits its use on devices with limited resources: it is difficult to apply in many online scenarios, and mobile devices cannot meet the high-performance hardware and software requirements. This paper proposes an online medical image rendering and real-time three-dimensional (3D) visualization method based on the Web Graphics Library (WebGL). The method rests on a four-tier client-server architecture and synchronizes medical image data so that reconstruction takes place on both the client and the server. The reconstruction method is designed to meet the dual requirements of reconstruction speed and quality. Real-time 3D reconstruction and visualization of large-scale medical images was tested in real environments: while interacting with the reconstructed model, users obtain results in real time and can observe and analyze them from all angles. The proposed four-tier client-server architecture provides instant visual feedback and interactive information for medical practitioners in collaborative-therapy and telemedicine applications. The experiments also show that the online 3D image reconstruction method applies to large-scale image data in clinical practice while maintaining high reconstruction speed and quality.
HyperCube4x: A viewport management system proposal
Alessandro Rego de Lima, Diana Carvalho, Tânia de Jesus Vilela da Rocha
Information Visualization, 2022-11-30. DOI: 10.1177/14738716221137908
Abstract: This article presents a novel management and information visualization system proposal based on the tesseract, the 4D hypercube. The concept comprises metaphors that mimic the tesseract's geometrical properties using interaction and information visualization techniques, made possible by modern computer systems and human capabilities such as spatial cognition. The discussion compares the Hypercube and traditional desktop-metaphor systems, and an operational prototype is available for readers to test. Finally, a preliminary assessment with 31 participants revealed that 81.05% "agree" or "totally agree" that the proposed concepts offer real gains compared to the desktop metaphor.
Compressing and interpreting word embeddings with latent space regularization and interactive semantics probing
Haoyu Li, Junpeng Wang, Yan-luan Zheng, Liang Wang, Wei Zhang, Han-Wei Shen
Information Visualization, 2022-10-27. DOI: 10.1177/14738716221130338
Abstract: Word embeddings, high-dimensional (HD) numerical representations of words generated by machine learning models, have been used for different natural language processing tasks, for example translation between two languages. Recently, there has been an increasing trend of transforming HD embeddings into a latent space (e.g. via autoencoders) for further tasks, exploiting the various merits latent representations can bring. To preserve the embeddings' quality, these works often map the embeddings into an even higher-dimensional latent space, making the already complicated embeddings even less interpretable and consuming more storage space. In this work, we borrow the idea of β-VAE to regularize the HD latent space. Our regularization implicitly condenses information from the HD latent space into a much lower-dimensional space, thus compressing the embeddings. We also show that each dimension of our regularized latent space is more semantically salient, and validate this assertion by interactively probing the encoding level of user-proposed semantics in the dimensions. To this end, we design a visual analytics system to monitor the regularization process, explore the HD latent space, and interpret the latent dimensions' semantics. We validate the effectiveness of our embedding regularization and interpretation approach through both quantitative and qualitative evaluations.
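The β-VAE idea this abstract borrows weights the KL-divergence term of the standard VAE objective by a factor β > 1, pressuring the model to push uninformative latent dimensions toward the prior and concentrate information in a few dimensions — the compression effect described above. A minimal NumPy sketch of that objective follows; it is a generic illustration of β-VAE, not the paper's implementation, and the function name and squared-error reconstruction term are assumptions:

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """Generic beta-VAE objective: reconstruction error plus a
    beta-weighted KL divergence of N(mu, diag(exp(logvar))) from N(0, I).

    x, x_recon: (batch, data_dim) inputs and reconstructions
    mu, logvar: (batch, latent_dim) encoder outputs
    """
    # Per-sample squared reconstruction error, averaged over the batch.
    recon = np.mean(np.sum((x - x_recon) ** 2, axis=1))
    # Closed-form KL divergence for a diagonal Gaussian posterior.
    kl = np.mean(-0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar), axis=1))
    return recon + beta * kl
```

With β > 1 the KL term dominates, so any latent dimension that does not pay for itself in reconstruction quality collapses to the prior N(0, 1), which is what makes the surviving dimensions individually more semantically salient.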
Contextual in situ help for visual data interfaces
Pramod Chundury, M. A. Yalçın, Jon Crabtree, A. Mahurkar, Lisa M Shulman, N. Elmqvist
Information Visualization, 2022-09-09. DOI: 10.1177/14738716221120064
Abstract: As the complexity of data analysis increases, even well-designed data interfaces must guide experts in transforming their theoretical knowledge into actual features supported by the tool. This challenge is even greater for casual users, who are increasingly turning to data analysis to solve everyday problems. To address this challenge, we propose data-driven, contextual, in situ help features that can be implemented in visual data interfaces. We introduce five modes of help-seeking: (1) contextual help on selected interface elements, (2) topic listing, (3) overview, (4) guided tour, and (5) notifications. What distinguishes our work from general user-interface help systems is that data visualizations provide a unique environment for embedding context-dependent data inside on-screen messaging. We demonstrate the usefulness of such contextual help through case studies of two visual data interfaces, Keshif and POD-Vis. We implemented and evaluated the help modes with two sets of participants and found that directly selecting user interface elements was the most useful mode.
Visualisation of hierarchical multivariate data: Categorisation and case study on hate speech
E. Kavaz, A. Puig, I. Rodríguez, Reyes Chacón, David De-La-Paz, Adrià Torralba, Montserrat Nofre, M. Taulé
Information Visualization, 2022-09-02. DOI: 10.1177/14738716221120509
Abstract: Multivariate hierarchical data plays an important role in many applications, and finding the visualisation that best fits concrete data is crucial to exploring and understanding the relationships within it. This paper proposes a categorisation of hierarchical data — Elongated and Compact — based on the inner shape of the hierarchies (the connectivity degree of the internal nodes, the number of nodes, etc.) that can be applied to any hierarchical data. Based on this taxonomy, we explore implicit and explicit layouts — Tree, Circle Packing, Force, and Radial — to provide users with a complete view of the data. We hypothesise that Tree and Circle Packing fit Elongated structures, while Force and Radial fit Compact ones. In addition, we cluster multivariate features to embed them in the hierarchical layouts. In particular, we propose two different glyphs, one-by-one and all-in-one, and argue that one-by-one glyphs are the most suitable for showing the distribution of several features along with the hierarchical structure. To validate our hypotheses, we conducted a user study with 35 participants using a hate-speech-annotated corpus drawn from 4359 comments posted in online Spanish newspapers. The results indicated that users preferred the Tree layout over the other three layouts (Circle Packing, Force, Radial) with both types of structure (Elongated and Compact). However, when the analysis focused only on the Radial and Force layouts, both scored significantly higher with Compact than with Elongated data. Moreover, participants scored the one-by-one glyph higher than the all-in-one glyph, but the difference was not significant.
Interactive optimization of embedding-based text similarity calculations
D. Witschard, Ilir Jusufi, R. M. Martins, K. Kucher, A. Kerren
Information Visualization, 2022-08-03. DOI: 10.1177/14738716221114372
Abstract: Comparing text documents is an essential task for a variety of applications within diverse research fields, and several different methods have been developed for it. However, calculating text similarity is an ambiguous, context-dependent task, and many open challenges remain. In this paper, we present a novel method for text similarity calculation based on combining embedding technology with ensemble methods. By using several embeddings instead of only one, we show that it is possible to achieve higher quality, which in turn is a key factor in developing high-performing applications that exploit text similarity. We also provide a prototype visual analytics tool that helps the analyst find optimally performing ensembles and gain insight into the inner workings of the similarity calculations. Furthermore, we discuss the generalizability of our key ideas to fields beyond the scope of text analysis.
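The core idea in the abstract above — scoring document similarity with an ensemble of embeddings rather than a single one — can be sketched as a weighted average of per-embedding cosine similarities. The following is a generic illustration under assumed names (`embedders` as a list of callables mapping text to a vector), not the authors' implementation:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def ensemble_similarity(text_a, text_b, embedders, weights=None):
    """Combine several embedding models into one similarity score.

    embedders: list of callables, each mapping a text to a vector
    weights:   optional per-embedder weights (uniform if omitted)
    """
    sims = np.array([cosine(e(text_a), e(text_b)) for e in embedders])
    if weights is None:
        weights = np.ones(len(sims))
    weights = np.asarray(weights, dtype=float)
    return float(sims @ weights / weights.sum())
```

Tuning the per-embedder weights is exactly the kind of optimization an interactive tool like the prototype described in the abstract would support, since no single weighting suits every corpus or task.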
Emotion visualization system based on physiological signals combined with the picture and scene
Wenqian Lin, C. Li, Yunjian Zhang
Information Visualization, 2022-07-01. DOI: 10.1177/14738716221109146
Abstract: In this paper, an emotion visualization system and an emotion recognition and judgment system are established. Twenty subjects were tested on both systems, and the emotional trend changes given by the emotion judgment system based on the optimal signal-feature set were compared with those given by a conventional machine-learning-based method. The results show that the emotional trend changes given by emotion visualization based on picture and scene change are roughly consistent with those obtained from the emotion judgment system. Regarding the real-time performance and interactivity of emotion judgment, emotion visualization based on scene change outperforms that based on picture change, and the emotion judgment system based on the optimal signal-feature set outperforms the system based on the conventional machine-learning-based method. The subjects' test experience has an impact on the results: a multi-dimensional interactive environment influences people's emotional changes more readily than a single-dimensional one.
Book Review: From Data to Stories: An end to end guide to Storytelling with Data Comics for the absolute beginner
M. S. Rana, Arnapurna Rath
Information Visualization, 2022-06-22. DOI: 10.1177/14738716221102927
ATOVis – A visualisation tool for the detection of financial fraud
Catarina Maçãs, Evgheni Polisciuc, P. Machado
Information Visualization, 2022-06-07. DOI: 10.1177/14738716221098074
Abstract: Fraud detection concerns preventing financial losses for institutions and their clients. It is a task of high responsibility and, therefore, an important phase of the decision-making chain. Today, the experts in charge base their analysis on tabular data, usually presented in spreadsheets and seldom supplemented with simple visualisations. This type of inspection is laborious and time-consuming, however, and may be of little use for analysing and gaining an overview of complex transactional data. To aid the inspection of fraudulent activities, we developed ATOVis, a visualisation tool that enables fast analysis and detection of suspicious behaviours. We aim to ease and accelerate fraud detection by providing an overview of specific patterns within the data and enabling details on demand. ATOVis applies visualisation techniques to the finance domain, specifically e-commerce, contributing to the state of the art as the first visualisation tool primarily specialised in Account Takeover (ATO) patterns. In particular, this paper contributes: a task abstraction for detecting a specific financial fraud pattern, ATO; two models for the visualisation of ATO; and a multiscale timeline that provides an overview of the data. We validate our tool through user testing with experts in fraud detection and experts from other fields of data science. Based on the feedback provided by the analysts, we conclude that ATOVis is an efficient and effective tool for detecting specific fraud patterns and can improve analysts' work.