{"title":"IN2CO - A Visualization Framework for Intuitive Collaboration","authors":"Franca-Alexandra Rupprecht, B. Hamann, C. Weidig, J. Aurich, A. Ebert","doi":"10.2312/eurovisshort.20161174","DOIUrl":"https://doi.org/10.2312/eurovisshort.20161174","url":null,"abstract":"Today, the need for interaction and visualization techniques to fulfill user requirements for collaborative work is ever increasing. Current approaches do not suffice since they do not consider the simultaneous work of participating users, different views of the data being analyzed, or the exchange of information between different data emphases. We introduce Intuitive Collaboration (IN2CO), a scalable visualization framework that supports decision-making processes concerning multilevels and multi-roles. IN2CO improves the state of the art by integrating ubiquitous technologies and existing techniques to explore and manipulate data and dependencies collaboratively. A prototype has been tested by mechanical engineers with expertise in factory planning. Preliminary results imply that IN2CO supports communication and decision-making in a team-oriented manner.","PeriodicalId":224719,"journal":{"name":"Eurographics Conference on Visualization","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126306441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Geometry-Aware Visualization of Performance Data","authors":"Tom Vierjahn, B. Hentschel, T. Kuhlen","doi":"10.2312/eurp.20161136","DOIUrl":"https://doi.org/10.2312/eurp.20161136","url":null,"abstract":"Phenomena in the performance behaviour of high-performance computing (HPC) applications can stem from the HPC system itself, from the application's code, but also from the simulation domain. In order to analyse the latter phenomena, we propose a system that visualizes profile-based performance data in its spatial context, i.e., on the geometry, in the simulation domain. It thus helps HPC experts but also simulation experts understand the performance data better. In addition, our tool reduces the initially large search space by automatically labelling large-variation views on the data which require detailed analysis.","PeriodicalId":224719,"journal":{"name":"Eurographics Conference on Visualization","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114966416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"InVITe - Towards Intuitive Visualization of Iterations over Text","authors":"Houda Lamqaddam, J. Aerts","doi":"10.2312/eurp.20161141","DOIUrl":"https://doi.org/10.2312/eurp.20161141","url":null,"abstract":"With InVITe, we are working towards intuitive visualization to support review of iterative modifications on text documents. In order to accomplish this, we perform simple matching of text snippets between the two versions of text, across a large range of parameter settings. Next, an overview graphic indicating the effect of parameter space on the output allows the user to select those combinations that are of interest. Finally, such selection will display an alluvial diagram with annotations and covering different resolutions. \u0000 \u0000With this tool, co-authors can keep an overview of changes made, both structural and local.","PeriodicalId":224719,"journal":{"name":"Eurographics Conference on Visualization","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128945910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visualization of Publication Impact","authors":"E. Maguire, Javier Martin Montull, Gilles Louppe","doi":"10.2312/eurovisshort.20161169","DOIUrl":"https://doi.org/10.2312/eurovisshort.20161169","url":null,"abstract":"Measuring scholarly impact has been a topic of much interest in recent years. While many use the citation count as a primary indicator of a publications impact, the quality and impact of those citations will vary. Additionally, it is often difficult to see where a paper sits among other papers in the same research area. Questions we wished to answer through this visualization were: is a publication cited less than publications in the field?; is a publication cited by high or low impact publications?; and can we visually compare the impact of publications across a result set? In this work we address the above questions through a new visualization of publication impact. Our technique has been applied to the visualization of citation information in InspireHep (www.inspirehep.net), the largest high energy physics publication repository.","PeriodicalId":224719,"journal":{"name":"Eurographics Conference on Visualization","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126410984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Memory Efficient Parallel Ray-casting Algorithm for Unstructured Grid Volume Rendering","authors":"Duksu Kim","doi":"10.2312/eurp.20171157","DOIUrl":"https://doi.org/10.2312/eurp.20171157","url":null,"abstract":"We present a novel memory-efficient parallel ray casting algorithm for unstructured grid volume rendering on multi-core CPUs. Our method is based on the Bunyk ray casting algorithm. To solve the high memory overhead problem of the Bunyk algorithm, we allocate a fixed size local buffer for each thread and the local buffers contain information of recently visited faces. The stored information is used by other rays or replaced by other face's information. To improve the utilization of local buffers, we propose an image-plane based ray grouping algorithm that makes ray groups have high coherency. The ray groups are then distributed to computing threads and each thread processes the given groups independently. We also propose a novel hash function that uses the index of faces as keys for calculating the buffer index each face will use to store the information. To see the benefits of our method, we applied it to three unstructured grid datasets with different sizes and measured the performance. We found that our method requires just 6% of the memory space compared with the Bunyk algorithm for storing face information. Also it shows compatible performance with the Bunyk algorithm even though it uses less memory. In addition, our method achieves up to 22% higher performance for a large-scale unstructured grid dataset with less memory than Bunyk algorithm. These results show the robustness and efficiency of our method and it demonstrates that our method is suitable to volume rendering for a large-scale unstructured grid dataset.","PeriodicalId":224719,"journal":{"name":"Eurographics Conference on Visualization","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128676737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Testbed Combining Visual Perception Models for Geographic Gaze Contingent Displays","authors":"K. Bektaş, A. Çöltekin, J. Krüger, A. Duchowski","doi":"10.2312/eurovisshort.20151127","DOIUrl":"https://doi.org/10.2312/eurovisshort.20151127","url":null,"abstract":"We present a testbed featuring gaze-contingent displays (GCDs), in which we combined multiple models of the human visual system (HVS) to manage the visual level of detail. GCDs respond to the viewer’s gaze in real-time, rendering a space-variant visualization. Our testbed is optimized for testing mathematical models of the human visual perception utilized in GCDs. Specifically, we combined models of contrast sensitivity, color perception and depth of field; and customized our implementation for geographic imagery. In this customization process, similarly to the geographic information systems (GIS), we georeference the input images, add vector layers on demand, and enable stereo viewing. After the implementation, we studied the computational and perceptual benefits of the studied perceptual models in terms of data reduction and user experience in geographic information science (GIScience) domain. Our computational validation experiments and the user study results indicate the HVS-based data reduction solutions are competitive, and encourage further research. We believe the research outcome and the testbed will be relevant in domains where visual interpretation of imagery is a part of professional life; such as in search and rescue, damage assessment in hazards, geographic image interpretation or urban planning.","PeriodicalId":224719,"journal":{"name":"Eurographics Conference on Visualization","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133461800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual Techniques to Support Exploratory Analysis of Temporal Graph Data","authors":"Natalie Kerracher, J. Kennedy, K. Chalmers, Martin Graham","doi":"10.2312/eurovisshort.20151133","DOIUrl":"https://doi.org/10.2312/eurovisshort.20151133","url":null,"abstract":"Recently, much research has focused on developing techniques for the visual representation of temporal graph data. This paper takes a wider look at the visual techniques involved in exploratory analysis of such data, considering the variety of sub tasks and contextual tasks required to understand change in a graph over time, and the visual techniques which are able to support these tasks. In so doing, we highlight a number of tasks which are less well supported by existing techniques, which could prove worthwhile avenues for future research.","PeriodicalId":224719,"journal":{"name":"Eurographics Conference on Visualization","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130111214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic Scheduling for Progressive Large-Scale Visualization","authors":"M. Flatken, A. Berres, Jonas Merkel, I. Hotz, A. Gerndt, H. Hagen","doi":"10.2312/eurovisshort.20151122","DOIUrl":"https://doi.org/10.2312/eurovisshort.20151122","url":null,"abstract":"The ever-increasing compute capacity of high-performance systems enables scientists to simulate physical phenomena with a high spatial and temporal accuracy. Thus, the simulation output can yield dataset sizes of many terabytes. An efficient analysis and visualization process becomes very difficult especially for explorative scenarios where users continuously change input parameters. Using a distributed rendering pipeline may relieve the visualization frontend considerably but is often not sufficient. Therefore, we additionally propose a progressive data streaming and rendering approach. The main contribution of our method is the importance-guided order of data processing for block structured datasets. This requires a dynamic scheduling of data chunks on the parallel post-processing system which has been implemented by using an R-Tree. In this paper, we demonstrate the efficiency of our implementation for view-dependent feature extraction with varying viewpoints.","PeriodicalId":224719,"journal":{"name":"Eurographics Conference on Visualization","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124927606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Understanding Enjoyment and Flow in Information Visualization","authors":"B. Saket, C. Scheidegger, S. Kobourov","doi":"10.2312/eurovisshort.20151134","DOIUrl":"https://doi.org/10.2312/eurovisshort.20151134","url":null,"abstract":"Traditionally, evaluation studies in information visualization have measured effectiveness by assessing performance time and accuracy. More recently, there has been a concerted effort to understand aspects beyond time and errors. In this paper we study enjoyment, which, while arguably not the primary goal of visualization, has been shown to impact performance and memorability. Different models of enjoyment have been proposed in psychology, education and gaming; yet there is no standard approach to evaluate and measure enjoyment in visualization. In this paper we relate the flow model of Csikszentmihalyi to Munzner's nested model of visualization evaluation and previous work in the area. We suggest that, even though previous papers tackled individual elements of flow, in order to understand what specifically makes a visualization enjoyable, it might be necessary to measure all specific elements.","PeriodicalId":224719,"journal":{"name":"Eurographics Conference on Visualization","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121680633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Functional Unit Maps for Data-Driven Visualization of High-Density EEG Coherence","authors":"M. T. Caat, N. Maurits, J. Roerdink","doi":"10.2312/VisSym/EuroVis07/259-266","DOIUrl":"https://doi.org/10.2312/VisSym/EuroVis07/259-266","url":null,"abstract":"Synchronous electrical activity in different brain regions is generally assumed to imply functional relationships between these regions. A measure for this synchrony is electroencephalography (EEG) coherence, computed between pairs of signals as a function of frequency. Existing high-density EEG coherence visualizations are generally either hypothesis-driven, or data-driven graph visualizations which are cluttered. In this paper, a new method is presented for data-driven visualization of high-density EEG coherence, which strongly reduces clutter and is referred to as functional unit (FU) map. Starting from an initial graph, with vertices representing electrodes and edges representing significant coherences between electrode signals, we define an FU as a set of electrodes represented by a clique consisting of spatially connected vertices. In an FU map, the spatial relationship between electrodes is preserved, and all electrodes in one FU are assigned an identical gray value. Adjacent FUs are visualized with different gray values and FUs are connected by a line if the average coherence between FUs exceeds a threshold. Results obtained with our visualization are in accordance with known electrophysiological findings. FU maps can be used as a preprocessing step for conventional analysis.","PeriodicalId":224719,"journal":{"name":"Eurographics Conference on Visualization","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124894665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}