"Visualising the dark sky: IEEE SciVis contest 2015"
Theodoros Christoudias, C. Kallidonis, L. Koutsantonis, Christos Lemesios, Lefteris Markou, Constantinos Sophocleous
DOI: 10.1109/SciVis.2015.7429496
Abstract: Cosmological simulations are a cornerstone of our understanding of the Universe during its 13.7-billion-year progression from the small fluctuations that we see in the cosmic microwave background to today, where we are surrounded by galaxies and clusters of galaxies interconnected by a vast cosmic web. In this paper, we present our results for the 2015 IEEE Scientific Visualization Contest, which pertains to datasets derived from the Dark Sky Simulations [10]. We ingest, process, and visualise cosmological data of particle clouds and halo formations in terms of their positions, and shed light on properties of scientific interest including gravitational potential, velocity, and spin.

{"title":"High performance flow field visualization with high-order access dependencies","authors":"Jiang Zhang, Hanqi Guo, Xiaoru Yuan","doi":"10.1109/SciVis.2015.7429515","DOIUrl":"https://doi.org/10.1109/SciVis.2015.7429515","url":null,"abstract":"We present a novel model based on high-order access dependencies for high performance pathline computation in flow field. The high-order access dependencies are defined as transition probabilities from one data block to other blocks based on a few historical data accesses. Compared with existing methods which employed first-order access dependencies, our approach takes the advantages of high order access dependencies with higher accuracy and reliability in data access prediction. In our work, high-order access dependencies are calculated by tracing densely-seeded pathlines. The efficiency of our proposed approach is demonstrated through a parallel particle tracing framework with high-order data prefetching. Results show that our method can achieve higher data locality than the first-order access dependencies based method, thereby reducing the I/O requests and improving the efficiency of pathline computation in various applications.","PeriodicalId":123718,"journal":{"name":"2015 IEEE Scientific Visualization Conference (SciVis)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131617663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"3D superquadric glyphs for visualizing myocardial motion"
T. Chitiboi, M. Neugebauer, S. Schnell, M. Markl, L. Linsen
DOI: 10.1109/SCIVIS.2015.7429504
Abstract: Various cardiac diseases can be diagnosed by the analysis of myocardial motion. Relevant biomarkers are radial, longitudinal, and rotational velocities of the cardiac muscle computed locally from MR images. We designed a visual encoding that maps these three attributes to glyph shapes according to a barycentric space formed by 3D superquadric glyphs. The glyphs show aggregated myocardial motion information following the AHA model and are displayed in a respective 3D layout.

"Visual verification of space weather ensemble simulations"
A. Bock, A. Pembroke, M. L. Mays, L. Rastaetter, T. Ropinski, A. Ynnerman
DOI: 10.1109/SciVis.2015.7429487
Abstract: We propose a system to analyze and contextualize simulations of coronal mass ejections. Because current simulation techniques require manual input, uncertainty is introduced into the simulation pipeline, leading to inaccurate predictions that can be mitigated through ensemble simulations. We provide the space weather analyst with a multi-view system offering visualizations to (1) compare ensemble members against ground-truth measurements, (2) inspect time-dependent information derived from optical flow analysis of satellite images, and (3) combine satellite images with a volumetric rendering of the simulations. This three-tier workflow gives experts tools to discover correlations between prediction errors and simulation parameters, thus increasing knowledge about the evolution and propagation of coronal mass ejections that pose a danger to Earth and interplanetary travel.

{"title":"Visualizing 3D flow through cutting planes","authors":"C. Ware, A. Stevens","doi":"10.1109/SciVis.2015.7429513","DOIUrl":"https://doi.org/10.1109/SciVis.2015.7429513","url":null,"abstract":"Studies have found conflicting results regarding the effectiveness of tube-like structures for representing 3D flow data. This paper presents the findings of a small-scale pilot study contrasting static monoscopic depth cues to ascertain their importance in perceiving the orientation of a three-dimensional glyph with respect to a cutting plane. A simple striped texture and shading were found to reduce judgement errors when used with a 3D tube glyph as compared to plain or shaded line glyphs. A discussion of considerations for a full-scale study and possible future work follows.","PeriodicalId":123718,"journal":{"name":"2015 IEEE Scientific Visualization Conference (SciVis)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114982573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"A visual voting framework for weather forecast calibration"
Hongsen Liao, Yingcai Wu, Li Chen, T. Hamill, Yunhai Wang, Kan Dai, Hui Zhang, Wei Chen
DOI: 10.1109/SciVis.2015.7429488
Abstract: Numerical weather predictions have been widely used for weather forecasting. Many large meteorological centers routinely produce highly accurate ensemble forecasts to provide effective weather forecast services. However, biases frequently exist in forecast products for various reasons, such as the imperfection of the weather forecast models. Failure to identify and neutralize these biases results in unreliable forecast products that can mislead analysts and, consequently, in unreliable weather predictions. The analog method has commonly been used to overcome the biases. Nevertheless, this method has serious limitations, including the difficulty of finding effective similar past forecasts, the large search space for proper parameters, and the lack of support for interactive, real-time analysis. In this study, we develop a visual analytics system based on a novel voting framework to circumvent these problems. The framework adopts the idea of majority voting to judiciously combine different variants of analog methods towards effective retrieval of proper analogs for calibration. The system seamlessly integrates the analog methods into an interactive visualization pipeline with a set of coordinated views that characterize the different methods. Instant visual hints are provided in the views to guide users in finding and refining analogs. We have worked closely with domain experts in meteorological research to develop the system. The effectiveness of the system is demonstrated in two case studies, and an informal evaluation with the experts confirms its usability and usefulness.

"Real-time uncertainty visualization for B-mode ultrasound"
C. Berge, D. Declara, C. Hennersperger, Maximilian Baust, Nassir Navab
DOI: 10.1109/SciVis.2015.7429489
Abstract: B-mode ultrasound is a very well established imaging modality and is widely used in many of today's clinical routines. However, acquiring good images and interpreting them correctly is a challenging task, because the complex ultrasound image formation process depends on a large number of parameters. To facilitate ultrasound acquisitions, we introduce a novel framework for real-time uncertainty visualization in B-mode images. We compute real-time per-pixel ultrasound confidence maps, which we fuse with the original ultrasound image to provide the user with interactive feedback on the quality and credibility of the image. In addition to a standard color overlay mode, primarily intended for educational purposes, we propose two perceptual visualization schemes to be used in clinical practice. Our mapping of uncertainty to chroma uses the perceptually uniform L*a*b* color space to ensure that the perceived brightness of B-mode ultrasound remains the same. The alternative mapping of uncertainty to fuzziness keeps the B-mode image in its original grayscale domain and locally blurs or sharpens the image based on the uncertainty distribution. An elaborate evaluation of our system and user studies with both medical students and expert sonographers demonstrate the usefulness of the proposed technique. In particular for ultrasound novices, such as medical students, our technique yields powerful visual cues to evaluate the image quality and thereby learn the ultrasound image formation process. Furthermore, seeing the uncertainty distribution adjust to the transducer positioning in real time also provides expert clinicians with strong visual feedback on their actions. This helps them to optimize the acoustic window and can improve the general clinical value of ultrasound.

{"title":"Halos in a dark sky: Interactively exploring the structure of dark matter halo merger trees","authors":"K. Almryde, A. Forbes","doi":"10.1109/SciVis.2015.7429495","DOIUrl":"https://doi.org/10.1109/SciVis.2015.7429495","url":null,"abstract":"This paper presents a novel application that visualizes dark matter halo merger trees and their evolution through space and time. Our application enables users to interact with individual halos within these trees in order to perform a range of visual analysis tasks, including: identifying the substructure and superstructure of the halos; observing the movement of halos across a custom range of time steps; and comparing the branching attributes of multiple trees. Central to our application is the ability to navigate the halos by interactively \"jumping\" from tree to tree. By clearly marking halos that have \"tributaries\" — that is, that split off into multiple halos or merge with one or more halos — we make it easy for the user to traverse the complex structure of the universe. Our application is publicly available1 online and runs at interactive rates on the browser using hardware-accelerated graphics.","PeriodicalId":123718,"journal":{"name":"2015 IEEE Scientific Visualization Conference (SciVis)","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134450450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Inviwo - An extensible, multi-purpose visualization framework"
E. Sundén, P. Steneteg, S. Kottravel, Daniel Jönsson, Rickard Englund, M. Falk, T. Ropinski
DOI: 10.1109/SciVis.2015.7429514
Abstract: To enable visualization research that impacts other scientific domains, the availability of easy-to-use visualization frameworks is essential. At the same time, an easy-to-use system also has to be adapted to the capabilities of modern hardware architectures, as only this allows for realizing interactive visualizations. With this trade-off in mind, we have designed and realized the cross-platform Inviwo (Interactive Visualization Workshop) visualization framework, which supports both interactive visualization research and efficient visualization application development and deployment. In this poster we give an overview of the architecture behind Inviwo and show how its design enables us and other researchers to realize visualization ideas efficiently. Inviwo consists of a modern, lightweight, graphics-independent core, which is extended by optional modules that encapsulate visualization algorithms, well-known utility libraries, and commonly used parallel-processing APIs (such as OpenGL and OpenCL). The core provides a simple structure for bridging the different modules, handling data transfer across architectures and devices with an easy-to-use scene graph and minimal programming. By building the base structures in a modern way, and by providing intuitive methods for extending functionality and creating modules on top of other modules, we hope that Inviwo can help the visualization community perform research through its rapid-prototyping design and GUI, while allowing users to later take advantage of the results implemented in the system in any way they desire. Inviwo is publicly available at www.inviwo.org and can be used freely by anyone under a permissive free software license (Simplified BSD).

"Auto-calibration of multi-projector displays with a single handheld camera"
Sanghun Park, Hyunggoog Seo, Seunghoon Cha, Jun-yong Noh
DOI: 10.1109/SciVis.2015.7429493
Abstract: We present a novel approach that utilizes a simple handheld camera to automatically calibrate multi-projector displays. Most existing studies adopt active structured light patterns to determine the relationship between the camera and the projectors. The camera used is typically expensive and requires an elaborate installation process depending on the scalability of the application. Moreover, observing the entire display area with the camera is almost impossible in a small space surrounded by walls, as there is not enough distance for the camera to capture the entire scene. We tackle these issues by requiring only a portion of the walls to be visible to an ordinary handheld camera. This becomes possible through our new structured light pattern scheme based on a perfect submap and a geometric calibration that successfully exploits the geometric information of multi-planar environments. We demonstrate that an immersive display in a small space, such as an ordinary room, can be effectively created using images captured by a handheld camera.
