{"title":"Visualizing large-scale atomistic simulations in ultra-resolution immersive environments","authors":"K. Reda, A. Knoll, K. Nomura, M. Papka, Andrew E. Johnson, J. Leigh","doi":"10.1109/LDAV.2013.6675159","DOIUrl":"https://doi.org/10.1109/LDAV.2013.6675159","url":null,"abstract":"Molecular Dynamics is becoming a principal methodology in the study of nanoscale systems, paving the way for innovations in battery design and alternative fuel applications. With the increasing availability of computational power and advances in modeling, atomistic simulations are rapidly growing in scale and complexity. Despite the plethora of molecular visualization techniques, visualizing and exploring large-scale atomistic simulations remain difficult. Existing molecular representations are not perceptually scalable and often adopt a rigid definition of surfaces, making them inappropriate for nanostructured materials where boundaries are inherently ill-defined. In this paper, we present an application for the interactive visualization and exploration of large-scale atomistic simulations in ultra-resolution immersive environments. We employ a hybrid representation which combines solid ball-and-stick glyphs with volumetric surfaces to visually convey the uncertainty in molecular boundaries at the nanoscale. 
We also describe a scalable, distributed GPU ray-casting implementation capable of rendering complex atomistic simulations with millions of atoms in real-time.","PeriodicalId":266607,"journal":{"name":"2013 IEEE Symposium on Large-Scale Data Analysis and Visualization (LDAV)","volume":"556 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115391173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
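The abstract above pairs glyph rendering with volumetric surfaces derived from atomic density. As a rough illustration of the volumetric half of that idea (a minimal CPU sketch, not the authors' distributed GPU ray-caster; all function names and parameters here are illustrative):

```python
import numpy as np

def atoms_to_density(centers, shape=(32, 32, 32), sigma=1.5):
    """Splat atom centers into a scalar density grid using Gaussian kernels,
    giving the 'fuzzy' boundaries the paper's volumetric surfaces convey."""
    grid = np.zeros(shape)
    ii, jj, kk = np.indices(shape)
    for ci, cj, ck in centers:
        r2 = (ii - ci) ** 2 + (jj - cj) ** 2 + (kk - ck) ** 2
        grid += np.exp(-r2 / (2.0 * sigma ** 2))
    return grid

def ray_march(grid, axis=0, absorption=0.5):
    """Front-to-back compositing of density slices along one grid axis:
    the serial analogue of one ray-casting pass."""
    transmittance = np.ones(grid.shape[1:])
    radiance = np.zeros(grid.shape[1:])
    for sl in np.moveaxis(grid, axis, 0):
        alpha = 1.0 - np.exp(-absorption * sl)  # slice opacity from density
        radiance += transmittance * alpha       # accumulate emitted light
        transmittance *= (1.0 - alpha)          # attenuate what lies behind
    return radiance
```

A real implementation would cast per-pixel rays on the GPU and distribute bricks of the grid across nodes; this sketch only shows the compositing math.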
{"title":"Application-specific compression of large MD data preserving physical characteristics","authors":"P. Gralka, Sebastian Grottel, G. Reina, T. Ertl","doi":"10.1109/LDAV.2013.6675162","DOIUrl":"https://doi.org/10.1109/LDAV.2013.6675162","url":null,"abstract":"Application areas like physics or thermodynamics often require simulations of very large data sets, up to the order of 10^12 particles or even larger, to obtain results relevant for realistic industrial processes. Persisting such data is too costly, prohibiting interactive visual analysis in a classical post-processing fashion. Thus, analysis is restricted to statistical aggregation or visual in-situ exploration, both requiring an inkling of the results beforehand. We alleviate this issue by applying an application-optimized lossy compression. Reducing the size while at the same time preserving relevant physical characteristics of the data allows for accessibility on workstations and practical long-term storage. The compression is achieved by generating a density volume that is processed using wavelet decomposition, quantization and run-length encoding. Our reconstruction of particle data ensures the restoration of physically relevant properties. It employs a model based on stochastic distributions complemented by further adjustments. 
We evaluate the precision of the reconstruction for several data sets and a wide range of compression variants to show the effectiveness and user-adjustable trade-offs of the presented method.","PeriodicalId":266607,"journal":{"name":"2013 IEEE Symposium on Large-Scale Data Analysis and Visualization (LDAV)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124876089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
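The pipeline described above (density volume → wavelet decomposition → quantization → run-length encoding) can be sketched in miniature. The following is a 1-D, single-level Haar stand-in for the paper's 3-D multi-level method; the function names, quantization step, and encoding format are assumptions, not the authors' code:

```python
import numpy as np

def haar_1d(x):
    """One level of the orthonormal Haar wavelet transform."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return s, d

def compress(x, step=0.1):
    """Haar decomposition -> uniform quantization -> run-length encoding."""
    s, d = haar_1d(x)
    q = np.round(np.concatenate([s, d]) / step).astype(int)
    runs, i = [], 0                          # RLE exploits runs of zeros
    while i < len(q):
        j = i
        while j < len(q) and q[j] == q[i]:
            j += 1
        runs.append((int(q[i]), j - i))
        i = j
    return runs, step

def decompress(runs, step):
    """Invert RLE and quantization, then the Haar transform."""
    q = np.concatenate([np.full(n, v) for v, n in runs]) * step
    s, d = np.split(q, 2)
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x
```

The lossy step is the quantization; the wavelet transform concentrates energy in few coefficients so the quantized array is dominated by zero runs, which RLE stores cheaply.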
{"title":"Filtering edge for exploration of large graphs","authors":"Xiaodi Huang","doi":"10.1109/LDAV.2013.6675166","DOIUrl":"https://doi.org/10.1109/LDAV.2013.6675166","url":null,"abstract":"Visual clutter in the layout of a large graph is mainly caused by the overwhelming number of edges. Filtering is one way to reduce the clutter. We regard a filtered graph as a compressed version of the original graph. Based on this view, a filtering approach is presented to reduce the visual clutter of a layout in a way that hidden patterns can be revealed gradually. Experiments demonstrate the performance of the proposed approach in our prototype system. As evidenced by real examples, the system allows users to explore a graph at adjustable, continuous levels of details in an interactive way. This new approach is able to reveal more hidden patterns in graphs than existing approaches, providing a new way to gain insights into graph data.","PeriodicalId":266607,"journal":{"name":"2013 IEEE Symposium on Large-Scale Data Analysis and Visualization (LDAV)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116759164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
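One simple way to realize edge filtering at adjustable, continuous levels of detail is to rank edges by some importance score and keep a user-controlled fraction. This is a generic weight-ranking stand-in, not necessarily the paper's filtering criterion:

```python
def filter_edges(edges, keep_fraction):
    """Keep only the strongest edges of a weighted graph.
    Lowering keep_fraction reveals coarser structure; raising it
    restores detail -- a continuous level-of-detail control.
    edges: list of (u, v, weight) tuples."""
    ranked = sorted(edges, key=lambda e: e[2], reverse=True)
    k = max(1, int(round(keep_fraction * len(ranked))))
    return ranked[:k]
```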
{"title":"A provably-robust sampling method for generating colormaps of large data","authors":"D. Thompson, Janine Bennett, Seshadhri Comandur, Ali Pinar","doi":"10.1109/LDAV.2013.6675161","DOIUrl":"https://doi.org/10.1109/LDAV.2013.6675161","url":null,"abstract":"First impressions from initial renderings of data are crucial for directing further exploration and analysis. In most visualization systems, default colormaps are generated by simply linearly interpolating color in some space based on a value's placement between the minimum and maximum taken on by the dataset. We design a simple sampling-based method for generating colormaps that highlights important features. We use random sampling to determine the distribution of values observed in the data. The sample size required is independent of the dataset size and only depends on certain accuracy parameters. This leads to a computationally cheap and robust algorithm for colormap generation. Our approach (1) uses perceptual color distance to produce palettes from color curves, (2) allows the user to either emphasize or de-emphasize prominent values in the data, (3) uses quantiles to map distinct colors to values based on their frequency in the dataset, and (4) supports the highlighting of either inter- or intra-mode variations in the data.","PeriodicalId":266607,"journal":{"name":"2013 IEEE Symposium on Large-Scale Data Analysis and Visualization (LDAV)","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125541336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
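The quantile idea in point (3) above can be sketched as follows: sample the data, place bin edges at evenly spaced quantiles of the sample, and map each palette color to one equal-frequency bin. The function name and parameters are illustrative, and the paper's perceptual color-distance and emphasis controls are not reproduced:

```python
import numpy as np

def quantile_colormap(data, palette, sample_size=1000, seed=0):
    """Assign each palette color to an equal-frequency value bin,
    estimated from a random sample whose size is independent of
    the dataset size (the key scalability property claimed above)."""
    rng = np.random.default_rng(seed)
    sample = rng.choice(data, size=min(sample_size, len(data)), replace=False)
    # Bin boundaries at evenly spaced quantiles of the sampled distribution.
    edges = np.quantile(sample, np.linspace(0.0, 1.0, len(palette) + 1))
    def color_of(v):
        # Interior edges only; values outside the range clamp to end colors.
        idx = int(np.searchsorted(edges[1:-1], v, side="left"))
        return palette[idx]
    return color_of
```

With a linear min-max colormap, a dataset whose values cluster near one end wastes most of the palette; the quantile mapping instead spends colors where values actually occur.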
{"title":"Visualization of residents in long-term care centres through mobile natural user interfaces (NUI)","authors":"B. Arunachalan, S. Diamond, Anne Stevens, B. Talaie, Maziar Ghaderi","doi":"10.1109/LDAV.2013.6675175","DOIUrl":"https://doi.org/10.1109/LDAV.2013.6675175","url":null,"abstract":"Large volumes of formal data and informal information are generated in daily workflow activities of caregivers at long-term care centers. Health data are captured formally through record keeping using paper-based forms for regular updates; however, capturing informal information related to resident activities is more challenging. This unstructured data covers social contacts, family events, therapy sessions, and other happenings. The challenges arise firstly from digitizing and aggregating these data sets, because in long-term care, both datasets are essential to assess and support well-being. Secondly, visual analytics seeks to provide caregivers with much better and more effective ways to understand changes in residents' status over long durations, while improving their services immediately. Automated processing and comparison of data is valuable yet human judgment is required to apply analyses to the care of specific residents and develop support across similar groups. This suggests that the integration of automated analysis methods and interactive visualization methods is necessary. Thirdly, direct, multi-sensor handheld devices promise a set of natural input modalities, providing interaction techniques such as speech, gesture, touch, and other sensor-based techniques that may facilitate just-in-time ease of analysis. In our research, we concentrate on providing effective visual analytics tools combined with appropriate natural user interfaces (NUI). 
In this poster, we present a set of NUI designs towards creating a social media platform for caregivers, which integrates automated analysis methods and natural interaction techniques to enable caregivers to capture, store, visualize, and analyze both formal data and informal information. Our research will evaluate whether NUIs make a difference in supporting long-term caregivers.","PeriodicalId":266607,"journal":{"name":"2013 IEEE Symposium on Large-Scale Data Analysis and Visualization (LDAV)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132779319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proper orthogonal decomposition based parallel compression for visualizing big data on the K computer","authors":"Chongke Bi, K. Ono, K. Ma, Haiyuan Wu, Toshiyuki Imamura","doi":"10.1109/LDAV.2013.6675169","DOIUrl":"https://doi.org/10.1109/LDAV.2013.6675169","url":null,"abstract":"The development of supercomputers has greatly helped us carry out large-scale computing to address various problems through simulation and analysis. Visualization is an indispensable tool for understanding the properties of the data from supercomputers. In particular, interactive visualization can help us analyze data from various viewpoints and even find small, local, but important features. However, it is still difficult to interactively visualize such big data directly due to the slow file I/O problem and the limitation of memory size. To resolve these problems, we propose a parallel compression method to reduce the data size with low computational cost. Furthermore, the fast linear decompression process is another merit for interactive visualization. Our method uses proper orthogonal decomposition (POD) to compress data because it can effectively extract important features from the data and the resulting compressed data can also be linearly decompressed. Our implementation achieves high parallel efficiency with a binary load-distributed approach, which is similar to the binary-swap image composition used in parallel volume rendering [2]. This approach allows us to effectively utilize all the processing nodes and reduce the interprocessor communication cost throughout the parallel compression calculations. 
Our test results on the K computer demonstrate superior performance of our design and implementation.","PeriodicalId":266607,"journal":{"name":"2013 IEEE Symposium on Large-Scale Data Analysis and Visualization (LDAV)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131382633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
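At its core, POD compression of a snapshot matrix is a truncated SVD, and decompression is a single matrix product, which is the linearity the abstract highlights. A minimal serial sketch, leaving out the paper's binary load-distributed parallelization (function names are illustrative):

```python
import numpy as np

def pod_compress(snapshots, rank):
    """Truncated SVD of the snapshot matrix (columns = time steps):
    keep the leading `rank` POD modes plus each snapshot's coefficients
    in that basis. Storage drops from n*t values to rank*(n + t)."""
    u, s, vt = np.linalg.svd(snapshots, full_matrices=False)
    modes = u[:, :rank]                    # dominant spatial basis vectors
    coeffs = s[:rank, None] * vt[:rank]    # per-snapshot coefficients
    return modes, coeffs

def pod_decompress(modes, coeffs):
    """Linear reconstruction: one matrix product, hence fast decompression."""
    return modes @ coeffs
```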
{"title":"An interactive method for activity detection visualization","authors":"Li Liu, S. Ozer, K. Bemis, Jay Takle, D. Silver","doi":"10.1109/LDAV.2013.6675173","DOIUrl":"https://doi.org/10.1109/LDAV.2013.6675173","url":null,"abstract":"Visualizing each time step in an activity from a scientific dataset can aid in understanding the data and phenomena. In this work, we present a Graphical User Interface (GUI) that allows scientists to first graphically model an activity, then detect any activities that match the model, and finally visualize the detected activities in time varying scientific data sets. As a graphical and state based interactive approach, an activity detection framework is implemented by our GUI as a tool for modelling, hypothesis-testing and searching for activities of interest within the evolving phenomena of the data set. We demonstrate here some features of our GUI: a histogram is used to visualize the number of activities detected as a function of time and to allow the user to focus on a moment in time; a table is used to give details about the activities and the features participating in them; and finally the user is given the ability to click on the screen to bring up 3D images of the overall activity sequence, single time steps of an activity, or individual features in an activity. 
We present examples from applications to two different data sets.","PeriodicalId":266607,"journal":{"name":"2013 IEEE Symposium on Large-Scale Data Analysis and Visualization (LDAV)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115409259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Less After-the-Fact: Investigative visual analysis of events from streaming twitter","authors":"Thomas Kraft, Xiaoyu Wang, Jeffrey Delawder, Wenwen Dou, Yu Li, W. Ribarsky","doi":"10.1109/LDAV.2013.6675163","DOIUrl":"https://doi.org/10.1109/LDAV.2013.6675163","url":null,"abstract":"News and events are traditionally broadcast in an “After-the-Fact” manner, where the masses react to news formulated by a group of professionals. However, the deluge of information and real-time online social media sites have significantly changed this information input-output cycle, allowing the masses to report real-time events around the world. Specifically, the use of Twitter has resulted in the creation of a digital wealth of knowledge that directly associates to such events. Although governments and industries acknowledge the value of extracting events from the TwitterSphere, unfortunately the sheer velocity and volume of tweets pose significant challenges to the desired event analysis. In this paper, we present our Geo and Temporal Association Creator (GTAC) which extracts structured representations of events from the Twitter stream. GTAC further supports event-level investigative analysis of social media data through interactively visualizing the event indicators (who, when, where, and what). 
Using GTAC, we are trying to create a near real-time analysis environment for analysts to identify event structures, geographical distributions, and key indicators of emerging events.","PeriodicalId":266607,"journal":{"name":"2013 IEEE Symposium on Large-Scale Data Analysis and Visualization (LDAV)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117216648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient parallel volume rendering of large-scale adaptive mesh refinement data","authors":"Nick Leaf, V. Vishwanath, J. Insley, M. Hereld, M. Papka, K. Ma","doi":"10.1109/LDAV.2013.6675156","DOIUrl":"https://doi.org/10.1109/LDAV.2013.6675156","url":null,"abstract":"Adaptive Mesh Refinement is a popular approach for allocating scarce computing resources to the most important portions of the simulation domain. This approach implies spatial compression and the large simulation sizes which necessitate it. We present a novel, cluster- and GPU-parallel rendering scheme for AMR data, which is built on previous work in the GPU ray casting of AMR data. Our approach utilizes the existing AMR structure to subdivide the problem into convexly-bounded chunks and perform static load-balancing. We take advantage of data locality within chunks to interpolate directly between blocks without the need to store ghost cells on the interior boundaries. We also present a novel block decomposition method, and analyze its performance against two alternative methods. Finally, we examine the interactivity of our renderer for multiple datasets, and consider its scalability across a large number of GPUs.","PeriodicalId":266607,"journal":{"name":"2013 IEEE Symposium on Large-Scale Data Analysis and Visualization (LDAV)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124682824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
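Static load balancing of convexly-bounded chunks can be illustrated with the classic greedy heuristic: sort chunks by estimated cost and always hand the next one to the least-loaded renderer. This is a generic sketch, not the paper's AMR-structure-aware decomposition, and the cost model is assumed:

```python
def balance_chunks(chunk_costs, n_renderers):
    """Greedy static load balancing: assign each chunk (largest first)
    to the renderer with the least accumulated cost.
    chunk_costs: dict mapping chunk id -> estimated rendering cost."""
    loads = [0.0] * n_renderers
    assignment = [[] for _ in range(n_renderers)]
    for chunk, cost in sorted(chunk_costs.items(), key=lambda kv: -kv[1]):
        r = min(range(n_renderers), key=lambda i: loads[i])
        loads[r] += cost
        assignment[r].append(chunk)
    return assignment, loads
```

Because the assignment is computed once, up front, there is no runtime work stealing; the quality of the balance depends entirely on how well chunk costs are estimated.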
{"title":"A scalable algorithm for single-linkage hierarchical clustering on distributed-memory architectures","authors":"W. Hendrix, Diana Palsetia, Md. Mostofa Ali Patwary, Ankit Agrawal, W. Liao, A. Choudhary","doi":"10.1109/LDAV.2013.6675153","DOIUrl":"https://doi.org/10.1109/LDAV.2013.6675153","url":null,"abstract":"Hierarchical clustering is a fundamental and widely-used clustering algorithm with many advantages over traditional partitional clustering. Due to the explosion in size of modern scientific datasets, there is a pressing need for scalable analytics algorithms, but good scaling is difficult to achieve for hierarchical clustering due to data dependencies inherent in the algorithm. To the best of our knowledge, no previous work on parallel hierarchical clustering has shown scalability beyond a couple hundred processes. In this paper, we present PINK, a scalable parallel algorithm for single-linkage hierarchical clustering based on decomposing a problem instance into two different types of subproblems. Despite the heterogeneous workloads, our algorithm exhibits good load balancing, as well as low memory requirements and a communication pattern that is both low-volume and deterministic. Evaluating PINK on up to 6050 processes, we find that it achieves speedups up to approximately 6600.","PeriodicalId":266607,"journal":{"name":"2013 IEEE Symposium on Large-Scale Data Analysis and Visualization (LDAV)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132451045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
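Single-linkage clustering is equivalent to building a minimum spanning tree over the points and merging clusters in order of increasing edge length, which is the structure algorithms like PINK decompose and parallelize. A minimal sequential sketch on 1-D points with a union-find structure (illustrative only; PINK's subproblem decomposition and distributed-memory communication are not shown):

```python
def single_linkage(points):
    """Single-linkage clustering via Kruskal's algorithm on 1-D points.
    Processing pairwise distances in increasing order and uniting
    components yields the dendrogram: each union is one merge, and its
    distance is the linkage height. Returns the n-1 merge heights."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edges = sorted(
        (abs(points[i] - points[j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    merges = []
    for dist, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                       # edge joins two clusters
            parent[ri] = rj
            merges.append(dist)            # linkage height of this merge
    return merges
```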