{"title":"Demonstrating vistiles: visual data exploration using mobile devices","authors":"R. Langner, Tom Horak, Raimund Dachselt","doi":"10.1145/3206505.3206583","DOIUrl":"https://doi.org/10.1145/3206505.3206583","url":null,"abstract":"We demonstrate the prototype of the conceptual VisTiles framework. VisTiles allows exploring multivariate data sets by using multiple coordinated views that are distributed across a set of mobile devices. This setup allows users to benefit from dynamic and user-defined interface arrangements and to easily initiate co-located data exploration sessions. The current web-based prototype runs on commodity devices and is able to determine the spatial device arrangement by either a cross-device pinch gesture or an external tracking system. Multiple data sets are provided that can be explored by different visualizations (e.g., scatterplots, parallel coordinate plots, stream graphs). With this demonstration, we showcase the general concepts of VisTiles and discuss ideas for enhancements as well the potential for application cases beyond data analysis.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126268604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effect of temporality, physical activity and cognitive load on spatiotemporal vibrotactile pattern recognition","authors":"Qing Chen, S. Perrault, Quentin Roy, L. Wyse","doi":"10.1145/3206505.3206511","DOIUrl":"https://doi.org/10.1145/3206505.3206511","url":null,"abstract":"Previous research demonstrated the ability for users to accurately recognize Spatiotemporal Vibrotactile Patterns (SVP): sequences of vibrations on different motors occurring either sequentially or simultaneously. However, the experiments were only run in a lab setting and the ability for users to recognize SVP in a real-world environment remains unclear. In this paper, we investigate how several factors may affect recognition: (1) physical activity (running), (2) cognitive task (i.e. primary task, typing), (3) distribution of the vibration motors across body parts and (4) temporality of the patterns. Our results suggest that physical activity has very little impact, specifically compared to cognitive task, location of the vibrations or temporality. We discuss these results and propose a set of guidelines for the design of SVPs.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128037789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VISKOMMP: graph visualization meets meeting documentation","authors":"Janine Kasper, Robert Richter, F. Thalmann, Rainer Groh","doi":"10.1145/3206505.3206565","DOIUrl":"https://doi.org/10.1145/3206505.3206565","url":null,"abstract":"In VISKOMMP (visual, collaborative, multi-meeting minutes system) we aim at supporting users during all stages of meeting-participation with focus on the preservation and accessibility of the produced information. For the efficient use of the knowledge generated during meetings, a comprehensive view of the aggregated data, independent of single events or documents, is necessary. An approach is presented which interlinks the heterogeneous information that is generated during meetings with the enterprise-knowledge. Created content and the established connections are further presented to the user in a comprehensible way. To this end, semantic technologies are utilized and an own ontology is designed, which covers the domains of project-management and meetings.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125683929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Building a qualified annotation dataset for skin lesion analysis trough gamification","authors":"Fabrizio Balducci, P. Buono","doi":"10.1145/3206505.3206555","DOIUrl":"https://doi.org/10.1145/3206505.3206555","url":null,"abstract":"The deep learning approach has increased the quality of automatic medical diagnoses at the cost of building qualified datasets to train and test such supervised machine learning methods. Image annotation is one of the main activity of dermatologists and the quality of annotation depends on the physician experience and on the number of studied cases: manual annotations are very useful to extract features like contours, intersections and shapes that can be used in the processes of lesion segmentation and classification made by automatic agents. This paper proposes the design of an interactive multimedia platform that enhance the annotation process of medical images, in the domain of dermatology, adopting gamification and \"games with a purpose\" (GWAP) strategies in order to improve the engagement and the production of qualified datasets also fostering their sharing and practical evaluation. A special attention is given to the design choices, theories and assumptions as well as the implementation and technological details.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129927293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Crossing spaces: towards cross-media personal information management user interfaces","authors":"Sandra Trullemans, Payam Ebrahimi, B. Signer","doi":"10.1145/3206505.3206528","DOIUrl":"https://doi.org/10.1145/3206505.3206528","url":null,"abstract":"Nowadays, digital and paper documents are used simultaneously during daily tasks. While significant research has been carried out to support the re-finding of digital documents, less effort has been made to provide similar functionality for paper documents. In this paper, we present a solution that enables the design of cross-media Personal Information Management (PIM) user interfaces helping users in re-finding documents across digital and physical information spaces. We propose three main design requirements for the presented cross-media PIM user interfaces. Further, we illustrate how these design requirements have been applied in the development of three proof-of-concept applications and describe a software framework supporting the design of these interfaces. Finally, we discuss opportunities for future improvements of the presented cross-media PIM user interfaces.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133190421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Two-level artificial-landmark scrollbars to improve revisitation in long documents","authors":"Ehsan Sotoodeh Mollashahi, Md. Sami Uddin, C. Gutwin","doi":"10.1145/3206505.3206588","DOIUrl":"https://doi.org/10.1145/3206505.3206588","url":null,"abstract":"Navigating to previously-visited pages is a trivial yet fundamental task in linear control-based document viewers. These widgets e.g., scrollbars often do not work well particularly for long documents. Existing solutions try to tackle this issue with bookmarks, search, history, and read wear but limited in terms of effort, clutter, and interpretability. To improve the revisitation support in long documents, we investigated the use of artificial landmarks similar to the visual augmentations available in physical books: coloring on page edges or indents cut into pages. We developed several artificial-landmark visualizations to represent page-locations in the scrollbar for many hundreds of pages long documents, and tested them in studies where participants visited multiple locations in long documents. Results indicate that using two columns of landmark icons significantly improved revisitation performance and preferred by users. Our two-level artificial-landmark augmented scrollbars can be a new way to support spatial memory development of long documents - and can be used either in isolation or in congregation with current techniques.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134193422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Big data landscapes: improving the visualization of machine learning-based clustering algorithms","authors":"D. Kammer, Mandy Keck, Thomas Gründer, Rainer Groh","doi":"10.1145/3206505.3206556","DOIUrl":"https://doi.org/10.1145/3206505.3206556","url":null,"abstract":"With the internet, massively heterogeneous data sources need to be understood and classified to provide suitable services to users such as content observation, data exploration, e-commerce, or adaptive learning environments. The key to providing these services is applying machine learning (ML) in order to generate structures via clustering and classification. Due to the intricate processes involved in ML, visual tools are needed to support designing and evaluating the ML pipelines. In this contribution, we propose a comprehensive tool that facilitates the analysis and design of ML-based clustering algorithms using multiple visualization features such as semantic zoom, glyphs, and histograms.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128222426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual exploration and analysis of the italian cybersecurity framework","authors":"M. Angelini, G. Blasilli, S. Lenti, G. Santucci","doi":"10.1145/3206505.3206579","DOIUrl":"https://doi.org/10.1145/3206505.3206579","url":null,"abstract":"In the last years, several standards and frameworks have been developed to help organizations to increase the security of their Information Technology (IT) systems. In order to deal with the continuous evolution of the cyber-attacks complexity, such solutions have to cope with an overwhelming set of concepts, and are perceived as complex and hard to implement. The exploration of the cyber-security state of an organization can be made more effective and proficient if supported by the right level of automation. This paper presents the implementation of a visual analytics solution, called CybeR secUrity fraMework BrowSer (CRUMBS) [2], targeted at dealing with the Italian Adaptation of the Cyber Security Framework (IACSF), derived by the National Institute of Standards and Technology (NIST) proposal [1], adaptation that, in its full complexity, presents the security managers with hundreds of scattered concepts, like functions, categories, subcategories, priorities, maturity levels, current and target profiles, and controls, making its adoption a complex activity. The prototype is available at: http://awareserver.dis.uniroma1.it:11768/crumbs/.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133995538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Choreomorphy","authors":"K. E. Raheb, George Tsampounaris, A. Katifori, Yannis E. Ioannidis","doi":"10.1145/3206505.3206507","DOIUrl":"https://doi.org/10.1145/3206505.3206507","url":null,"abstract":"Choreomorphy is inspired by the Greek words \"choros\" (dance) and \"morphe\" (shape). Visual metaphors, such as the notion of transformation, and visual imagery are widely used in various movement and dance practices, education, and artistic creation. Motion capture and comprehensive movement representation technologies, if appropriately employed can become valuable tools in this field. Choreomorphy is a system for a whole-body interactive experience, using Motion Capture and 3D technologies, that allows the users to experiment with different body and movement visualisations in real-time. The system offers a variety of avatars, visualizations of movement and environments which can be easily selected through a simple GUI. The motivation of designing this system is the exploration of different avatars as \"digital selves\" and the reflection on the impact of seeing one's own body as an avatar that can vary in shape, size, gender and human vs. non-human characteristics, while dancing and improvising. Choreomorphy is interoperable with different motion capture systems, including, but not limited to inertial, optical, and Kinect. The 3D representations and interactions are constantly updated through an explorative co-design process with dance artists and professionals in different sessions and venues.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134266973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The invisible gorilla revisited: using eye tracking to investigate inattentional blindness in interface design","authors":"H. Gelderblom, Leanne Menge","doi":"10.1145/3206505.3206550","DOIUrl":"https://doi.org/10.1145/3206505.3206550","url":null,"abstract":"Interface designers often use change and movement to draw users' attention. Research on change blindness and inattentional blindness challenges this approach. In Simons and Chabris' 1999, \"Gorillas in our midst\" experiment, they showed how people that are focused on a task are likely to miss the occurrence of an unforeseen event (a man in a gorilla suit in their case), even if it appears in their field of vision. This relates to interface design because interfaces often include moving elements such as rotating banners or advertisements, which designers obviously want users to notice. We investigated how inattentional blindness affect users' perception through an eye tracking investigation on Simons and Chabris' video as well as on the web site of an airline that uses a rotating banner to advertise special deals. In both cases users performed tasks that required their full attention and were then interviewed to determine to what extent they perceived the changes or new information. We compared the results of the two experiments to see how Simons and Chabris' theory applies to interface design. Our findings show that although 43% of the participants had fixations on the gorilla, only 22% said that they noticed it. On the web site, 75% of participants had fixations on the moving banner but only 33% could recall any information related to it. We offer reasons for these results and provide designers with advice on how to address the effect of inattentional blindness and change blindness in their designs.","PeriodicalId":330748,"journal":{"name":"Proceedings of the 2018 International Conference on Advanced Visual Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134379210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}