Title: "Integrated Visual Analytics Approach against Multivariate Cybersecurity Attack"
Authors: Taewoong Kwon, Iksoo Shin, Kyuil Kim, Jungsuk Song, Jun Lee
DOI: https://doi.org/10.1145/3399715.3399944
Published: 2020-09-28, Proceedings of the International Conference on Advanced Visual Interfaces
Abstract: As security threats rapidly spread all over the world, it is critical that network traffic is monitored around the clock and protected from abnormal attacks. Even though various security devices (e.g., network intrusion detection systems, NIDS) have been deployed to guarantee solid network security, defense still depends on human analysts because of the complex patterns produced by unknown threats. This study introduces a graphical interactive system for representing and understanding multivariate cybersecurity attacks. In particular, the interface enhances intuitive judgment combined with machine-learning-based analysis of suspicious traffic.
{"title":"Taking the Long View: Structured Expert Evaluation for Extended Interaction","authors":"A. Dix","doi":"10.1145/3399715.3399831","DOIUrl":"https://doi.org/10.1145/3399715.3399831","url":null,"abstract":"This paper proposes first steps in the development of practical techniques for the expert evaluation of long-term interactions driven by the need to perform expert evaluation of such systems in a consultancy framework. Some interactions are time-limited and goal-driven, for example withdrawing money at an ATM. However, these are typically embedded within longer-term interactions, such as with the banking system as a whole. We have numerous evaluation and design tools for the former, but long-term interaction is less well served. To fill this gap new evaluation prompts are presented, drawing on the style of cognitive walkthroughs to support extended interaction.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114818652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Corsican Twin: Authoring In Situ Augmented Reality Visualisations in Virtual Reality","authors":"Arnaud Prouzeau, Yuchen Wang, Barrett Ens, Wesley Willett, Tim Dwyer","doi":"10.1145/3399715.3399743","DOIUrl":"https://doi.org/10.1145/3399715.3399743","url":null,"abstract":"We introduce Corsican Twin, a tool for authoring augmented reality data visualisations in virtual reality using digital twins. The system provides users with the contextual information necessary to design embedded and situated data visualisations in a safe and convenient remote setting. We created system via a co-design process which involved people with little or no programming experience. Using the system, we illustrate three potential use cases for situated visualizations in the context of building maintenance, including: (1) on-site equipment debugging and diagnosis; (2) remote incident playback; and (3) operations simulations for future buildings. From feedback gathered during formative evaluations of our prototype tool with domain experts, we discuss implications, opportunities, and challenges for future in situ visualisation design tools.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123855307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"cASpER","authors":"Manuel De Stefano, Michele Simone Gambardella, Fabiano Pecorelli, Fabio Palomba, A. De Lucia","doi":"10.1007/978-3-319-67199-4_100549","DOIUrl":"https://doi.org/10.1007/978-3-319-67199-4_100549","url":null,"abstract":"","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124672861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual analysis of interactive document clustering streams","authors":"Eric M. Cabral, E. Milios, R. Minghim","doi":"10.1145/3399715.3399962","DOIUrl":"https://doi.org/10.1145/3399715.3399962","url":null,"abstract":"Interactive clustering techniques play a key role by putting the user in the clustering loop, allowing her to interact with document group abstractions instead of full-length documents. It allows users to focus on corpus exploration as an incremental task. To explore Information Discovery's incremental aspect, this article proposes a visual component to depict clustering membership changes throughout a clustering iteration loop in both static and dynamic data sets. The visual component is evaluated with an expert user and with an experiment with data streams.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130449358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Designing Visual Tools to Facilitate Human-Centered Design","authors":"Yuzhou Wang, M. Masoodian","doi":"10.1145/3399715.3399936","DOIUrl":"https://doi.org/10.1145/3399715.3399936","url":null,"abstract":"Human-Centred Design (HCD) relies on the use of many methods (e.g. interviews, observations) originating from other disciplines such as social sciences (e.g. ethnography). Such methods often rely on the use of visual tools (e.g. photographs and illustrations) to better facilitate the involvement of the participants in the design process. Most HCD practitioners, however, do not have the necessary visual design skills, and as such, need to work with visual communication designers to co-create visual tools to support their design projects. In this poster, we present a multidisciplinary approach to guide the process of co-creating such visual tools.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124570467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"When High Fidelity Matters: AR and VR Improve the Learning of a 3D Object","authors":"Thibault Louis, J. Troccaz, Amélie Rochet-Capellan, N. Hoyek, F. Bérard","doi":"10.1145/3399715.3399815","DOIUrl":"https://doi.org/10.1145/3399715.3399815","url":null,"abstract":"Virtual and Augmented Reality Environments have long been seen as having strong potential for educational applications. However, research showing actual evidences of their benefits is sparse. Indeed, some recent studies point to unnoticeable benefits, or even a detrimental effect due to an increase of cognitive demand for the students when using these environments. In this work, we question if a clear benefit of AR and VR can be robustly measured for a specific education-related task: learning a 3D object. We ran a controlled study in which we compared three interaction techniques. Two techniques are VR- and AR-based; they offer a High Fidelity (HF) virtual reproduction of observing and manipulating physical objects. The third technique is based on a multi-touch tablet and was used as a baseline. We selected a task of 3D object learning as one potentially benefitting from the HF reproduction of object manipulation. 
The experiment results indicate that VR and AR HF techniques can have a substantial benefit for education as the object was recognized more than 27% faster when learnt using the HF techniques than when using the tablet.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122837906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Examining Fitts' and FFitts' Law Models for Children's Pointing Tasks on Touchscreens","authors":"Julia Woodward, Jahelle Cato, Jesse Smith, Isaac Wang, Brett Benda, Lisa Anthony, J. Ruiz","doi":"10.1145/3399715.3399844","DOIUrl":"https://doi.org/10.1145/3399715.3399844","url":null,"abstract":"Fitts' law has accurately modeled both children's and adults' pointing movements, but it is not as precise for modeling movement to small targets. To address this issue, prior work presented FFitts' law, which is more exact than Fitts' law for modeling adults' finger input on touchscreens. Since children's touch interactions are more variable than adults, it is unclear if FFitts' law should be applied to children. We conducted a 2D target acquisition task with 54 children (ages 5-10) to examine if FFitts' law can accurately model children's touchscreen movement time. We found that Fitts' law using nominal target widths is more accurate, with a R2 value of 0.93, than FFitts' law for modeling children's finger input on touchscreens. Our work contributes new understanding of how to accurately predict children's finger touch performance on touchscreens.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117313535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Categories and Completeness of Visual Programming and Direct Manipulation","authors":"Michael J. McGuffin, C. Fuhrman","doi":"10.1145/3399715.3399821","DOIUrl":"https://doi.org/10.1145/3399715.3399821","url":null,"abstract":"Recent innovations in visual programming and the use of direct manipulation for programming have demonstrated promise, but also raise questions about how far these approaches can be generalized. To clarify these issues, we present a categorization of systems for visual programming, programming-by-example, and similar systems. By examining each category, we elucidate the advantages, limitations, and ways to extend systems in each category. Our work makes it easier for researchers and designers to understand how visual programming languages (VPLs) and similar systems relate to each other, and how to extend them. We also indicate directions for future research.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120889951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Design Space for Advanced Visual Interfaces for Teleoperated Autonomous Vehicles","authors":"Gaetano Graf, H. Palleis, H. Hussmann","doi":"10.1145/3399715.3399942","DOIUrl":"https://doi.org/10.1145/3399715.3399942","url":null,"abstract":"Autonomous Vehicles (AVs) are facilitating the development of a diverse set of applications, from human-less delivering to alternative mobility services. There are a variety of challenges for AVs that might be assessed and solved by remote operators, such as sensor data ambiguity, temporary changes to infrastructure, or unexpected interventions by other road users. With this paper, we propose a design space to support the development of appropriate user interfaces for remote situational awareness and teleoperation. The design space is envisioned to discover and evaluate design alternatives before the implementation and to provide a systematic approach towards the development of novel interfaces.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131695965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}