{"title":"Towards a cognitive-inspired automatic unitizing technique: a feasibility study","authors":"Eleonora Ceccaldi, G. Volpe","doi":"10.1145/3399715.3399825","DOIUrl":"https://doi.org/10.1145/3399715.3399825","url":null,"abstract":"In this paper, we present and assess a novel technique for unitizing inspired by a cognitive theory on event structure perception. Unitizing indicates the process of dividing an observation into smaller units. Unitizing is often performed automatically, e.g., by selecting fixed-length windows. Although fast, such an approach may result in unit boundaries being placed mid-interaction, eventually affecting observation, annotation, and labeling. We conceived a unitizing technique based on the Event Segmentation Theory. In brief, changes drive the perception of boundaries between events (or units): an unexpected change in the observed situation might mean that the current event has ended and a new one has begun. Our technique relies on observed changes to identify unit boundaries. A first sketch of our technique was recently tested and proved effective in overcoming the aforementioned shortcomings of fixed-window unitizing. Here, we further explore its feasibility by testing it in a different domain, i.e., solo stage performances, in order to assess whether our unitizing approach generalizes across domains. Our results further support the idea of leveraging the Event Segmentation Theory for the design of an automatic technique for video unitizing.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116608696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Visual Tool for Supporting Decision-Making in Privacy Oriented Software Development","authors":"M. T. Baldassarre, Vita Santa Barletta, D. Caivano, A. Piccinno","doi":"10.1145/3399715.3399818","DOIUrl":"https://doi.org/10.1145/3399715.3399818","url":null,"abstract":"Nowadays, the dimension and complexity of software development projects increase the possibility of cyber-attacks, information exfiltration, and data breaches. In this context, developers play a primary role in addressing privacy requirements and, consequently, security in software applications. Currently, only general guidelines exist, and they are difficult to put into operation due to the lack of the required security skills and knowledge, and to the use of legacy software development processes that do not address privacy and security aspects. This paper presents a knowledge base, the Privacy Knowledge Base (PKB), and the VIS-PRISE prototype (Visually Inspection to Support Privacy and Security), a visual tool that supports developers' decisions to integrate privacy and security requirements in all software development phases. An initial experimental study with junior developers is also presented.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125975524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual Encodings for Networks with Multiple Edge Types","authors":"Athanasios Vogogias, D. Archambault, Benjamin Bach, J. Kennedy","doi":"10.1145/3399715.3399827","DOIUrl":"https://doi.org/10.1145/3399715.3399827","url":null,"abstract":"This paper reports on a formal user study on visual encodings of networks with multiple edge types in adjacency matrices. Our tasks and conditions were inspired by real problems in computational biology. We focus on encodings in adjacency matrices, selecting four designs from a potentially huge design space of visual encodings. We then settle on three visual variables to evaluate in a crowdsourcing study with 159 participants: orientation, position, and colour. The best encodings were integrated into a visual analytics tool for inferring dynamic Bayesian networks and evaluated by computational biologists for additional evidence. We found that the encodings performed differently depending on the task; however, colour helped in all tasks except when trying to find the edge with the largest number of edge types. Orientation generally outperformed position in all of our tasks.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117129253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RegLine","authors":"Xiaoyi Wang, L. Micallef, K. Hornbæk","doi":"10.1145/3399715.3399913","DOIUrl":"https://doi.org/10.1145/3399715.3399913","url":null,"abstract":"The process of verifying linear model assumptions and remedying associated violations is complex, even when dealing with simple linear regression. This process is not well supported by current tools and remains time-consuming, tedious, and error-prone. We present RegLine, a visual analytics tool supporting the iterative process of assumption verification and violation remedy for simple linear regression models. To identify the best possible model, RegLine helps novices perform data transformations, deal with extreme data points, analyze residuals, validate models by their assumptions, and compare and relate models visually. A qualitative user study indicates that these features of RegLine support the exploratory and refinement process of model building, even for those with little statistical expertise. These findings may guide the design of interactive visualizations that facilitate refining and validating more complex models.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130641937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Voice User Interface for football event tagging applications","authors":"Silvio Barra, A. Carcangiu, S. Carta, Alessandro Sebastian Podda, Daniele Riboni","doi":"10.1145/3399715.3399967","DOIUrl":"https://doi.org/10.1145/3399715.3399967","url":null,"abstract":"Manual event tagging may be a very long and stressful activity, due to the monotonous operations involved. This is particularly true when dealing with online video tagging, as for football matches, in which the set of events to tag can comprise many thousands of actions, depending on the desired level of granularity. In this work we describe a solution, developed for an existing football match tagging application, in which the GUI has been enhanced and integrated with a Voice User Interface, with the aim of reducing tagging time and error rate. Empirical tests have revealed the efficiency and the benefits brought by the developed solution.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128958700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CyberVR","authors":"Silvestro V. Veneruso, Lauren S. Ferro, Andrea Marrella, Massimo Mecella, Tiziana Catarci","doi":"10.1145/3399715.3399860","DOIUrl":"https://doi.org/10.1145/3399715.3399860","url":null,"abstract":"Videogames have become an established tool to educate users about various topics. They can promote challenge, co-operation, engagement, motivation, and the development of problem-solving strategies, all of which have important educational potential. In this paper, we present the design and realization of CyberVR, a Virtual Reality (VR) videogame that acts as an interactive learning experience to improve user awareness of cybersecurity-related issues. We report the results of a user study showing that, for cybersecurity education, CyberVR is as effective as traditional textbook learning but more engaging.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129371544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Language-based Interface for Analysis of Digital Storytelling","authors":"A. Gloder, L. Ducceschi, M. Zancanaro","doi":"10.1145/3399715.3399859","DOIUrl":"https://doi.org/10.1145/3399715.3399859","url":null,"abstract":"In this paper, we introduce a tool aimed at supporting deep qualitative analysis of digital comics. The tool exploits language-based technologies to facilitate the exploration of relatively large sets of comics. The core idea is that the specific words used in the comics are both an important element of the analysis and an index to navigate and explore the dataset. The design concept has been validated in a pilot study and the findings provide evidence that the approach meets the needs of qualitative analysts with the potential of improving their practices.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127781814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ten Objectives and Ten Rules for Designing Automations in Interaction Techniques, User Interfaces and Interactive Systems","authors":"Philippe A. Palanque","doi":"10.1145/3399715.3400872","DOIUrl":"https://doi.org/10.1145/3399715.3400872","url":null,"abstract":"Automation, as a design goal, focusses mainly on the migration of tasks from a human operator to a mechanical or digital system. Designing automation thus usually consists in removing tasks or activities from that operator and in designing systems able to perform them. When these automations are not adequately designed (or correctly understood by the operator), they may result in so-called automation surprises [1], [2] that degrade, instead of enhance, the overall performance of the operator-system pair. Usually, these tasks are considered at a high level of abstraction (related to work and work objectives), leaving low-level, repetitive tasks unconsidered. This paper proposes a decomposition of automation for interactive systems highlighting the diverse objectives it may target. Beyond that, multiple complementary views of automation for interactive systems design are presented to better define the multiform concept of automation. The paper provides numerous concrete examples illustrating each view and identifies ten rules for designing interactive systems embedding automations.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133365737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Are Thermal Attacks Ubiquitous?: When Non-Expert Attackers Use Off the shelf Thermal Cameras","authors":"Yasmeen Abdrabou, Yomna Abdelrahman, A. Ayman, Amr Elmougy, Mohamed Khamis","doi":"10.1145/3399715.3399819","DOIUrl":"https://doi.org/10.1145/3399715.3399819","url":null,"abstract":"Recent work showed that using image processing techniques on thermal images taken with high-end equipment reveals passwords entered on touchscreens and keyboards. In this paper, we investigate the susceptibility of common touch inputs to thermal attacks when non-expert attackers visually inspect thermal images. Using an off-the-shelf thermal camera, we collected thermal images of a smartphone's touchscreen and a laptop's touchpad after 25 participants had entered passwords using touch gestures and touch taps. We show that visual inspection of thermal images by 18 participants reveals the majority of passwords. Touch gestures are more vulnerable to thermal attacks (60.65% successful attacks) than touch taps (23.61%), and attacks against touchscreens are more accurate than attacks against touchpads (87.04% vs 56.02%). We discuss how the affordability of thermal attacks and the nature of touch interactions make the threat ubiquitous, and the implications this has on security.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121127890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Chrome extension to help people with dyslexia","authors":"Rudy Berton, A. Kolasinska, O. Gaggi, C. Palazzi, Giacomo Quadrio","doi":"10.1145/3399715.3399843","DOIUrl":"https://doi.org/10.1145/3399715.3399843","url":null,"abstract":"Although the World Wide Web is one of the main content and service providers, unfortunately these contents and services are not truly accessible to everyone. People affected by impairments often have difficulty navigating Web pages, for a wide range of reasons. In this paper, we focus on people affected by dyslexia. These users experience difficulties in reading acquisition, despite normal intelligence and adequate access to conventional instruction. For this reason, we have created Help me read!, a Chrome extension that allows users to change many features of a Web page. Furthermore, it can isolate and enlarge one word at a time. This feature is crucial, as it allows people with dyslexia to focus on each single word, thus overcoming one of their main difficulties.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128472284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}