{"title":"Circles of Affordance: Proposal for a diagnostic tool to support usability studies","authors":"R. Spence, Leah Redmond","doi":"10.1145/3399715.3399719","DOIUrl":"https://doi.org/10.1145/3399715.3399719","url":null,"abstract":"We propose, for interactive systems, a representation that is potentially useful as a diagnostic tool. It is based on the concept of affordances that can be offered to and deployed by a user. The proposal is illustrated by reference to an interface designed for a smartphone app that allows a person with Type-1 diabetes to self-manage their condition.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127422185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SelfLens","authors":"Giulio Galesi, Luciano Giunipero, B. Leporini, Gianni Verdi","doi":"10.1145/3399715.3399941","DOIUrl":"https://doi.org/10.1145/3399715.3399941","url":null,"abstract":"Independently selecting food items while shopping, or storing and cooking them correctly, can be very difficult for people with special needs. Product labels on food packaging contain an ever-increasing amount of information, often in a variety of languages. Both the amount of information and the typographic features of the text can make labels difficult or impossible to read, particularly for people with visual impairments or the elderly. Several tools and applications are available on the market or have been proposed to support this activity (e.g. barcode or QR code reading), but they are limited and may require the user to have specific digital skills. Moreover, repeatedly using an application to read label contents can require numerous steps on a touch screen and is consequently time-consuming. In this work, a portable tool is proposed to support people in reading label contents and acquiring additional information, whether using the item at home or shopping at the supermarket. The aim of our study is to propose a simple portable assistive technology tool which 1) can be used by anyone, regardless of their personal digital skills, 2) does not require a smartphone or other complex device, and 3) is a low-cost solution for the user.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128137936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TeMoCo-Doc: A visualization for supporting temporal and contextual analysis of dialogues and associated documents","authors":"Shane Sheehan, S. Luz, Pierre Albert, M. Masoodian","doi":"10.1145/3399715.3399956","DOIUrl":"https://doi.org/10.1145/3399715.3399956","url":null,"abstract":"A common task in a number of application areas is to create textual documents based on recorded audio data. Visualizations designed to support such tasks require linking temporal audio data with contextual data contained in the resulting documents. In this paper, we present a tool for the visualization of temporal and contextual links between recorded dialogues and their summary documents.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128231487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The CrazySquare solution: a gamified ICT tool to support the musical learning in pre-adolescents","authors":"Carlo Centofanti, Alessandro D'errico, F. Caruso, Sara Peretti","doi":"10.1145/3399715.3399943","DOIUrl":"https://doi.org/10.1145/3399715.3399943","url":null,"abstract":"In this paper, we present the current prototype of the CrazySquare project, which aims to provide a gamified ICT (Information and Communications Technology) solution for musical education. The project is inspired by Gordon's Musical Learning Theory. It is dedicated to the guitar, since it is one of the most played instruments in Italian middle schools. The TPACK (Technological Pedagogical Content Knowledge) framework has been used to effectively integrate the technology into teaching activities. Moreover, the CrazySquare project follows an iterative process based on a TEL-oriented UCD approach. Currently, after carrying out an expert-based evaluation with several domain experts, we are designing the user-based evaluation phase, which will conclude the second iteration.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128242239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Preserving Contextual Awareness during Selection of Moving Targets in Animated Stream Visualizations","authors":"E. Ragan, Andrew Pachuilo, J. Goodall, F. Bacim","doi":"10.1145/3399715.3399832","DOIUrl":"https://doi.org/10.1145/3399715.3399832","url":null,"abstract":"In many types of dynamic interactive visualizations, users often need to interact with moving objects. Stopping moving objects can make selection easier, but pausing animated content can disrupt perception and understanding of the visualization. To address this problem, we explore selection techniques that pause only a subset of the moving targets in the visualization. We present various designs for controlling pause regions based on cursor trajectory or cursor position. We then report a dual-task experiment that evaluates how the different techniques affect both target selection performance and contextual awareness of the visualization. Our findings indicate that all pause techniques significantly improved selection performance compared to the baseline method without pause, but the results also show that pausing the entire visualization can interfere with contextual awareness. This reduction in contextual awareness was not observed with our new techniques, which pause only a limited region of the visualization. Thus, our research provides evidence that region-limited pause techniques retain the advantages of selection in dynamic visualizations without negatively affecting contextual awareness.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128255697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"V-DOOR: A Real-Time Virtual Dressing Room Application Using Oculus Rift","authors":"Silvestro V. Veneruso, T. Catarci, Lauren S. Ferro, Andrea Marrella, Massimo Mecella","doi":"10.1145/3399715.3399959","DOIUrl":"https://doi.org/10.1145/3399715.3399959","url":null,"abstract":"In recent years, with its growing accessibility, the use of online shopping for clothing has increased. Virtual Dressing Rooms (VDRs) represent an effective way to offer the ability to \"try before buying\", thus removing an important obstacle to online shopping. While most of the VDR tools realized so far are based on Augmented Reality and are installed directly inside retail shops, this paper proposes a real-time VDR application called V-DOOR that leverages the features of the Oculus Rift to create an immersive experience, enabling customers to try on clothes virtually in the comfort of their own home rather than physically in the retail shop.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130337304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Playful Citizen Science Tool for Casual Users","authors":"Risa Kimura, Keren Jiang, Di Zhang, T. Nakajima","doi":"10.1145/3399715.3399937","DOIUrl":"https://doi.org/10.1145/3399715.3399937","url":null,"abstract":"We present a playful citizen science tool that lets casual users explore various protein dockings through dance-like body actions. To better engage casual users, the tool offers a social watching functionality based on a virtual reality platform that presents multiple people's visual perspectives in a shared virtual space. We also report some preliminary insights from our current tool.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134074806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wearable Interfaces and Advanced Sensors to Enhance Firefighters Safety in Forest Fires","authors":"Pietro Battistoni, M. D. Gregorio, Domenico Giordano, M. Sebillo, G. Tortora, G. Vitiello","doi":"10.1145/3399715.3399961","DOIUrl":"https://doi.org/10.1145/3399715.3399961","url":null,"abstract":"Forest fires represent a social emergency that requires significant economic and organizational commitment. The safety of firefighters, and the lack of reliable and timely means of localizing them, is a major problem. In this paper, we present Karya Advanced Sensor, an automatic, accurate, and reliable IT solution able to locate firefighters in harsh environments and to support decision-making activities in control rooms. The system consists of sensors fully integrated into firefighters' uniforms, which are used to monitor in real time both individual operators' activities and the entire fire area. In particular, if a firefighter is injured, the system quickly activates the rescue teams, as there is a constant link between the firefighters and medical assistance. The firefighter can also specify the cause of the accident, which is critical information for a more timely and appropriate health intervention. Moreover, the system can perform automatic real-time mapping of forest fires and estimate their propagation rate, providing valuable support to control rooms, which are the center of team coordination.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129239450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Visual Environment for End-User Creation of IoT Customization Rules with Recommendation Support","authors":"Andrea Mattioli, F. Paternò","doi":"10.1145/3399715.3399833","DOIUrl":"https://doi.org/10.1145/3399715.3399833","url":null,"abstract":"Personalization rules based on the trigger-action paradigm have recently garnered increasing interest in Internet of Things (IoT) applications. However, composing trigger-action rules can be a challenging task for end users, especially when the rules' complexity increases. Users have to decide about various aspects: which triggers and actions to select, how to combine multiple triggers or actions, and whether some previously defined rule can help in the composition process. We propose a visual environment, Block Rule Composer, to address these problems. It consists of a tool for creating rules based on visual blocks, integrated with recommendation techniques in order to provide intelligent support during rule creation. We also report on a first test which provided positive indications and suggestions for further design improvements.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124329303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ParVis","authors":"G. Costagliola, Mattia De Rosa, V. Fuccella, Mark Minas","doi":"10.1145/3399715.3399853","DOIUrl":"https://doi.org/10.1145/3399715.3399853","url":null,"abstract":"In this paper, we present ParVis, an interactive visual system for the animated visualization of logged parser trace executions. The system allows a parser implementer to create a visualizer for generated parsers by simply defining a JavaScript module that maps each logged parser instruction into a set of events driving the visual system interface. The result is a set of interacting graphical/text windows that allows users to explore logged parser executions and helps them gain a complete understanding of how the parser behaves during its execution on a given input. We have used our system to visualize the behavior of both textual and visual parsers, and describe here its use with the well-known CUP parser generator. Preliminary user tests have provided positive feedback.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116550279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}