{"title":"AVI2CH 2020: Workshop on Advanced Visual Interfaces and Interactions in Cultural Heritage","authors":"Angeliki Antoniou, B. D. Carolis, G. Raptis, Cristina Gena, T. Kuflik, A. Dix, A. Origlia, George Lepouras","doi":"10.1145/3399715.3400869","DOIUrl":"https://doi.org/10.1145/3399715.3400869","abstract":"AVI2CH is a meeting place for researchers and practitioners focusing on the application of advanced information and communication technology to cultural heritage (CH), with a specific focus on user interfaces, visualization, and interaction. It builds on the PATCH workshop series, held since 2007 and including three editions at AVI, as well as a series of European workshops on cultural informatics. The eleven papers range from novel interfaces in museums to wider community engagement; all share a common mission: to ensure that the latest digital technology helps preserve the past in ways that enrich the lives of current and future generations.","journal":"Proceedings of the International Conference on Advanced Visual Interfaces","publicationDate":"2020-09-28"}
{"title":"Conversational Interfaces for a Smart Campus: A Case Study","authors":"Marta Bortoli, M. Furini, S. Mirri, M. Montangero, Catia Prandi","doi":"10.1145/3399715.3399914","DOIUrl":"https://doi.org/10.1145/3399715.3399914","abstract":"Spoken language is the most natural interface for a human being and, thanks to the scientific and technological advances of recent decades, we now have voice assistance devices that let us interact with a machine through natural language. Voice user interfaces (VUIs) are now included in many technological devices, such as desktop and laptop computers, smartphones and tablets, navigators, and home speakers, and have been welcomed by the market. Voice assistants can also be interesting and strategic in educational contexts and in public environments. This paper presents a case study based on the design, development, and assessment of a prototype that assists students during their daily activities in a smart campus context.","journal":"Proceedings of the International Conference on Advanced Visual Interfaces","publicationDate":"2020-09-28"}
{"title":"Incidental Visualizations: Pre-Attentive Primitive Visual Tasks","authors":"João Moreira, Daniel Mendes, Daniel Gonçalves","doi":"10.1145/3399715.3399841","DOIUrl":"https://doi.org/10.1145/3399715.3399841","abstract":"In InfoVis design, visualizations make use of pre-attentive features to highlight visual artifacts and guide users' perception toward relevant information during primitive visual tasks. These tasks are supported by visual marks such as dots, lines, and areas. However, research assumes that our pre-attentive processing only allows us to detect specific features in charts. We argue that a visualization can be perceived entirely pre-attentively and still convey relevant information. In this work, combining cognitive perception and psychophysics, we conducted a user study with six primitive visual tasks to verify whether they could be performed pre-attentively. The tasks were to find: horizontal and vertical positions, length and slope of lines, size of areas, and color luminance intensity. Users were presented with very simple visualizations, one encoded value at a time, allowing us to assess accuracy and response time. Our results show that identifying horizontal position is the most accurate and fastest task, while identifying color luminance intensity is the worst. We believe our study is a first step into a fresh field we call Incidental Visualizations, where visualizations are meant to be seen at a glance and with little effort.","journal":"Proceedings of the International Conference on Advanced Visual Interfaces","publicationDate":"2020-09-28"}
{"title":"Data4Good","authors":"Luigi De Russis, Neha Kumar, Akhil Mathur","doi":"10.1145/3399715.3400864","DOIUrl":"https://doi.org/10.1145/3399715.3400864","abstract":"We are witnessing unprecedented datafication of the society we live in, alongside rapid advances in the fields of Artificial Intelligence and Machine Learning. However, emergent data-driven applications systematically discriminate against many diverse populations. A major driver of this bias is the data, which typically align with predominantly Western definitions and lack representation from multilingually diverse and resource-constrained regions across the world. Data-driven approaches can therefore benefit from a more human-centred orientation before being used to inform the design, deployment, and evaluation of technologies in various contexts. This workshop seeks to advance these and similar conversations by inviting researchers and practitioners from interdisciplinary domains to discuss how appropriate human-centred design can address data-related challenges among marginalised and under-represented or underserved groups.","journal":"Proceedings of the International Conference on Advanced Visual Interfaces","publicationDate":"2020-09-28"}
{"title":"Visual user interfaces for human motion","authors":"L. G. M. Ader, B. Caulfield, Benoît Bossavit, K. E. Raheb, M. Raynal, N. Vigouroux, K. Ting, Pourang Irani, J. Vanderdonckt","doi":"10.1145/3399715.3400859","DOIUrl":"https://doi.org/10.1145/3399715.3400859","abstract":"Visual interfaces are important in human motion: to capture it, to visualize it, and to facilitate motion-based interactive systems. This workshop aims to provide a platform for researchers, designers, and users to discuss the challenges of designing visual interfaces for motion-based interaction, both in the visualization (e.g., graphical user interfaces, multimodal feedback, evaluation) and in the processing (e.g., data collection, treatment, interpretation, recognition) of human movement (e.g., motor skills, amplitude of movement, limitations). We will share experiences and lessons learned, and elaborate tools for developing the possible applications going forward.","journal":"Proceedings of the International Conference on Advanced Visual Interfaces","publicationDate":"2020-09-28"}
{"title":"Interaction in Volumetric Film: An Overview","authors":"Krzysztof Pietroszek","doi":"10.1145/3399715.3399957","DOIUrl":"https://doi.org/10.1145/3399715.3399957","abstract":"Volumetric filmmaking is a novel and inherently interactive medium. In volumetric film, the viewer takes over the director's responsibility for selecting the point of view from which the story is told: the viewer becomes the cinematographer and the editor of the film at the moment of viewing. In this paper, we provide an overview of interaction modes in volumetric film and compare volumetric film to both traditional film and 360 video.","journal":"Proceedings of the International Conference on Advanced Visual Interfaces","publicationDate":"2020-09-28"}
{"title":"Giving Motivation for Using Secure Credentials through User Authentication by Game","authors":"Tetsuji Takada, Yumeji Hattori","doi":"10.1145/3399715.3399950","DOIUrl":"https://doi.org/10.1145/3399715.3399950","abstract":"One issue in knowledge-based user authentication is that users do not set and use secure credentials. Several countermeasures exist, such as password policies, education, and password meters. However, these countermeasures impose a usability cost that many users find hard to accept, and so they have not driven users to adopt secure credentials. We consider that motivating users is necessary for them to voluntarily accept the cost of using secure credentials. We therefore attach a role-playing game function to pattern-based user authentication and provide an incentive to users through the authentication itself. We conducted a small experiment with eight participants, and the results demonstrated that the prototype system has the potential to prompt users to use secure credentials.","journal":"Proceedings of the International Conference on Advanced Visual Interfaces","publicationDate":"2020-09-28"}
{"title":"Caarvida","authors":"A. Achberger, René Cutura, Oguzhan Türksoy, M. Sedlmair","doi":"10.1145/3399715.3399862","DOIUrl":"https://doi.org/10.1145/3399715.3399862","abstract":"We report on an interdisciplinary visual analytics project wherein automotive engineers analyze test drive videos. These videos are annotated with navigation-specific augmented reality (AR) content, and the engineers need to identify issues and evaluate the behavior of the underlying AR navigation system. With the increasing amount of video data, traditional analysis approaches can no longer be conducted in an acceptable timeframe. To address this issue, we collaboratively developed Caarvida, a visual analytics tool that helps engineers to accomplish their tasks faster and handle an increased number of videos. Caarvida combines automatic video analysis with interactive and visual user interfaces. We conducted two case studies which show that Caarvida successfully supports domain experts and speeds up their task completion time.","journal":"Proceedings of the International Conference on Advanced Visual Interfaces","publicationDate":"2020-09-28"}
{"title":"Introducing Artificial Commensal Companions","authors":"M. Mancini, C. Gallagher, Radoslaw Niewiadomski, Gijs Huisman, Merijn Bruijnes","doi":"10.1145/3399715.3399958","DOIUrl":"https://doi.org/10.1145/3399715.3399958","abstract":"The term commensality refers to \"sharing food and eating together in a social group.\" In this paper, we hypothesize that the same kind of experience is possible in an HCI setting, thanks to a new type of interface that we call an Artificial Commensal Companion (ACC), which would be beneficial, for example, to people who voluntarily choose, or are constrained, to eat alone. To this end, we introduce an interactive system implementing an ACC in the form of a robot with non-verbal socio-affective capabilities. Tests are planned to evaluate its influence on the eating experience of human participants.","journal":"Proceedings of the International Conference on Advanced Visual Interfaces","publicationDate":"2020-09-28"}
{"title":"Augmented Situated Visualization for Spatial and Context-Aware Decision-Making","authors":"R. Guarese, João Becker, Henrique Fensterseifer, M. Walter, C. Freitas, L. Nedel, Anderson Maciel","doi":"10.1145/3399715.3399838","DOIUrl":"https://doi.org/10.1145/3399715.3399838","abstract":"When entering indoor spaces such as classrooms or auditoriums, people often try to analyze and choose an appropriate place to stay while attending an event. Several criteria may be taken into account, and most are not self-evident or trivial. This work proposes the use of data visualization allied to an Augmented Reality (AR) user interface to help users choose the most convenient seats. We consider sets of arbitrary demands and project information directly atop the seats and all around the room. Users can also narrow down the search by switching and combining the attributes being displayed, e.g., temperature or wheelchair accessibility. To validate the solution, the proposed approach was tested against a comparable 2D interactive visualization of the same data in usability assessments of seat-choosing tasks with a set of users (N = 16). Qualitative and quantitative data indicate that the AR-based solution is promising, suggesting that AR may help users make more accurate decisions, even in an ordinary daily task. Regarding Augmented Situated Visualization, our results open new avenues for the exploration of context-aware data.","journal":"Proceedings of the International Conference on Advanced Visual Interfaces","publicationDate":"2020-09-28"}