{"title":"CoPDA 2022 - Cultures of Participation in the Digital Age: AI for Humans or Humans for AI?","authors":"B. R. Barricelli, G. Fischer, D. Fogli, A. Mørch, A. Piccinno, S. Valtolina","doi":"10.1145/3531073.3535262","DOIUrl":"https://doi.org/10.1145/3531073.3535262","url":null,"abstract":"The sixth edition of the CoPDA workshop is dedicated to discussing the current challenges and opportunities of Cultures of Participation with respect to Artificial Intelligence (AI) by contrasting it with the objectives pursued by Human-Centered Design (HCD). The workshop aims to establish a forum to explore our basic assumption (and to provide at least partial evidence) that the most successful AI systems out there today are dependent on teams of humans, just as humans depend on these systems to gain access to information, provide insights and perform tasks beyond their own capabilities.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115312399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring a Multi-Device Immersive Learning Environment","authors":"T. Onorati, P. Díaz, Telmo Zarraonandia, I. Aedo","doi":"10.1145/3531073.3534485","DOIUrl":"https://doi.org/10.1145/3531073.3534485","url":null,"abstract":"Though virtual reality has been used for more than one decade to support learning, technology is now mature and cheap enough, and students have the required digital fluency to reach real settings. Immersive technologies have also demonstrated that they not only are engaging, but they can also reinforce learning and improve memory. This work presents a preliminary study on the advantages of using an immersive experience to help young students understand genetic editing techniques. We have relied upon the CHIC Immersive Bubble Chart, a VR (Virtual Reality) multi-device visualization of the most relevant topics in the domain. We tested the CHIC Immersive Bubble Chart by asking a group of 29 students to explore the information space by interacting with two different devices: a desktop and a VR headset. The results show that they mainly preferred the VR headset finding it more engaging and useful. As a matter of fact, during the evaluation, the students kept exploring the space even after the assigned time slot.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115365045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Video augmentation to support video-based learning","authors":"Ilaria Torre, Ilenia Galluccio, M. Coccoli","doi":"10.1145/3531073.3531179","DOIUrl":"https://doi.org/10.1145/3531073.3531179","url":null,"abstract":"Multimedia content and video-based learning are expected to take a central role in the post-pandemic world. Thus, providing new advanced interfaces and services that further exploit their potential becomes of paramount importance. A challenging area deals with developing intelligent visual interfaces that integrate the knowledge extracted from multimedia materials into educational applications. In this respect, we designed a web-based video player that is aimed to support video consumption by exploiting the knowledge extracted from the video in terms of concepts explained in the video and prerequisite relations between them. This knowledge is used to augment the video lesson through visual feedback methods. Specifically, in this paper we investigate the use of two types of visual feedback, i.e. an augmented transcript and a dynamic concept map (map of concept’s flow), to improve video comprehension in the first-watch learning context. Our preliminary findings suggest that both the methods help the learner to focus on the relevant concepts and their related contents. The augmented transcript has an higher impact on immediate comprehension compared to the map of concepts’ flow, even though the latter is expected to be more powerful to support other tasks such as exploration and in-depth analysis of the concepts in the video.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123560244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implicit Interaction Approach for Car-related Tasks On Smartphone Applications - A Demo","authors":"Alba Bisante, Venkata Srikanth Varma Datla, Stefano Zeppieri, Emanuele Panizzi","doi":"10.1145/3531073.3534465","DOIUrl":"https://doi.org/10.1145/3531073.3534465","url":null,"abstract":"Implicit interaction is a possible approach to improve the user experience of smartphone apps in car-related environments. Indeed, it can enhance safety and avoids unnecessary and repetitive interactions on the user’s part. This demo paper presents a smartphone app based on an implicit interaction approach to detect when the user enters and exits their vehicle automatically. We describe the app interface and usage, and how we plan to demonstrate its performances during the conference demo session.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125359438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Humans in (Digital) Space: Representing Humans in Virtual Environments","authors":"M. Lycett, Alex Reppel","doi":"10.1145/3531073.3531172","DOIUrl":"https://doi.org/10.1145/3531073.3531172","url":null,"abstract":"Technology continues to pervade social and organizational life (e.g., immersive, and artificial intelligence) and our environments become increasingly virtual. In this context we examine the challenges of creating believable virtual human experiences— photo-realistic digital imitations of ourselves that can act as proxies capable of navigating complex virtual environments while demonstrating autonomous behavior. We first develop a framework for discussion, then use that to explore the state-of-the-art in the context of human-like experience, autonomous behavior, and expansive environments. Last, we consider the key research challenges that emerge from review as a call to action.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128050449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"OCFER-Net: Recognizing Facial Expression in Online Learning System","authors":"Yi Huo, L. Zhang","doi":"10.1145/3531073.3534470","DOIUrl":"https://doi.org/10.1145/3531073.3534470","url":null,"abstract":"Recently, online learning is very popular, especially under the global epidemic of COVID-19. Besides knowledge distribution, emotion interaction is also very important. It can be obtained by employing Facial Expression Recognition (FER). Since the FER accuracy is substantial in assisting teachers to acquire the emotional situation, the project explores a series of FER methods and finds that few works engage in exploiting the orthogonality of convolutional matrix. Therefore, it enforces orthogonality on kernels by a regularizer, which extracts features with more diversity and expressiveness, and delivers OCFER-Net. Experiments are carried out on FER-2013, which is a challenging dataset. Results show superior performance over baselines by 1.087. The code of the research project is publicly available on https://github.com/YeeHoran/OCFERNet..","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120994519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Human-AI (H-AI) Collaboration On Design Tasks Using An Interactive Text/Voice Artificial Intelligence (AI) Agent","authors":"Joseph Makokha","doi":"10.1145/3531073.3534478","DOIUrl":"https://doi.org/10.1145/3531073.3534478","url":null,"abstract":"In this presentation, we demonstrate a way to develop a class of AI systems, the Disruptive Interjector (DI), which observe what a human is doing, then interject with suggestions that aid in idea generation or problem solving in a human-AI (H-AI) team; something that goes beyond current creativity support systems by replacing a human-human (H-H) team with a H-AI one. The proposed DI is distinct from tutors, chatbots, recommenders and other similar systems since they seek to diverge from a solution (rather than converge towards one) by encouraging consideration of other possibilities. We develop a conceptual design of the system, then present examples from deep Convolution Neural Networks[1,7] learning models. The first example shows results from a model that was trained on an open-source dataset (publicly available online) of a community technical support chat transcripts, while the second one was trained on a design-focused dataset obtained from transcripts of experts engaged in engineering design problem solving (unavailable publicly). Based on the results from these models, we propose the necessary improvements on models and training datasets that must be resolved in order to achieve usable and reliable collaborative text/voice systems that fall in this class of AI systems.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130335355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"End-user Development and Closed-Reading: an Initial Investigation","authors":"Sevda Abdollahinami, L. Ducceschi, M. Zancanaro","doi":"10.1145/3531073.3531128","DOIUrl":"https://doi.org/10.1145/3531073.3531128","url":null,"abstract":"In this work, we explore the idea of designing a tool to augment the practice of closed-reading a literary text by employing end-user programming practices. The ultimate goal is to help young humanities students learn and appreciate computational thinking skills. The proposed approach is aligned with other methods of applying computer science techniques to explore literary texts (as in digital humanities) but with original goals and means. An initial design concept has been realised as a probe to prompt the discussion among humanities students and teachers. This short paper discusses the design ideas and the feedback from interviews and focus groups involving 25 participants (10 teachers in different humanities fields and 15 university students in humanities as prospective teachers and scholars).","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128333641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring Extended Reality Multi-Robot Ground Control Stations","authors":"Bryson Lawton, F. Maurer","doi":"10.1145/3531073.3534469","DOIUrl":"https://doi.org/10.1145/3531073.3534469","url":null,"abstract":"This paper presents work-in-progress research exploring the use of extended reality headsets to overcome the intrinsic limitations of conventional, screen-based ground control stations. Specifically, we discuss an extended reality ground control station prototype developed to explore how the strengths of these immersive technologies can be leveraged to improve 3D information visualization, workspace scalability, natural interaction methods, and system mobility for multi-robot ground control stations.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131744102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supporting Secure Agile Development: the VIS-PRISE Tool","authors":"M. T. Baldassarre, Vita Santa Barletta, G. Dimauro, Domenico Gigante, A. Pagano, A. Piccinno","doi":"10.1145/3531073.3534494","DOIUrl":"https://doi.org/10.1145/3531073.3534494","url":null,"abstract":"Privacy by Design and Security by Design are two fundamental aspects in the current technological and regulatory context. Therefore, software development must integrate these aspects and consider software security on one hand, and user-centricity from the design phase on the other. It is necessary to support the team in all stages of the software lifecycle in integrating privacy and security requirements. Taking these aspects into account, the paper presents VIS-PRISE prototype, a visual tool for supporting the design team in the secure agile development.","PeriodicalId":412533,"journal":{"name":"Proceedings of the 2022 International Conference on Advanced Visual Interfaces","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131965356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}