Proceedings of the 2022 International Conference on Advanced Visual Interfaces: Latest Publications

CoPDA 2022 - Cultures of Participation in the Digital Age: AI for Humans or Humans for AI?
Proceedings of the 2022 International Conference on Advanced Visual Interfaces | Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3535262
B. R. Barricelli, G. Fischer, D. Fogli, A. Mørch, A. Piccinno, S. Valtolina
Abstract: The sixth edition of the CoPDA workshop is dedicated to discussing the current challenges and opportunities of Cultures of Participation with respect to Artificial Intelligence (AI) by contrasting it with the objectives pursued by Human-Centered Design (HCD). The workshop aims to establish a forum to explore our basic assumption (and to provide at least partial evidence for it) that the most successful AI systems today depend on teams of humans, just as humans depend on these systems to gain access to information, obtain insights, and perform tasks beyond their own capabilities.
Citations: 1
Exploring a Multi-Device Immersive Learning Environment
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3534485
T. Onorati, P. Díaz, Telmo Zarraonandia, I. Aedo
Abstract: Though virtual reality has been used to support learning for more than a decade, the technology is now mature and affordable enough, and students have the required digital fluency, for it to reach real educational settings. Immersive technologies have also demonstrated that they are not only engaging but can also reinforce learning and improve memory. This work presents a preliminary study on the advantages of using an immersive experience to help young students understand genetic-editing techniques. We relied upon the CHIC Immersive Bubble Chart, a Virtual Reality (VR) multi-device visualization of the most relevant topics in the domain. We tested the CHIC Immersive Bubble Chart by asking a group of 29 students to explore the information space with two different devices: a desktop and a VR headset. The results show that they preferred the VR headset, finding it more engaging and useful. In fact, during the evaluation the students kept exploring the space even after the assigned time slot.
Citations: 0
Video augmentation to support video-based learning
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3531179
Ilaria Torre, Ilenia Galluccio, M. Coccoli
Abstract: Multimedia content and video-based learning are expected to take a central role in the post-pandemic world, so providing new advanced interfaces and services that further exploit their potential is of paramount importance. A challenging area is the development of intelligent visual interfaces that integrate the knowledge extracted from multimedia materials into educational applications. In this respect, we designed a web-based video player that supports video consumption by exploiting knowledge extracted from the video: the concepts it explains and the prerequisite relations between them. This knowledge is used to augment the video lesson through visual feedback. Specifically, in this paper we investigate two types of visual feedback, an augmented transcript and a dynamic concept map (a map of the concepts' flow), for improving video comprehension in the first-watch learning context. Our preliminary findings suggest that both methods help the learner focus on the relevant concepts and their related contents. The augmented transcript has a higher impact on immediate comprehension than the map of the concepts' flow, even though the latter is expected to be more powerful in supporting other tasks, such as exploration and in-depth analysis of the concepts in the video.
Citations: 4
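The abstract above does not detail how the extracted knowledge is represented; as a purely illustrative sketch (concept names, timestamps, and function names below are hypothetical, not from the paper), prerequisite relations plus first-mention times can be modeled as a small directed graph that drives which concepts the player highlights at a given playback time:

```python
# Hypothetical sketch of a concept map with prerequisite relations,
# as might underlie transcript augmentation in a video player.
prerequisites = {              # concept -> concepts it depends on
    "gene": [],
    "DNA": ["gene"],
    "CRISPR": ["DNA", "gene"],
}

first_mention = {"gene": 12.0, "DNA": 40.5, "CRISPR": 95.0}  # seconds

def concepts_to_highlight(t):
    """Concepts already introduced by playback time t whose
    prerequisites have also all been introduced."""
    introduced = {c for c, ts in first_mention.items() if ts <= t}
    return sorted(c for c in introduced
                  if all(p in introduced for p in prerequisites[c]))

print(concepts_to_highlight(50.0))   # ['DNA', 'gene']
```

At 50 seconds only "gene" and "DNA" have been mentioned, so "CRISPR" is not yet shown; a dynamic concept map could render exactly this growing subgraph as the lesson proceeds.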
Implicit Interaction Approach for Car-related Tasks On Smartphone Applications - A Demo
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3534465
Alba Bisante, Venkata Srikanth Varma Datla, Stefano Zeppieri, Emanuele Panizzi
Abstract: Implicit interaction is a possible approach to improving the user experience of smartphone apps in car-related environments: it can enhance safety and avoid unnecessary and repetitive interactions on the user's part. This demo paper presents a smartphone app based on an implicit interaction approach that automatically detects when the user enters and exits their vehicle. We describe the app's interface and usage, and how we plan to demonstrate its performance during the conference demo session.
Citations: 2
Humans in (Digital) Space: Representing Humans in Virtual Environments
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3531172
M. Lycett, Alex Reppel
Abstract: Technology (e.g., immersive and artificial-intelligence technologies) continues to pervade social and organizational life, and our environments become increasingly virtual. In this context we examine the challenges of creating believable virtual human experiences: photo-realistic digital imitations of ourselves that can act as proxies capable of navigating complex virtual environments while demonstrating autonomous behavior. We first develop a framework for discussion, then use it to explore the state of the art in the context of human-like experience, autonomous behavior, and expansive environments. Last, we consider the key research challenges that emerge from the review as a call to action.
Citations: 2
OCFER-Net: Recognizing Facial Expression in Online Learning System
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3534470
Yi Huo, L. Zhang
Abstract: Online learning has recently become very popular, especially during the global COVID-19 pandemic. Besides distributing knowledge, emotional interaction is also very important, and it can be supported by Facial Expression Recognition (FER). Since FER accuracy is essential for helping teachers assess students' emotional state, this project surveys a series of FER methods and finds that few works exploit the orthogonality of the convolutional kernel matrix. It therefore enforces orthogonality on the kernels through a regularizer, which extracts features with more diversity and expressiveness, and delivers OCFER-Net. Experiments are carried out on FER-2013, a challenging dataset. Results show performance superior to the baselines by 1.087. The code of the research project is publicly available at https://github.com/YeeHoran/OCFERNet.
Citations: 1
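The abstract does not specify the exact form of the orthogonality regularizer; a common formulation, shown here only as a rough NumPy sketch of the general idea (the function name and shapes are illustrative, not taken from OCFER-Net), is a soft penalty on the Gram matrix of the flattened filters:

```python
import numpy as np

def orthogonality_penalty(kernel):
    """Soft orthogonality penalty ||W W^T - I||_F^2 for a conv kernel.

    kernel: array of shape (out_channels, in_channels, kh, kw).
    Each filter is flattened to one row of W; the penalty is zero
    exactly when the filter rows are orthonormal.
    """
    out_ch = kernel.shape[0]
    w = kernel.reshape(out_ch, -1)           # (out_channels, fan_in)
    gram = w @ w.T                           # pairwise filter inner products
    return np.sum((gram - np.eye(out_ch)) ** 2)

# An orthonormal filter bank incurs (near-)zero penalty,
# while random Gaussian filters are penalized heavily.
rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.standard_normal((27, 8)))  # 8 orthonormal columns
ortho_kernel = q.T.reshape(8, 3, 3, 3)             # 8 filters of shape 3x3x3
random_kernel = rng.standard_normal((8, 3, 3, 3))

print(orthogonality_penalty(ortho_kernel))    # ~0
print(orthogonality_penalty(random_kernel))   # much larger
```

In training, such a penalty would be scaled by a coefficient and added to the task loss, nudging filters toward mutual orthogonality and hence more diverse features.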
Enhancing Human-AI (H-AI) Collaboration On Design Tasks Using An Interactive Text/Voice Artificial Intelligence (AI) Agent
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3534478
Joseph Makokha
Abstract: In this presentation, we demonstrate a way to develop a class of AI systems, the Disruptive Interjector (DI), which observes what a human is doing and then interjects with suggestions that aid idea generation or problem solving in a human-AI (H-AI) team; this goes beyond current creativity-support systems by replacing a human-human (H-H) team with an H-AI one. The proposed DI is distinct from tutors, chatbots, recommenders, and other similar systems, since it seeks to diverge from a solution (rather than converge towards one) by encouraging consideration of other possibilities. We develop a conceptual design of the system, then present examples from deep Convolutional Neural Network [1,7] learning models. The first example shows results from a model trained on an open-source dataset (publicly available online) of community technical-support chat transcripts, while the second was trained on a design-focused dataset obtained from transcripts of experts engaged in engineering design problem solving (not publicly available). Based on the results from these models, we propose the improvements to models and training datasets that must be made in order to achieve usable and reliable collaborative text/voice systems in this class.
Citations: 0
End-user Development and Closed-Reading: an Initial Investigation
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3531128
Sevda Abdollahinami, L. Ducceschi, M. Zancanaro
Abstract: In this work, we explore the idea of designing a tool that augments the practice of closed-reading a literary text with end-user programming practices. The ultimate goal is to help young humanities students learn and appreciate computational-thinking skills. The proposed approach is aligned with other methods of applying computer-science techniques to the exploration of literary texts (as in the digital humanities), but with original goals and means. An initial design concept has been realised as a probe to prompt discussion among humanities students and teachers. This short paper discusses the design ideas and the feedback from interviews and focus groups involving 25 participants (10 teachers in different humanities fields and 15 university students in the humanities, as prospective teachers and scholars).
Citations: 0
Exploring Extended Reality Multi-Robot Ground Control Stations
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3534469
Bryson Lawton, F. Maurer
Abstract: This paper presents work-in-progress research exploring the use of extended-reality headsets to overcome the intrinsic limitations of conventional, screen-based ground control stations. Specifically, we discuss an extended-reality ground-control-station prototype developed to explore how the strengths of these immersive technologies can be leveraged to improve 3D information visualization, workspace scalability, natural interaction methods, and system mobility for multi-robot ground control stations.
Citations: 0
Supporting Secure Agile Development: the VIS-PRISE Tool
Pub Date: 2022-06-06 | DOI: 10.1145/3531073.3534494
M. T. Baldassarre, Vita Santa Barletta, G. Dimauro, Domenico Gigante, A. Pagano, A. Piccinno
Abstract: Privacy by Design and Security by Design are two fundamental principles in the current technological and regulatory context. Software development must therefore integrate both, considering software security on one hand and user-centricity from the design phase on the other. The team needs support at all stages of the software lifecycle in integrating privacy and security requirements. Taking these aspects into account, this paper presents the VIS-PRISE prototype, a visual tool that supports the design team in secure agile development.
Citations: 0