Proceedings of the International Conference on Advanced Visual Interfaces: Latest Publications

A Question-Oriented Visualization Recommendation Approach for Data Exploration
Proceedings of the International Conference on Advanced Visual Interfaces. Pub Date: 2020-09-28. DOI: 10.1145/3399715.3399849
R. A. D. Lima, Simone Diniz Junqueira Barbosa
Abstract: The increasingly rapid growth of data production, and the consequent need to explore data to answer the most varied questions, have promoted the development of tools that facilitate the manipulation and construction of data visualizations. However, building useful data visualizations is not a trivial task: it may involve a large number of subtle decisions by experienced designers. In this paper, we present an approach that uses a set of heuristics to recommend data visualizations associated with questions, in order to make the recommendations easier to understand and to assist the visual exploration process. Our approach was implemented and evaluated through the VisMaker tool. We carried out two studies comparing VisMaker with Voyager 2 and analyzed some aspects of the recommendation approaches through the participants' feedback. As a result, we found some advantages of our approach and gathered comments to help improve the development of visualization recommender tools.
Citations: 2
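The abstract above describes heuristics that map question types to recommended charts, but does not publish the rule set. The sketch below is purely illustrative of how such a question-to-chart heuristic table could look; the question types, field types, and chart choices are all hypothetical, not VisMaker's actual rules.

```python
# Hypothetical sketch of a question-oriented chart recommender.
# None of these rules are taken from VisMaker; they only illustrate the idea
# of pairing a question template with the data types of the fields it mentions.

def recommend_chart(question_type, field_types):
    """Return a plausible chart type for a (question type, field types) pair."""
    if question_type == "how_many" and field_types == ["categorical"]:
        return "bar"
    if question_type == "trend" and field_types == ["temporal", "quantitative"]:
        return "line"
    if question_type == "relationship" and field_types == ["quantitative", "quantitative"]:
        return "scatter"
    if question_type == "distribution" and field_types == ["quantitative"]:
        return "histogram"
    return "table"  # fallback when no heuristic matches


print(recommend_chart("trend", ["temporal", "quantitative"]))  # line
```

Presenting each recommendation alongside the question it answers, as the paper proposes, makes the mapping above self-explaining to the end user.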
ARCA. Semantic exploration of a bookstore
Proceedings of the International Conference on Advanced Visual Interfaces. Pub Date: 2020-09-28. DOI: 10.1145/3399715.3399939
Eleonora Bernasconi, Miguel Ceriani, Massimo Mecella, T. Catarci, M. C. Capanna, Clara di Fazio, R. Marcucci, Erik Pender, Fabio Maria Petriccione
Abstract: In this demo paper, we present ARCA, a visual-search-based system that allows the semantic exploration of a bookstore. Navigating a domain-specific knowledge graph, students and researchers alike can start from any specific concept and reach any other related concept, discovering associated books and information. To achieve this paradigm of interaction, we built a prototype system, flexible and adaptable to multiple contexts of use, that extracts semantic information from the contents of a book corpus, building a dedicated knowledge graph that is linked to external knowledge bases. The web-based user interface of ARCA integrates text-based search, visual knowledge graph navigation, and linear visualization of filtered books (ordered according to multiple criteria) in a comprehensive coordinated view, aimed at exploiting the underlying data while avoiding information overload and unnecessary clutter. A proof-of-concept of ARCA is available online at http://arca.diag.uniroma1.it
Citations: 7
TuVe: A Shape-changeable Display using Fluids in a Tube
Proceedings of the International Conference on Advanced Visual Interfaces. Pub Date: 2020-09-28. DOI: 10.1145/3399715.3399874
Saya Suzunaga, Yuichi Itoh, Yuki Inoue, Kazuyuki Fujita, T. Onoye
Abstract: We propose TuVe, a novel shape-changing display consisting of a flexible tube and fluids, in which the droplets flowing through the tube form the display medium that represents information. In this system, each colored droplet is moved by controlling valves and a pump connected to the tube. The display part employs a flexible tube that can be shaped into any structure (e.g., wrapped around a specific object), which is achieved by a calibration step that captures the tube structure using camera-based image processing. A performance evaluation reveals that our prototype succeeds in controlling each droplet with a positional error of 2 mm or less, which is small enough to show simple characters, such as letters of the alphabet, on a 7 × 7-pixel display. We also discuss example applications, such as large public displays and flow-direction visualization, that illustrate the characteristics of the TuVe display.
Citations: 6
Designing a Self-help Mobile App to Cope with Avoidance Behavior in Panic Disorder
Proceedings of the International Conference on Advanced Visual Interfaces. Pub Date: 2020-09-28. DOI: 10.1145/3399715.3399816
M. Paratore, Maria Claudia Buzzi, M. Buzzi
Abstract: Panic disorder (PD) is an anxiety disorder that in recent years has spread worldwide. PD is diagnosed when a person has recurring panic attacks, characterized by physical symptoms and disturbing thoughts and feelings that arise rapidly, reach their peak in a few minutes, and soon disappear. Panic attacks, despite being harmless and relatively short, are highly distressing and deeply affect the lives of patients, who very often develop agoraphobia, an anxiety disorder that leads to systematic avoidance of places where previous attacks have occurred. PD is often a chronic condition that does not respond well to pharmacological treatment. However, psychotherapeutic approaches such as mindfulness have proved to be quite effective, and their delivery through self-care eHealth tools has been encouraged by the World Health Organization. In this paper, we present a self-help mobile app designed by and for patients affected by PD with mild agoraphobia. The app is aimed at helping users cope with avoidance behavior. Thanks to geolocation, the app automatically detects the proximity of a "critical place" (i.e., where a previous attack has occurred) and suggests mindfulness strategies for coping with stress, in order to prevent anxiety escalation and panic. This paper describes the therapeutic background of the proposed application, as well as the mHealth best practices we strove to adopt in the design phase. Preliminary trials conducted with one patient are encouraging; nonetheless, we point out the need for further and more extensive tests to fully assess the effectiveness of our approach.
Citations: 0
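The abstract above hinges on detecting proximity to a "critical place"; the paper does not publish its detection code. A minimal sketch of how such a geofencing check is commonly done uses the standard haversine great-circle distance; the 100 m trigger radius, function names, and sample coordinates below are illustrative assumptions, not the app's actual values.

```python
import math

# Illustrative geofencing sketch (not the paper's implementation).
EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def near_critical_place(user, critical_places, radius_m=100):
    """True if the user is within radius_m of any previously recorded place."""
    return any(haversine_m(*user, *p) <= radius_m for p in critical_places)


places = [(41.9028, 12.4964)]  # a hypothetical recorded attack location
print(near_critical_place((41.9030, 12.4966), places))  # True: a few tens of metres away
```

In a real app, such a check would run on location updates from the platform's geolocation API, with the radius tuned clinically rather than fixed at 100 m.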
Space-free Gesture Interaction with Humanoid Robot
Proceedings of the International Conference on Advanced Visual Interfaces. Pub Date: 2020-09-28. DOI: 10.1145/3399715.3399949
S. Humayoun, M. Faizan, Zuhair Zafar, K. Berns
Abstract: In general, humanoid robots mostly use fixed devices (e.g., cameras or sensors) to detect human non-verbal communication, which have limitations in many real-life scenarios. Wearable devices could play an important role in such scenarios. To address this, we propose using the Myo armband for human-robot interaction through hand- and arm-based gestures. We present our end-to-end Spagti framework, which is used first to train user gestures with the Myo armband and then to interact with a humanoid robot, called ROBIN, in real time using space-free gestures.
Citations: 2
Evaluating User Preferences for Augmented Reality Interactions with the Internet of Things
Proceedings of the International Conference on Advanced Visual Interfaces. Pub Date: 2020-09-28. DOI: 10.1145/3399715.3399716
Shreya Chopra, F. Maurer
Abstract: We investigate user preferences for controlling IoT devices with headset-based Augmented Reality (AR), comparing gestural control and voice control. An elicitation study was performed with 16 participants to gather their preferred voice commands and gestures for a set of referents. We analyzed 784 inputs (392 gestures and 392 voice commands), as well as observations and interviews, to develop an empirical basis for design recommendations that form a guideline for future designers and implementers of voice commands and gestures for interacting with the IoT via headset-based AR.
Citations: 5
Externalizing Mental Images by Harnessing Size-Describing Gestures: Design Implications for a Visualization System
Proceedings of the International Conference on Advanced Visual Interfaces. Pub Date: 2020-09-28. DOI: 10.1145/3399715.3399920
S. A. Brown, Sharon Lynn Chu Yew Yee, Neha Rani
Abstract: People use a significant amount of gestures when engaging in creative brainstorming. This is especially typical of creative workers, who frequently convey ideas, designs, and stories to team members. The gestures produced during natural conversation contain information that is not necessarily conveyed through speech. This paper investigates the design of a system that uses people's gestures in natural communication contexts to produce external visualizations of their mental imagery, focusing on gestures that describe dimension-related information. While much psycholinguistics research addresses how gestures relate to the representations of concepts, little HCI work has explored the possibilities of harnessing gestures to support thinking. We conducted a study to explore how people gesture using a basic gesture-based visualization system in simulated creative gift design scenarios, towards the goal of deriving design implications. Both quantitative and qualitative data were collected from the study, allowing us to ascertain which features (e.g., users' spatial frames of reference and listener types) of a gesture-based visualization system need to be accounted for in design. Results showed that our system managed to visualize users' envisioned gift dimensions, but that visualized object area significantly affected users' perceived accuracy of the system. We extract themes as to which dimensions are important in the design of a gesture-based visualization system, and the possible uses of such a system from the participants' perspectives. We discuss implications for the design of gesture-based visualization systems to support creative work, and possibilities for future directions of research.
Citations: 0
Examining the Presentation of Information in Augmented Reality Headsets for Situational Awareness
Proceedings of the International Conference on Advanced Visual Interfaces. Pub Date: 2020-09-28. DOI: 10.1145/3399715.3399846
Julia Woodward, Jesse Smith, Isaac Wang, S. Cuenca, J. Ruiz
Abstract: Augmented Reality (AR) headsets are being employed in industrial settings (e.g., the oil industry); however, there has been little work on how information should be presented in these headsets, especially in the context of situational awareness. We present a study examining three different presentation styles (Display, Environment, Mixed Environment) for textual secondary information in AR headsets. We found that the Display and Environment presentation styles assisted in perception and comprehension. Our work contributes a first step towards understanding how to design visual information in AR headsets to support situational awareness.
Citations: 2
A Framework for Biometric Recognition in Online Content Delivery Platforms
Proceedings of the International Conference on Advanced Visual Interfaces. Pub Date: 2020-09-28. DOI: 10.1145/3399715.3399969
M. Marras, G. Fenu
Abstract: In this paper, we introduce a modular framework that aims to empower online platforms with biometric-related capabilities while minimizing the user's interaction cost. First, we describe the core concepts and architectural aspects characterizing the proposed framework. Then, as a use case, we integrate it into an e-learning platform to provide biometric recognition at login time and continuous identity verification in well-defined areas of the platform.
Citations: 0
Neural Data-Driven Captioning of Time-Series Line Charts
Proceedings of the International Conference on Advanced Visual Interfaces. Pub Date: 2020-09-28. DOI: 10.1145/3399715.3399829
Andrea Spreafico, G. Carenini
Abstract: The success of neural methods for image captioning suggests that similar benefits can be reaped for generating captions for information visualizations. In this preliminary study, we focus on the very popular line charts. We propose a neural model that aims to generate text from the same data used to create a line chart. Due to the lack of suitable training corpora, we collected a dataset through crowdsourcing. Experiments indicate that our model outperforms relatively simple non-neural baselines.
Citations: 16