26th International Conference on Intelligent User Interfaces - Companion: Latest Publications

COVID19α: Interactive Spatio-Temporal Visualization of COVID-19 Symptoms through Tweet Analysis
26th International Conference on Intelligent User Interfaces - Companion Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450715
Biddut Sarker Bijoy, Syeda Jannatus Saba, Souvik Sarkar, Md. Saiful Islam, Sheikh Rabiul Islam, M. R. Amin, Shubhra (Santu) Karmaker
In this demo, we focus on analyzing COVID-19 related symptoms across the globe reported through tweets by building an interactive spatio-temporal visualization tool, i.e., COVID19α. Using around 462 million tweets collected over a span of six months, COVID19α provides three different types of visualization tools: 1) Spatial Visualization, with a focus on visualizing COVID-19 symptoms across different geographic locations; 2) Temporal Visualization, with a focus on visualizing the evolution of COVID-19 symptoms over time for a particular geographic location; and 3) Spatio-Temporal Visualization, with a focus on combining both spatial and temporal analysis to provide comparative visualizations between two (or more) symptoms across time and space. We believe that health professionals, scientists, and policymakers will be able to leverage this interactive tool to devise better and targeted health intervention policies. Our developed interactive visualization tool is publicly available at https://bijoy-sust.github.io/Covid19/.
Citations: 7
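The spatial and temporal views described above reduce to counting symptom mentions per location and time bucket. A minimal sketch of that aggregation is below; the toy tweet records and symptom list are purely illustrative (the real system processes roughly 462 million tweets), and all names are our own, not from the paper.

```python
from collections import Counter

# Hypothetical mini-corpus of (tweet_text, country, iso_week) records.
TWEETS = [
    ("lost my sense of smell today", "US", "2020-W20"),
    ("bad cough and fever all week", "US", "2020-W20"),
    ("fever just will not go away", "UK", "2020-W21"),
    ("dry cough again", "US", "2020-W21"),
]

SYMPTOMS = ("fever", "cough", "smell")

def spatio_temporal_counts(tweets, symptoms):
    """Count symptom mentions keyed by (location, week, symptom).

    A map view slices these counts by week; a timeline view slices
    them by location; a comparative view slices by symptom.
    """
    counts = Counter()
    for text, location, week in tweets:
        for symptom in symptoms:
            if symptom in text.lower():
                counts[(location, week, symptom)] += 1
    return counts

counts = spatio_temporal_counts(TWEETS, SYMPTOMS)
print(counts[("US", "2020-W20", "cough")])  # 1
```

Real tweet analysis would also need symptom-phrase normalization and geocoding, which this sketch omits.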
Back-end semantics for multimodal dialog on XR devices
26th International Conference on Intelligent User Interfaces - Companion Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450719
P. Poller, Margarita Chikobava, Jack Hodges, Mareike Kritzler, F. Michahelles, Tilman Becker
Extended Reality (XR) devices have great potential to become the next wave in mobile interaction. They provide powerful, easy-to-use Augmented Reality (AR) and/or Mixed Reality (MR) in conjunction with multimodal interaction facilities using gaze, gesture, and speech. However, current implementations typically lack a coherent semantic representation for the virtual elements, back-end communication, and dialog capabilities. Existing devices are often restricted to mere command and control interactions. To improve these shortcomings and realize enhanced system capabilities and comprehensive interactivity, we have developed a flexible modular approach that integrates powerful back-end platforms using standard API interfaces. As a concrete example, we present our distributed implementation of a multimodal dialog system on the Microsoft Hololens®. It uses the SiAM-dp multimodal dialog platform as a back-end service and an Open Semantic Framework (OSF) back-end server to extract the semantic models for creating the dialog domain model.
Citations: 2
Healthy Interfaces (HEALTHI) Workshop
26th International Conference on Intelligent User Interfaces - Companion Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450710
Michael Sobolev, Katrin Hänsel, Tanzeem Choudhury
The first workshop on Healthy Interfaces (HEALTHI), collocated with the 2021 ACM Intelligent User Interfaces (IUI) conference, offers a forum that brings academics and industry researchers together and seeks submissions broadly related to the design of healthy user interfaces. The workshop will discuss intelligent user interfaces such as screens, wearables, voice assistants, and chatbots in the context of supporting health, health behavior, and wellbeing.
Citations: 1
Fifth HUMANIZE Workshop on Transparency and Explainability in Adaptive Systems through User Modeling Grounded in Psychological Theory: Summary
26th International Conference on Intelligent User Interfaces - Companion Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450708
Mark P. Graus, B. Ferwerda, M. Tkalcic, Panagiotis Germanakos
The fifth HUMANIZE workshop on Transparency and Explainability in Adaptive Systems through User Modeling Grounded in Psychological Theory took place in conjunction with the 26th annual meeting of the Intelligent User Interfaces (IUI) community in Texas, USA, on April 17, 2021. The workshop provided a venue for researchers from different fields to interact by accepting contributions at the intersection of practical data mining methods and theoretical knowledge for personalization. A total of five papers were accepted for this edition of the workshop.
Citations: 1
SOcial and Cultural IntegrAtion with PersonaLIZEd Interfaces (SOCIALIZE)
26th International Conference on Intelligent User Interfaces - Companion Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450709
F. Agrusti, Fabio Gasparetti, Cristina Gena, G. Sansonetti, M. Tkalcic
This is the first edition of the SOcial and Cultural IntegrAtion with PersonaLIZEd Interfaces (SOCIALIZE) workshop. The main goal is to bring together all those interested in the development of interactive techniques that may contribute to fostering the social and cultural inclusion of a broad range of users. More specifically, we intend to attract research that takes into account the interaction peculiarities typical of different realities, with a focus on disadvantaged and at-risk categories (e.g., refugees and migrants) and vulnerable groups (e.g., children, the elderly, autistic and disabled people). Among others, we are also interested in human-robot interaction techniques aimed at the development of social robots, that is, autonomous robots that interact with people by engaging in social-affective behaviors, abilities, and rules related to their collaborative role.
Citations: 1
VisRec: A Hands-on Tutorial on Deep Learning for Visual Recommender Systems
26th International Conference on Intelligent User Interfaces - Companion Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450620
Denis Parra, Antonio Ossa-Guerra, Manuel Cartagena, Patricio Cerda-Mardini, Felipe del-Rio
This tutorial serves as an introduction to deep learning approaches for building visual recommendation systems. Deep learning models can be used as feature extractors, and they perform extremely well in visual recommender systems for creating representations of visual items. This tutorial covers the foundations of convolutional neural networks and then how to use them to build state-of-the-art personalized recommendation systems. The tutorial is designed as a hands-on experience, focused on providing both theoretical knowledge and practical experience on the topics of the course.
Citations: 2
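The tutorial's core idea, using deep models as feature extractors for visual items, can be sketched as a nearest-neighbor lookup over precomputed feature vectors. The tiny hand-written vectors below stand in for real CNN embeddings (in practice, e.g., the penultimate layer of a pretrained network); item names and function names are illustrative only.

```python
import math

# Toy item "embeddings"; a real visual recommender would extract these
# with a CNN feature extractor over item images.
ITEM_FEATURES = {
    "poster_a": [1.0, 0.0, 0.2],
    "poster_b": [0.9, 0.1, 0.3],
    "poster_c": [0.0, 1.0, 0.0],
}

def cosine(u, v):
    """Cosine similarity between two dense feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def recommend(query_item, features, k=1):
    """Rank all other items by visual similarity to the query item."""
    q = features[query_item]
    scored = [(cosine(q, v), name)
              for name, v in features.items() if name != query_item]
    scored.sort(reverse=True)
    return [name for _, name in scored[:k]]

print(recommend("poster_a", ITEM_FEATURES))  # ['poster_b']
```

Production systems replace the linear scan with an approximate nearest-neighbor index, but the ranking principle is the same.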
SynZ: Enhanced Synthetic Dataset for Training UI Element Detectors
26th International Conference on Intelligent User Interfaces - Companion Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450725
Vinoth Pandian Sermuga Pandian, Sarah Suleri, M. Jarke
User Interface (UI) prototyping is an iterative process where designers initially sketch UIs before transforming them into interactive digital designs. Recent research applies Deep Neural Networks (DNNs) to identify the constituent UI elements of these UI sketches and transform these sketches into front-end code. Training such DNN models requires a large-scale dataset of UI sketches, which is time-consuming and expensive to collect. Therefore, we earlier proposed Syn to generate UI sketches synthetically by random allocation of UI element sketches. However, these UI sketches are not statistically similar to real-life UI screens. To bridge this gap, in this paper, we introduce the SynZ dataset, which contains 175,377 synthetically generated UI sketches statistically similar to real-life UI screens. To generate SynZ, we analyzed, enhanced, and extracted annotations from the RICO dataset and used 17,979 hand-drawn UI element sketches from the UISketch dataset. Further, we fine-tuned a UI element detector with SynZ and observed that it doubles the mean Average Precision of UI element detection compared to the Syn dataset.
Citations: 4
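The mean Average Precision reported above rests on matching predicted UI element boxes to ground-truth boxes via intersection-over-union (IoU). A minimal IoU sketch, with the function name and threshold convention our own rather than from the paper:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; empty overlaps clamp to zero area.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A detection typically counts as a true positive when IoU with a
# same-class ground-truth box exceeds a threshold such as 0.5.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # about 0.333
```

Average Precision then integrates precision over recall as the detector's confidence threshold varies, and mAP averages that over element classes.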
LectYS: A System for Summarizing Lecture Videos on YouTube
26th International Conference on Intelligent User Interfaces - Companion Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450722
Taewon Yoo, Hyewon Jeong, Donghwan Lee, Hyunggu Jung
Students' use of online resources such as online classes and YouTube is increasing. Still, it remains challenging for students to easily find the right lecture video online at the right time. Multiple video search methods have been proposed, but to our knowledge, no previous study has proposed a system that summarizes YouTube lecture videos using subtitles. This demo proposes LectYS, a system for summarizing lecture videos on YouTube to support students searching for lecture video content. The key features of our proposed system are: (1) summarizing the lecture video using the video's subtitles, (2) providing access to specific parts of the video using the start times of subtitle segments, and (3) searching for videos by keyword. Using LectYS, students can find lecture videos on YouTube faster and more accurately.
Citations: 5
Akin: Generating UI Wireframes From UI Design Patterns Using Deep Learning
26th International Conference on Intelligent User Interfaces - Companion Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450727
Nishit Gajjar, Vinoth Pandian Sermuga Pandian, Sarah Suleri, M. Jarke
During the user interface (UI) design process, designers use UI design patterns for conceptualizing different UI wireframes for an application. This paper introduces Akin, a UI wireframe generator that allows designers to choose a UI design pattern and provides them with multiple UI wireframes for that pattern. Akin uses a fine-tuned Self-Attention Generative Adversarial Network trained with 500 UI wireframes of 5 Android UI design patterns. Upon evaluation, Akin's generative model achieves an Inception Score of 1.63 (SD=0.34) and a Fréchet Inception Distance of 297.19. We further conducted user studies with 15 UI/UX designers to evaluate the quality of Akin-generated UI wireframes. The results show that UI/UX designers considered wireframes generated by Akin to be as good as wireframes made by designers; moreover, designers identified Akin-generated wireframes as designer-made 50% of the time. This paper thus provides a baseline metric for further research in UI wireframe generation.
Citations: 5
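The Fréchet Inception Distance quoted above compares Gaussian statistics of real and generated image features. The sketch below shows the FID formula simplified to diagonal covariances, so the matrix square root becomes elementwise; the real metric uses full covariances of Inception-network activations, and the inputs here are illustrative.

```python
import math

def fid_diagonal(mu1, var1, mu2, var2):
    """Frechet distance between two Gaussians with diagonal covariance.

    FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2)); with diagonal
    covariances the trace term reduces to an elementwise expression.
    """
    mean_term = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    cov_term = sum(v1 + v2 - 2 * math.sqrt(v1 * v2)
                   for v1, v2 in zip(var1, var2))
    return mean_term + cov_term

# Identical statistics give a distance of zero; diverging means raise it.
print(fid_diagonal([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0]))  # 0.0
print(fid_diagonal([0.0, 0.0], [1.0, 1.0], [1.0, 1.0], [1.0, 1.0]))  # 2.0
```

Lower FID means the generated distribution is closer to the real one, which is why a score of 297.19 is offered as a baseline rather than a final result.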
ModelGenGUIs – High-level Interaction Design with Discourse Models for Automated GUI Generation
26th International Conference on Intelligent User Interfaces - Companion Pub Date : 2021-04-14 DOI: 10.1145/3397482.3450619
H. Kaindl
Since manual creation of user interfaces is hard and expensive, automated generation may become more and more important in the future. Instead of generating UIs from simple abstractions, transforming them from high-level models should be more attractive. In particular, we let an interaction designer model discourses in the sense of dialogues (supported by a tool), inspired by human-human communication. This tutorial informs about our approach, both its advantages and its challenges (e.g., in terms of usability of generated UIs). In particular, our unique approach to optimization for a given device (e.g., a smartphone), which applies Artificial Intelligence (AI) techniques, will be highlighted, as well as the techniques based on ontologies for automated GUI generation and customization. We also address low-vision accessibility of Web pages by combining automated design-time generation of Web pages with responsive design for improving accessibility.
Citations: 0