Title: COVID19α: Interactive Spatio-Temporal Visualization of COVID-19 Symptoms through Tweet Analysis
Authors: Biddut Sarker Bijoy, Syeda Jannatus Saba, Souvik Sarkar, Md. Saiful Islam, Sheikh Rabiul Islam, M. R. Amin, Shubhra (Santu) Karmaker
DOI: https://doi.org/10.1145/3397482.3450715
Abstract: In this demo, we focus on analyzing COVID-19-related symptoms across the globe reported through tweets by building an interactive spatio-temporal visualization tool, COVID19α. Using around 462 million tweets collected over a span of six months, COVID19α provides three types of visualization: 1) Spatial Visualization, focused on visualizing COVID-19 symptoms across different geographic locations; 2) Temporal Visualization, focused on visualizing the evolution of COVID-19 symptoms over time for a particular geographic location; and 3) Spatio-Temporal Visualization, which combines spatial and temporal analysis to provide comparative visualizations of two (or more) symptoms across time and space. We believe that health professionals, scientists, and policymakers will be able to leverage this interactive tool to devise better and targeted health intervention policies. Our interactive visualization tool is publicly available at https://bijoy-sust.github.io/Covid19/.

Title: Back-end semantics for multimodal dialog on XR devices
Authors: P. Poller, Margarita Chikobava, Jack Hodges, Mareike Kritzler, F. Michahelles, Tilman Becker
DOI: https://doi.org/10.1145/3397482.3450719
Abstract: Extended Reality (XR) devices have great potential to become the next wave in mobile interaction. They provide powerful, easy-to-use Augmented Reality (AR) and/or Mixed Reality (MR) in conjunction with multimodal interaction facilities using gaze, gesture, and speech. However, current implementations typically lack a coherent semantic representation for the virtual elements, back-end communication, and dialog capabilities. Existing devices are often restricted to mere command-and-control interactions. To address these shortcomings and realize enhanced system capabilities and comprehensive interactivity, we have developed a flexible, modular approach that integrates powerful back-end platforms using standard API interfaces. As a concrete example, we present our distributed implementation of a multimodal dialog system on the Microsoft HoloLens®. It uses the SiAM-dp multimodal dialog platform as a back-end service and an Open Semantic Framework (OSF) back-end server to extract the semantic models for creating the dialog domain model.

{"title":"Healthy Interfaces (HEALTHI) Workshop","authors":"Michael Sobolev, Katrin Hänsel, Tanzeem Choudhury","doi":"10.1145/3397482.3450710","DOIUrl":"https://doi.org/10.1145/3397482.3450710","url":null,"abstract":"The first workshop on Healthy Interfaces (HEALTHI), collocated with the 2021 ACM Intelligent User Interfaces (IUI) conference, offers a forum that brings academics and industry researchers together and seeks submissions broadly related to the design of healthy user interfaces. The workshop will discuss intelligent user interfaces such as screens, wearables, voices assistants, and chatbots in the context of supporting health, health behavior, and wellbeing.","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130096290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Fifth HUMANIZE workshop on Transparency and Explainability in Adaptive Systems through User Modeling Grounded in Psychological Theory: Summary
Authors: Mark P. Graus, B. Ferwerda, M. Tkalcic, Panagiotis Germanakos
DOI: https://doi.org/10.1145/3397482.3450708
Abstract: The fifth HUMANIZE workshop on Transparency and Explainability in Adaptive Systems through User Modeling Grounded in Psychological Theory took place in conjunction with the 26th annual meeting of the Intelligent User Interfaces (IUI) community in Texas, USA, on April 17, 2021. The workshop provided a venue for researchers from different fields to interact by accepting contributions at the intersection of practical data mining methods and theoretical knowledge for personalization. A total of five papers were accepted for this edition of the workshop.

Title: SOcial and Cultural IntegrAtion with PersonaLIZEd Interfaces (SOCIALIZE)
Authors: F. Agrusti, Fabio Gasparetti, Cristina Gena, G. Sansonetti, M. Tkalcic
DOI: https://doi.org/10.1145/3397482.3450709
Abstract: This is the first edition of the SOcial and Cultural IntegrAtion with PersonaLIZEd Interfaces (SOCIALIZE) workshop. Its main goal is to bring together all those interested in the development of interactive techniques that may help foster the social and cultural inclusion of a broad range of users. More specifically, we intend to attract research that takes into account the interaction peculiarities typical of different realities, with a focus on disadvantaged and at-risk categories (e.g., refugees and migrants) and vulnerable groups (e.g., children, the elderly, and autistic and disabled people). Among others, we are also interested in human-robot interaction techniques aimed at the development of social robots, that is, autonomous robots that interact with people by exhibiting the social-affective behaviors, abilities, and rules associated with their collaborative role.

Title: VisRec: A Hands-on Tutorial on Deep Learning for Visual Recommender Systems
Authors: Denis Parra, Antonio Ossa-Guerra, Manuel Cartagena, Patricio Cerda-Mardini, Felipe del-Rio
DOI: https://doi.org/10.1145/3397482.3450620
Abstract: This tutorial serves as an introduction to deep learning approaches for building visual recommender systems. Deep learning models can be used as feature extractors and perform extremely well in visual recommender systems at creating representations of visual items. This tutorial covers the foundations of convolutional neural networks and then shows how to use them to build state-of-the-art personalized recommendation systems. The tutorial is designed as a hands-on experience, focused on providing both theoretical knowledge and practical experience on the topics of the course.

Title: SynZ: Enhanced Synthetic Dataset for Training UI Element Detectors
Authors: Vinoth Pandian Sermuga Pandian, Sarah Suleri, M. Jarke
DOI: https://doi.org/10.1145/3397482.3450725
Abstract: User Interface (UI) prototyping is an iterative process where designers initially sketch UIs before transforming them into interactive digital designs. Recent research applies Deep Neural Networks (DNNs) to identify the constituent UI elements of these UI sketches and transform these sketches into front-end code. Training such DNN models requires a large-scale dataset of UI sketches, which is time-consuming and expensive to collect. Therefore, we earlier proposed Syn to generate UI sketches synthetically by random allocation of UI element sketches. However, these UI sketches are not statistically similar to real-life UI screens. To bridge this gap, in this paper we introduce the SynZ dataset, which contains 175,377 synthetically generated UI sketches statistically similar to real-life UI screens. To generate SynZ, we analyzed, enhanced, and extracted annotations from the RICO dataset and used 17,979 hand-drawn UI element sketches from the UISketch dataset. Further, we fine-tuned a UI element detector with SynZ and observed that it doubles the mean Average Precision of UI element detection compared to the Syn dataset.

{"title":"LectYS: A System for Summarizing Lecture Videos on YouTube","authors":"Taewon Yoo, Hyewon Jeong, Donghwan Lee, Hyunggu Jung","doi":"10.1145/3397482.3450722","DOIUrl":"https://doi.org/10.1145/3397482.3450722","url":null,"abstract":"Students leverage online resources such as online classes and YouTube is increasing. Still, there remain challenges for students to easily find the right lecture video online at the right time. Multiple video search methods have been proposed, but to our knowledge, no previous study has proposed a system that summarize YouTube lecture videos using subtitles. This demo proposes LectYS, a system for summarizing lecture videos on YouTube to support students search for lecture video content on YouTube. The key features of our proposed system are: (1) to summarize the lecture video using the subtitle of the video, (2) to access to the specific parts of the video using the start time of video subtitle, and (3) to search for the video with keyword. Using LectYS, students are allowed to search for lecture videos on YouTube faster and more accurately.","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122181396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Akin: Generating UI Wireframes From UI Design Patterns Using Deep Learning
Authors: Nishit Gajjar, Vinoth Pandian Sermuga Pandian, Sarah Suleri, M. Jarke
DOI: https://doi.org/10.1145/3397482.3450727
Abstract: During the user interface (UI) design process, designers use UI design patterns to conceptualize different UI wireframes for an application. This paper introduces Akin, a UI wireframe generator that allows designers to choose a UI design pattern and provides them with multiple UI wireframes for that pattern. Akin uses a fine-tuned Self-Attention Generative Adversarial Network trained with 500 UI wireframes of 5 Android UI design patterns. Upon evaluation, Akin's generative model achieves an Inception Score of 1.63 (SD = 0.34) and a Fréchet Inception Distance of 297.19. We further conducted user studies with 15 UI/UX designers to evaluate the quality of Akin-generated UI wireframes. The results show that UI/UX designers considered wireframes generated by Akin to be as good as wireframes made by designers. Moreover, designers identified Akin-generated wireframes as designer-made 50% of the time. This paper also provides baseline metrics for further research in UI wireframe generation.

{"title":"ModelGenGUIs – High-level Interaction Design with Discourse Models for Automated GUI Generation","authors":"H. Kaindl","doi":"10.1145/3397482.3450619","DOIUrl":"https://doi.org/10.1145/3397482.3450619","url":null,"abstract":"Since manual creation of user interfaces is hard and expensive, automated generation may become more and more important in the future. Instead of generating UIs from simple abstractions, transforming them from high-level models should be more attractive. In particular, we let an interaction designer model discourses in the sense of dialogues (supported by a tool), inspired by human-human communication. This tutorial informs about our approach, both about its advantages and its challenges (e.g., in terms of usability of generated UIs). In particular, our unique approach to optimization for a given device (e.g., a Smartphone) that applies Artificial Intelligence (AI) techniques will be high-lighted, as well as the techniques based on ontologies for automated GUI generation and customization. We also address low-vision accessibility of Web-pages, by combining automated design-time generation of Web-pages with responsive design for improving accessibility.","PeriodicalId":216190,"journal":{"name":"26th International Conference on Intelligent User Interfaces - Companion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133269792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}