{"title":"Exploration Interface for Jointly Visualised Text and Graph Data","authors":"Tim Repke, Ralf Krestel","doi":"10.1145/3379336.3381470","DOIUrl":"https://doi.org/10.1145/3379336.3381470","url":null,"abstract":"Many large text collections exhibit graph structures, either inherent to the content itself or encoded in the metadata of the individual documents. Example graphs extracted from document collections are co-author networks, citation networks, or named-entity-cooccurrence networks. Furthermore, social networks can be extracted from email corpora, tweets, or social media. When it comes to visualising these large corpora, traditionally either the textual content or the network graph are used. We propose to incorporate both, text and graph, to not only visualise the semantic information encoded in the documents' content but also the relationships expressed by the inherent network structure in a two-dimensional landscape. We illustrate the effectiveness of our approach with an exploration interface for different real world datasets.","PeriodicalId":335081,"journal":{"name":"Proceedings of the 25th International Conference on Intelligent User Interfaces Companion","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121067635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Blu","authors":"Vinoth Pandian Sermuga Pandian, Sarah Suleri, Matthias Jarke","doi":"10.1145/3379336.3381497","DOIUrl":"https://doi.org/10.1145/3379336.3381497","url":null,"abstract":"UI designers look for inspirational examples from existing UI designs during the prototyping process. However, they have to reconstruct these example UI designs from scratch to edit content or apply styling. The existing solution attempts to make UI screens into editable vector graphics using image segmentation techniques. In this research, we aim to use deep learning and gestalt laws-based algorithms to convert UI screens to editable blueprints by identifying the constituent UI element categories, their location, dimension, text content, and layout hierarchy. In this paper, we present a proof-of-concept web application that uses the UI screens and annotations from the RICO dataset and generates an editable blueprint vector graphic, and a UI layout tree. With this research, we aim to support UX designers in reconstructing UI screens and communicating UI layout information to developers.","PeriodicalId":335081,"journal":{"name":"Proceedings of the 25th International Conference on Intelligent User Interfaces Companion","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117203298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SmartPHIL","authors":"S. Consoli, D. Recupero, Daniele Riboni","doi":"10.1145/3379336.3379353","DOIUrl":"https://doi.org/10.1145/3379336.3379353","url":null,"abstract":"1 ABOUT Given the increasing adoption of personal health services and devices, research on smart personal health interfaces is a hot topic for the communities of AI and human-computer interaction [3, 10, 12]. The availability of conversational interfaces in our environment may lead to a revolution in the home healthcare and health selfmanagement. The conventional means for getting people engaged for change in the health behaviour have been health education and counselling services which does not scale well for wide populations. The first wave of health solutions based on wearables and apps have not been shown to be sufficiently effective for behavior change and health self-management [8, 18]. Counseling is still known to be the most effective intervention to lifestyle diseases. The key element","PeriodicalId":335081,"journal":{"name":"Proceedings of the 25th International Conference on Intelligent User Interfaces Companion","volume":"357 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117078689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HAI-GEN 2020: Workshop on Human-AI Co-Creation with Generative Models","authors":"Werner Geyer, Lydia B. Chilton, Ranjitha Kumar, A. Kalai","doi":"10.1145/3379336.3379355","DOIUrl":"https://doi.org/10.1145/3379336.3379355","url":null,"abstract":"Recent advances in generative modeling will enable new kinds of user experiences around content creation, giving us \"creative superpowers\" and move us toward co-creation. This workshop brings together researchers and practitioners from both fields HCI and AI to explore and better understand both the opportunities and challenges of generative modelling from a Human-AI interaction perspective for the creation of both physical and digital artifacts.","PeriodicalId":335081,"journal":{"name":"Proceedings of the 25th International Conference on Intelligent User Interfaces Companion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129653457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"user2agent","authors":"Michal Shmueli-Scheuer, Ron Artstein, Y. Khazaeni, Hao Fang, Q. Liao","doi":"10.1145/3379336.3379356","DOIUrl":"https://doi.org/10.1145/3379336.3379356","url":null,"abstract":"Conversational agents are becoming increasingly popular. These systems present an extremely rich and challenging research space for addressing many aspects of user awareness and adaptation, such as user profiles, contexts, personalities, emotions, social dynamics, conversational styles, etc. Adaptive interfaces are of long-standing interest for the HCI community. Meanwhile, new machine learning approaches are introduced in the current generation of conversational agents, such as deep learning, reinforcement learning, and active learning. It is imperative to consider how various aspects of user-awareness should be handled by these new techniques. The goal of this workshop is to bring together researchers in HCI, user modeling, and the AI and NLP communities from both industry and academia, who are interested in advancing the state-of-the-art on the topic of user-aware conversational agents. Through a focused and open exchange of ideas and discussions, we will work to identify central research topics in user-aware conversational agents and develop a strong interdisciplinary foundation to address them.","PeriodicalId":335081,"journal":{"name":"Proceedings of the 25th International Conference on Intelligent User Interfaces Companion","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116773503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generating Natural Language Explanations for Group Recommendations in High Divergence Scenarios","authors":"Shabnam Najafian","doi":"10.1145/3379336.3381512","DOIUrl":"https://doi.org/10.1145/3379336.3381512","url":null,"abstract":"In some scenarios, like music or tourism, people often consume items in groups. However, reaching a consensus is difficult as different members of the group may have highly diverging tastes. To keep the rest of the group satisfied, an individual might need to be confronted occasionally with items they do not like. In this context, presenting an explanation of how the system came up with the recommended item(s), may make it easier for users to accept items they might not like for the benefit of the group. This paper presents our progress on proposing improved algorithms for recommending items (for both music and tourism) for a group to consume and an approach for generating natural language explanations. Our future directions include extending the current work by modeling different factors that we need to consider when we generate explanations for groups e.g. size of the group, group members' personality, demographics, and their relationship.","PeriodicalId":335081,"journal":{"name":"Proceedings of the 25th International Conference on Intelligent User Interfaces Companion","volume":"279 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116436643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Robust Speech Interfaces for the ISS","authors":"Hans-Christian Schmitz, F. Kurth, Kevin Wilkinghoff, Uwe Müllerschkowski, Christian Karrasch, Volker Schmid","doi":"10.1145/3379336.3381496","DOIUrl":"https://doi.org/10.1145/3379336.3381496","url":null,"abstract":"The International Space Station ISS is a scientific laboratory in which astronauts conduct a great variety of experiments on a tight schedule. In order to fulfill their tasks efficiently and correctly, astronauts need assistance, which (at least partially) can be provided by IT systems on board, among them robotic assistants like the Crew Interactive Mobile Companion CIMON. However, the creation of user interfaces for such systems is a challenge, because astronauts often have to interact hands-free or cannot direct their attention to a visual user interface. These challenges can be met by providing multimodal user interfaces that enable speech interaction, among other modalities. We describe the use context for speech interfaces on the ISS, specific requirements and possible solutions. Our concepts rely on previous work carried out in acoustically demanding environments.","PeriodicalId":335081,"journal":{"name":"Proceedings of the 25th International Conference on Intelligent User Interfaces Companion","volume":"387 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114348652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CogniKit","authors":"A. Costi, Marios Belk, C. Fidas, Argyris Constantinides, A. Pitsillides","doi":"10.1145/3379336.3381460","DOIUrl":"https://doi.org/10.1145/3379336.3381460","url":null,"abstract":"This paper presents CogniKit; an extensible tool for human cognitive modeling. It is based on the analysis, classification and visualization of eye tracking data such as gaze points, fixation count and duration, saccades, gaze transition and stationary entropy, heat maps, areas of interests, etc. These are further processed, analyzed and classified for detecting higher level human cognitive factors such as cognitive processing styles and abilities. CogniKit comprises of two main components: i) a software application that collects and processes low- and highlevel eye gaze data metrics in real-time; and ii) an extensible interactive workbench for storing, analyzing, classifying and visualizing the collected eye gaze data. We developed an example application to demonstrate the use of CogniKit within a practical scenario.","PeriodicalId":335081,"journal":{"name":"Proceedings of the 25th International Conference on Intelligent User Interfaces Companion","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114763741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Intelligent Interface for Automatic Grading of Sketched Free Body Diagrams","authors":"Matthew Runyon, Blake Williford, J. Linsey, T. Hammond","doi":"10.1145/3379336.3381471","DOIUrl":"https://doi.org/10.1145/3379336.3381471","url":null,"abstract":"Sketching free body diagrams is an important skill that students learn in introductory physics and engineering classes; however, university class sizes are growing and often have hundreds of students in a single class. This creates a grading challenge for instructors as there is simply not enough time nor resources to provide adequate feedback on every problem. We have developed an intelligent user interface called Mechanix to provide automated, real-time feedback on hand-drawn free body diagrams for students. The system is driven by novel sketch recognition algorithms developed for recognizing and comparing trusses, general shapes, and arrows in diagrams. We have also discovered trends in how the students utilize extra submissions for learning through deployment to five universities with 350 students completing homework on the system over the 2018 and 2019 school year. A study with 57 students showed the system allowed for homework scores similar to other homework mediums while requiring and automatically grading the free body diagrams in addition to answers.","PeriodicalId":335081,"journal":{"name":"Proceedings of the 25th International Conference on Intelligent User Interfaces Companion","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127272377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pan","authors":"Can Zhang, Yuexian Zou, Guang Chen, Lei Gan","doi":"10.1145/3343031.3350876","DOIUrl":"https://doi.org/10.1145/3343031.3350876","url":null,"abstract":"In the Amazon there are various resources with nutritional and physical-chemical characteristics that can be used as partial substitutes for wheat flour in the bakery industry, which would allow meeting the current high demand and enhancing the consumption of food from the Amazon. Therefore, this research aimed to evaluate the effect of the partial substitution of wheat flour for purple sachapapa flour (Dioscorea trifida L.) in the production of commercial bread using the direct method, as well as its physical-chemical and sensory characteristics. The research was descriptive, quantitative approach and experimental design. The population was of a finite type and, for the sample, an intentional non-probabilistic sampling was taken for convenience, obtaining 32 kg of purple sachapapa flour. Regarding the primary source of information, the results of the reading of the measuring equipment were taken into account, and as instruments for collecting secondary data, the technical documentary guide was used. For the statistical analysis, the nonparametric Friedman test was used, and the significant differences between the treatments were subjected to the Tukey test (p <0.05). It was found that, as the percentage of substitution increases, the chemical components remain in a similar percentage in the final product; while, regarding the sensory characteristics, it was observed that they present significant differences. Finally, it was concluded that the breads produced with substitute flour of purple sachapapa contribute significantly the content of proteins, carbohydrates, minerals and high levels of anthocyanins; However, the low acceptability due to the violet coloration that it presents is due to the presence of high levels of anthocyanins in the flour of sachapapa morada (Dioscorea trifida L.).","PeriodicalId":335081,"journal":{"name":"Proceedings of the 25th International Conference on Intelligent User Interfaces Companion","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116311506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}