"Real-time Integrated Human Activity Recognition System based on Multimodal User Understanding"
Jun-Ho Choi, Kyungmin Kim, Taejin Park, J. Yun, Jong-Hwan Lee, Songkuk Kim, Hyunjung Shim, Jong-Seok Lee. Proceedings of the 25th International Conference on Intelligent User Interfaces Companion, 2020-03-13. https://doi.org/10.1145/3379336.3381482
Abstract: This paper presents our real-time human activity recognition system, which understands human behavior using multimodal sensor data at multiple levels. Our system consists of a multimodal data acquisition framework and a user understanding algorithm comprising user identification, activity recognition, and health monitoring components.
{"title":"Proceedings of the 25th International Conference on Intelligent User Interfaces Companion","authors":"","doi":"10.1145/3379336","DOIUrl":"https://doi.org/10.1145/3379336","url":null,"abstract":"","PeriodicalId":335081,"journal":{"name":"Proceedings of the 25th International Conference on Intelligent User Interfaces Companion","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114586056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Exploring Music Collections: An Interactive, Dimensionality Reduction Approach to Visualizing Songbanks"
Oscar Gomez, K. Ganguli, Leonid Kuzmenko, C. Guedes. Proceedings of the 25th International Conference on Intelligent User Interfaces Companion, 2020-03-13. https://doi.org/10.1145/3379336.3381461
Abstract: This overview paper presents an interactive exploration interface for music collections. The interface is meant to help users explore the cross-cultural similarities, interactions, and patterns of music excerpts from different regions, and to understand those similarities through computational audio analysis, machine learning, and visualization techniques. In our computational analysis, we used standard audio features that capture timbre information and projected them onto a lower-dimensional space to visualize (dis)similarity. Two collections of non-Eurogenetic music are under study. The 2-D and 3-D mappings are visualized through a dashboard application and are also rendered in a Virtual Reality space where users can interact and explore to gain meaningful insights into the structural (dis)similarities of the music collections.
{"title":"Kart-ON: Affordable Early Programming Education with Shared Smartphones and Easy-to-Find Materials","authors":"Alpay Sabuncuoglu, T. M. Sezgin","doi":"10.1145/3379336.3381472","DOIUrl":"https://doi.org/10.1145/3379336.3381472","url":null,"abstract":"Programming education has become an integral part of the primary school curriculum. However, most programming practices rely heavily on computers and electronics which causes inequalities across contexts with different socioeconomic levels. This demo introduces a new and convenient way of using tangibles for coding in classrooms. Our programming environment, Kart-ON, is designed as an affordable means to increase collaboration among students and decrease dependency on screen-based interfaces. Kart-ON is a tangible programming language that uses everyday objects such as paper, pen, fabrics as programming objects and employs a mobile phone as the compiler. Our preliminary studies with children (n=16, mage=12) show that Kart-ON boosts active and collaborative student participation in the tangible programming task, which is especially valuable in crowded classrooms with limited access to computational devices.","PeriodicalId":335081,"journal":{"name":"Proceedings of the 25th International Conference on Intelligent User Interfaces Companion","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116938526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Someone really wanted that song but it was not me!: Evaluating Which Information to Disclose in Explanations for Group Recommendations","authors":"Shabnam Najafian, O. Inel, N. Tintarev","doi":"10.1145/3379336.3381489","DOIUrl":"https://doi.org/10.1145/3379336.3381489","url":null,"abstract":"Explanations can be used to supply transparency in recommender systems (RSs). However, when presenting a shared explanation to a group, we need to balance users' need for privacy with their need for transparency. This is particularly challenging when group members have highly diverging tastes and individuals are confronted with items they do not like, for the benefit of the group. This paper investigates which information people would like to disclose in explanations for group recommendations in the music domain.","PeriodicalId":335081,"journal":{"name":"Proceedings of the 25th International Conference on Intelligent User Interfaces Companion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131039803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Using Augmented Reality and Ontologies to Co-design Assistive Technologies in Smart Homes"
Corentin Haidon, Hubert Kenfack Ngankam, S. Giroux, H. Pigot. Proceedings of the 25th International Conference on Intelligent User Interfaces Companion, 2020-03-13. https://doi.org/10.1145/3379336.3381492
Abstract: Smart homes provide alternative means of fostering autonomy for frail people living at home. Oral and visual cues are produced to help people carry out activities. This requires determining which sensors and effectors to choose for monitoring activities, which is not trivial. A do-it-yourself approach is proposed for caregivers who know the frail person's habits but need a user-friendly interaction. Augmented reality and ontologies are used to address many of the smart home design issues via a virtual advisor. The augmented reality interface is linked to an OWL ontology that describes space, sensors and effectors, activities of daily living, monitoring, and assistance. First, a semantic 3D model of the person's house is constructed. Second, still in augmented reality, a hierarchical model of the assistance and monitoring scenario is specified. A virtual advisor proposes actions, scenarios, and corrections of design inconsistencies.
"The SportSense User Interface for Holistic Tactical Performance Analysis in Football"
Philipp Seidenschwarz, Adalsteinn Jonsson, Michael Plüss, M. Rumo, L. Probst, H. Schuldt. Proceedings of the 25th International Conference on Intelligent User Interfaces Companion, 2020-03-13. https://doi.org/10.1145/3379336.3381473
Abstract: In today's team sports, effective and user-friendly support for analysts and coaches in analyzing their team's tactics is essential. In this paper, we present an extended version of SportSense, a tool for searching sports video by means of sketches, for creating and visualizing statistics on individual players and the entire team, and for visualizing the players' off-ball movement. SportSense has been developed in close collaboration with football coaches.
"Workshop on Adapted intEraction with SociAl Robots (cAESAR)"
Berardina Nadja De Carolis, Cristina Gena, Antonio Lieto, Silvia Rossi, A. Sciutti. Proceedings of the 25th International Conference on Intelligent User Interfaces Companion, 2020-03-13. https://doi.org/10.1145/3379336.3379360
Abstract: Human Robot Interaction (HRI) is a field of study dedicated to understanding, designing, and evaluating robotic systems for use by, or with, humans. In HRI there is a consensus that robotic systems should be able to adapt their behaviour on the basis of user actions and behaviour. The robot should adapt to the user's emotions and personality, and it should also retain a memory of past interactions with the user in order to be believable. This is of particular importance in the field of social robotics and social HRI. The aim of this workshop is to bring together researchers and practitioners who are working on various aspects of social robotics and adaptive interaction.
{"title":"Reflecting the automated vehicle's perception and intention: Light-based interaction approaches for on-board HMI in highly automated vehicles","authors":"Marc Wilbrink, Anna Schieben, M. Oehl","doi":"10.1145/3379336.3381502","DOIUrl":"https://doi.org/10.1145/3379336.3381502","url":null,"abstract":"The number of automated driving functionalities in conventional vehicles is rising year by year. Intensive research regarding highly automated vehicles (AV) is performed by all big OEMs. AVs need advanced sensors and intelligence to detect relevant objects in driving situations and to perform driving tasks safely. Due to the shift of control, the role of the driver changes to an on-board user without any driving related tasks. However, the interaction between the AV and its on-board user stays vital in terms of creating a common understanding of the current situation and establishing a shared representation of the upcoming manoeuvre to ensure user acceptance and trust in automation. The current paper investigates two different light-based HMI approaches for AV / on-board user interaction. In a VR-Study 33 participants experienced an automated left turn in an urban scenario in highly automated driving. While turning, the AV had to consider other road users (pedestrian or another vehicle). The two HMI approaches (intention- vs. perception-based) were compared to a baseline using a within-subject design. 
Results reveal that using perception- or intention-based interaction design lead to higher user trust and usability in both scenarios.","PeriodicalId":335081,"journal":{"name":"Proceedings of the 25th International Conference on Intelligent User Interfaces Companion","volume":"17 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120894900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"XAIT","authors":"Erick Oduor, Kun Qian, Yunyao Li, Lucian Popa","doi":"10.1145/3379336.3381468","DOIUrl":"https://doi.org/10.1145/3379336.3381468","url":null,"abstract":"Explainable AI (XAI) for text is an emerging field focused on developing novel techniques to render black-box models more interpretable for text-related tasks. To understand the recent advances in XAI for text, we have done an extensive literature review and user studies. Allowing users to easily explore the assets we created is a major challenge. In this demo we present an interactive website named XAIT. The core of XAIT is a tree-like taxonomy, with which the users can interactively explore and understand the field of XAI for text through different dimensions: (1) the type of text tasks in consideration; (2) the explanation techniques used for a particular task; (3) who are the target and appropriate users for a particular explanation technique. XAIT can be used as a recommender system for users to find out what are the appropriate and suitable explanation techniques for their text-related tasks, or for researchers who want to find out publications and tools relating to XAI for text.","PeriodicalId":335081,"journal":{"name":"Proceedings of the 25th International Conference on Intelligent User Interfaces Companion","volume":"189 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122368310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}