{"title":"Driver Readiness Model for Regulating the Transfer from Automation to Human Control","authors":"T. Mioch, L. Kroon, Mark Antonius Neerincx","doi":"10.1145/3025171.3025199","DOIUrl":"https://doi.org/10.1145/3025171.3025199","url":null,"abstract":"In the collaborative driving scenario of truck platooning, the first car is driven by its chauffeur and the next cars follow automatically via a so-called 'virtual tow-bar'. The chauffeurs of the following cars do not drive 'in the tow-bar mode', but need to be able to take back control in foreseen and unforeseen conditions. It is crucial that this transfer of control only takes place when the chauffeur is ready for it. This paper presents a Driver Readiness (DR) ontological model that specifies the core factors, with their relationships, of a chauffeur's current and near-future readiness for taking back the control of driving. A first model was derived from a literature study and an analysis of truck driving data, and was subsequently refined based on an expert review. This DR model distinguishes (a) current and required states for physical (hands, feet, head, and seating position) and mental readiness (attention and situation awareness), (b) agents (human and machine actors), (c) policies for agent behaviors, and (d) states of the vehicle and its environment. It provides the knowledge base of a Control Transfer Support (CTS) agent that assesses the current and predicted chauffeur state and guides the transition of control in an adaptive and personalized manner. The DR model will be fed by information from the network and in-car sensors. 
The behaviors of the CTS agent will be generated and constrained by the instantiated policies, providing an important step towards a safe transfer of control from automation to human driver.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125734659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Guidelines for Tree-based Collaborative Goal Setting","authors":"Rifca Peters, J. Broekens, Mark Antonius Neerincx","doi":"10.1145/3025171.3025188","DOIUrl":"https://doi.org/10.1145/3025171.3025188","url":null,"abstract":"Educational technology needs a model of learning goals to support motivation, learning gain, tailoring of the learning process, and sharing of personal goals between different types of users (i.e., learner and educator) and the system. This paper proposes a tree-based learning goal structure to facilitate personal goal setting to shape and monitor the learning process. We developed a goal ontology and created a user interface representing this knowledge base for self-management education for children with Type 1 Diabetes Mellitus. Subsequently, a co-operative evaluation was conducted with healthcare professionals to refine and validate the ontology and its representation. Presenting a concrete prototype proved effective in supporting professionals' contributions to the design process. The resulting tree-based goal structure enables three important tasks: ability assessment, goal setting, and progress monitoring. Visualization should be clarified by icon placement and by clustering goals with the same difficulty and topic. 
Bloom's taxonomy for learning objectives should be applied to improve completeness and clarity of goal content.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115533787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Topic-Relevance Map: Visualization for Improving Search Result Comprehension","authors":"J. Peltonen, Kseniia Belorustceva, Tuukka Ruotsalo","doi":"10.1145/3025171.3025223","DOIUrl":"https://doi.org/10.1145/3025171.3025223","url":null,"abstract":"We introduce the topic-relevance map, an interactive search result visualization that assists rapid information comprehension across a large ranked set of results. The topic-relevance map visualizes a topical overview of the search result space as keywords with respect to two essential information retrieval measures: relevance and topical similarity. Non-linear dimensionality reduction is used to embed high-dimensional keyword representations of search result data into angles on a radial layout. Relevance of keywords is estimated by a ranking method and visualized as radii on the radial layout. As a result, similar keywords are modeled by nearby points, dissimilar keywords are modeled by distant points, more relevant keywords are closer to the center of the radial display, and less relevant keywords are farther from it. We evaluated the effect of the topic-relevance map in a search result comprehension task in which 24 participants summarized search results and produced a conceptualization of the result space. 
The results show that the topic-relevance map significantly improves participants' comprehension compared to a conventional ranked-list presentation.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129853516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CarNote: Reducing Misunderstanding between Drivers by Digital Augmentation","authors":"Chao Wang, J. Terken, Jun Hu","doi":"10.1145/3025171.3025214","DOIUrl":"https://doi.org/10.1145/3025171.3025214","url":null,"abstract":"The road environment can be seen as a social situation: Drivers need to coordinate with each other to share the infrastructure. In addition to the driving behaviour itself, lights, horn, and speed are the most frequently used means to exchange information, limiting both the range and the bandwidth of the connectivity and leading to misunderstanding and conflict. With ubiquitous connectivity and the broad penetration of social network services, the relationship between drivers on the road may gain more transparency, enabling social information to pass through the steel shell of the cars and giving opportunities to reduce misunderstanding and strengthen empathy. In this study, we present \"CarNote\", a concept that aims to reduce misunderstanding and conflict between drivers by showing their emergency driving status to others. This concept was prototyped and evaluated with users in a driving simulator. The results showed that CarNote enhances drivers' empathy, increases forgiveness, and decreases anger toward others on the road.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128300206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cartograph: Unlocking Spatial Visualization Through Semantic Enhancement","authors":"Shilad Sen, Anja Beth Swoap, Qisheng Li, Brooke Boatman, I. Dippenaar, Rebecca Gold, Monica Ngo, Sarah Pujol, Bret Jackson, Brent J. Hecht","doi":"10.1145/3025171.3025233","DOIUrl":"https://doi.org/10.1145/3025171.3025233","url":null,"abstract":"This paper introduces Cartograph, a visualization system that harnesses the vast amount of world knowledge encoded within Wikipedia to create thematic maps of almost any data. Cartograph extends previous systems that visualize non-spatial data using geographic approaches. While these systems required data with an existing semantic structure, Cartograph unlocks spatial visualization for a much larger variety of datasets by enhancing input datasets with semantic information extracted from Wikipedia. Cartograph's map embeddings use neural networks trained on Wikipedia article content and user navigation behavior. Using these embeddings, the system can reveal connections between points that are unrelated in the original data sets, but are related in meaning and therefore embedded close together on the map. We describe the design of the system and key challenges we encountered, and we present findings from an exploratory user study.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130894148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session-Based Recommendations Using Item Embedding","authors":"Asnat Greenstein-Messica, L. Rokach, Michael Friedmann","doi":"10.1145/3025171.3025197","DOIUrl":"https://doi.org/10.1145/3025171.3025197","url":null,"abstract":"Recent methods for learning vector space representations of words (word embedding), such as GloVe and Word2Vec, have succeeded in capturing fine-grained semantic and syntactic regularities. We analyzed the effectiveness of these methods for e-commerce recommender systems by translating the sequence of items generated by a user's browsing journey on an e-commerce website into a sentence of words. We examined the prediction of fine-grained item similarity (such as the item most similar to an iPhone 6 64GB smartphone) and item analogy (such as iPhone 5 is to iPhone 6 as Samsung S5 is to Samsung S6) using real-life users' browsing histories from an online European department store. Our results reveal that such methods outperform related models such as singular value decomposition (SVD) with respect to item similarity and analogy tasks across different product categories. Furthermore, these methods produce a highly condensed item vector space representation, item embedding, with behavioral meaning sub-structure. These vectors can be used as features in a variety of recommender system applications. In particular, we used these vectors as features in neural-network-based models for anonymous user recommendation based on a session's first few clicks. 
It is found that a recurrent neural network that preserves the order of the user's clicks outperforms a standard neural network, item-to-item similarity, and SVD (recall@10 of 42% based on the first three clicks) for this task.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131403018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identifying Frequent User Tasks from Application Logs","authors":"Himel Dev, Zhicheng Liu","doi":"10.1145/3025171.3025184","DOIUrl":"https://doi.org/10.1145/3025171.3025184","url":null,"abstract":"In light of the continuous growth in log analytics, application logs remain a valuable source for understanding and analyzing patterns in user behavior. Today, almost every major software company employs analysts to reveal user insights from log data. To understand the tasks and challenges of the analysts, we conducted a background study with a group of analysts from a major software company. A fundamental analytics objective that we recognized through this study involves identifying frequent user tasks from application logs. More specifically, analysts are interested in identifying operation groups that represent meaningful tasks performed by many users inside applications. This is challenging, primarily because of the nature of modern application logs, which are long, noisy, and consist of events from a high-cardinality set. In this paper, we address these challenges by designing a novel frequent pattern ranking technique that extracts frequent user tasks from application logs. Our experimental study shows that our proposed technique significantly outperforms the state of the art on real-world data.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114312983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Negative Relevance Feedback for Exploratory Search with Visual Interactive Intent Modeling","authors":"J. Peltonen, Jonathan Strahl, P. Floréen","doi":"10.1145/3025171.3025222","DOIUrl":"https://doi.org/10.1145/3025171.3025222","url":null,"abstract":"In difficult information seeking tasks, the majority of top-ranked documents for an initial query may be non-relevant, and negative relevance feedback may then help find relevant documents. Traditional negative relevance feedback has been studied on document results; we introduce a system and interface for negative feedback in a novel exploratory search setting, where continuous-valued feedback is directly given to keyword features of an inferred probabilistic user intent model. The introduced system allows both positive and negative feedback directly on an interactive visual interface, by letting the user manipulate keywords on an optimized visualization of modeled user intent. Feedback on the interactive intent model lets the user direct the search: Relevance of keywords is estimated from feedback by Bayesian inference, influence of feedback is increased by a novel propagation step, documents are retrieved by likelihoods of relevant versus non-relevant intents, and the most relevant keywords (having the highest upper confidence bounds of relevance) and the most non-relevant ones (having the smallest lower confidence bounds of relevance) are shown as options for further feedback. 
We carry out task-based information seeking experiments with real users on difficult real tasks; we compare the system to the nearest state-of-the-art baseline, which allows positive feedback only, and show that negative feedback significantly improves the quality of retrieved information and user satisfaction for difficult tasks.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114322130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Web Screen Reading Automation Assistance Using Semantic Abstraction","authors":"V. Ashok, Yury Puzis, Y. Borodin, I. Ramakrishnan","doi":"10.1145/3025171.3025229","DOIUrl":"https://doi.org/10.1145/3025171.3025229","url":null,"abstract":"A screen reader's sequential press-and-listen interface makes for an unsatisfactory and often painful web-browsing experience for blind people. To help alleviate this situation, we introduce the Web Screen Reading Automation Assistant (SRAA) for automating users' screen-reading actions (e.g., finding the price of an item) on demand, thereby letting them focus on what they want to do rather than on how to get it done. The key idea is to elevate the interaction from operating on (syntactic) HTML elements, as is done now, to operating on web entities, which are semantically meaningful collections of related HTML elements (e.g., search results, menus, and widgets). SRAA realizes this idea of semantic abstraction by constructing a Web Entity Model (WEM), a collection of the web entities of the underlying webpage, using an extensive generic library of custom-designed descriptions of commonly occurring web entities across websites. The WEM brings blind users closer to how sighted people perceive and operate on web entities, and together with a natural-language user interface, SRAA relieves users from having to press numerous shortcuts to operate on low-level HTML elements - the principal source of tedium and frustration. This paper describes the design and implementation of SRAA. 
Evaluation with 18 blind subjects demonstrates its usability and effectiveness.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115539909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Future Interactive Intelligent Systems for Animals: Study and Recognition of Embodied Interactions","authors":"P. Pons, J. Martínez, A. Catalá","doi":"10.1145/3025171.3025175","DOIUrl":"https://doi.org/10.1145/3025171.3025175","url":null,"abstract":"User-centered design applied to non-human animals is proving to be a promising research line known as Animal-Computer Interaction (ACI), aimed at improving animals' wellbeing using technology. Within this research line, intelligent systems for animal entertainment could have remarkable benefits for animals' mental and physical wellbeing, while providing new ways of communication and amusement between humans and animals. In order to create user-centered interactive intelligent systems for animals, we first need to understand how animals spontaneously interact with technology, and develop suitable mechanisms to adapt to the animals' observed interactions and preferences. Therefore, this paper describes a pioneering study on cats' preferences and behaviors with different technological devices. It also presents the design and evaluation of a promising depth-based tracking system for the detection of cats' body parts and postures. The contributions of this work lay the foundations for a framework for the development of future intelligent systems for animal entertainment.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122866296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}