{"title":"On user behaviour adaptation under interface change","authors":"Benjamin Rosman, S. Ramamoorthy, M. H. Mahmud, Pushmeet Kohli","doi":"10.1145/2557500.2557535","DOIUrl":"https://doi.org/10.1145/2557500.2557535","url":null,"abstract":"Different interfaces allow a user to achieve the same end goal through different action sequences, e.g., command lines vs. drop down menus. Interface efficiency can be described in terms of a cost incurred, e.g., time taken, by the user in typical tasks. Realistic users arrive at evaluations of efficiency, hence making choices about which interface to use, over time, based on trial and error experience. Their choices are also determined by prior experience, which determines how much learning time is required. These factors have a substantial effect on the adoption of new interfaces. In this paper, we aim to understand how users adapt under interface change, how much time it takes them to learn to interact optimally with an interface, and how this learning could be expedited through intermediate interfaces. We present results from a series of experiments that make four main points: (a) different interfaces for accomplishing the same task can elicit significant variability in performance, (b) switching interfaces can result in sharp adverse shifts in performance, (c) subject to some variability, there are individual thresholds on tolerance to this kind of performance degradation with an interface, causing users to potentially abandon what may be a pretty good interface, and (d) our main result -- shaping user learning through the presentation of intermediate interfaces can mitigate the adverse shifts in performance while still enabling the eventual improved performance with the complex interface once the user has become suitably accustomed to it. In our experiments, human users use keyboard based interfaces to navigate a simulated ball through a maze. Our results are a first step towards interface adaptation algorithms that architect choice to accommodate personality traits of realistic users.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"138 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116400059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Users and motion","authors":"Daniel Sonntag","doi":"10.1145/3260904","DOIUrl":"https://doi.org/10.1145/3260904","url":null,"abstract":"","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123678456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Triangle charades: a data-collection game for recognizing actions in motion trajectories","authors":"Melissa Roemmele, Haley Archer-McClellan, A. Gordon","doi":"10.1145/2557500.2557510","DOIUrl":"https://doi.org/10.1145/2557500.2557510","url":null,"abstract":"Humans have a remarkable tendency to anthropomorphize moving objects, ascribing to them intentions and emotions as if they were human. Early social psychology research demonstrated that animated film clips depicting the movements of simple geometric shapes could elicit rich interpretations of intentional behavior from viewers. In attempting to model this reasoning process in software, we first address the problem of automatically recognizing humanlike actions in the trajectories of moving shapes. There are two main difficulties. First, there is no defined vocabulary of actions that are recognizable to people from motion trajectories. Second, in order for an automated system to learn actions from motion trajectories using machine-learning techniques, a vast number of these action-trajectory pairs is needed as training data. This paper describes an approach to data collection that resolves both of these problems. In a web-based game, called Triangle Charades, players create motion trajectories for actions by animating a triangle to depict those actions. Other players view these animations and guess the action they depict. An action is considered recognizable if players can correctly guess it from animations. To move towards defining a controlled vocabulary and collecting a large dataset, we conducted a pilot study in which 87 users played Triangle Charades. Based on this data, we computed a simple metric for action recognizability. Scores on this metric formed a gradual linear pattern, suggesting there is no clear cutoff for determining if an action is recognizable from motion data. These initial results demonstrate the advantages of using a game to collect data for this action recognition task.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"11 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126067267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving government services with social media feedback","authors":"Stephen Wan, Cécile Paris","doi":"10.1145/2557500.2557513","DOIUrl":"https://doi.org/10.1145/2557500.2557513","url":null,"abstract":"Social media is an invaluable source of feedback not just about consumer products and services but also about the effectiveness of government services. Our aim is to help analysts identify how government services can be improved based on citizen-contributed feedback found in publicly available social media. We present ongoing research for a social media monitoring interactive prototype with federated search and text analysis functionality. The prototype, developed to fit the workflow of social media monitors in the government sector, collects, analyses, and provides overviews of social media content. It facilitates relevance judgements on specific social media posts to decide whether or not to engage online. Our user log analysis validates the original design requirements and indicates ongoing utility to our federated search approach.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126468252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Overt or subtle? Supporting group conversations with automatically targeted directives","authors":"G. Schiavo, A. Cappelletti, Eleonora Mencarini, O. Stock, M. Zancanaro","doi":"10.1145/2557500.2557507","DOIUrl":"https://doi.org/10.1145/2557500.2557507","url":null,"abstract":"In this paper, we present a system that acts as an automatic facilitator by supporting the flow of communication in a group conversation activity. The system monitors the group members' non-verbal behavior and promotes balanced participation, giving targeted directives to the participants through peripheral displays. We describe an initial study to compare two ways of influencing participants' social dynamics: overt directives, explicit recommendations of social actions displayed in the form of text; or subtle directives, where the same recommendations are provided in an implicit manner. Our study indicates that, when the participants understand how the implicit messages work, the subtle facilitation is regarded as more useful than the overt one and is considered to influence the group behavior more positively.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134108490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User identification using raw sensor data from typing on interactive displays","authors":"Philipp Mock, Jörg Edelmann, A. Schilling, W. Rosenstiel","doi":"10.1145/2557500.2557503","DOIUrl":"https://doi.org/10.1145/2557500.2557503","url":null,"abstract":"Personalized soft-keyboards which adapt to a user's individual typing behavior can reduce typing errors on interactive displays. In multi-user scenarios a personalized model has to be loaded for each participant. In this paper we describe a user identification technique that is based on raw sensor data from an optical touch screen. For classification of users we use a multi-class support vector machine that is trained with grayscale images from the optical sensor. Our implementation can identify a specific user from a set of 12 users with an average accuracy of 97.51% after one keystroke. It can be used to automatically select individual typing models during free-text entry. The resulting authentication process is completely implicit. We furthermore describe how the approach can be extended to automatic loading of personal information and settings.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133927245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison of speech-based in-car HMI concepts in a driving simulation study","authors":"H. Hofmann, Vanessa Tobisch, U. Ehrlich, A. Berton, Angela Castronovo","doi":"10.1145/2557500.2557509","DOIUrl":"https://doi.org/10.1145/2557500.2557509","url":null,"abstract":"This paper reports experimental results from a driving simulation study comparing different speech-based in-car human-machine interface concepts. The effects of the use of a command-based and a conversational in-car speech dialog system on usability and driver distraction are evaluated. Different graphical user interface concepts have been designed in order to investigate their potential supportive or distracting effects. The results show that only few differences concerning speech dialog quality were found when comparing the speech dialog strategies. The command-based dialog was slightly better accepted than the conversational dialog, which can be attributed to the limited performance of the system's language understanding component. No differences in driver distraction were revealed. Moreover, the study revealed that speech dialog systems without a graphical user interface were accepted by participants in the driving environment, and that the use of a graphical user interface impaired driving performance and increased gaze-based distraction. In the driving scenario, the choice of speech dialog strategy does not have a strong influence on usability and no influence on driver distraction. Instead, when designing the graphical user interface of an in-car speech dialog system, developers should consider reducing the content presented on the display device in order to reduce driver distraction.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126626198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Minimizing user effort in transforming data by example","authors":"Bo Wu, Pedro A. Szekely, Craig A. Knoblock","doi":"10.1145/2557500.2557523","DOIUrl":"https://doi.org/10.1145/2557500.2557523","url":null,"abstract":"Programming by example enables users to transform data formats without coding. To be practical, the method must synthesize the correct transformation with minimal user input. We present a method that minimizes user effort by color-coding the transformation result and recommending specific records where the user should provide examples. Simulation results and a user study show that our method significantly reduces user effort and increases the success rate for synthesizing correct transformation programs by example.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"21 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129456429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Location sharing privacy preference: analysis and personalized recommendation","authors":"Jierui Xie, Bart P. Knijnenburg, Hongxia Jin","doi":"10.1145/2557500.2557504","DOIUrl":"https://doi.org/10.1145/2557500.2557504","url":null,"abstract":"Location-based systems are becoming more popular with the explosive growth in popularity of smart phones. However, the user adoption of these systems is hindered by growing user concerns about privacy. To design better location-based systems that attract more user adoption and protect users from information under/overexposure, it is highly desirable to understand users' location sharing and privacy preferences. This paper makes two main contributions. First, by studying users' location sharing privacy preferences with three groups of people (i.e., Family, Friend and Colleague) in different contexts, including check-in time, companion and emotion, we reveal that location sharing behaviors are highly dynamic, context-aware, audience-aware and personal. In particular, we find that emotion and companion are good contextual predictors of privacy preferences. Moreover, we find that there are strong similarities or correlations among contexts and groups. Our second contribution is to show, in light of the user study, that despite the dynamic and context-dependent nature of location sharing, it is still possible to predict a user's in-situ sharing preference in various contexts. More specifically, we explore whether it is possible to give users a personalized recommendation of the sharing setting they are most likely to prefer, based on context similarity, group correlation and collective check-in preference. PPRec, the proposed recommendation algorithm that incorporates the above three elements, delivers personalized recommendations that could be helpful to reduce both the user's burden and privacy risk. It also provides additional insights into the relative usefulness of different personal and contextual factors in predicting users' sharing behavior.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124593781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using eye-tracking to support interaction with layered 3D interfaces on stereoscopic displays","authors":"Florian Alt, Stefan Schneegass, Jonas Auda, Rufat Rzayev, N. Broy","doi":"10.1145/2557500.2557518","DOIUrl":"https://doi.org/10.1145/2557500.2557518","url":null,"abstract":"In this paper, we investigate the concept of gaze-based interaction with 3D user interfaces. We currently see stereo vision displays becoming ubiquitous, particularly as auto-stereoscopy enables the perception of 3D content without the use of glasses. As a result, application areas for 3D beyond entertainment in cinema or at home emerge, including work settings, mobile phones, public displays, and cars. At the same time, eye tracking is hitting the consumer market with low-cost devices. We envision eye trackers in the future to be integrated with consumer devices (laptops, mobile phones, displays), hence allowing the user's gaze to be analyzed and used as input for interactive applications. A particular challenge when applying this concept to 3D displays is that current eye trackers provide the gaze point in 2D only (x and y coordinates). In this paper, we compare the performance of two methods that use the eye's physiology for calculating the gaze point in 3D space, hence enabling gaze-based interaction with stereoscopic content. Furthermore, we provide a comparison of gaze interaction in 2D and 3D with regard to user experience and performance. Our results show that with current technology, eye tracking on stereoscopic displays is possible with similar performance as on standard 2D screens.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121250374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}