{"title":"Android-based speech processing for eldercare robotics","authors":"Tatiana Alexenko, M. Biondo, Deya Banisakher, M. Skubic","doi":"10.1145/2451176.2451213","DOIUrl":"https://doi.org/10.1145/2451176.2451213","url":null,"abstract":"A growing elderly population has created a need for innovative eldercare technologies. The use of a home robot to assist with daily tasks is one such example. In this paper we describe an interface for human-robot interaction, which uses built-in speech recognition in Android phones to control a mobile robot. We discuss benefits of using a smartphone for speech-based robot control and present speech recognition accuracy results for younger and older adults obtained with an Android smartphone.","PeriodicalId":253850,"journal":{"name":"IUI '13 Companion","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123396009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An affordable real-time assessment system for surgical skill training","authors":"G. Islam, Baoxin Li, K. Kahol","doi":"10.1145/2451176.2451225","DOIUrl":"https://doi.org/10.1145/2451176.2451225","url":null,"abstract":"This research proposes a novel computer-vision-based approach for skill assessment by observing a surgeon's hand and surgical tool movements in minimally invasive surgical training, which can be extended to the evaluation in real surgeries. Videos capturing the surgical field are analyzed using a system composed of a series of computer vision algorithms. The system automatically detects major skill measuring features from surgical task videos and provides real-time performance feedback on objective and quantitative measurement of surgical skills.","PeriodicalId":253850,"journal":{"name":"IUI '13 Companion","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131697229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VibroTactor: low-cost placement-aware technique using vibration echoes on mobile devices","authors":"Sungjae Hwang, K. Wohn","doi":"10.1145/2451176.2451206","DOIUrl":"https://doi.org/10.1145/2451176.2451206","url":null,"abstract":"In this paper, we present a low-cost placement-aware technique, called VibroTactor, which allows mobile devices to determine where they are placed (e.g., in a pocket, on a phone holder, on the bed, or on the desk). This is achieved by filtering and analyzing the acoustic signal generated when the mobile device vibrates. The advantage of this technique is that it is inexpensive and easy to deploy because it uses a microphone, which already embedded in standard mobile devices. To verify this idea, we implemented a prototype and conducted a preliminary test. The results show that this system achieves an average success rate of 91% in 12 different real-world placement sets.","PeriodicalId":253850,"journal":{"name":"IUI '13 Companion","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124840182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An initial analysis of semantic wikis","authors":"Y. Gil, Angela Knight, Kevin Zhang, Larry Zhang, Ricky J. Sethi","doi":"10.1145/2451176.2451224","DOIUrl":"https://doi.org/10.1145/2451176.2451224","url":null,"abstract":"Semantic wikis augment wikis with semantic properties that can be used to aggregate and query data through reasoning. Semantic wikis are used by many communities, for widely varying purposes such as organizing genomic knowledge, coding software, and tracking environmental data. Although wikis have been analyzed extensively, there has been no published analysis of the use of semantic wikis. We carried out an initial analysis of twenty semantic wikis selected for their diverse characteristics and content. Based on the number of property edits per contributor, we identified several patterns to characterize community behaviors that are common to groups of wikis.","PeriodicalId":253850,"journal":{"name":"IUI '13 Companion","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114176052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accelerometer-based hand gesture recognition using feature weighted naïve bayesian classifiers and dynamic time warping","authors":"David Mace, Wei Gao, A. Coskun","doi":"10.1145/2451176.2451211","DOIUrl":"https://doi.org/10.1145/2451176.2451211","url":null,"abstract":"Accelerometer-based gesture recognition is a major area of interest in human-computer interaction. In this paper, we compare two approaches: naïve Bayesian classification with feature separability weighting [1] and dynamic time warping [2]. Algorithms based on these two approaches are introduced and the results are compared. We evaluate both algorithms with four gesture types and five samples from five different people. The gesture identification accuracy for Bayesian classification and dynamic time warping are 97% and 95%, respectively.","PeriodicalId":253850,"journal":{"name":"IUI '13 Companion","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126372137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An intelligent web-based interface for programming content detection in q&a forums","authors":"Mahdy Khayyamian, J. Kim","doi":"10.1145/2451176.2451202","DOIUrl":"https://doi.org/10.1145/2451176.2451202","url":null,"abstract":"In this demonstration, we introduce a novel web-based intelligent interface which automatically detects and highlights programming content (programming code and messages) in Q&A programming forums. We expect our interface helps enhancing visual presentation of such forum content and enhance effective participation.\u0000 We solve this problem using several alternative approaches: a dictionary-based baseline method, a non-sequential Naïve Bayes classification algorithm, and Conditional Random Fields (CRF) which is a sequential labeling framework. The best results are produced by CRF method with an F1-Score of 86.9%.\u0000 We also experimentally validate how robust our classifier is by testing the constructed CRF model built on a C++ forum against a Python and a Java dataset. The results indicate the classifier works quite well across different domains.\u0000 To demonstrate detection results, a web-based graphical user interface is developed that accepts a user input programming forum message and processes it using trained CRF model and then displays the programming content snippets in a different font to the user.","PeriodicalId":253850,"journal":{"name":"IUI '13 Companion","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122018632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An affective evaluation tool using brain signals","authors":"M. Perakakis, A. Potamianos","doi":"10.1145/2451176.2451222","DOIUrl":"https://doi.org/10.1145/2451176.2451222","url":null,"abstract":"We propose a new interface evaluation tool that incorporates affective metrics which are provided from the ElectroEncephaloGraphy (EEG) signals of the Emotiv EPOC neuro-headset device. The evaluation tool captures and analyzes information in real time from a multitude of sources such as EEG, affective metrics such as frustration, engagement and excitement and facial expression. The proposed tool has been used to gain detailed affective information of users interacting with a mobile multimodal (touch and speech) iPhone application, for which we investigated the effect of speech recognition errors and modality usage patterns.","PeriodicalId":253850,"journal":{"name":"IUI '13 Companion","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131503528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computational approaches to visual attention for interaction inference","authors":"Hana Vrzakova","doi":"10.1145/2451176.2451187","DOIUrl":"https://doi.org/10.1145/2451176.2451187","url":null,"abstract":"Many aspects of interaction are hard to directly observe and measure. My research focuses on particular aspects of UX such as cognitive workload, problem solving or engagement, and establishes computational links between them and visual attention. Using machine learning and pattern recognition techniques, I aim to achieve automatic inferences for HCI and employ them as enhancements in gaze-aware interfaces.","PeriodicalId":253850,"journal":{"name":"IUI '13 Companion","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127017852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From small screens to big displays: understanding interaction in multi-display environments","authors":"T. Seyed, C. Burns, M. Sousa, F. Maurer","doi":"10.1145/2451176.2451186","DOIUrl":"https://doi.org/10.1145/2451176.2451186","url":null,"abstract":"Devices such as tablets, mobile phones, tabletops and wall displays all incorporate different sizes of screens, and are now commonplace in a variety of situations and environments. Environments that incorporate these devices, multi-display environments (MDEs) are highly interactive and innovative, but the interaction in these environments is not well understood. The research presented here investigates and explores interaction and users in MDEs. This exploration tries to understand the conceptual models of MDEs for users and then examine and validate interaction approaches that can be done to make them more usable. In addition to a brief literature review, the methodology, research goals and current research status are presented.","PeriodicalId":253850,"journal":{"name":"IUI '13 Companion","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127862811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3rd International workshop on intelligent user interfaces for developing regions: IUI4DR","authors":"Sheetal K. Agarwal, Nitendra Rajput, N. Kodagoda, B. Wong, S. Oviatt","doi":"10.1145/2451176.2451229","DOIUrl":"https://doi.org/10.1145/2451176.2451229","url":null,"abstract":"Information Technology (IT) has had significant impact on the society and has touched all aspects of our lives. Up and until now computers and expensive devices have fueled this growth. It has resulted in several benefits to the society. The challenge now is to take this success to its next level where IT services can be accessed by users in developing regions.\u0000 The first IUI4DR workshop was held at IUI 2008. This workshop focused on low cost interfaces, interfaces for illiterate people and on exploring different input mechanisms. The second workshop held at IUI 2011 focused on multimodal applications and collaborative interfaces in particular to aid effective navigation of content and access to services.\u0000 So far we have concentrated on mobile devices as the primary method for people to access content and services. In particular we focused on low-end feature phones that are widely used. However the smart phone market is booming even in developing countries with touch phones available for as little as 50 USD. We want to explore how devices such as smart TVs, smart phones, and old desktop machines, radios, etc. can be used to provide novel interaction methods and interfaces for the low literate populations. We would also like to continue our focus on interaction modalities other than speech such as gestures, haptic inputs and touch interfaces.","PeriodicalId":253850,"journal":{"name":"IUI '13 Companion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121227913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}