Title: A system for facial expression-based affective speech translation
Authors: Zeeshan Ahmed, I. Steiner, Éva Székely, Julie Carson-Berndsen
Published: IUI '13 Companion, 19 March 2013. DOI: https://doi.org/10.1145/2451176.2451197
Abstract: In the emerging field of speech-to-speech translation, emphasis is currently placed on linguistic content, while paralinguistic information conveyed by facial expression or tone of voice is typically neglected. We present a prototype system for multimodal speech-to-speech translation that automatically recognizes and translates spoken utterances from one language into another, with the output rendered by a speech synthesis system. The novelty of our system lies in generating the synthetic speech in one of several expressive styles, selected automatically by using a camera to analyze the user's facial expression during speech.
Title: HAPPIcom: haptic pad for impressive text communication
Authors: Ayano Tamura, S. Okada, K. Nitta, Tetsuya Harada, Makoto Sato
Published: IUI '13 Companion, 19 March 2013. DOI: https://doi.org/10.1145/2451176.2451220
Abstract: We propose HAPPIcom (Haptic Pad for Impressive Text Communication), a system for creating text messages with haptic stimuli using the SPIDAR-tablet haptic interface. The system helps users convey emotion in text messages, and characters' actions in storytelling, by attaching physical feedback to words in the text. We evaluated its effectiveness experimentally in two scenarios, storytelling and text messaging, and found that effective use of haptic stimuli depends on both the situation and the individual participant.
Title: Real-time classification of dynamic hand gestures from marker-based position data
Authors: Andrew Gardner, C. A. Duncan, R. Selmic, Jinko Kanno
Published: IUI '13 Companion, 19 March 2013. DOI: https://doi.org/10.1145/2451176.2451181
Abstract: In this paper we describe plans for a dynamic hand gesture recognition system based on motion-capture cameras with unlabeled markers. The intended classifier extends previous work on static hand gesture recognition in the same environment. The static gestures are to form the basis of a vocabulary that, combined with inferred motion and temporal data, will allow precise descriptions of various expressive hand gestures. Hidden Markov Models and dynamic time warping are expected to be useful tools in achieving this goal.
{"title":"Multi-modal context-awareness for ambient intelligence environments","authors":"Georgios Galatas, F. Makedon","doi":"10.1145/2451176.2451180","DOIUrl":"https://doi.org/10.1145/2451176.2451180","url":null,"abstract":"Context-awareness constitutes a fundamental attribute of a smart environment. Our research aims at advancing the context-awareness capabilities of ambient intelligence environments by combining multi-modal information from both stationary and moving sensors. The collected data enables us to perform person identification and 3-D localization and recognize activities. In addition, we explore closed-loop feedback by integrating autonomous robots interacting with the users.","PeriodicalId":253850,"journal":{"name":"IUI '13 Companion","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124043418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deploying speech interfaces to the masses","authors":"Aasish Pappu, Alexander I. Rudnicky","doi":"10.1145/2451176.2451189","DOIUrl":"https://doi.org/10.1145/2451176.2451189","url":null,"abstract":"Speech systems are typically deployed either over phones, e.g. IVR agents, or on embodied agents, e.g. domestic robots. Most of these systems are limited to a particular platform i.e., only accessible by phone or in situated interactions. This limits scalability and potential domain of operation. Our goal is to make speech interfaces more widely available, and we are proposing a new approach for deploying such interfaces on the internet along with traditional platforms. In this work, we describe a lightweight speech interface architecture built on top of Freeswitch, an open source softswitch platform. A softswitch enables us to provide users with access over several types of channels (phone, VOIP, etc.) as well as support multiple users at the same time. We demonstrate two dialog applications developed using this approach: 1) Virtual Chauffeur: a voice based virtual driving experience and 2) Talkie: a speech-based chat bot.","PeriodicalId":253850,"journal":{"name":"IUI '13 Companion","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131685214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A multimodal dialogue interface for mobile local search","authors":"Patrick Ehlen, Michael Johnston","doi":"10.1145/2451176.2451200","DOIUrl":"https://doi.org/10.1145/2451176.2451200","url":null,"abstract":"Speak4itSM uses a multimodal interface to perform mobile search for local businesses. Users combine simultaneous speech and touch to input queries or commands, for example, by saying, \"gas stations\", while tracing a route on a touchscreen. This demonstration will exhibit an extension of our multimodal semantic processing architecture from a one-shot query system to a multimodal dialogue system that tracks dialogue state over multiple turns and resolves prior context using unification-based context resolution. We illustrate the capabilities and limitations of this approach to multimodal interpretation, describing the challenges of supporting true multimodal interaction in a deployed mobile service, while offering an interactive demonstration on tablets and smartphones.","PeriodicalId":253850,"journal":{"name":"IUI '13 Companion","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128179227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Keeping wiki content current via news sources
Authors: Rachel Adams, Alex Kuntz, Morgan Marks, William Martin, D. Musicant
Published: IUI '13 Companion, 19 March 2013. DOI: https://doi.org/10.1145/2451176.2451194
Abstract: Online resources known as wikis are commonly used for collecting and distributing information. We present a software implementation that assists wiki contributors with the task of keeping a wiki current. Our demonstration, built on English Wikipedia, lets wiki contributors subscribe to news sources and, based on incoming stories, makes intelligent recommendations for the Wikipedia pages where the new content should be added. The tool is also potentially useful for helping new Wikipedia editors find material to contribute.
{"title":"Namelette: a tasteful supporter for creative naming","authors":"Gözde Özbal, C. Strapparava","doi":"10.1145/2451176.2451196","DOIUrl":"https://doi.org/10.1145/2451176.2451196","url":null,"abstract":"In this paper, we introduce a system that supports the naming process by exploiting natural language processing and linguistic creativity techniques in a completely unsupervised fashion. The system generates two types of neologisms based on the category of the service to be named and the properties to be underlined. While the first type consists of homophonic puns and metaphors, the second consists of neologisms that are produced by adding Latin suffixes to English words or homophonic puns. During this process, both semantic appropriateness and sound pleasantness of the generated names are taken into account.","PeriodicalId":253850,"journal":{"name":"IUI '13 Companion","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131760451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards adaptive dialogue systems for assistive living environments","authors":"A. Papangelis, V. Karkaletsis, Heng Huang","doi":"10.1145/2451176.2451185","DOIUrl":"https://doi.org/10.1145/2451176.2451185","url":null,"abstract":"Adaptive Dialogue Systems can be seen as smart interfaces that typically use natural language (spoken or written) as a means of communication. They are being used in many applications, such as customer service, in-car interfaces, even in rehabilitation, and therefore it is essential that these systems are robust, scalable and quickly adaptable in order to cope with changing user or system needs or environmental conditions. Making Dialogue Systems adaptive means overcoming several challenges, such as scalability or lack of training data. Achieving adaptation online has thus been an even greater challenge. We propose to build such a system, that will operate in an Assistive Living Environment and provide its services as a coach to patients that need to perform rehabilitative exercises. We are currently in the process of developing it, using Robot Operating System on a robotic platform.","PeriodicalId":253850,"journal":{"name":"IUI '13 Companion","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125619640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: MoFIS: a mobile user interface for semi-automatic extraction of food product ingredient lists
Authors: Tobias Leidinger, L. Spassova, A. Arens-Volland, N. Rösch
Published: IUI '13 Companion, 19 March 2013. DOI: https://doi.org/10.1145/2451176.2451193
Abstract: The availability of food ingredient information in digital form is a major factor in modern information systems related to diet management and health issues. Although ingredient information is printed on food product labels, the corresponding digital data is rarely available to the public. In this demo, we present the Mobile Food Information Scanner (MoFIS), a mobile user interface designed to enable users to semi-automatically extract ingredient lists from food product packaging.