IUI '13 Companion: Latest Publications

A system for facial expression-based affective speech translation
IUI '13 Companion Pub Date: 2013-03-19 DOI: 10.1145/2451176.2451197
Zeeshan Ahmed, I. Steiner, Éva Székely, Julie Carson-Berndsen
In the emerging field of speech-to-speech translation, emphasis is currently placed on the linguistic content, while the significance of paralinguistic information conveyed by facial expression or tone of voice is typically neglected. We present a prototype system for multimodal speech-to-speech translation that is able to automatically recognize and translate spoken utterances from one language into another, with the output rendered by a speech synthesis system. The novelty of our system lies in the technique of generating the synthetic speech output in one of several expressive styles that is automatically determined using a camera to analyze the user's facial expression during speech.
Citations: 4
HAPPIcom: haptic pad for impressive text communication
IUI '13 Companion Pub Date: 2013-03-19 DOI: 10.1145/2451176.2451220
Ayano Tamura, S. Okada, K. Nitta, Tetsuya Harada, Makoto Sato
We propose a system called Haptic Pad for Impressive Text Communication for creating text messages with haptic stimuli using the SPIDAR-tablet haptic interface. This system helps users indicate emotion in text messages and actions of characters in storytelling by attaching physical feedback to words in text. We evaluated the effectiveness of the system experimentally in two scenarios: storytelling and text messaging. We found that effective use of haptic stimuli depends on each situation and participant.
Citations: 5
Real-time classification of dynamic hand gestures from marker-based position data
IUI '13 Companion Pub Date: 2013-03-19 DOI: 10.1145/2451176.2451181
Andrew Gardner, C. A. Duncan, R. Selmic, Jinko Kanno
In this paper we describe plans for a dynamic hand gesture recognition system based on motion capture cameras with unlabeled markers. The intended classifier is an extension of previous work on static hand gesture recognition in the same environment. The static gestures are to form the basis of a vocabulary that will allow precise descriptions of various expressive hand gestures when combined with inferred motion and temporal data. Hidden Markov Models and dynamic time warping are expected to be useful tools in achieving this goal.
Citations: 2
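The abstract above names dynamic time warping (DTW) as one of the tools for comparing gesture trajectories of differing lengths. As a minimal sketch of the general technique (not the authors' implementation), DTW on two 1-D trajectories can be computed with the classic dynamic program:

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference local cost.

    dp[i][j] holds the minimum cumulative cost of aligning the first i
    samples of trajectory `a` with the first j samples of trajectory `b`.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # stretch a
                                  dp[i][j - 1],      # stretch b
                                  dp[i - 1][j - 1])  # step both
    return dp[n][m]

# Identical gestures performed at different speeds align with zero cost:
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0
```

In a recognizer of the kind the abstract sketches, an observed trajectory would be labeled with the class of its nearest template under this distance; real marker data would use multi-dimensional positions and a vector norm as the local cost.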
Multi-modal context-awareness for ambient intelligence environments
IUI '13 Companion Pub Date: 2013-03-19 DOI: 10.1145/2451176.2451180
Georgios Galatas, F. Makedon
Context-awareness constitutes a fundamental attribute of a smart environment. Our research aims at advancing the context-awareness capabilities of ambient intelligence environments by combining multi-modal information from both stationary and moving sensors. The collected data enables us to perform person identification and 3-D localization and recognize activities. In addition, we explore closed-loop feedback by integrating autonomous robots interacting with the users.
Citations: 0
Deploying speech interfaces to the masses
IUI '13 Companion Pub Date: 2013-03-19 DOI: 10.1145/2451176.2451189
Aasish Pappu, Alexander I. Rudnicky
Speech systems are typically deployed either over phones, e.g. IVR agents, or on embodied agents, e.g. domestic robots. Most of these systems are limited to a particular platform, i.e., only accessible by phone or in situated interactions. This limits scalability and the potential domain of operation. Our goal is to make speech interfaces more widely available, and we are proposing a new approach for deploying such interfaces on the internet along with traditional platforms. In this work, we describe a lightweight speech interface architecture built on top of FreeSWITCH, an open source softswitch platform. A softswitch enables us to provide users with access over several types of channels (phone, VOIP, etc.) as well as support multiple users at the same time. We demonstrate two dialog applications developed using this approach: 1) Virtual Chauffeur: a voice based virtual driving experience, and 2) Talkie: a speech-based chat bot.
Citations: 3
A multimodal dialogue interface for mobile local search
IUI '13 Companion Pub Date: 2013-03-19 DOI: 10.1145/2451176.2451200
Patrick Ehlen, Michael Johnston
Speak4it uses a multimodal interface to perform mobile search for local businesses. Users combine simultaneous speech and touch to input queries or commands, for example, by saying "gas stations" while tracing a route on a touchscreen. This demonstration will exhibit an extension of our multimodal semantic processing architecture from a one-shot query system to a multimodal dialogue system that tracks dialogue state over multiple turns and resolves prior context using unification-based context resolution. We illustrate the capabilities and limitations of this approach to multimodal interpretation, describing the challenges of supporting true multimodal interaction in a deployed mobile service, while offering an interactive demonstration on tablets and smartphones.
Citations: 4
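The abstract above names unification-based context resolution: a partial follow-up query is unified with the prior turn's interpretation, inheriting any slots the user did not respecify. A toy sketch of the idea (slot names like "category" and "location" are illustrative, not taken from the paper):

```python
def unify(context, query):
    """Resolve a partial query against prior dialogue context.

    Slots the new query fills explicitly win; slots left unspecified
    (None) fall back to the bindings carried over from the last turn.
    """
    resolved = dict(context)
    for slot, value in query.items():
        if value is not None:
            resolved[slot] = value
    return resolved

# Turn 1: "gas stations" + a route traced on the touchscreen.
turn1 = {"category": "gas stations", "location": "route:current"}
# Turn 2: "what about restaurants?" -- no new location given.
turn2 = {"category": "restaurants", "location": None}
print(unify(turn1, turn2))
# {'category': 'restaurants', 'location': 'route:current'}
```

Full unification is symmetric and fails on conflicting bindings; this one-directional merge only captures the carry-over behavior a multi-turn local-search dialogue needs.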
Keeping wiki content current via news sources
IUI '13 Companion Pub Date: 2013-03-19 DOI: 10.1145/2451176.2451194
Rachel Adams, Alex Kuntz, Morgan Marks, William Martin, D. Musicant
Online resources known as wikis are commonly used for collection and distribution of information. We present a software implementation that assists wiki contributors with the task of keeping a wiki current. Our demonstration, built using English Wikipedia, enables wiki contributors to subscribe to sources of news, based on which it makes intelligent recommendations for pages within Wikipedia where the new content should be added. This tool is also potentially useful for helping new Wikipedia editors find material to contribute.
Citations: 0
Namelette: a tasteful supporter for creative naming
IUI '13 Companion Pub Date: 2013-03-19 DOI: 10.1145/2451176.2451196
Gözde Özbal, C. Strapparava
In this paper, we introduce a system that supports the naming process by exploiting natural language processing and linguistic creativity techniques in a completely unsupervised fashion. The system generates two types of neologisms based on the category of the service to be named and the properties to be underlined. While the first type consists of homophonic puns and metaphors, the second consists of neologisms that are produced by adding Latin suffixes to English words or homophonic puns. During this process, both the semantic appropriateness and the sound pleasantness of the generated names are taken into account.
Citations: 2
Towards adaptive dialogue systems for assistive living environments
IUI '13 Companion Pub Date: 2013-03-19 DOI: 10.1145/2451176.2451185
A. Papangelis, V. Karkaletsis, Heng Huang
Adaptive Dialogue Systems can be seen as smart interfaces that typically use natural language (spoken or written) as a means of communication. They are being used in many applications, such as customer service, in-car interfaces, even in rehabilitation, and therefore it is essential that these systems are robust, scalable and quickly adaptable in order to cope with changing user or system needs or environmental conditions. Making Dialogue Systems adaptive means overcoming several challenges, such as scalability or lack of training data. Achieving adaptation online has thus been an even greater challenge. We propose to build such a system that will operate in an Assistive Living Environment and provide its services as a coach to patients who need to perform rehabilitative exercises. We are currently in the process of developing it, using Robot Operating System on a robotic platform.
Citations: 0
MoFIS: a mobile user interface for semi-automatic extraction of food product ingredient lists
IUI '13 Companion Pub Date: 2013-03-19 DOI: 10.1145/2451176.2451193
Tobias Leidinger, L. Spassova, A. Arens-Volland, N. Rösch
The availability of food ingredient information in digital form is a major factor in modern information systems related to diet management and health issues. Although ingredient information is printed on food product labels, corresponding digital data is rarely available for the public. In this demo, we present the Mobile Food Information Scanner (MoFIS), a mobile user interface designed to enable users to semi-automatically extract ingredient lists from food product packaging.
Citations: 2