Multimodal desktop interaction: The face - object - gesture - voice example

N. Vidakis, A. Vlasopoulos, Tsampikos Kounalakis, Petros Varchalamas, M. Dimitriou, Grigorios Kalliatakis, Efthimios Syntychakis, John Christofakis, G. Triantafyllidis
{"title":"Multimodal desktop interaction: The face - object - gesture - voice example","authors":"N. Vidakis, A. Vlasopoulos, Tsampikos Kounalakis, Petros Varchalamas, M. Dimitriou, Grigorios Kalliatakis, Efthimios Syntychakis, John Christofakis, G. Triantafyllidis","doi":"10.1109/ICDSP.2013.6622782","DOIUrl":null,"url":null,"abstract":"This paper presents a natural user interface system based on multimodal human computer interaction, which operates as an intermediate module between the user and the operating system. The aim of this work is to demonstrate a multimodal system which gives users the ability to interact with desktop applications using face, objects, voice and gestures. These human behaviors constitute the input qualifiers to the system. Microsoft Kinect multi-sensor was utilized as input device in order to succeed the natural user interaction, mainly due to the multimodal capabilities offered by this device. We demonstrate scenarios which contain all the functions and capabilities of our system from the perspective of natural user interaction.","PeriodicalId":180360,"journal":{"name":"2013 18th International Conference on Digital Signal Processing (DSP)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 18th International Conference on Digital Signal Processing (DSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDSP.2013.6622782","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

This paper presents a natural user interface system based on multimodal human-computer interaction, which operates as an intermediate module between the user and the operating system. The aim of this work is to demonstrate a multimodal system that gives users the ability to interact with desktop applications using face, objects, voice, and gestures. These human behaviors constitute the system's input qualifiers. The Microsoft Kinect multi-sensor was used as the input device to achieve natural user interaction, mainly because of the multimodal capabilities this device offers. We demonstrate scenarios that exercise all of the system's functions and capabilities from the perspective of natural user interaction.
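The abstract frames the interface as an intermediate module that maps recognized input qualifiers (face, object, gesture, and voice events) to operating-system actions. The sketch below is a minimal illustration of that routing idea under stated assumptions, not the authors' implementation: the names (Modality, InputEvent, MultimodalDispatcher) and the confidence-threshold filtering rule are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Dict, Tuple

# Hypothetical modality tags mirroring the paper's four input qualifiers.
class Modality(Enum):
    FACE = auto()
    OBJECT = auto()
    GESTURE = auto()
    VOICE = auto()

@dataclass(frozen=True)
class InputEvent:
    modality: Modality
    label: str          # e.g. "open_browser", "swipe_left"
    confidence: float   # recognizer confidence in [0, 1]

class MultimodalDispatcher:
    """Intermediate module between recognizers and the operating system.

    Routes (modality, label) pairs to desktop actions and discards
    low-confidence detections. Illustrative sketch only.
    """

    def __init__(self, threshold: float = 0.7) -> None:
        self.threshold = threshold
        self.bindings: Dict[Tuple[Modality, str], Callable[[], None]] = {}

    def bind(self, modality: Modality, label: str,
             action: Callable[[], None]) -> None:
        # Register a desktop action for a recognized input qualifier.
        self.bindings[(modality, label)] = action

    def dispatch(self, event: InputEvent) -> bool:
        if event.confidence < self.threshold:
            return False  # ignore uncertain recognitions
        action = self.bindings.get((event.modality, event.label))
        if action is None:
            return False  # no binding for this qualifier
        action()
        return True

# Example wiring: a voice command and a hand gesture, each mapped to a
# placeholder standing in for a real OS-level command.
dispatcher = MultimodalDispatcher()
dispatcher.bind(Modality.VOICE, "open_browser",
                lambda: print("launching browser"))
dispatcher.bind(Modality.GESTURE, "swipe_left",
                lambda: print("switching to previous window"))
dispatcher.dispatch(InputEvent(Modality.VOICE, "open_browser", 0.92))
```

In a real deployment of this kind of system, the bound callbacks would issue actual operating-system commands and the InputEvent values would be produced by the Kinect's face, skeleton, and speech recognizers rather than constructed by hand.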