Qualitative analysis of a multimodal interface system using speech/gesture

M. Z. Baig, M. Kavakli
{"title":"使用语音/手势的多模态界面系统的定性分析","authors":"M. Z. Baig, M. Kavakli","doi":"10.1109/ICIEA.2018.8398188","DOIUrl":null,"url":null,"abstract":"In this paper, we present an upgraded version of the 3D modelling system, De-SIGN v3 [1]. The system uses speech and gesture recognition technology to collect information from the user in real-time. These inputs are then transferred to the main program to carry out required 3D object creation and manipulation operations. The aim of the system is to analyse the designer behaviour and quality of interaction, in a virtual reality environment. The system has the basic functionality for 3D object modelling. The users have performed two sets of experiments. In the first experiment, the participants had to draw 3D objects using keyboard and mouse. In the second experiment, speech and gesture inputs have been used for 3D modelling. The evaluation has been done with the help of questionnaires and task completion ratings. The results showed that with speech, it is easy to draw the objects but sometimes system detects the numbers incorrectly. With gestures, it is difficult to stabilize the hand at one position. The completion rate was above 90% with the upgraded system but the precision is low depending on participants.","PeriodicalId":140420,"journal":{"name":"2018 13th IEEE Conference on Industrial Electronics and Applications (ICIEA)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"Qualitative analysis of a multimodal interface system using speech/gesture\",\"authors\":\"M. Z. Baig, M. Kavakli\",\"doi\":\"10.1109/ICIEA.2018.8398188\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we present an upgraded version of the 3D modelling system, De-SIGN v3 [1]. The system uses speech and gesture recognition technology to collect information from the user in real-time. These inputs are then transferred to the main program to carry out required 3D object creation and manipulation operations. The aim of the system is to analyse the designer behaviour and quality of interaction, in a virtual reality environment. The system has the basic functionality for 3D object modelling. The users have performed two sets of experiments. In the first experiment, the participants had to draw 3D objects using keyboard and mouse. In the second experiment, speech and gesture inputs have been used for 3D modelling. The evaluation has been done with the help of questionnaires and task completion ratings. The results showed that with speech, it is easy to draw the objects but sometimes system detects the numbers incorrectly. With gestures, it is difficult to stabilize the hand at one position. 
The completion rate was above 90% with the upgraded system but the precision is low depending on participants.\",\"PeriodicalId\":140420,\"journal\":{\"name\":\"2018 13th IEEE Conference on Industrial Electronics and Applications (ICIEA)\",\"volume\":\"40 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-06-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 13th IEEE Conference on Industrial Electronics and Applications (ICIEA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICIEA.2018.8398188\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 13th IEEE Conference on Industrial Electronics and Applications (ICIEA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIEA.2018.8398188","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10

Abstract

In this paper, we present an upgraded version of the 3D modelling system De-SIGN v3 [1]. The system uses speech and gesture recognition to collect information from the user in real time. These inputs are then passed to the main program to carry out the required 3D object creation and manipulation operations. The aim of the system is to analyse designer behaviour and the quality of interaction in a virtual reality environment. The system provides the basic functionality for 3D object modelling. The users performed two sets of experiments: in the first, participants drew 3D objects using a keyboard and mouse; in the second, speech and gesture inputs were used for 3D modelling. The evaluation was carried out with questionnaires and task completion ratings. The results show that objects are easy to draw with speech, but the system sometimes detects numbers incorrectly; with gestures, it is difficult to hold the hand steady in one position. The completion rate was above 90% with the upgraded system, but precision was low and varied across participants.
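The paper does not publish implementation code, so the following is only a minimal sketch of the kind of pipeline the abstract describes: a recognised speech command is mapped to object creation, and a tracked hand displacement is mapped to object manipulation. All names here (Scene, handle_speech, handle_gesture) are hypothetical and are not taken from De-SIGN v3.

```python
# Hypothetical sketch of a speech/gesture-to-3D-modelling dispatch pipeline.
# These names do not come from De-SIGN v3; they only illustrate the flow the
# abstract describes (recognised inputs forwarded to the modelling program).

import re
from dataclasses import dataclass, field


@dataclass
class Scene:
    """Toy stand-in for the 3D modelling back end."""
    objects: list = field(default_factory=list)

    def create(self, shape: str, size: float) -> None:
        # Create an object at the origin with the requested shape and size.
        self.objects.append({"shape": shape, "size": size, "pos": (0.0, 0.0, 0.0)})

    def move_last(self, dx: float, dy: float, dz: float) -> None:
        # Translate the most recently created object.
        if self.objects:
            x, y, z = self.objects[-1]["pos"]
            self.objects[-1]["pos"] = (x + dx, y + dy, z + dz)


def handle_speech(scene: Scene, utterance: str) -> None:
    # e.g. "draw cube 5" -> create a cube of size 5; the numeric part is where
    # the abstract reports recognition errors, so it is validated before use.
    match = re.match(r"draw (\w+) (\d+(?:\.\d+)?)", utterance.lower())
    if match:
        scene.create(match.group(1), float(match.group(2)))


def handle_gesture(scene: Scene, delta: tuple) -> None:
    # e.g. a tracked hand displacement manipulates the last-created object.
    scene.move_last(*delta)


if __name__ == "__main__":
    scene = Scene()
    handle_speech(scene, "draw cube 5")       # speech input: object creation
    handle_gesture(scene, (0.1, 0.0, -0.2))   # gesture input: manipulation
    print(scene.objects)
```

In the actual system, the speech and gesture recognisers run in real time and forward recognised events to the main program; the number-misrecognition and hand-stabilisation issues reported in the abstract arise at that recognition stage.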