Multimodal interaction architecture applied to navigation in maps

Carlos Duque, Fernando de la Rosa, J. T. Hernández

2013 8th Computing Colombian Conference (8CCC), published 2013-10-21
DOI: 10.1109/COLOMBIANCC.2013.6637520
Citations: 3
Abstract
This paper presents a multimodal interaction architecture proposed as the interaction/control component of new or existing computer applications, particularly 2D/3D visual computing applications. The architecture aims to support multimodality in the interaction between the user and the application in order to achieve a more natural, easier, and friendlier interaction. It integrates specialized interaction modules for different modalities (initially hand gestures and voice) operating simultaneously. The information produced by these modules is processed by a multimodal integration module to detect simultaneous actions/commands. The architecture was integrated with the Bing Maps application to allow map navigation through voice and gestures. Results of an initial evaluation of the current prototype are presented, providing information about its functionality and the resulting user experience.
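The abstract does not detail how the multimodal integration module detects simultaneous actions/commands. A common approach to this kind of fusion is temporal pairing: events from different modalities that arrive within a short time window are merged into one combined command. The sketch below is purely illustrative and assumes hypothetical event names and a fixed window; it is not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class ModalityEvent:
    modality: str     # e.g. "gesture" or "voice" (illustrative labels)
    command: str      # recognized command, e.g. "zoom", "point"
    timestamp: float  # arrival time in seconds

def fuse_events(events, window=0.5):
    """Pair events from different modalities that occur within `window`
    seconds of each other into a single multimodal command; events with
    no partner pass through as unimodal commands."""
    events = sorted(events, key=lambda e: e.timestamp)
    fused, used = [], set()
    for i, event in enumerate(events):
        if i in used:
            continue
        partner = None
        for j in range(i + 1, len(events)):
            # Events are sorted, so once we exceed the window we can stop.
            if events[j].timestamp - event.timestamp > window:
                break
            if j not in used and events[j].modality != event.modality:
                partner = j
                break
        if partner is not None:
            used.add(partner)
            fused.append((event.command, events[partner].command))
        else:
            fused.append((event.command,))
    return fused

# Example: a spoken "zoom" accompanied by a pointing gesture fuses into
# one command, while a later lone gesture stays unimodal.
events = [
    ModalityEvent("voice", "zoom", 0.0),
    ModalityEvent("gesture", "point", 0.2),
    ModalityEvent("gesture", "pan", 2.0),
]
print(fuse_events(events))  # [('zoom', 'point'), ('pan',)]
```

In a real system the window size and the pairing policy (which modality takes priority, how conflicts are resolved) would be tuned to the recognizers' latencies; the point here is only to show the time-window fusion idea.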