Dimitris Koryzis, Vasileios Svolopoulos, D. Spiliotopoulos
Title: Metalogue: A Multimodal Learning Journey
DOI: 10.1145/2910674.2935860
Published in: Proceedings of the 9th ACM International Conference on PErvasive Technologies Related to Assistive Environments
Publication date: 2016-06-29
Citations: 3
Abstract
In this paper, we present a high-level description of Metalogue, a multimodal dialogue system that implements interactive behavior between a virtual agent and a learner, and we outline insights gained from the development of a fully integrated multimodal interactive system. The system comprises components spanning several research domains: metacognitive modeling, skill training, usability testing, prosody analysis, multimodality, dialogue management, speech recognition, gesture recognition and interpretation, and learner feedback. The key challenge is the integration of all these components into a single platform that enables users to improve their metacognitive skills. This work reports on the user experience evaluation conducted during the design and development phase of the system, the results of which fed back into the design and continuous refinement of the overall approach.