Towards a multi-modal Deep Learning Architecture for User Modeling

A. Tato, R. Nkambou
{"title":"Towards a multi-modal Deep Learning Architecture for User Modeling","authors":"A. Tato, R. Nkambou","doi":"10.32473/flairs.36.133328","DOIUrl":null,"url":null,"abstract":"Deep learning has succeeded in various applications, including image classification and feature learning. However, there needs to be more research on its use in Intelligent Tutoring Systems or Serious Games, particularly in modeling user behavior during learning or gaming sessions using multi-modal data. Creating an effective user model is crucial for developing a highly adaptive system. To achieve this, it is necessary to consider all available data sources to inform the user’s current state. This study proposes a user-sensitive deep multi-modal architecture that leverages deep learning and user data to extract a rich latent representation of the user. The architecture combines a Long Short-Term Memory, a Convolutional Neural Network, and multiple Deep Neu-ral Networks to handle the multi-modality of data. The resulting model was evaluated on a public multi-modal dataset, achieving better results than state-of-the-art algorithms for a similar task: opinion polarity detection. These findings suggest that the latent representation learned from the data is useful in discriminating behaviors. 
This proposed solution can be applied in various contexts where user modeling using multi-modal data is critical for improving the user experience.","PeriodicalId":302103,"journal":{"name":"The International FLAIRS Conference Proceedings","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The International FLAIRS Conference Proceedings","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.32473/flairs.36.133328","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Deep learning has succeeded in various applications, including image classification and feature learning. However, its use in Intelligent Tutoring Systems and Serious Games remains underexplored, particularly for modeling user behavior during learning or gaming sessions from multi-modal data. Creating an effective user model is crucial for developing a highly adaptive system, and doing so requires considering all available data sources that inform the user's current state. This study proposes a user-sensitive deep multi-modal architecture that leverages deep learning and user data to extract a rich latent representation of the user. The architecture combines a Long Short-Term Memory network, a Convolutional Neural Network, and multiple Deep Neural Networks to handle the multi-modality of the data. The resulting model was evaluated on a public multi-modal dataset, achieving better results than state-of-the-art algorithms on a similar task: opinion polarity detection. These findings suggest that the latent representation learned from the data is useful for discriminating behaviors. The proposed solution can be applied in various contexts where user modeling from multi-modal data is critical for improving the user experience.
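The abstract describes combining an LSTM, a CNN, and several DNNs, one encoder per modality, into a single latent user representation. The following PyTorch sketch illustrates that fusion pattern only; the layer sizes, modality assignments, and class count are illustrative assumptions, not the authors' actual configuration.

```python
import torch
import torch.nn as nn

class MultiModalUserModel(nn.Module):
    """Hypothetical sketch: one encoder per modality (LSTM for sequences,
    CNN for image-like input, MLP for tabular features); the per-modality
    latent vectors are concatenated into one user representation that a
    linear head classifies. All dimensions are illustrative."""

    def __init__(self, seq_dim=32, img_channels=3, tab_dim=16,
                 latent=64, n_classes=2):
        super().__init__()
        # Sequential modality (e.g., interaction logs): LSTM encoder
        self.lstm = nn.LSTM(seq_dim, latent, batch_first=True)
        # Visual modality (e.g., video frames): small CNN encoder
        self.cnn = nn.Sequential(
            nn.Conv2d(img_channels, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, latent), nn.ReLU(),
        )
        # Tabular modality (e.g., hand-crafted features): MLP encoder
        self.mlp = nn.Sequential(nn.Linear(tab_dim, latent), nn.ReLU())
        # Fusion head over the concatenated latent representation
        self.head = nn.Linear(3 * latent, n_classes)

    def forward(self, seq, img, tab):
        _, (h, _) = self.lstm(seq)   # final hidden state: (1, B, latent)
        z = torch.cat([h[-1], self.cnn(img), self.mlp(tab)], dim=1)
        return self.head(z)          # logits over user states

model = MultiModalUserModel()
logits = model(torch.randn(4, 10, 32),     # 4 sequences of length 10
               torch.randn(4, 3, 16, 16),  # 4 RGB frames
               torch.randn(4, 16))         # 4 tabular feature vectors
print(logits.shape)  # torch.Size([4, 2])
```

The key design choice mirrored here is late fusion: each modality is encoded independently, so a missing or noisy modality affects only its own branch before the concatenation step.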