LDNN: Linguistic Knowledge Injectable Deep Neural Network for Group Cohesiveness Understanding

Yanan Wang, Jianming Wu, Jinfa Huang, Gen Hattori, Y. Takishima, Shinya Wada, Rui Kimura, Jie Chen, Satoshi Kurihara
{"title":"LDNN: Linguistic Knowledge Injectable Deep Neural Network for Group Cohesiveness Understanding","authors":"Yanan Wang, Jianming Wu, Jinfa Huang, Gen Hattori, Y. Takishima, Shinya Wada, Rui Kimura, Jie Chen, Satoshi Kurihara","doi":"10.1145/3382507.3418830","DOIUrl":null,"url":null,"abstract":"Group cohesiveness reflects the level of intimacy that people feel with each other, and the development of a dialogue robot that can understand group cohesiveness will lead to the promotion of human communication. However, group cohesiveness is a complex concept that is difficult to predict based only on image pixels. Inspired by the fact that humans intuitively associate linguistic knowledge accumulated in the brain with the visual images they see, we propose a linguistic knowledge injectable deep neural network (LDNN) that builds a visual model (visual LDNN) for predicting group cohesiveness that can automatically associate the linguistic knowledge hidden behind images. LDNN consists of a visual encoder and a language encoder, and applies domain adaptation and linguistic knowledge transition mechanisms to transform linguistic knowledge from a language model to the visual LDNN. We train LDNN by adding descriptions to the training and validation sets of the Group AFfect Dataset 3.0 (GAF 3.0), and test the visual LDNN without any description. Comparing visual LDNN with various fine-tuned DNN models and three state-of-the-art models in the test set, the results demonstrate that the visual LDNN not only improves the performance of the fine-tuned DNN model leading to an MSE very similar to the state-of-the-art model, but is also a practical and efficient method that requires relatively little preprocessing. Furthermore, ablation studies confirm that LDNN is an effective method to inject linguistic knowledge into visual models.","PeriodicalId":402394,"journal":{"name":"Proceedings of the 2020 International Conference on Multimodal Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 International Conference on Multimodal Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3382507.3418830","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Group cohesiveness reflects the level of intimacy people feel with one another, and a dialogue robot that can understand group cohesiveness would help promote human communication. However, group cohesiveness is a complex concept that is difficult to predict from image pixels alone. Inspired by the fact that humans intuitively associate linguistic knowledge accumulated in the brain with the visual images they see, we propose a linguistic knowledge injectable deep neural network (LDNN) that builds a visual model (visual LDNN) for predicting group cohesiveness; the visual LDNN automatically associates images with the linguistic knowledge hidden behind them. LDNN consists of a visual encoder and a language encoder, and applies domain adaptation and linguistic knowledge transition mechanisms to transfer linguistic knowledge from a language model to the visual LDNN. We train LDNN by adding descriptions to the training and validation sets of the Group AFfect Dataset 3.0 (GAF 3.0), and test the visual LDNN without any descriptions. Comparing the visual LDNN with various fine-tuned DNN models and three state-of-the-art models on the test set, the results demonstrate that the visual LDNN not only improves on the fine-tuned DNN models, achieving an MSE very close to that of the state-of-the-art models, but is also practical and efficient, requiring relatively little preprocessing. Furthermore, ablation studies confirm that LDNN is an effective method for injecting linguistic knowledge into visual models.
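The abstract describes the architecture only at a high level. The sketch below is a minimal, hypothetical PyTorch rendering of that idea: a visual encoder trained jointly with a language encoder, with a simple alignment loss standing in for the paper's linguistic knowledge transition mechanism, so that at test time the visual branch can run alone. All module names, dimensions, and the MSE-based alignment loss are our own illustrative assumptions, not the authors' implementation.

```python
# A minimal, hypothetical sketch of the LDNN idea in PyTorch. Module names,
# dimensions, and the MSE-based alignment loss are illustrative assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualEncoder(nn.Module):
    """Stand-in for a CNN backbone that maps image features to an embedding."""
    def __init__(self, feat_dim=512, embed_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, embed_dim), nn.ReLU())

    def forward(self, img_feats):
        return self.net(img_feats)

class LanguageEncoder(nn.Module):
    """Stand-in for a language model that encodes an image description."""
    def __init__(self, vocab_size=10000, embed_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, embed_dim, batch_first=True)

    def forward(self, token_ids):
        _, h = self.rnn(self.embed(token_ids))
        return h.squeeze(0)  # (batch, embed_dim) sentence representation

class LDNN(nn.Module):
    def __init__(self, embed_dim=256):
        super().__init__()
        self.visual = VisualEncoder(embed_dim=embed_dim)
        self.language = LanguageEncoder(embed_dim=embed_dim)
        self.head = nn.Linear(embed_dim, 1)  # regress a cohesiveness score

    def forward(self, img_feats, token_ids=None):
        v = self.visual(img_feats)
        score = self.head(v).squeeze(-1)
        if token_ids is None:      # test time: no description available
            return score, None
        # Training time: pull the visual embedding toward the linguistic one,
        # transferring knowledge from the language branch to the visual branch.
        l = self.language(token_ids)
        align_loss = F.mse_loss(v, l.detach())
        return score, align_loss

model = LDNN()
imgs = torch.randn(8, 512)                    # batch of pooled image features
toks = torch.randint(0, 10000, (8, 20))       # batch of description token ids
labels = torch.rand(8) * 3                    # cohesiveness labels in [0, 3]
score, align_loss = model(imgs, toks)
loss = F.mse_loss(score, labels) + 0.5 * align_loss  # joint training loss
score_test, _ = model(imgs)                   # inference with images only
```

At test time only the visual branch runs, matching the paper's setting in which the visual LDNN is evaluated without descriptions; the 0.5 weight on the alignment term is an arbitrary placeholder.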