Speech Intention Classification with Multimodal Deep Learning.

Yue Gu, Xinyu Li, Shuhong Chen, Jianyu Zhang, Ivan Marsic
{"title":"Speech Intention Classification with Multimodal Deep Learning.","authors":"Yue Gu, Xinyu Li, Shuhong Chen, Jianyu Zhang, Ivan Marsic","doi":"10.1007/978-3-319-57351-9_30","DOIUrl":null,"url":null,"abstract":"<p><p>We present a novel multimodal deep learning structure that automatically extracts features from textual-acoustic data for sentence-level speech classification. Textual and acoustic features were first extracted using two independent convolutional neural network structures, then combined into a joint representation, and finally fed into a decision softmax layer. We tested the proposed model in an actual medical setting, using speech recording and its transcribed log. Our model achieved 83.10% average accuracy in detecting 6 different intentions. We also found that our model using automatically extracted features for intention classification outperformed existing models that use manufactured features.</p>","PeriodicalId":91830,"journal":{"name":"Advances in artificial intelligence. Canadian Society for Computational Studies of Intelligence. Conference","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6261374/pdf/nihms-993283.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advances in artificial intelligence. Canadian Society for Computational Studies of Intelligence. Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/978-3-319-57351-9_30","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2017/4/11 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

We present a novel multimodal deep learning structure that automatically extracts features from textual-acoustic data for sentence-level speech classification. Textual and acoustic features were first extracted using two independent convolutional neural network structures, then combined into a joint representation, and finally fed into a softmax decision layer. We tested the proposed model in an actual medical setting, using speech recordings and their transcribed logs. Our model achieved 83.10% average accuracy in detecting 6 different intentions. We also found that our model, which uses automatically extracted features for intention classification, outperformed existing models that use handcrafted features.
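
To make the described pipeline concrete, below is a minimal sketch of a two-branch architecture of this kind in PyTorch: independent 1-D CNNs over the textual and acoustic inputs, concatenation into a joint representation, and a softmax decision layer. This is not the authors' exact model; all layer sizes, kernel widths, and input dimensions are illustrative assumptions, not values from the paper.

```python
# Sketch of a textual-acoustic fusion classifier, per the abstract's outline:
# two independent CNN branches -> joint representation -> softmax decision layer.
# Hyperparameters below are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BranchCNN(nn.Module):
    """1-D convolution followed by global max pooling over the time axis."""
    def __init__(self, in_channels: int, out_channels: int = 64, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size)

    def forward(self, x):            # x: (batch, in_channels, time)
        h = F.relu(self.conv(x))     # (batch, out_channels, time')
        return h.max(dim=2).values   # global max pool -> (batch, out_channels)


class TextAcousticClassifier(nn.Module):
    def __init__(self, text_dim: int, acoustic_dim: int, num_classes: int = 6):
        super().__init__()
        self.text_cnn = BranchCNN(text_dim)          # e.g. word-embedding channels
        self.acoustic_cnn = BranchCNN(acoustic_dim)  # e.g. MFCC channels
        self.decision = nn.Linear(64 + 64, num_classes)

    def forward(self, text, acoustic):
        # Concatenate branch outputs into the joint representation,
        # then apply the softmax decision layer.
        joint = torch.cat([self.text_cnn(text), self.acoustic_cnn(acoustic)], dim=1)
        return F.log_softmax(self.decision(joint), dim=1)


# Toy usage: batch of 4 utterances, 100-dim word embeddings over 20 tokens,
# 13 MFCC coefficients over 200 frames (shapes are assumptions).
model = TextAcousticClassifier(text_dim=100, acoustic_dim=13)
text = torch.randn(4, 100, 20)
acoustic = torch.randn(4, 13, 200)
print(model(text, acoustic).shape)  # torch.Size([4, 6])
```

Max pooling over time lets each branch accept variable-length utterances, and concatenation is the simplest choice of joint representation consistent with the fusion step the abstract describes.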

