Investigating static and sequential models for intervention-free selection using multimodal data of EEG and eye tracking

Mazen Salous, F. Putze, Tanja Schultz, Jutta Hild, J. Beyerer
{"title":"Investigating static and sequential models for intervention-free selection using multimodal data of EEG and eye tracking","authors":"Mazen Salous, F. Putze, Tanja Schultz, Jutta Hild, J. Beyerer","doi":"10.1145/3279810.3279841","DOIUrl":null,"url":null,"abstract":"Multimodal data is increasingly used in cognitive prediction models to better analyze and predict different user cognitive processes. Classifiers based on such data, however, have different performance characteristics. We discuss in this paper an intervention-free selection task using multimodal data of EEG and eye tracking in three different models. We show that a sequential model, LSTM, is more sensitive but less precise than a static model SVM. Moreover, we introduce a confidence-based Competition-Fusion model using both SVM and LSTM. The fusion model further improves the recall compared to either SVM or LSTM alone, without decreasing precision compared to LSTM. According to the results, we recommend SVM for interactive applications which require minimal false positives (high precision), and recommend LSTM and highly recommend Competition-Fusion Model for applications which handle intervention-free selection requests in an additional post-processing step, requiring higher recall than precision.","PeriodicalId":326513,"journal":{"name":"Proceedings of the Workshop on Modeling Cognitive Processes from Multimodal Data","volume":"40 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Workshop on Modeling Cognitive Processes from Multimodal Data","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3279810.3279841","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Multimodal data is increasingly used in cognitive prediction models to better analyze and predict different user cognitive processes. Classifiers based on such data, however, have different performance characteristics. In this paper, we discuss an intervention-free selection task using multimodal data of EEG and eye tracking with three different models. We show that a sequential model, LSTM, is more sensitive but less precise than a static model, SVM. Moreover, we introduce a confidence-based Competition-Fusion model that uses both SVM and LSTM. The fusion model further improves recall compared to either SVM or LSTM alone, without decreasing precision compared to LSTM. Based on these results, we recommend SVM for interactive applications that require minimal false positives (high precision), and recommend LSTM, and especially the Competition-Fusion model, for applications that handle intervention-free selection requests in an additional post-processing step and therefore require higher recall than precision.
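
The abstract does not specify the exact fusion rule, so the following is only a minimal sketch of one plausible confidence-based competition scheme: each model (SVM and LSTM) outputs per-class probabilities for a sample, and the fused prediction is taken from whichever model is more confident on that sample. The function name `competition_fusion` and the toy inputs are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a confidence-based competition fusion rule
# (assumed, not taken from the paper).
import numpy as np

def competition_fusion(svm_proba: np.ndarray, lstm_proba: np.ndarray) -> np.ndarray:
    """Fuse two classifiers by per-sample confidence competition.

    svm_proba, lstm_proba: class-probability arrays of shape (n_samples, n_classes)
    returns: fused class labels, shape (n_samples,)
    """
    svm_conf = svm_proba.max(axis=1)    # SVM confidence per sample
    lstm_conf = lstm_proba.max(axis=1)  # LSTM confidence per sample
    svm_pred = svm_proba.argmax(axis=1)
    lstm_pred = lstm_proba.argmax(axis=1)
    # Keep the label from whichever model is more confident on each sample.
    return np.where(svm_conf >= lstm_conf, svm_pred, lstm_pred)

# Toy usage: two samples, binary no-selection (0) vs. selection (1).
svm_p = np.array([[0.9, 0.1], [0.4, 0.6]])
lstm_p = np.array([[0.7, 0.3], [0.2, 0.8]])
print(competition_fusion(svm_p, lstm_p))  # -> [0 1]
```

Under this kind of rule, the more sensitive LSTM can contribute additional positive detections (raising recall) while confident SVM predictions are preserved, which is consistent with the reported behavior, though the actual confidence measures and tie-breaking used in the paper may differ.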