Brainsourcing for temporal visual attention estimation.

IF 3.2 · CAS Tier 4 (Medicine) · JCR Q2 (Engineering, Biomedical)
Biomedical Engineering Letters · Pub Date: 2025-01-11 · eCollection Date: 2025-03-01 · DOI: 10.1007/s13534-024-00449-1
Yoelvis Moreno-Alcayde, Tuukka Ruotsalo, Luis A Leiva, V Javier Traver
Volume 15(2), pages 311-326 · Citations: 0
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11871278/pdf/

Abstract

The concept of temporal visual attention in dynamic content, such as videos, has been studied much less than its spatial counterpart, i.e., visual salience. Yet temporal visual attention is useful for many downstream tasks, such as video compression and summarisation, or monitoring users' engagement with visual information. Previous work has quantified a temporal salience score from the spatio-temporal agreement of gaze data across users. Instead of gaze-based or content-based approaches, we explore to what extent brain signals alone can reveal temporal visual attention. We propose methods for (1) computing a temporal visual salience score from the salience maps of video frames; (2) quantifying a temporal brain salience score as a cognitive consistency score computed from the brain signals of multiple observers; and (3) assessing the correlation between the two temporal salience scores and its statistical relevance. Two public EEG datasets (DEAP and MAHNOB) are used for experimental validation. Relevant correlations between temporal visual attention and EEG-based inter-subject consistency were found when compared with a random baseline. In particular, effect sizes, measured with Cohen's d, ranged from very small to large in one dataset, and from medium to very large in the other. Brain consistency among subjects watching videos thus reveals temporal visual attention cues. This has practical implications for analysing attention in visual design for human-computer interaction, in the medical domain, and in brain-computer interfaces at large.
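The three proposed steps can be illustrated with a minimal Python sketch. The spatial-mean pooling of salience maps, the pairwise-Pearson consistency measure, and the sliding-window scheme below are simplifying assumptions for illustration only, not the paper's exact formulation.

```python
import numpy as np


def temporal_visual_salience(salience_maps):
    """Collapse per-frame spatial salience maps, shaped (T, H, W),
    into one temporal salience score per frame. Here the score is
    simply the spatial mean (an assumed pooling)."""
    t = len(salience_maps)
    return salience_maps.reshape(t, -1).mean(axis=1)


def inter_subject_consistency(signals):
    """Average pairwise Pearson correlation across subjects.
    signals: (S, T) array with one EEG-derived time series per subject."""
    n_subjects = len(signals)
    corrs = [np.corrcoef(signals[i], signals[j])[0, 1]
             for i in range(n_subjects)
             for j in range(i + 1, n_subjects)]
    return float(np.mean(corrs))


def temporal_brain_salience(signals, win, step=1):
    """Sliding-window inter-subject consistency over time, yielding a
    temporal 'brain salience' series (hypothetical windowing scheme)."""
    t_total = signals.shape[1]
    return np.array([inter_subject_consistency(signals[:, t:t + win])
                     for t in range(0, t_total - win + 1, step)])


def cohens_d(x, y):
    """Cohen's d effect size with pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * np.var(x, ddof=1)
                      + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    return float((np.mean(x) - np.mean(y)) / pooled)
```

In this sketch, step (3) would correlate the two resulting time series (e.g. with `np.corrcoef`) and compare consistency scores against those obtained from temporally shuffled subject signals, the random baseline, summarising the separation with `cohens_d`.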

Journal

Biomedical Engineering Letters
CiteScore: 6.80 · Self-citation rate: 0.00% · Articles per year: 34
About the journal: Biomedical Engineering Letters (BMEL) aims to present innovative experimental science and technological developments in the biomedical field, as well as clinical applications of new developments. Articles must contain original biomedical engineering content, defined as the development, theoretical analysis, and evaluation/validation of a new technique. BMEL publishes the following types of papers: original articles, review articles, editorials, and letters to the editor. All papers are reviewed in a single-blind fashion.