Brainsourcing for temporal visual attention estimation.

Impact Factor: 3.2 · CAS Tier 4 (Medicine) · JCR Q2 (Engineering, Biomedical)
Biomedical Engineering Letters · Pub Date: 2025-01-11 · eCollection Date: 2025-03-01 · DOI: 10.1007/s13534-024-00449-1
Yoelvis Moreno-Alcayde, Tuukka Ruotsalo, Luis A Leiva, V Javier Traver
Biomedical Engineering Letters, Volume 15, Issue 2, pp. 311-326. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11871278/pdf/
Citations: 0

Abstract

The concept of temporal visual attention in dynamic content, such as videos, has been much less studied than its spatial counterpart, i.e., visual salience. Yet, temporal visual attention is useful for many downstream tasks, such as video compression and summarisation, or monitoring users' engagement with visual information. Previous work has considered quantifying a temporal salience score from spatio-temporal user agreement in gaze data. Instead of gaze-based or content-based approaches, we explore to what extent brain signals alone can reveal temporal visual attention. We propose methods for (1) computing a temporal visual salience score from salience maps of video frames; (2) quantifying a temporal brain salience score as a cognitive consistency score derived from the brain signals of multiple observers; and (3) assessing the correlation between the two temporal salience scores and its statistical relevance. Two public EEG datasets (DEAP and MAHNOB) are used for experimental validation. Relevant correlations between temporal visual attention and EEG-based inter-subject consistency were found, as compared with a random baseline. In particular, effect sizes, measured with Cohen's d, ranged from very small to large in one dataset, and from medium to very large in the other. Brain consistency among subjects watching videos unveils temporal visual attention cues. This has relevant practical implications for analysing attention for visual design in human-computer interaction, in the medical domain, and in brain-computer interfaces at large.
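The three-step pipeline in the abstract can be sketched in code. The sketch below is a hypothetical minimal illustration, not the authors' implementation: it assumes synthetic stand-ins for the frame salience maps and the multi-observer EEG, collapses each salience map to a per-frame mean (one simple choice for step 1), measures inter-subject consistency as the mean pairwise correlation of EEG activity in a sliding window (step 2), and compares the correlation of the two temporal scores against a shuffled baseline via a Cohen's-d-style effect size (step 3). All array shapes, window sizes, and function names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 200 video frames, each with a 32x32 spatial
# salience map, and EEG from 5 observers with 8 channels per frame.
n_frames, n_subjects, n_channels = 200, 5, 8
salience_maps = rng.random((n_frames, 32, 32))
eeg = rng.standard_normal((n_subjects, n_channels, n_frames))

# (1) Temporal *visual* salience: collapse each frame's salience map
# to a single per-frame score (mean activation is one simple choice).
visual_score = salience_maps.mean(axis=(1, 2))

# (2) Temporal *brain* salience: inter-subject consistency, here the
# mean pairwise correlation of channel activity across observers,
# computed in a sliding window over frames.
def intersubject_consistency(eeg, win=20):
    n_sub, _, n_t = eeg.shape
    scores = np.zeros(n_t)
    for t in range(n_t):
        lo, hi = max(0, t - win // 2), min(n_t, t + win // 2)
        seg = eeg[:, :, lo:hi].reshape(n_sub, -1)  # flatten channels x time
        corr = np.corrcoef(seg)                    # subject-by-subject correlation matrix
        iu = np.triu_indices(n_sub, k=1)
        scores[t] = corr[iu].mean()                # average off-diagonal correlation
    return scores

brain_score = intersubject_consistency(eeg)

# (3) Correlate the two temporal scores and compare against a random
# baseline (shuffled visual scores) with a Cohen's-d-style effect size.
r_real = np.corrcoef(visual_score, brain_score)[0, 1]
r_baseline = np.array([
    np.corrcoef(rng.permutation(visual_score), brain_score)[0, 1]
    for _ in range(100)
])
effect_size = (r_real - r_baseline.mean()) / r_baseline.std(ddof=1)
print(f"r = {r_real:.3f}, effect size vs shuffled baseline = {effect_size:.2f}")
```

With synthetic random data the correlation is expected to be near zero; on real salience maps and EEG, the paper reports effect sizes from very small to very large depending on the dataset.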

Source journal: Biomedical Engineering Letters (Engineering, Biomedical)
CiteScore: 6.80
Self-citation rate: 0.00%
Articles per year: 34
Journal description: Biomedical Engineering Letters (BMEL) aims to present innovative experimental science and technological development in the biomedical field, as well as clinical applications of new developments. Articles must contain original biomedical engineering content, defined as the development, theoretical analysis, and evaluation/validation of a new technique. BMEL publishes the following types of papers: original articles, review articles, editorials, and letters to the editor. All papers are reviewed in single-blind fashion.