Brainsourcing for temporal visual attention estimation
Yoelvis Moreno-Alcayde, Tuukka Ruotsalo, Luis A. Leiva, V. Javier Traver
Biomedical Engineering Letters, 15(2), 311-326, published 2025-01-11. DOI: 10.1007/s13534-024-00449-1
Abstract
The concept of temporal visual attention in dynamic content, such as videos, has been studied much less than its spatial counterpart, i.e., visual salience. Yet temporal visual attention is useful for many downstream tasks, such as video compression and summarisation, or monitoring users' engagement with visual information. Previous work has quantified a temporal salience score from spatio-temporal agreement among users' gaze data. Instead of gaze-based or content-based approaches, we explore to what extent brain signals alone can reveal temporal visual attention. We propose methods for (1) computing a temporal visual salience score from salience maps of video frames; (2) quantifying a temporal brain salience score as a cognitive consistency score derived from the brain signals of multiple observers; and (3) assessing the correlation between the two temporal salience scores and its relevance. Two public EEG datasets (DEAP and MAHNOB) are used for experimental validation. Compared with a random baseline, relevant correlations were found between temporal visual attention and EEG-based inter-subject consistency. In particular, effect sizes, measured with Cohen's d, ranged from very small to large in one dataset, and from medium to very large in the other. Brain consistency among subjects watching videos thus unveils temporal visual attention cues. This has practical implications for analysing attention in visual design for human-computer interaction, in the medical domain, and in brain-computer interfaces at large.
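For readers who want to experiment with the pipeline the abstract outlines, the sketch below illustrates one plausible way to compute the two temporal scores, correlate them, and compare against a shuffled baseline with Cohen's d. It is not the authors' implementation: the mean-salience reduction per frame, the window-wise mean pairwise Pearson correlation as the inter-subject consistency measure, the window length, and the function names (temporal_visual_salience, temporal_brain_salience, cohens_d) are all assumptions made for illustration; only the random-baseline comparison and the use of Cohen's d follow the abstract directly.

```python
import numpy as np

def temporal_visual_salience(salience_maps):
    # salience_maps: array of shape (n_frames, height, width).
    # Assumed choice: reduce each frame's 2D salience map to a scalar
    # by taking its mean salience, yielding one score per frame.
    return salience_maps.reshape(len(salience_maps), -1).mean(axis=1)

def temporal_brain_salience(eeg, win):
    # eeg: array of shape (n_subjects, n_samples), e.g. one channel-averaged
    # signal per observer. Assumed consistency measure: for each
    # non-overlapping window, the mean pairwise Pearson correlation
    # across observers.
    n_subjects, n_samples = eeg.shape
    scores = []
    for start in range(0, n_samples - win + 1, win):
        c = np.corrcoef(eeg[:, start:start + win])       # (n_subjects, n_subjects)
        scores.append(c[np.triu_indices(n_subjects, k=1)].mean())
    return np.asarray(scores)

def cohens_d(x, y):
    # Standard pooled-standard-deviation formula for Cohen's d.
    x, y = np.asarray(x), np.asarray(y)
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled

# Synthetic demo: correlate the two temporal scores per "video" and compare
# the observed correlations with those obtained after shuffling frame order.
rng = np.random.default_rng(0)
obs_r, rand_r = [], []
for _ in range(10):                                      # 10 simulated videos
    vis = temporal_visual_salience(rng.random((120, 32, 32)))          # 120 frames
    brain = temporal_brain_salience(rng.standard_normal((8, 120 * 64)), win=64)
    n = min(len(vis), len(brain))
    obs_r.append(np.corrcoef(vis[:n], brain[:n])[0, 1])
    rand_r.append(np.corrcoef(rng.permutation(vis[:n]), brain[:n])[0, 1])
print(f"mean r = {np.mean(obs_r):.3f}, "
      f"Cohen's d vs. shuffled baseline = {cohens_d(obs_r, rand_r):.2f}")
```

With real data, the salience maps would come from a video salience model and the EEG matrix from preprocessed recordings of DEAP or MAHNOB participants; the window length and the specific consistency measure are free parameters that the paper may define differently.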
About the journal:
Biomedical Engineering Letters (BMEL) aims to present innovative experimental science and technological developments in the biomedical field, as well as clinical applications of new developments. Articles must contain original biomedical engineering content, defined as the development, theoretical analysis, and evaluation/validation of a new technique. BMEL publishes the following types of papers: original articles, review articles, editorials, and letters to the editor. All papers are reviewed in a single-blind fashion.