Catch me if you can: how episodic and thematic multimodal news frames shape policy support by stimulating visual attention and responsibility attributions

Impact Factor: 1.5 | JCR Quartile: Q2 (Communication)
Stephanie Geise, Katharina Maubach
{"title":"Catch me if you can: how episodic and thematic multimodal news frames shape policy support by stimulating visual attention and responsibility attributions","authors":"Stephanie Geise, Katharina Maubach","doi":"10.3389/fcomm.2024.1305048","DOIUrl":null,"url":null,"abstract":"Using media coverage of animal welfare as an example, this study examines how the perception of multimodal news frames shapes recipients’ visual attention, attributions of responsibility, emotions, and policy support. To investigate the mechanisms of multimodal-episodic versus thematic framing, we combined eye-tracking measurements with a pre-post survey experiment in which 143 participants were randomly assigned to an episodic or a thematic multimodal framing condition. The results show that episodic multimodal frames are viewed longer than thematic frames, elicit stronger individual and political responsibility attributions, and increase political support for stricter animal-welfare laws. Understanding multimodal framing as a multistep process, a serial mediation model reveals that episodic frames affect viewing time, which leads to stronger attributions of political responsibility and, in turn, stronger policy support. Our results support the idea of a complex interplay between subsequent stages of information perception and processing within a multimodal framing process.","PeriodicalId":31739,"journal":{"name":"Frontiers in Communication","volume":null,"pages":null},"PeriodicalIF":1.5000,"publicationDate":"2024-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Communication","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/fcomm.2024.1305048","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMMUNICATION","Score":null,"Total":0}
引用次数: 0

Abstract

Using media coverage of animal welfare as an example, this study examines how the perception of multimodal news frames shapes recipients’ visual attention, attributions of responsibility, emotions, and policy support. To investigate the mechanisms of multimodal-episodic versus thematic framing, we combined eye-tracking measurements with a pre-post survey experiment in which 143 participants were randomly assigned to an episodic or a thematic multimodal framing condition. The results show that episodic multimodal frames are viewed longer than thematic frames, elicit stronger individual and political responsibility attributions, and increase political support for stricter animal-welfare laws. Understanding multimodal framing as a multistep process, a serial mediation model reveals that episodic frames affect viewing time, which leads to stronger attributions of political responsibility and, in turn, stronger policy support. Our results support the idea of a complex interplay between subsequent stages of information perception and processing within a multimodal framing process.
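
For readers unfamiliar with the analysis the abstract refers to, the sketch below shows how a serial mediation of the described form (frame condition → viewing time → political responsibility attribution → policy support) could be estimated with ordinary regressions and a percentile bootstrap. This is an illustrative assumption-based sketch, not the authors' actual analysis code, and the variable names (frame, viewing_time, responsibility, policy_support) are hypothetical placeholders for the study's measures.

```python
# Illustrative sketch only (not the authors' code): serial mediation of
# frame condition -> viewing time -> responsibility attribution -> policy support.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf


def serial_indirect_effect(df: pd.DataFrame) -> float:
    """Estimate the a1 * d21 * b2 serial indirect path with three OLS models."""
    # M1: viewing time regressed on frame condition (episodic = 1, thematic = 0)
    a1 = smf.ols("viewing_time ~ frame", data=df).fit().params["frame"]
    # M2: responsibility attribution regressed on frame and viewing time
    d21 = smf.ols("responsibility ~ frame + viewing_time",
                  data=df).fit().params["viewing_time"]
    # Y: policy support regressed on frame and both mediators
    b2 = smf.ols("policy_support ~ frame + viewing_time + responsibility",
                 data=df).fit().params["responsibility"]
    return a1 * d21 * b2


def bootstrap_ci(df: pd.DataFrame, n_boot: int = 5000, alpha: float = 0.05,
                 seed: int = 42) -> tuple:
    """Percentile-bootstrap confidence interval for the serial indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(df)
    estimates = [
        serial_indirect_effect(df.iloc[rng.integers(0, n, n)])  # resample rows
        for _ in range(n_boot)
    ]
    lo, hi = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

A significant bootstrap interval excluding zero for the a1·d21·b2 product would correspond to the indirect pathway reported in the abstract; the specific estimation and software used in the study are not given here.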
Source journal metrics:
CiteScore: 3.30
Self-citation rate: 8.30%
Articles published: 284
Review time: 14 weeks