Image as Data: Automated Content Analysis for Visual Presentations of Political Actors and Events

Jungseock Joo, Zachary C. Steinert-Threlkeld
{"title":"图像作为数据:政治人物和事件视觉呈现的自动内容分析","authors":"Jungseock Joo, Zachary C. Steinert-Threlkeld","doi":"10.5117/ccr2022.1.001.joo","DOIUrl":null,"url":null,"abstract":"Images matter because they help individuals evaluate policies, primarily through emotional resonance, and can help researchers from a variety of fields measure otherwise difficult to estimate quantities. The lack of scalable analytic methods, however, has prevented researchers from incorporating large scale image data in studies. This article offers an in-depth overview of automated methods for image analysis and explains their usage and implementation. It elaborates on how these methods and results can be validated and interpreted and discusses ethical concerns. Two examples then highlight approaches to systematically understanding visual presentations of political actors and events from large scale image datasets collected from social media. The first study examines gender and party differences in the self-presentation of the U.S. politicians through their Facebook photographs, using an off-the-shelf computer vision model, Google’s Label Detection API. The second study develops image classifiers based on convolutional neural networks to detect custom labels from images of protesters shared on Twitter to understand how protests are framed on social media. These analyses demonstrate advantages of computer vision and deep learning as a novel analytic tool that can expand the scope and size of traditional visual analysis to thousands of features and millions of images. The paper also provides comprehensive technical details and practices to help guide political communication scholars and practitioners.","PeriodicalId":275035,"journal":{"name":"Computational Communication Research","volume":"144 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Image as Data: Automated Content Analysis for Visual Presentations of Political Actors and Events\",\"authors\":\"Jungseock Joo, Zachary C. Steinert-Threlkeld\",\"doi\":\"10.5117/ccr2022.1.001.joo\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Images matter because they help individuals evaluate policies, primarily through emotional resonance, and can help researchers from a variety of fields measure otherwise difficult to estimate quantities. The lack of scalable analytic methods, however, has prevented researchers from incorporating large scale image data in studies. This article offers an in-depth overview of automated methods for image analysis and explains their usage and implementation. It elaborates on how these methods and results can be validated and interpreted and discusses ethical concerns. Two examples then highlight approaches to systematically understanding visual presentations of political actors and events from large scale image datasets collected from social media. The first study examines gender and party differences in the self-presentation of the U.S. politicians through their Facebook photographs, using an off-the-shelf computer vision model, Google’s Label Detection API. The second study develops image classifiers based on convolutional neural networks to detect custom labels from images of protesters shared on Twitter to understand how protests are framed on social media. 
These analyses demonstrate advantages of computer vision and deep learning as a novel analytic tool that can expand the scope and size of traditional visual analysis to thousands of features and millions of images. The paper also provides comprehensive technical details and practices to help guide political communication scholars and practitioners.\",\"PeriodicalId\":275035,\"journal\":{\"name\":\"Computational Communication Research\",\"volume\":\"144 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computational Communication Research\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5117/ccr2022.1.001.joo\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computational Communication Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5117/ccr2022.1.001.joo","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5

Abstract

Images matter because they help individuals evaluate policies, primarily through emotional resonance, and can help researchers from a variety of fields measure quantities that are otherwise difficult to estimate. The lack of scalable analytic methods, however, has prevented researchers from incorporating large-scale image data in their studies. This article offers an in-depth overview of automated methods for image analysis and explains their usage and implementation. It elaborates on how these methods and their results can be validated and interpreted, and it discusses ethical concerns. Two examples then highlight approaches to systematically understanding visual presentations of political actors and events in large-scale image datasets collected from social media. The first study examines gender and party differences in the self-presentation of U.S. politicians through their Facebook photographs, using an off-the-shelf computer vision model, Google's Label Detection API. The second study develops image classifiers based on convolutional neural networks to detect custom labels in images of protesters shared on Twitter, in order to understand how protests are framed on social media. These analyses demonstrate the advantages of computer vision and deep learning as novel analytic tools that can expand the scope and size of traditional visual analysis to thousands of features and millions of images. The paper also provides comprehensive technical details and practices to help guide political communication scholars and practitioners.
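As a rough illustration of the first approach, the sketch below shows how label detection can be requested for a single photograph with the google-cloud-vision Python client. This is a minimal example of the general technique, not the authors' actual pipeline; the file name politician_photo.jpg and the client-library version are assumptions.

```python
# Minimal sketch of off-the-shelf label detection, assuming the
# google-cloud-vision Python client (v2+) and valid GCP credentials.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Hypothetical local file; the study analyzed politicians' Facebook photographs.
with open("politician_photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    # Each annotation carries a free-text label and a confidence score.
    print(label.description, round(label.score, 3))
```

For the second approach, a common way to build custom image classifiers with convolutional neural networks is to fine-tune a pretrained backbone on human-coded labels. The sketch below uses PyTorch and torchvision (>= 0.13) with an assumed ResNet-50 backbone and made-up protest-related labels; it illustrates the general technique rather than the authors' exact architecture or training setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical multi-label scheme, e.g. "protest", "violence", "large crowd", "police".
NUM_LABELS = 4

# Start from an ImageNet-pretrained backbone and replace the classification head.
model = models.resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, NUM_LABELS)

criterion = nn.BCEWithLogitsLoss()  # multi-label: an image may carry several labels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of preprocessed images.
images = torch.randn(8, 3, 224, 224)                     # stand-in for protest photos
targets = torch.randint(0, 2, (8, NUM_LABELS)).float()   # stand-in for human-coded labels

optimizer.zero_grad()
logits = model(images)
loss = criterion(logits, targets)
loss.backward()
optimizer.step()
```

In practice, the fine-tuned classifier would be trained on a hand-annotated sample and then applied to the full corpus of social media images, which is what makes analysis at the scale of millions of images feasible.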