Computational Communication Research: Latest Publications

The Accuracy and Precision of Measurement
Leandro A. Calcagnotto, Richard Huskey, Gerald M. Kosicki
Computational Communication Research, 2021-01-01. DOI: 10.5117/ccr2021.2.001.calc
Abstract: Measurement noise differs by instrument and limits the validity and reliability of findings. Researchers collecting reaction time data introduce noise in the form of response time latency from hardware and software, even when collecting data on standardized computer-based experimental equipment. Reaction time is a measure with broad application for studying cognitive processing in communication research, and it is vulnerable to response latency noise. In this study, we used an Arduino microcontroller to generate a ground-truth value of average response time latency in Asteroid Impact, an open-source, naturalistic, experimental video game stimulus. We tested whether response time latency differed across computer operating system, software, and trial modality. We show that reaction time measurements collected using Asteroid Impact were susceptible to response latency variability on par with other response-latency-measuring software. These results demonstrate that Asteroid Impact is a valid and reliable stimulus for measuring reaction time data. Moreover, we provide researchers with a low-cost, open-source tool for evaluating response time latency in their own labs. Our results highlight the importance of validating measurement tools and support the philosophy of contributing methodological improvements in communication science.
Citations: 2
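The software-side latency the abstract describes can be illustrated with a purely illustrative Python sketch: repeatedly timing an empty interval shows the overhead that the measurement stack itself contributes. This is an assumption for illustration only, not the article's Arduino method, which establishes a hardware-level ground truth.

```python
import time
import statistics

# Illustrative sketch: estimate software timing overhead by timing
# an empty interval many times. This measures timer/interpreter
# overhead only, not the hardware ground truth an Arduino provides.

def timing_overhead(n=1000):
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        t1 = time.perf_counter()
        samples.append(t1 - t0)  # latency of doing "nothing"
    return statistics.mean(samples), statistics.stdev(samples)

mean_s, sd_s = timing_overhead()
print(f"mean overhead: {mean_s * 1e9:.0f} ns (sd {sd_s * 1e9:.0f} ns)")
```

Because `time.perf_counter` is monotonic, every sample is non-negative; the spread of the samples is the kind of latency variability the article quantifies.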
Extracting semantic relations using syntax
Kasper Welbers, W. Atteveldt, J. Kleinnijenhuis
Computational Communication Research, 2021-01-01. DOI: 10.5117/ccr2021.2.003.welb
Abstract: Most common methods for automatic text analysis in communication science ignore syntactic information, focusing on the occurrence and co-occurrence of individual words and sometimes n-grams. This is remarkably effective for some purposes, but poses a limitation for fine-grained analyses of semantic relations such as who does what to whom, and according to what source. One tested, effective method for moving beyond this bag-of-words assumption is to use a rule-based approach for labeling and extracting syntactic patterns in dependency trees. Although this method can be used for a variety of purposes, its application is hindered by the lack of dedicated and accessible tools. In this paper we introduce the rsyntax R package, which is designed to make working with dependency trees easier and more intuitive for R users, and which provides a framework for combining multiple rules to reliably extract useful semantic relations.
Citations: 2
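The rule-based idea behind this approach, matching patterns over dependency relations to recover who does what to whom, can be sketched in a few lines. The sketch below is illustrative Python over a hand-encoded parse (not the rsyntax R API): each token is a tuple of id, text, head id, and dependency relation, and one rule extracts subject-verb-object triples.

```python
# Sketch of rule-based relation extraction from a dependency tree.
# Tokens: (id, text, head_id, deprel); head_id 0 marks the root.

def extract_svo(tokens):
    """Return (subject, verb, object) triples found in the tree."""
    children = {}
    for tid, text, head, rel in tokens:
        children.setdefault(head, []).append((text, rel))
    triples = []
    for tid, text, head, rel in tokens:
        if rel == "ROOT":  # rule anchors on the main verb
            kids = children.get(tid, [])
            subj = next((t for t, r in kids if r == "nsubj"), None)
            obj = next((t for t, r in kids if r in ("obj", "dobj")), None)
            if subj and obj:
                triples.append((subj, text, obj))
    return triples

# Hand-encoded parse of "The senator criticized the bill"
parse = [
    (1, "senator", 2, "nsubj"),
    (2, "criticized", 0, "ROOT"),
    (3, "bill", 2, "obj"),
]
print(extract_svo(parse))  # [('senator', 'criticized', 'bill')]
```

A real pipeline would take the parse from a dependency parser and combine many such rules (passives, quotes, sources), which is the bookkeeping the rsyntax package is designed to manage.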
Four best practices for measuring news sentiment using ‘off-the-shelf’ dictionaries: a large-scale p-hacking experiment
Chung-hong Chan, Joseph W. Bajjalieh, L. Auvil, Hartmut Wessler, Scott L. Althaus, Kasper Welbers, Wouter van Atteveldt, Marc Jungblut
Computational Communication Research, 2020-10-07. DOI: 10.31235/osf.io/np5wa
Abstract: We examined the validity of 37 sentiment scores based on dictionary-based methods using a large news corpus, and demonstrated the risk of generating a spectrum of results with different levels of statistical significance by presenting an analysis of relationships between news sentiment and U.S. presidential approval. We summarize our findings in four best practices: 1) use a suitable sentiment dictionary; 2) do not assume that the validity and reliability of the dictionary is ‘built in’; 3) check for the influence of content length; and 4) do not use multiple dictionaries to test the same statistical hypothesis.
Citations: 18
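Dictionary-based scoring of the kind examined here reduces to counting matches against word lists; best practice (3) above concerns how raw counts scale with document length. A minimal sketch, with toy word lists that are assumptions for illustration (not any published dictionary):

```python
# Minimal dictionary-based sentiment scorer with length
# normalization. Word lists below are toy examples only.

POS = {"good", "gain", "approve", "success"}
NEG = {"bad", "loss", "scandal", "failure"}

def sentiment(text):
    words = text.lower().split()
    pos = sum(w in POS for w in words)
    neg = sum(w in NEG for w in words)
    # Normalize by length so longer articles do not mechanically
    # receive more extreme raw scores (best practice 3).
    return (pos - neg) / len(words) if words else 0.0

print(sentiment("approval gain after success"))  # 0.5
```

Note that "approval" does not match "approve": off-the-shelf dictionaries are sensitive to tokenization and stemming choices, which is one reason their validity cannot be assumed to be built in.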
How Document Sampling and Vocabulary Pruning Affect the Results of Topic Models
D. Maier, A. Niekler, Gregor Wiedemann, Daniela Stoltenberg
Computational Communication Research, 2019-11-20. DOI: 10.31219/osf.io/2rh6g
Abstract: Topic modeling enables researchers to explore large document corpora. Large corpora, however, can be extremely costly to model in terms of time and computing resources. To circumvent this problem, two techniques have been suggested: (1) modeling random document samples, and (2) pruning the vocabulary of the corpus. Although frequently applied, there has been no systematic inquiry into how these techniques affect the resulting models. Using three empirical corpora with different characteristics (news articles, websites, and tweets), we systematically investigated how different sample sizes and pruning affect the resulting topic models in comparison to models of the full corpora. Our inquiry provides evidence that both techniques are viable tools that will likely not impair the resulting model. Sample-based topic models closely resemble corpus-based models if the sample size is large enough (> 10,000 documents). Moreover, extensive pruning does not compromise the quality of the resultant topics.
Citations: 11
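The two cost-saving steps studied above are simple preprocessing operations. A sketch in Python, with an illustrative document-frequency threshold that is an assumption, not the paper's setting:

```python
import random
from collections import Counter

# (1) Model a random sample of documents instead of the full corpus.
def sample_docs(corpus, n, seed=42):
    rng = random.Random(seed)  # fixed seed for reproducibility
    return rng.sample(corpus, min(n, len(corpus)))

# (2) Prune terms that appear in fewer than min_df documents.
def prune_vocab(corpus, min_df=2):
    df = Counter(w for doc in corpus for w in set(doc))
    keep = {w for w, c in df.items() if c >= min_df}
    return [[w for w in doc if w in keep] for doc in corpus]

docs = [["media", "news", "twitter"],
        ["news", "topic", "model"],
        ["media", "news", "effects"]]
print(prune_vocab(docs))
# [['media', 'news'], ['news'], ['media', 'news']]
```

The pruned, sampled corpus would then be fed to the topic model; the paper's finding is that, with a large enough sample, the resulting topics closely resemble those from the full corpus.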
3bij3 – Developing a framework for researching recommender systems and their effects
Felicia Loecherbach, D. Trilling
Computational Communication Research, 2019-10-02. DOI: 10.31235/osf.io/vw2dr
Abstract: Today's online news environment is increasingly characterized by personalized news selections, relying on algorithmic solutions for extracting relevant articles and composing an individual's news diet. Yet the impact of such recommendation algorithms on how we consume and perceive news is still understudied. We therefore developed one of the first software solutions for conducting studies on the effects of news recommender systems in a realistic setting. The web app of our framework (called 3bij3) displays real-time news articles selected by different mechanisms. 3bij3 can be used to conduct large-scale field experiments in which participants' use of the site is tracked over extended periods of time. Compared to previous work, 3bij3 gives researchers control over the recommendation system under study and creates a realistic environment for participants. It integrates web scraping, different methods to compare and classify news articles, different recommender systems, a web interface for participants, gamification elements, and a user survey to enrich the behavioural measures obtained.
Citations: 8
News Organizations’ Selective Link Sharing as Gatekeeping
Chankyung Pak
Computational Communication Research, 2019-10-01. DOI: 10.5117/ccr2019.1.003.pak
Abstract: To disseminate their stories efficiently via social media, news organizations make decisions that resemble traditional editorial decisions. However, decisions for social media may deviate from traditional ones because they are often made outside the newsroom and guided by audience metrics. This study focuses on selective link sharing on Twitter as quasi-gatekeeping, a conditional decision to share a link to news content, and illustrates how it resembles and deviates from gatekeeping for the publication of news stories. Using a computational data collection method and a machine learning technique called the Structural Topic Model (STM), this study shows that selective link sharing generates different topic distributions between news websites and Twitter, significantly eroding the specialty of news organizations. This finding implies that the emergent logic governing news organizations' decisions for social media can undermine the provision of diverse news, which relies on journalistic values and norms.
Citations: 1
Computational observation
Mario Haim, Angela Nienierza
Computational Communication Research, 2019-10-01. DOI: 10.5117/ccr2019.1.004.haim
Abstract: A lot of modern media use is guided by algorithmic curation, a phenomenon that is in desperate need of empirical observation but for which adequate methodological tools are largely missing. To fill this gap, computational observation offers a novel approach: the unobtrusive and automated collection of information encountered within algorithmically curated media environments by means of a browser plug-in. In contrast to prior methodological approaches, browser plug-ins allow for reliable capture and repeated analysis of both content and context at the point of the actual user encounter. After discussing the technological, ethical, and practical considerations relevant to this automated solution, we present our open-source browser plug-in as an element in a multi-method design, along with potential links to panel surveys and content analysis. Finally, we present a proof-of-concept study of news exposure on Facebook: we successfully deployed the plug-in to Chrome and Firefox and combined the computational observation with a two-wave panel survey. Although this study suffered from severe recruitment difficulties, the results indicate that the methodological setup is reliable and ready to be implemented for data collection in a variety of studies on media use and media effects.
Citations: 10
A Weakly Supervised and Deep Learning Method for an Additive Topic Analysis of Large Corpora
Yair Fogel-Dror, Shaul R. Shenhav, Tamir Sheafer
Computational Communication Research, 2019-07-11. DOI: 10.31235/osf.io/nfr3p
Abstract: The collaborative effort of theory-driven content analysis can benefit significantly from topic analysis methods that allow researchers to add categories while developing or testing a theory. This additive approach enables the reuse of previous analyses or even the merging of separate research projects, making these methods more accessible and increasing the discipline's ability to create and share content analysis capabilities. This paper proposes a weakly supervised topic analysis method that uses a low-cost unsupervised method to compile a training set and supervised deep learning as an additive and accurate text classification method. We test the validity of the method, specifically its additivity, by comparing its results after adding 200 categories to an initial 450. We show that the suggested method provides a foundation for a low-cost solution for large-scale topic analysis.
Citations: 3
A Roadmap for Computational Communication Research
Wouter van Atteveldt, Drew B. Margolin, Cuihua Shen, D. Trilling, R. Weber
Computational Communication Research, 2019-05-20. DOI: 10.31235/osf.io/4dhfk
Abstract: Computational Communication Research (CCR) is a new open-access journal dedicated to publishing high-quality computational research in communication science. This editorial introduction describes the role we envision for the journal. First, we explain what computational communication science is and why a new journal is needed for this subfield. Then we elaborate on the type of research this journal seeks to publish and stress the need for transparent and reproducible science. The relation between theoretical development and computational analysis is discussed, and we argue for the value of null findings and risky research in additive science. Subsequently, the (experimental) two-phase review process is described: after the first, double-blind review phase, an editor can signal that they intend to publish the article conditional on satisfactory revisions. This starts the second review phase, in which authors and reviewers are no longer required to be anonymous, and authors are encouraged to publish a preprint of their article, which will be linked as a working paper from the journal. Finally, we introduce the four articles that, together with this introduction, form the inaugural issue.
Citations: 10
iCoRe: The GDELT Interface for the Advancement of Communication Research
F. R. Hopp, J. Schaffer, J. T. Fisher, R. Weber
Computational Communication Research, 2019-05-17. DOI: 10.31235/osf.io/smjwb
Abstract: This article introduces the interface for communication research (iCoRe) to access, explore, and analyze the Global Database of Events, Language and Tone (GDELT; Leetaru & Schrodt, 2013). GDELT provides a vast, open-source, and continuously updated repository of online news and event metadata collected from tens of thousands of news outlets around the world. Despite GDELT's promise for advancing communication science, its massive scale and complex data structures have hindered the efforts of communication scholars aiming to access and analyze it. We thus developed iCoRe, an easy-to-use web interface that (a) provides fast access to the data available in GDELT, (b) shapes and processes GDELT for theory-driven applications within communication research, and (c) enables replicability through transparent query and analysis protocols. After providing an overview of how GDELT's data pertain to communication research questions, we provide a tutorial on using iCoRe across three theory-driven case studies. We conclude with a discussion and outlook of iCoRe's future potential for advancing communication research.
Citations: 15