Attention allocation in information-rich environments: the case of news aggregators

Chrysanthos Dellarocas, J. Sutanto, Mihai Calin, Elia Palme
{"title":"Attention allocation in information-rich environments: the case of news aggregators","authors":"Chrysanthos Dellarocas, J. Sutanto, Mihai Calin, Elia Palme","doi":"10.1287/mnsc.2015.2237","DOIUrl":null,"url":null,"abstract":"Few industries have suffered more severe disruption by digital technologies than news and journalism. Traditional content creators, such as newspapers, are witnessing their geographical monopolies dissolving into the globally competitive Internet and some of their most important sources of revenue, such as classified ads, migrating to specialized online marketplaces like eBay and Craigslist. User-generated content, such as blogs and online reviews, has increased the supply of content that often competes head-on for readers' attention with professionally produced content.\n As if the previous changes were not enough, an equally disruptive transformation is currently underway in these industries. The overwhelming amount of content available online has increased the importance of curation and aggregation, that is, of interfaces and services that help readers filter and make sense of the subset of content that is important to them. Historically such functions used to be the realm of professional editors: editors not only commissioned the production of content but also decided what content would be included in a newspaper or magazine issue and how it would be organized.\n Web technologies allow this important function to be unbundled from content production. Specifically, the web's ability to place hyperlinks across content has enabled new types of players, commonly referred to as content aggregators or web portals, to successfully enter the professional content ecosystems, attracting traffic and revenue by hosting collections of links to the content of others. Aggregators produce little or no original content; they usually provide titles and short summaries of the articles they link to (Figure 1). Well known aggregators include Google News, the Drudge Report and the Huffington Post. Table 1 provides a more extensive list of examples.\n Facing severe financial pressures, some content creators have turned against content aggregators, accusing them of \"stealing\" their revenues by free riding on their content. Other market actors point out that in today's \"link economy\", the links bring valuable traffic to the target nodes. Therefore content creators should be happy that aggregators exist and direct consumers to their sites. Key aggregator executives, such as Google's Eric Schmidt, assert that it is to their interest to see content creators thrive, since the value of links (and aggregators) is directly related to the quality of content that these point to.\n A central aspect of the debate focuses on the complex economic implications of the process of placing (for the most part) free hyperlinks across content nodes. The main argument in favor of aggregators is that, if links are chosen well, then they point to good quality content; as a result, they reduce the search costs of the consumers, which may lead to an aggregate increase in content consumption and to more traffic for higher quality sites. The main argument against aggregators is that some consumers satisfy their curiosity by reading an aggregator's short snippet of a linked-to article and never click through to the article itself. 
In fact, the question of whether aggregators are legally permitted to reproduce an article's title and snippet without obtaining permission from (and possibly paying) the content producer is still unresolved.\n Even though there is still an open question of whether the current generation of news aggregators is beneficial or harmful to the content ecosystems (Dellarocas et al. 2010), we believe that the ever-increasing volume of available content makes some form of aggregation an inevitable and valuable component of every content ecosystem. The key question, therefore, is not whether aggregators should exist, but rather how the partly symbiotic and partly competitive relationship between aggregators and content creators can be optimized for the benefit of both parties.\n In an attempt to provide initial answers to these questions, we conduct a series of field experiments whose objective is to provide insight with respect to how readers distribute their attention between a news aggregator and the original articles it links to. Our experiments are based on manipulating elements of the user interface of a Swiss mobile news aggregator. We examine how key design parameters, such as the length of the text snippet that an aggregator provides about articles, the presence of associated photos as well as of other related articles on the same story, affect a reader's propensity to click on an article, the amount of time that the reader spends on that article after clicking, and the amount of time that the reader spends on the aggregator.\n Gaining a firmer empirical understanding of these relationships will be valuable not only for aggregators seeking to optimize their own traffic patterns, but also in terms of informing the public discourse between aggregators and content creators on the need for equitable profit sharing agreements between the two parties.\n Our results indicate that there is, indeed, a degree of substitution between the amount of information of a news article that is displayed on news aggregators and the cumulative time that readers are likely to spend on the original article site. We found a negative relationship between an article's snippet length on the aggregator and the probability that a user will click the link to the original article site: the longer the snippet, the lower the click-through rate. Moreover, we found a positive relationship between an article's snippet length and the amount of time readers spend on the aggregator until they decide to click on the linked article. Interestingly, the presence of an image has the same effect to that of increasing the snippet length on the article's click-through rate: it is associated with a decrease in click-through rate and an increase in a reader's average decision time. We also found that when there is a click-through, the amount of time spent on the original article has an inverted U relationship with the snippet lengths. This finding suggests that very short snippets do not provide adequate information; resulting in more readers clicking on their respective linked articles but then deciding that the articles were actually not very interesting to them.\n Since aggregators typically collect articles that belong to the same topic groups, they create competition among the related articles. In this study, we also examined how the aggregation of articles into topic groups affects the allocation of readers' attention. 
We found an inverse U-shaped relationship between the number of articles in a topic group and the probability that readers will click on at least one article from that group. Our explanation for this finding is that, the more articles are available on a topic, the more likely that at a user will find at least one of them appealing. Furthermore, the presence of multiple articles tends to signal important stories that are worth reading about. However, when there are plenty of related articles, the combined presence of multiple snippets may satisfy the readers' curiosity who then may not feel the need to click on any of the linked articles. This is a previously unnoticed side effect of news aggregators that can be potentially detrimental to content producers and thus deserves more attention.\n With respect to the competition among the related articles, we examined what factors determine which article(s) in a group will likely to be chosen by the readers. As expected, articles positioned at the top of the list were most often being chosen. Controlling for the position, articles with an image were more likely to be chosen. Interestingly, the choice probability was higher for articles whose snippet length was longer than the average snippet length of the related articles.\n The current work establishes that aggregators indeed extract an \"attention tax\" from content producers, in the form of users who never click through to the original articles. We demonstrate that the fraction of such users depends on the design parameters of the aggregator and that there is a substitution relationship between the amount of time that readers spend on the aggregator and the time readers spend on the original articles. What is outside the scope of the current research is the impact that aggregators have in increasing the overall consumption of content (e.g. because they reduce search costs by organizing content). Despite some recent attempts to provide partial answers (e.g. Chiou and Tucker 2011), the latter still remains an elusive and interesting empirical question for future research.\n From a methodological perspective, this work highlights the feasibility of conducting field experiments with \"real users\" using \"home grown\" apps developed in research labs and then released to the public. Our results suggest that experiments with even a few thousands of users can expose many of the effects that are also present in much larger scale applications. There is, thus, an interesting methodological discussion to be had on the merits of working with larger, secondary, data sets obtained from third-parties vs. primary data sets obtained from smaller scale apps developed for the explicit purpose of conducting experimental studies.","PeriodicalId":285194,"journal":{"name":"IRPN: Innovation & Information Management (Topic)","volume":"136 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"46","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IRPN: Innovation & Information Management (Topic)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1287/mnsc.2015.2237","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 46

Abstract

Few industries have suffered more severe disruption from digital technologies than news and journalism. Traditional content creators, such as newspapers, are witnessing their geographical monopolies dissolve into the globally competitive Internet, and some of their most important sources of revenue, such as classified ads, are migrating to specialized online marketplaces like eBay and Craigslist. User-generated content, such as blogs and online reviews, has increased the supply of content that often competes head-on with professionally produced content for readers' attention.

As if these changes were not enough, an equally disruptive transformation is currently underway in these industries. The overwhelming amount of content available online has increased the importance of curation and aggregation, that is, of interfaces and services that help readers filter and make sense of the subset of content that is important to them. Historically, such functions were the realm of professional editors: editors not only commissioned the production of content but also decided what content would be included in a newspaper or magazine issue and how it would be organized.

Web technologies allow this important function to be unbundled from content production. Specifically, the web's ability to place hyperlinks across content has enabled new types of players, commonly referred to as content aggregators or web portals, to successfully enter professional content ecosystems, attracting traffic and revenue by hosting collections of links to the content of others. Aggregators produce little or no original content; they usually provide titles and short summaries of the articles they link to (Figure 1). Well-known aggregators include Google News, the Drudge Report, and the Huffington Post. Table 1 provides a more extensive list of examples.

Facing severe financial pressures, some content creators have turned against content aggregators, accusing them of "stealing" their revenues by free riding on their content. Other market actors point out that in today's "link economy" links bring valuable traffic to the target nodes; content creators should therefore be happy that aggregators exist and direct consumers to their sites. Key aggregator executives, such as Google's Eric Schmidt, assert that it is in their interest to see content creators thrive, since the value of links (and aggregators) is directly related to the quality of the content they point to.

A central aspect of the debate focuses on the complex economic implications of placing (for the most part) free hyperlinks across content nodes. The main argument in favor of aggregators is that, if links are chosen well, they point to good-quality content; as a result, they reduce consumers' search costs, which may lead to an aggregate increase in content consumption and to more traffic for higher-quality sites. The main argument against aggregators is that some consumers satisfy their curiosity by reading an aggregator's short snippet of a linked-to article and never click through to the article itself. In fact, the question of whether aggregators are legally permitted to reproduce an article's title and snippet without obtaining permission from (and possibly paying) the content producer is still unresolved.

Even though it remains an open question whether the current generation of news aggregators is beneficial or harmful to content ecosystems (Dellarocas et al. 2010), we believe that the ever-increasing volume of available content makes some form of aggregation an inevitable and valuable component of every content ecosystem. The key question, therefore, is not whether aggregators should exist, but rather how the partly symbiotic and partly competitive relationship between aggregators and content creators can be optimized for the benefit of both parties.

In an attempt to provide initial answers to these questions, we conduct a series of field experiments designed to provide insight into how readers distribute their attention between a news aggregator and the original articles it links to. Our experiments are based on manipulating elements of the user interface of a Swiss mobile news aggregator. We examine how key design parameters, such as the length of the text snippet that the aggregator provides for each article, the presence of an associated photo, and the presence of other related articles on the same story, affect a reader's propensity to click on an article, the amount of time the reader spends on that article after clicking, and the amount of time the reader spends on the aggregator.

Gaining a firmer empirical understanding of these relationships will be valuable not only for aggregators seeking to optimize their own traffic patterns, but also for informing the public discourse between aggregators and content creators on the need for equitable profit-sharing agreements between the two parties.

Our results indicate that there is, indeed, a degree of substitution between the amount of information about a news article that is displayed on the aggregator and the cumulative time that readers are likely to spend on the original article site. We found a negative relationship between an article's snippet length on the aggregator and the probability that a user will click through to the original article site: the longer the snippet, the lower the click-through rate. Moreover, we found a positive relationship between an article's snippet length and the amount of time readers spend on the aggregator before deciding to click on the linked article. Interestingly, the presence of an image has the same effect on an article's click-through rate as increasing the snippet length: it is associated with a decrease in click-through rate and an increase in a reader's average decision time. We also found that, when there is a click-through, the amount of time spent on the original article has an inverted-U relationship with snippet length. This finding suggests that very short snippets do not provide adequate information, leading more readers to click on the linked articles but then decide that the articles are not actually very interesting to them.

Since aggregators typically collect articles that belong to the same topic groups, they create competition among the related articles. In this study, we also examined how the aggregation of articles into topic groups affects the allocation of readers' attention. We found an inverted U-shaped relationship between the number of articles in a topic group and the probability that readers will click on at least one article from that group. Our explanation for this finding is that the more articles are available on a topic, the more likely it is that a user will find at least one of them appealing; furthermore, the presence of multiple articles tends to signal important stories that are worth reading about. However, when there are many related articles, the combined presence of multiple snippets may satisfy readers' curiosity, so that they no longer feel the need to click on any of the linked articles. This is a previously unnoticed side effect of news aggregators that can be potentially detrimental to content producers and thus deserves more attention.

With respect to the competition among related articles, we examined which factors determine which article(s) in a group are likely to be chosen by readers. As expected, articles positioned at the top of the list were chosen most often. Controlling for position, articles with an image were more likely to be chosen. Interestingly, the choice probability was higher for articles whose snippet was longer than the average snippet length of the related articles.

The current work establishes that aggregators indeed extract an "attention tax" from content producers, in the form of users who never click through to the original articles. We demonstrate that the fraction of such users depends on the design parameters of the aggregator and that there is a substitution relationship between the amount of time readers spend on the aggregator and the time they spend on the original articles. What is outside the scope of the current research is the impact that aggregators have on increasing the overall consumption of content (e.g., because they reduce search costs by organizing content). Despite some recent attempts to provide partial answers (e.g., Chiou and Tucker 2011), the latter remains an elusive and interesting empirical question for future research.

From a methodological perspective, this work highlights the feasibility of conducting field experiments with "real users" using "home-grown" apps developed in research labs and then released to the public. Our results suggest that experiments with even a few thousand users can expose many of the effects that are also present in much larger-scale applications. There is thus an interesting methodological discussion to be had on the merits of working with larger, secondary data sets obtained from third parties versus primary data sets obtained from smaller-scale apps developed for the explicit purpose of conducting experimental studies.
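To make the reported relationships concrete, the short sketch below simulates a hypothetical click log and fits a logistic regression whose signs mirror the findings summarized above: longer snippets and the presence of an image lower the click-through probability, while topic-group size enters with an inverted-U shape. The column names, coefficient values, and model specification are illustrative assumptions for this sketch, not the paper's actual data or estimation strategy.

# Illustrative sketch only: a simulated click log with the qualitative patterns
# described in the abstract, fit with a standard logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Hypothetical schema; these column names are assumptions, not the study's data.
df = pd.DataFrame({
    "snippet_len": rng.integers(20, 300, n),   # characters shown on the aggregator
    "has_image":   rng.integers(0, 2, n),      # 1 if a photo accompanies the snippet
    "group_size":  rng.integers(1, 10, n),     # related articles in the topic group
})

# Simulated outcome with assumed signs: longer snippets and images reduce
# click-through; group size has an inverted-U effect (rises, then falls).
latent = (0.5
          - 0.004 * df["snippet_len"]
          - 0.3 * df["has_image"]
          + 0.4 * df["group_size"]
          - 0.05 * df["group_size"] ** 2)
p_click = 1.0 / (1.0 + np.exp(-latent.to_numpy()))
df["clicked"] = rng.binomial(1, p_click)

# Logit of click-through on snippet length, image presence, and a quadratic
# in group size, which is one simple way to test for an inverted-U pattern.
model = smf.logit(
    "clicked ~ snippet_len + has_image + group_size + I(group_size ** 2)",
    data=df,
).fit(disp=0)
print(model.summary())

On simulated data of this kind, the fitted coefficients should recover the assumed signs: negative for snippet_len and has_image, positive for group_size, and negative for its square, which is the signature of an inverted-U relationship.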