Retraction Watch: What We’ve Learned and How Metrics Play a Role

I. Oransky
{"title":"Retraction Watch: What We’ve Learned and How Metrics Play a Role","authors":"I. Oransky","doi":"10.7551/mitpress/11087.003.0014","DOIUrl":null,"url":null,"abstract":".com) in August 2010 for two reasons: As longtime journalists, we often found that retraction notices were opaque. And sometimes opacity was the best you could hope for; often, notices were misleading or even wrong. We also found that there were great stories behind retractions. We have our own metrics at Retraction Watch, mostly just having to do with traffic to the site each month; we now have, on average, 150,000 unique visitors, and half a million page views. (However, we are not beholden to these metrics, as our revenue does not depend on advertising; we have at various times had generous funding from three foundations, and other income streams including freelance writing fees.) In terms of more traditional metrics, I can say we have been cited in the literature more than a hundred times. That means that if a blog could have an H index, we would have a good one. And it does not hurt when we talk to funders about the impact we are having on publishing practices and transparency. Retraction Watch posts often begin with a tip— mostly a notice of retraction. But we also receive long emails from frustrated researchers, who have been laboring to correct a perceived wrong for months, if not years. We empathize and sympathize with their frustration— it is incredibly hard to get papers retracted from the literature, or even corrected or noted in some way. As an illustration, take a piece by nutrition researcher David Allison and colleagues that appeared in Nature (Allison et al., 2016). They scanned the nutrition literature and found more than two dozen papers that they thought were deeply problematic. And they kept a pretty high bar. You can judge for yourselves, but if you look at the kinds of problems they were looking at in these papers, it was pretty clear something needed to be done. In a few cases, the journals retracted the paper, or published a letter from Allison and his team critiquing the findings, but in many cases the 10 Retraction Watch: What We’ve Learned and How Metrics Play a Role","PeriodicalId":186262,"journal":{"name":"Gaming the Metrics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Gaming the Metrics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.7551/mitpress/11087.003.0014","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

We launched Retraction Watch (retractionwatch.com) in August 2010 for two reasons: as longtime journalists, we often found that retraction notices were opaque. And sometimes opacity was the best you could hope for; often, notices were misleading or even wrong. We also found that there were great stories behind retractions. We have our own metrics at Retraction Watch, mostly having to do with traffic to the site each month; we now have, on average, 150,000 unique visitors and half a million page views. (However, we are not beholden to these metrics, as our revenue does not depend on advertising; we have at various times had generous funding from three foundations, as well as other income streams, including freelance writing fees.) In terms of more traditional metrics, I can say we have been cited in the literature more than a hundred times. That means that if a blog could have an H index, we would have a good one. And it does not hurt when we talk to funders about the impact we are having on publishing practices and transparency. Retraction Watch posts often begin with a tip, mostly a notice of retraction. But we also receive long emails from frustrated researchers who have been laboring to correct a perceived wrong for months, if not years. We empathize and sympathize with their frustration; it is incredibly hard to get papers retracted from the literature, or even corrected or noted in some way. As an illustration, take a piece by nutrition researcher David Allison and colleagues that appeared in Nature (Allison et al., 2016). They scanned the nutrition literature and found more than two dozen papers that they thought were deeply problematic. And they kept a pretty high bar. You can judge for yourselves, but if you look at the kinds of problems they were looking at in these papers, it was pretty clear something needed to be done. In a few cases, the journals retracted the paper or published a letter from Allison and his team critiquing the findings, but in many cases the…