{"title":"Retraction Watch: What We’ve Learned and How Metrics Play a Role","authors":"I. Oransky","doi":"10.7551/mitpress/11087.003.0014","DOIUrl":null,"url":null,"abstract":".com) in August 2010 for two reasons: As longtime journalists, we often found that retraction notices were opaque. And sometimes opacity was the best you could hope for; often, notices were misleading or even wrong. We also found that there were great stories behind retractions. We have our own metrics at Retraction Watch, mostly just having to do with traffic to the site each month; we now have, on average, 150,000 unique visitors, and half a million page views. (However, we are not beholden to these metrics, as our revenue does not depend on advertising; we have at various times had generous funding from three foundations, and other income streams including freelance writing fees.) In terms of more traditional metrics, I can say we have been cited in the literature more than a hundred times. That means that if a blog could have an H index, we would have a good one. And it does not hurt when we talk to funders about the impact we are having on publishing practices and transparency. Retraction Watch posts often begin with a tip— mostly a notice of retraction. But we also receive long emails from frustrated researchers, who have been laboring to correct a perceived wrong for months, if not years. We empathize and sympathize with their frustration— it is incredibly hard to get papers retracted from the literature, or even corrected or noted in some way. As an illustration, take a piece by nutrition researcher David Allison and colleagues that appeared in Nature (Allison et al., 2016). They scanned the nutrition literature and found more than two dozen papers that they thought were deeply problematic. And they kept a pretty high bar. You can judge for yourselves, but if you look at the kinds of problems they were looking at in these papers, it was pretty clear something needed to be done. In a few cases, the journals retracted the paper, or published a letter from Allison and his team critiquing the findings, but in many cases the 10 Retraction Watch: What We’ve Learned and How Metrics Play a Role","PeriodicalId":186262,"journal":{"name":"Gaming the Metrics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Gaming the Metrics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.7551/mitpress/11087.003.0014","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
We launched Retraction Watch (retractionwatch.com) in August 2010 for two reasons. First, as longtime journalists, we often found that retraction notices were opaque; and sometimes opacity was the best you could hope for, because notices were often misleading or even wrong. Second, we found that there were great stories behind retractions.

We have our own metrics at Retraction Watch, mostly having to do with monthly traffic to the site: we now average 150,000 unique visitors and half a million page views. (We are not beholden to these metrics, however, as our revenue does not depend on advertising; we have at various times had generous funding from three foundations, along with other income streams, including freelance writing fees.) In terms of more traditional metrics, I can say we have been cited in the literature more than a hundred times. That means that if a blog could have an h-index, ours would be a good one. And it does not hurt when we talk to funders about the impact we are having on publishing practices and transparency.

Retraction Watch posts often begin with a tip, most commonly a notice of retraction. But we also receive long emails from frustrated researchers who have been laboring for months, if not years, to correct a perceived wrong. We empathize and sympathize with their frustration: it is incredibly hard to get papers retracted from the literature, or even corrected or flagged in some way.

As an illustration, take a piece by nutrition researcher David Allison and colleagues that appeared in Nature (Allison et al., 2016). They scanned the nutrition literature and found more than two dozen papers that they thought were deeply problematic, and they set a high bar. You can judge for yourselves, but given the kinds of problems they identified in these papers, it was pretty clear something needed to be done. In a few cases, the journals retracted the paper or published a letter from Allison and his team critiquing the findings, but in many cases the …