Manipulation of bibliometric data by editors of scientific journals

V. Lazarev
European Science Editing (Q2, Social Sciences), November 2019
DOI: 10.20316/ese.2019.45.19011
Citations: 2

Abstract

Although a bibliometrician myself, I believe that we, bibliometricians, are partly responsible for the bibliometric perversions currently in vogue for evaluating the performance of scientists. Bibliometricians are often negligent about, or indifferent to, how bibliometric indicators are interpreted by others, the terms used for referring to concepts, and other terminology, particularly terms referring to the properties of items assessed using bibliometric indicators. I support this serious charge against my colleagues with some examples. Take the fashionable term ‘altmetrics’, which reflects no particular domain or discipline (in contrast to ‘bibliometrics’ or ‘scientometrics’); using the term ‘metric’ instead of ‘indicator’ is a sign of overvalued evaluative ambitions, as is the frequent but uncritical use of the pair ‘value’ and ‘quality’ as full synonyms. Such misuse of terms not only justifies the erroneous practice of research bureaucracies of evaluating research performance on those terms but also encourages editors of scientific journals and reviewers of research papers to ‘game’ the bibliometric indicators.

For instance, if a journal seems to lack an adequate number of citations, the editor of that journal might decide to make it obligatory for its authors to cite papers from the journal in question. I know an Indian journal of fairly reasonable quality in terms of several other criteria, but I can no longer consider it so because it forces authors to include unnecessary (that is, plainly false) citations to papers in that journal. Any further assessment of this journal that includes self-citations will lead to a distorted measure of its real status.

An average paper in the natural or applied sciences lists at least 10 references [1]. Some enterprising editors have taken this number to be the minimum for papers submitted to their journals. Such a norm is enforced in many journals from Belarus, and we, authors, are now so used to that norm that we do not even realize the distortions it creates in bibliometric data. Indeed, I often notice that some authors – merely to meet the norm of at least 10 references – cite very old textbooks and Internet resources with URLs that are no longer valid. The average for a good paper may be more than 10 references, and a paper with fewer than 10 references may yet be a good paper (Einstein’s first paper did not contain even one reference in its original version!). I believe that it is up to a peer reviewer to judge whether the author has given enough references and whether they are suitable; it is not for a journal’s editor to set any mandatory quota for the number of references.
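To make the distortion concrete, here is a minimal Python sketch (not from the article; the journal name, function, and all numbers are hypothetical) of how coerced journal self-citations inflate a simple citations-per-paper indicator:

    # Illustrative only: hypothetical counts showing how coerced journal
    # self-citations inflate a citations-per-paper indicator.

    def citations_per_paper(total_citations: int, self_citations: int,
                            papers: int, include_self: bool) -> float:
        """Average citations per paper, optionally excluding journal self-citations."""
        counted = total_citations if include_self else total_citations - self_citations
        return counted / papers

    # Suppose a journal published 100 papers that attracted 300 citations,
    # 120 of which were self-citations demanded by the editor.
    print(citations_per_paper(300, 120, 100, include_self=True))   # 3.0
    print(citations_per_paper(300, 120, 100, include_self=False))  # 1.8

Under these assumed numbers, the indicator computed with self-citations overstates the journal's citedness by two thirds, which is why an assessment that includes them gives a distorted measure of the journal's real status.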
Source journal
European Science Editing (Social Sciences – Communication)
CiteScore: 1.90
Self-citation rate: 0.00%
Articles published: 17
Review time: 12 weeks
Journal description: EASE's journal, European Science Editing, publishes articles, reports meetings, announces new developments and forthcoming events, reviews books, software, and online resources, and highlights publications of interest to members.