{"title":"科学期刊编辑对文献计量数据的操纵","authors":"V. Lazarev","doi":"10.20316/ese.2019.45.19011","DOIUrl":null,"url":null,"abstract":"Although a bibliometrician myself, I believe that we, bibliometricians, are partly responsible for the bibliometric perversions currently in vogue to evaluate the performance of scientists. Bibliometricians are often negligent about, or indifferent to, how bibliometric indicators are interpreted by others, the terms used for referring to concepts, and other terminology, particularly terms referring to the properties of items assessed using bibliometric indicators. I support this serious charge against my colleagues with some examples. Take the fashionable term ‘altmetrics’, which reflects no particular domain or discipline (in contrast to ‘bibliometrics’ or ‘scientometrics’); using the term ‘metric’ instead of ‘indicator’ is a sign of overvalued evaluative ambitions, as is the frequent but uncritical use of the pairs ‘value’ and ‘quality’ as full synonyms. Such misuse of terms not only justifies the erroneous practice of research bureaucracy of evaluating research performance on those terms but also encourages editors of scientific journals and reviewers of research papers to ‘game’ the bibliometric indicators. For instance, if a journal seems to lack adequate number of citations, the editor of that journal might decide to make it obligatory for its authors to cite papers from journal in question. I know an Indian journal of fairly reasonable quality in terms of several other criteria but can no longer consider it so because it forces authors to include unnecessary (that is plain false) citations to papers in that journal. Any further assessment of this journal that includes self-citations will lead to a distorted measure of its real status. An average paper in the natural or applied sciences lists at least 10 references.1 Some enterprising editors have taken this number to be the minimum for papers submitted to their journals. Such a norm is enforced in many journals from Belarus, and we, authors, are now so used to that norm that we do not even realize the distortions it creates in bibliometric data. Indeed, I often notice that some authors – merely to meet the norm of at least 10 references – cite very old textbooks and Internet resources with URLs that are no longer valid. The average for a good paper may be more than 10 references, and a paper with fewer than 10 references may yet be a good paper (The first paper by Einstein did not have even one reference in its original version!). I believe that it is up to a peer reviewer to judge whether the author has given enough references and whether they are suitable, and it is not for a journal’s editor to set any mandatory quota for the number of references. Viewpoint","PeriodicalId":35360,"journal":{"name":"European Science Editing","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Manipulation of bibliometric data by editors of scientific journals\",\"authors\":\"V. Lazarev\",\"doi\":\"10.20316/ese.2019.45.19011\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Although a bibliometrician myself, I believe that we, bibliometricians, are partly responsible for the bibliometric perversions currently in vogue to evaluate the performance of scientists. 
Bibliometricians are often negligent about, or indifferent to, how bibliometric indicators are interpreted by others, the terms used for referring to concepts, and other terminology, particularly terms referring to the properties of items assessed using bibliometric indicators. I support this serious charge against my colleagues with some examples. Take the fashionable term ‘altmetrics’, which reflects no particular domain or discipline (in contrast to ‘bibliometrics’ or ‘scientometrics’); using the term ‘metric’ instead of ‘indicator’ is a sign of overvalued evaluative ambitions, as is the frequent but uncritical use of the pairs ‘value’ and ‘quality’ as full synonyms. Such misuse of terms not only justifies the erroneous practice of research bureaucracy of evaluating research performance on those terms but also encourages editors of scientific journals and reviewers of research papers to ‘game’ the bibliometric indicators. For instance, if a journal seems to lack adequate number of citations, the editor of that journal might decide to make it obligatory for its authors to cite papers from journal in question. I know an Indian journal of fairly reasonable quality in terms of several other criteria but can no longer consider it so because it forces authors to include unnecessary (that is plain false) citations to papers in that journal. Any further assessment of this journal that includes self-citations will lead to a distorted measure of its real status. An average paper in the natural or applied sciences lists at least 10 references.1 Some enterprising editors have taken this number to be the minimum for papers submitted to their journals. Such a norm is enforced in many journals from Belarus, and we, authors, are now so used to that norm that we do not even realize the distortions it creates in bibliometric data. Indeed, I often notice that some authors – merely to meet the norm of at least 10 references – cite very old textbooks and Internet resources with URLs that are no longer valid. The average for a good paper may be more than 10 references, and a paper with fewer than 10 references may yet be a good paper (The first paper by Einstein did not have even one reference in its original version!). I believe that it is up to a peer reviewer to judge whether the author has given enough references and whether they are suitable, and it is not for a journal’s editor to set any mandatory quota for the number of references. 
Viewpoint\",\"PeriodicalId\":35360,\"journal\":{\"name\":\"European Science Editing\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"European Science Editing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.20316/ese.2019.45.19011\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Science Editing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.20316/ese.2019.45.19011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Social Sciences","Score":null,"Total":0}
Manipulation of bibliometric data by editors of scientific journals
Although I am a bibliometrician myself, I believe that we bibliometricians are partly responsible for the bibliometric perversions currently in vogue for evaluating the performance of scientists. Bibliometricians are often negligent about, or indifferent to, how bibliometric indicators are interpreted by others and how terms are chosen to refer to concepts, particularly terms that describe the properties of the items being assessed with those indicators. I support this serious charge against my colleagues with some examples.

Take the fashionable term 'altmetrics', which reflects no particular domain or discipline (in contrast to 'bibliometrics' or 'scientometrics'); using the term 'metric' instead of 'indicator' is a sign of overvalued evaluative ambitions, as is the frequent but uncritical use of the pair 'value' and 'quality' as full synonyms. Such misuse of terms not only justifies the erroneous practice of research bureaucracies that evaluate research performance on such terms but also encourages editors of scientific journals and reviewers of research papers to 'game' the bibliometric indicators.

For instance, if a journal seems to lack an adequate number of citations, the editor of that journal might decide to make it obligatory for its authors to cite papers from the journal in question. I know an Indian journal of fairly reasonable quality in terms of several other criteria but can no longer consider it so, because it forces authors to include unnecessary (that is, plainly false) citations to papers in that journal. Any further assessment of this journal that includes self-citations will lead to a distorted measure of its real status.

An average paper in the natural or applied sciences lists at least 10 references.1 Some enterprising editors have taken this number to be the minimum for papers submitted to their journals. Such a norm is enforced in many journals from Belarus, and we authors are now so used to it that we do not even realize the distortions it creates in bibliometric data. Indeed, I often notice that some authors, merely to meet the norm of at least 10 references, cite very old textbooks and Internet resources with URLs that are no longer valid. The average for a good paper may be more than 10 references, and a paper with fewer than 10 references may yet be a good paper (Einstein's first paper did not contain even one reference in its original version!). I believe that it is up to a peer reviewer to judge whether the author has given enough references and whether they are suitable; it is not for a journal's editor to set any mandatory quota for the number of references.
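The point that coerced self-citations distort a journal's apparent status can be made concrete with a small numeric sketch. The Python example below is purely illustrative: "Journal X", the paper count, and the citation figures are invented assumptions, not data from any real journal. It simply shows how a naive citations-per-paper indicator changes once citations the journal gives to itself are excluded.

from dataclasses import dataclass

@dataclass
class Citation:
    citing_journal: str   # journal of the paper that gives the citation
    cited_journal: str    # journal of the paper that receives it

def citations_per_paper(citations, journal, n_papers, exclude_self=False):
    """Average citations per paper received by `journal`, optionally
    excluding citations that originate in the same journal."""
    received = [
        c for c in citations
        if c.cited_journal == journal
        and not (exclude_self and c.citing_journal == journal)
    ]
    return len(received) / n_papers

# Hypothetical figures: 100 papers and 300 received citations,
# 180 of which were coerced from the journal's own authors.
citations = (
    [Citation("Journal X", "Journal X")] * 180          # self-citations
    + [Citation("Other journals", "Journal X")] * 120   # external citations
)

print(citations_per_paper(citations, "Journal X", 100))                     # 3.0
print(citations_per_paper(citations, "Journal X", 100, exclude_self=True))  # 1.2

Under these invented numbers the indicator drops from 3.0 to 1.2 once self-citations are removed, which is the sense in which any assessment that includes them measures the editor's policy rather than the journal's real standing.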