As the world turns: scientific publishing in the digital era

David Ozonoff
{"title":"As the world turns: scientific publishing in the digital era","authors":"David Ozonoff","doi":"10.1186/s12940-024-01063-5","DOIUrl":null,"url":null,"abstract":"<p>A quarter of the way into the 21st Century the technology of encoding and transmitting information in digital form is in full flower. Almost without noticing it, we are living through a historical discontinuity comparable to the one produced by Guttenberg’s invention of printing with moveable type in 1450, a technology that made possible the production of identical written texts on a scale previously unimaginable. That technology was quickly adopted, but its basic form didn’t change for hundreds of years. Today the speed of advance in digital technology is breath taking. Digital devices like the smart phone have moved from expensive prototypes to ubiquitous and essential appliances in a little over a decade. Digital technology has also substantially affected scientific publishing.</p><p>In 1879 John Shaw Billings, a surgeon in the Office of the US Surgeon General of the Army, began to compile an author-topic catalog of the library. In 1966 its print descendant, <i>Index Medicus</i> (now <i>PubMed</i>), went online [1], but as long as the journals themselves were still in print-only format, its full impact only came when most journal-published research was also available in electronic digital format. That time has come and it has had a profound effect on how scientists seek out and find research relevant to their work. Gone are the days when many of us routinely perused the latest issues of journals in our institutional libraries or went to library stacks to retrieve past issues and lug them to the copy machines at 10 cents a page. The stacks and copy machines now sit on our desks as internet-connected computers and personal printers. Some of us haven’t been in a physical library for years. Journals still appearing in paper format have been forced to have a digital format also available. At the same time advances in research methods, such as genomics or new imaging technics unimaginable in the pre-digital era have vastly expanded the scope and depth of biomedical research. Even well-defined research fields are now extensively sub-specialized and the volume of publication is potentially overwhelming. Yet online digital search makes it possible to find the needle in the haystack, and this is an essential difference compared to even a short time ago.</p><p>This is a seismic shift in scientific publishing and it has happened in a relatively short time without most of us being conscious of it. Just as music streaming services uncoupled song tracks from the record album or CD upon which they originally appeared, no-cost search engines like Google or the biomedical research database PubMed have uncoupled individual research articles from the journals where they originally appeared. Journal brand names remain significant, but less so than previously and they are no longer the first place we look. Now we can look everywhere at once.</p><p>By the year 2000 we had the routine ability to transmit our writing electronically in digital form and access to a worldwide network to distribute it almost instantaneously. Before that, printing and distributing scientific texts were done by commercial publishers. In the Age of the Internet, it seemed plausible that the print publisher, like the buggy whip makers in the age of the automobile, were headed for technological obsolescence. 
As of now, while far from obsolete, the major science publishers have still been forced to adapt to the new digital environment.</p><p>The Open Access (OA) movement in scientific publishing [2] produced new electronic journals with access open to anyone with an internet connection. The fall of subscription firewalls and a shift in intellectual property arrangements is still underway but is well advanced. Copyright shifted from the publisher to author(s). While there was still a publisher involved, OA journals were something new. In 2002 my co-Editor-in-Chief, Professor Philippe Grandjean, was invited by a new publisher, BioMed Central (BMC), to join a stable of electronically published OA science journals. BMC was founded in 2000 by entrepreneur and visionary Vitek Tracz (for more on the history of BMC, see [3] and for Tracz see [4]). Professor Grandjean generously asked me to join him in starting the first OA journal devoted to the science of environmental health, with a special focus on research using epidemiological methods. The result was this journal, <i>Environmental Health</i>, which has now been published for more than two decades. In 2022 it had 1.9 million downloads and over 26,000 altimetric mentions and is in the top quartile of all journals in the field, publishing more than 100 articles a year. OA and OA journals are now well established, with an ever increasing share of published research articles.</p><p>With the increasing recognition of the value of OA to readership and the resistance of institutional libraries to the soaring cost of subscriptions, publishers are changing their subscription-based business models to per-paper “processing charges” tied to appearance on their websites. For the publishers the <i>number</i> of published papers now had a financial significance independent of their contents. It remains true that a research publication record is a major criterion of professional status and reputation, used by many academic institutions in appointment and promotion decisions, but in doing so universities have also given weight to publication numbers, not just research significance or quality. Both trends have reinforced incentives for publisher and researcher alike to publish papers with the narrowest possible scope, resulting in multiple shorter papers from a single line of research and affecting the average content of an individual paper. It is the Editors who have the responsibility to accept or reject papers, but they must work with what is submitted and there are incentives for both publishers and researchers to divide papers into smaller and more numerous packages.</p><p>Triaging the resulting increase in submission volume is one of the biggest challenges journals face. Looked at from a researcher’s or publisher’s point of view this is a marketing problem. How does one get a journal to “buy” the maximum number of the researcher’s results or the researcher to buy the publisher’s services? But from the journal Editor’s point of view, it is a problem of how to recognize, and make available, research of value against a noisy background. Under the subscription model they were more or less aligned. Consistent high-quality and high-content research enhanced both the objectives of the publishers and researchers, on the one hand, and the editor’s journal, on the other. In this emerging environment that alignment has been lost. “Predatory journals” with low or no barriers to publication have arisen to take advantage of the current OA per-paper business model. 
At the same time legitimate and established journals like this one have seen large increases in submissions, many of marginal or no interest to the field.</p><p>Editors, however, are still charged with evaluating the contents of submissions. The conventional (although historically recent) mechanism of peer review would seem to be the surest way to address this. But the peer review process itself has become a major challenge for almost every scientific journal, including this one. As Editors, we serve a gatekeeping function, and while we are under no obligation to open the gate for papers of little value, we don’t always have the time or expertise to recognize those papers. We depend upon our scientific colleagues as peer reviewers to help us accomplish this task, but finding people willing to offer that help is becoming more and more difficult. Publishers have tried to justify their value by providing editors with tools to identify and contact appropriate reviewers. In our experience these tools can sometimes be helpful but often provide irrelevant or useless suggestions. Once identified, and we believe most editors use their own knowledge and experience of the field, there is the greater problem of getting invited reviewers to accept.</p><p>There are benefits to reviewers of advance knowledge gained by seeing a manuscript ahead of possible publication, especially in a special and fast-moving research area, but along with everything else, the Academy has also changed, and the pressure to do more with less available time — less available because university administrations are piling more and more required but uncompensated demands on faculty members --- that asking a colleague to review in depth anodyne “research” doesn’t pay when balanced against what today’s academics must or could do with their time. Obtaining conscientious unpaid peer reviews is now probably the biggest and most frustrating challenge for most journals and their editors.</p><p>The real problem is deeper. It seems commonsensical that pre-publication peer review must improve the quality of published research, but most of us who are involved with peer review know too much about how the sausage is made. As editors and researchers ourselves we know that the process often has poor inter-rater reliability and its accuracy is largely unknown and difficult or impossible to measure [5]. The potential for bias, especially for results that don’t conform to the reviewer’s expectations, should be obvious, and relying on a tiny number of subjective judgments for an important decision, especially with unknown or problematic selection bias, also seems risky. If peer-review were a research instrument, we would be very reluctant to use it. Nevertheless, our journal and almost all other mainstream scientific journals require peer review and even tout it as our most desirable, even most essential, feature. Yet the evidence that pre-publication peer review improves the quality of publications is mixed, at best [5].</p><p>Like many other things during an age of transition, peer review seems broken in important ways. A former colleague once said to me, “Real peer review happens <i>after</i> publication,” meaning that our colleagues evaluate the value of our publications for their work and the field in general, citing it, using it, contradicting it or ignoring it. This is, in essence, a form of crowd-sourced peer review. 
In the early 1990s mathematicians and physicists were finding that formal journal-required peer review of a complex manuscript could take 1½ to 2 years and if the paper was accepted, another year to appear in print. Their journals served small, often highly specialized research areas. Because of the lengthy time needed for peer review these researchers were accustomed to circulating manuscripts to a few friends and colleagues for comment before they were published, both to communicate interesting results and to get constructive criticism. When the internet replaced the postal service to circulate manuscripts that had not yet undergone formal peer review (called “pre-prints”), this practice expanded and became systematized, appearing publicly on computer platforms called preprint servers [6]. Papers were lightly moderated for scope but not peer-reviewed. They were also searchable and appeared almost immediately. The 1991 pre-print server <i>arXiv</i> [7] served just mathematics and physics. It took more than two decades for the biomedical community to catch-up with its own <i>bioRxiv</i> pre-print server [8], which now includes a separate <i>medRxiv</i>. Papers on preprint servers can simultaneously, or subsequently, be submitted to most conventional journals, including the most elite. They can be searched for, commented upon, revised, and cited by others [9]. Many are mentioned in the press because of their timeliness in addressing urgent problems like the pandemic. Media sources usually note them as “not yet published or peer reviewed,” but only in passing. Newspapers don’t seem to care.</p><p>To incorporate preprint servers into a crowd-sourced peer-review mechanism would require a way to evaluate value to readers, perhaps by allowing reader up-votes for papers or more systematic use of Commenting facilities [10]. Another possibility would be establishing “overlay” journals that publish, index or provide Commentaries on particular preprints or groups of preprints. These reviews would be “meta-reviewed” (a review of reviews) by journal staff and editors. Commentaries on the literature are already a much-read feature of current journals and would seem to be a better use of reviewer time. They could also count as a publication. This journal, through its publisher BMC, now offers a preprint halfway house, called <i>In Review</i>. This voluntary option allows authors to share their work with others to read and comment on prior to publication, with a citable DOI.</p><p>Subscription pay walls, uncoupling research reports from journals, and problems with conventional peer review are not the only challenges in today’s unpredictable publishing environment, but at least they are before us in concrete form. Even the near future is less tangible, and try as hard as we like to envision it, we almost always make the mistake of envisioning it to be like the present. It rarely is. As I write this (the end of 2023) it is little more than a year since the public unveiling of a new digital technology, generative Artificial Intelligence (AI), made possible by the phenomenal increase in readily available computing power. Using machines to do things we humans cannot do unassisted is not new, but the ability of machines to generate human-like conversational language is. 
ChatGPT, from the non-profit but corporate supported company OpenAI, claimed 13 million unique visits by the end of its first month and a year later is said to have 100 million users each week, making it the fastest growing user base in digital technology history [11]. Much of this success is due to its “chat” based user interface, which gives it the sense of being generated by a human being, not a computer. There has been a great deal of speculation about the good and bad potential of this technology, from utopian to doomsday, but something important is already happening. AI is making visible presuppositions that the printing press introduced but we haven’t noticed.</p><p>The first is the pervasive but implicit role of reader trust and confidence in scientific publishing. Peer reviewers recognize that certain practices, like fraudulent results or plagiarism are unacceptable, although science historians have long known that great scientists did not always live up to today’s standards (the controversy over Mendel’s experimental data is a good example [12]). But there is much in otherwise proper papers that rarely gets thoroughly examined. Reviewers and readers don’t check all the references to verify they say what a manuscript implies and only note discrepancies when a fortuitous personal knowledge of particular papers prompts it. Scientific fraud is so shocking because we do not normally assume a researcher has made up or altered data. We know it happens (although we aren’t sure how often), but we usually take published results at face value. Will we assume the same (or even more) about papers generated by a computer? Or will this produce a subtle or not-so-subtle shift in our thinking with important effects? On the one hand, we assume computers are precise, although we may question their accuracy. But papers produced by generative AI platforms like ChatGPT can make up citations, using plausible non-existent titles inferred from what actual authors have previously written [13]. Even when citations exist, they may not say what ChatGPT implies. And computer precision may be wrongly inferred, since repeat queries can give different texts. It’s not clear exactly what our unspoken presuppositions are about computer generated texts, but it is almost a certainty generative AI will be used to produce abstracts or whole papers submitted as scientific research. How will that affect tacit presuppositions about trust and confidence? We have no idea. But it is plausible computers will not be given the same benefit of the doubt as humans.</p><p>Veracity aside, how might computer authorship be viewed or accepted? The very notion of “authorship,” which seems so obvious, is historically recent [14]. Prior to printing, written texts were produced anonymously by scribes to record events or promulgate religious ideas. Specific authorship was usually unknown or irrelevant. Author ascription, if noted, was used to establish authority, not credit. The printing press not only enabled a means of mass communication but also produced texts that became commodities. Once the printed text had monetary value, authorship became connected with expertise, intellectual property, and reliability of the contents.</p><p>Until the 20th century the norm was single person authorship. In the mid-20th century multiple authors became more common, although rarely many more than a few. That has changed radically. 
A recent review of over 100,000 biomedical papers uploaded to PubMed between 2016 and 2021 found that the <i>median</i> number of authors was 6, up from 3, 20 years earlier. In 2002 33.9% were single-author papers. In 2021 single author papers in biomedicine had dropped to 2.1% [15]. We are now in an era when research is pursued by teams, an era of hyperauthorship. Physics holds the record with a printed paper in <i>Physical Review Letters</i> that recorded 5,154 co-authors, the list taking up 24 of the 33 page publication [16]. In the era of Big Data the biomedical sciences are not far behind. In 2015 a paper on the fruit-fly genome boasted over 1000 authors, among them 900 undergraduates [17]. Some biologists have complained that such a practice makes the idea of scientific authorship meaningless, but the first author of that paper responded that the students “read, critiqued and approved the manuscript, but did not write or revise it. Correcting and annotating the sequence required extensive data analysis. and each student made a ‘significant intellectual contribution’ to the project and earned his or her place in the author list.” [17]. Whether this is sufficient under current practice for authorship may be questioned [18], but the point is clear. When large teams are involved and each member supplies something that was necessary for the result, how does one credit authorship? If that description fitted a professional, like a spectroscopist or biostatistician, there would likely not be a question, but for copy editors, technicians, programmers or, in this case undergraduates, it seems to be questionable, although it is not clear why. Some journals now ask for the role played by co-authors, of which drafting and revising are two examples. At this juncture if the drafting were done by a computer suppled with the data there would likely be a reluctance to assign it authorship. But already radiology and lab reports are partially drafted by computers and it is plausible that the role of computers in providing and/or revising text for research papers will expand significantly, beyond current copy edit suggestions (which after all, is a form of revision). Can/should a computer be a co-author or even sole author? Regardless of how we would answer now, generative AI and hyperauthorship have raised the question of what authorship really means.</p><p>The printing press changed everything, although we know this only in retrospect, The world usually sleepwalks through technological revolutions of historic proportions. So much has already changed in scientific publishing that it is tempting to think we have reached a new equilibrium. In my view it is highly unlikely, although I am not wise enough or bold enough to say when today’s rapid evolution will pause and in what state it will leave the process of communicating research results. I rather doubt it will leave scientific publishing in a form that is recognizable to today’s researchers -- or even whether anything like today’s research scientist will even exist as a job title. 150 years ago there was no such job description. An unsettling thought, yes. But periods of historical transition are always unsettling.</p><p>Meanwhile, we carry on and adapt as the world changes around us.</p><ol data-track-component=\"outbound reference\"><li data-counter=\"1.\"><p>Greenberg SJ, Gallagher PE. The great contribution: Index Medicus, Index-Catalogue, and IndexCat. J Med Libr Assoc. 2009;97(2):108–13. https://doi.org/10.3163/1536-5050.97.2.007. 
PMID: 19404501; PMCID: PMC2670211. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2670211/pdf/mlab-97-02-108.pdf.</p><p>Article PubMed PubMed Central Google Scholar </p></li><li data-counter=\"2.\"><p>Wikipedia contributors. December, ‘Open access’. Wikipedia Free Encyclopedia, 26 2023, 16:56 UTC. https://en.wikipedia.org/w/index.php?title=Open_access&amp;oldid=1191926139 [accessed 29 December 2023].</p></li><li data-counter=\"3.\"><p>Wikipedia contributors. ‘BioMed Central’, Wikipedia, The Free Encyclopedia, 4 December 2023, 13:10 UTC, &lt;https://en.wikipedia.org/w/index.php?title=BioMed_Central&amp;oldid=1188289741 [accessed 28 December 2023].</p></li><li data-counter=\"4.\"><p>Wikipedia contributors. ‘Vitek Tracz’, <i>Wikipedia, The Free Encyclopedia</i>, 3 October 2023, 22:04 UTC, https://en.wikipedia.org/w/index.php?title=Vitek_Tracz&amp;oldid=1178474216 (accessed 28 December 2023).</p></li><li data-counter=\"5.\"><p>Neuen D. Peer-review and publication does not guarantee reliable information Posted on 16th January 2018, Students 4 Best Evidence Blog. https://s4be.cochrane.org/blog/2018/01/16/peer-review-and-publication-does-not-guarantee-reliable-information/#:~:text=Peer%2Dreview%20is%20by%20no,based%20only%20on%20that%20fact (Accessed 29 December 2023).</p></li><li data-counter=\"6.\"><p>Wikipedia contributors. ‘Preprint’, <i>Wikipedia, The Free Encyclopedia</i>, 1 December 2023, 11:30 UTC, https://en.wikipedia.org/w/index.php?title=Preprint&amp;oldid=1187786985 [accessed 28 December 2023].</p></li><li data-counter=\"7.\"><p>Uniceristy C. https://arxiv.orgaccessed 29 December (2023).</p></li><li data-counter=\"8.\"><p>Cold Spring Harbor Laboratory., https://www.biorxiv.org (accessed 29 December 2023).</p></li><li data-counter=\"9.\"><p>https://www.internationalscienceediting.com/cite-a-preprint/#:~:text=So%2C%20can%20you%20cite%20a,Nature%27s%20policy%20below</p></li><li data-counter=\"10.\"><p>Chugg B. The case for replacing peer-review with preprints and overlay journals, The Medium, July 6, 2022, https://benchugg.medium.com/the-case-for-replacing-peer-review-with-preprints-and-overlay-journals-f44899a5b8cd (accessed December 29, 2023).</p></li><li data-counter=\"11.\"><p>Conversation T. ChatGPT turns 1: AI chatbot’s success says as much about humans as technology, Published: November 29, 2023 1:33pm EST. https://theconversation.com/chatgpt-turns-1-ai-chatbots-success-says-as-much-about-humans-as-technology-218704. (accessed December 29, 2023).</p></li><li data-counter=\"12.\"><p>Sussmilch FC, Ross JJ, Reid JB, Mendel. From Genes to Genome, Plant Physiol. 2022;190(4):2103–2114. https://doi.org/10.1093/plphys/kiac424.PMID: 36094356. https://academic.oup.com/plphys/article/190/4/2103/6696226?login=false (accessed December 29, 2023).</p></li><li data-counter=\"13.\"><p>Library, University of Waterloo, ChatGPT and Generative Artificial Intelligence (AI). accessed December 29, : Incorrect bibliographic references. https://subjectguides.uwaterloo.ca/chatgpt_generative_ai/incorrectbibreferences (2023).</p></li><li data-counter=\"14.\"><p>Neville S. (2022). Authorship, Book History, and the Effects of Artifacts. In <i>Early Modern Herbals and the Book Trade: English Stationers and the Commodification of Botany</i> (pp. 55–88). Cambridge: Cambridge University Press. https://doi.org/10.1017/9781009031615.003. 
https://www.cambridge.org/core/books/early-modern-herbals-and-the-book-trade/authorship-book-history-and-the-effects-of-artifacts/092889DBCC6D79FBB5CD643707F16B5D (accessed December 29, 2023).</p></li><li data-counter=\"15.\"><p>King C. accessed December 29, Multiauthor Papers: Onward and Upward, Clavirate Corporate Website, http://archive.sciencewatch.com/newsletter/2012/201207/multiauthor_papers/ (2023).</p></li><li data-counter=\"16.\"><p>Aad G, ATLAS Collaboration, Collaboration CMS, Experiments et al. Phys Rev Lett 114, 191803 (2015).], https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.191803 (accessed December 29, 2023).</p></li><li data-counter=\"17.\"><p>Woolston C. Fruit-fly paper has 1,000 authors. Nature. 2015;521:263. https://doi.org/10.1038/521263f. accessed December 29, 2023.</p><p>Article ADS CAS Google Scholar </p></li><li data-counter=\"18.\"><p>International Committee of Medical Journal Editors., Defining the Role of Authors and Contributors. https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html (accessed December 29, 2023).</p></li></ol><p>Download references<svg aria-hidden=\"true\" focusable=\"false\" height=\"16\" role=\"img\" width=\"16\"><use xlink:href=\"#icon-eds-i-download-medium\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"></use></svg></p><p>The Author wishes to acknowledge the helpful comments from his journal colleagues, co-Editors-in-Chief Philippe Grandjean and Ruth Etzel; and the support and critical reading of Janet Kerr.</p><h3>Authors and Affiliations</h3><ol><li><p>Department of Environmental Health, Boston University School of Public Health, Boston, MA, USA</p><p>David Ozonoff</p></li></ol><span>Authors</span><ol><li><span>David Ozonoff</span>View author publications<p>You can also search for this author in <span>PubMed<span> </span>Google Scholar</span></p></li></ol><h3>Corresponding author</h3><p>Correspondence to David Ozonoff.</p><h3>Publisher’s Note</h3><p>Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.</p><p><b>Open Access</b> This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.</p>\n<p>Reprints and permissions</p><img alt=\"Check for updates. 
Verify currency and authenticity via CrossMark\" height=\"81\" loading=\"lazy\" src=\"data:image/svg+xml;base64,<svg height="81" width="57" xmlns="http://www.w3.org/2000/svg"><g fill="none" fill-rule="evenodd"><path d="m17.35 35.45 21.3-14.2v-17.03h-21.3" fill="#989898"/><path d="m38.65 35.45-21.3-14.2v-17.03h21.3" fill="#747474"/><path d="m28 .5c-12.98 0-23.5 10.52-23.5 23.5s10.52 23.5 23.5 23.5 23.5-10.52 23.5-23.5c0-6.23-2.48-12.21-6.88-16.62-4.41-4.4-10.39-6.88-16.62-6.88zm0 41.25c-9.8 0-17.75-7.95-17.75-17.75s7.95-17.75 17.75-17.75 17.75 7.95 17.75 17.75c0 4.71-1.87 9.22-5.2 12.55s-7.84 5.2-12.55 5.2z" fill="#535353"/><path d="m41 36c-5.81 6.23-15.23 7.45-22.43 2.9-7.21-4.55-10.16-13.57-7.03-21.5l-4.92-3.11c-4.95 10.7-1.19 23.42 8.78 29.71 9.97 6.3 23.07 4.22 30.6-4.86z" fill="#9c9c9c"/><path d="m.2 58.45c0-.75.11-1.42.33-2.01s.52-1.09.91-1.5c.38-.41.83-.73 1.34-.94.51-.22 1.06-.32 1.65-.32.56 0 1.06.11 1.51.35.44.23.81.5 1.1.81l-.91 1.01c-.24-.24-.49-.42-.75-.56-.27-.13-.58-.2-.93-.2-.39 0-.73.08-1.05.23-.31.16-.58.37-.81.66-.23.28-.41.63-.53 1.04-.13.41-.19.88-.19 1.39 0 1.04.23 1.86.68 2.46.45.59 1.06.88 1.84.88.41 0 .77-.07 1.07-.23s.59-.39.85-.68l.91 1c-.38.43-.8.76-1.28.99-.47.22-1 .34-1.58.34-.59 0-1.13-.1-1.64-.31-.5-.2-.94-.51-1.31-.91-.38-.4-.67-.9-.88-1.48-.22-.59-.33-1.26-.33-2.02zm8.4-5.33h1.61v2.54l-.05 1.33c.29-.27.61-.51.96-.72s.76-.31 1.24-.31c.73 0 1.27.23 1.61.71.33.47.5 1.14.5 2.02v4.31h-1.61v-4.1c0-.57-.08-.97-.25-1.21-.17-.23-.45-.35-.83-.35-.3 0-.56.08-.79.22-.23.15-.49.36-.78.64v4.8h-1.61zm7.37 6.45c0-.56.09-1.06.26-1.51.18-.45.42-.83.71-1.14.29-.3.63-.54 1.01-.71.39-.17.78-.25 1.18-.25.47 0 .88.08 1.23.24.36.16.65.38.89.67s.42.63.54 1.03c.12.41.18.84.18 1.32 0 .32-.02.57-.07.76h-4.36c.07.62.29 1.1.65 1.44.36.33.82.5 1.38.5.29 0 .57-.04.83-.13s.51-.21.76-.37l.55 1.01c-.33.21-.69.39-1.09.53-.41.14-.83.21-1.26.21-.48 0-.92-.08-1.34-.25-.41-.16-.76-.4-1.07-.7-.31-.31-.55-.69-.72-1.13-.18-.44-.26-.95-.26-1.52zm4.6-.62c0-.55-.11-.98-.34-1.28-.23-.31-.58-.47-1.06-.47-.41 0-.77.15-1.07.45-.31.29-.5.73-.58 1.3zm2.5.62c0-.57.09-1.08.28-1.53.18-.44.43-.82.75-1.13s.69-.54 1.1-.71c.42-.16.85-.24 1.31-.24.45 0 .84.08 1.17.23s.61.34.85.57l-.77 1.02c-.19-.16-.38-.28-.56-.37-.19-.09-.39-.14-.61-.14-.56 0-1.01.21-1.35.63-.35.41-.52.97-.52 1.67 0 .69.17 1.24.51 1.66.34.41.78.62 1.32.62.28 0 .54-.06.78-.17.24-.12.45-.26.64-.42l.67 1.03c-.33.29-.69.51-1.08.65-.39.15-.78.23-1.18.23-.46 0-.9-.08-1.31-.24-.4-.16-.75-.39-1.05-.7s-.53-.69-.7-1.13c-.17-.45-.25-.96-.25-1.53zm6.91-6.45h1.58v6.17h.05l2.54-3.16h1.77l-2.35 2.8 2.59 4.07h-1.75l-1.77-2.98-1.08 1.23v1.75h-1.58zm13.69 1.27c-.25-.11-.5-.17-.75-.17-.58 0-.87.39-.87 1.16v.75h1.34v1.27h-1.34v5.6h-1.61v-5.6h-.92v-1.2l.92-.07v-.72c0-.35.04-.68.13-.98.08-.31.21-.57.4-.79s.42-.39.71-.51c.28-.12.63-.18 1.04-.18.24 0 .48.02.69.07.22.05.41.1.57.17zm.48 5.18c0-.57.09-1.08.27-1.53.17-.44.41-.82.72-1.13.3-.31.65-.54 1.04-.71.39-.16.8-.24 1.23-.24s.84.08 1.24.24c.4.17.74.4 1.04.71s.54.69.72 1.13c.19.45.28.96.28 1.53s-.09 1.08-.28 1.53c-.18.44-.42.82-.72 1.13s-.64.54-1.04.7-.81.24-1.24.24-.84-.08-1.23-.24-.74-.39-1.04-.7c-.31-.31-.55-.69-.72-1.13-.18-.45-.27-.96-.27-1.53zm1.65 0c0 .69.14 1.24.43 1.66.28.41.68.62 1.18.62.51 0 .9-.21 1.19-.62.29-.42.44-.97.44-1.66 0-.7-.15-1.26-.44-1.67-.29-.42-.68-.63-1.19-.63-.5 0-.9.21-1.18.63-.29.41-.43.97-.43 1.67zm6.48-3.44h1.33l.12 1.21h.05c.24-.44.54-.79.88-1.02.35-.24.7-.36 1.07-.36.32 0 .59.05.78.14l-.28 1.4-.33-.09c-.11-.01-.23-.02-.38-.02-.27 0-.56.1-.86.31s-.55.58-.77 1.1v4.2h-1.61zm-47.87 
15h1.61v4.1c0 .57.08.97.25 1.2.17.24.44.35.81.35.3 0 .57-.07.8-.22.22-.15.47-.39.73-.73v-4.7h1.61v6.87h-1.32l-.12-1.01h-.04c-.3.36-.63.64-.98.86-.35.21-.76.32-1.24.32-.73 0-1.27-.24-1.61-.71-.33-.47-.5-1.14-.5-2.02zm9.46 7.43v2.16h-1.61v-9.59h1.33l.12.72h.05c.29-.24.61-.45.97-.63.35-.17.72-.26 1.1-.26.43 0 .81.08 1.15.24.33.17.61.4.84.71.24.31.41.68.53 1.11.13.42.19.91.19 1.44 0 .59-.09 1.11-.25 1.57-.16.47-.38.85-.65 1.16-.27.32-.58.56-.94.73-.35.16-.72.25-1.1.25-.3 0-.6-.07-.9-.2s-.59-.31-.87-.56zm0-2.3c.26.22.5.37.73.45.24.09.46.13.66.13.46 0 .84-.2 1.15-.6.31-.39.46-.98.46-1.77 0-.69-.12-1.22-.35-1.61-.23-.38-.61-.57-1.13-.57-.49 0-.99.26-1.52.77zm5.87-1.69c0-.56.08-1.06.25-1.51.16-.45.37-.83.65-1.14.27-.3.58-.54.93-.71s.71-.25 1.08-.25c.39 0 .73.07 1 .2.27.14.54.32.81.55l-.06-1.1v-2.49h1.61v9.88h-1.33l-.11-.74h-.06c-.25.25-.54.46-.88.64-.33.18-.69.27-1.06.27-.87 0-1.56-.32-2.07-.95s-.76-1.51-.76-2.65zm1.67-.01c0 .74.13 1.31.4 1.7.26.38.65.58 1.15.58.51 0 .99-.26 1.44-.77v-3.21c-.24-.21-.48-.36-.7-.45-.23-.08-.46-.12-.7-.12-.45 0-.82.19-1.13.59-.31.39-.46.95-.46 1.68zm6.35 1.59c0-.73.32-1.3.97-1.71.64-.4 1.67-.68 3.08-.84 0-.17-.02-.34-.07-.51-.05-.16-.12-.3-.22-.43s-.22-.22-.38-.3c-.15-.06-.34-.1-.58-.1-.34 0-.68.07-1 .2s-.63.29-.93.47l-.59-1.08c.39-.24.81-.45 1.28-.63.47-.17.99-.26 1.54-.26.86 0 1.51.25 1.93.76s.63 1.25.63 2.21v4.07h-1.32l-.12-.76h-.05c-.3.27-.63.48-.98.66s-.73.27-1.14.27c-.61 0-1.1-.19-1.48-.56-.38-.36-.57-.85-.57-1.46zm1.57-.12c0 .3.09.53.27.67.19.14.42.21.71.21.28 0 .54-.07.77-.2s.48-.31.73-.56v-1.54c-.47.06-.86.13-1.18.23-.31.09-.57.19-.76.31s-.33.25-.41.4c-.09.15-.13.31-.13.48zm6.29-3.63h-.98v-1.2l1.06-.07.2-1.88h1.34v1.88h1.75v1.27h-1.75v3.28c0 .8.32 1.2.97 1.2.12 0 .24-.01.37-.04.12-.03.24-.07.34-.11l.28 1.19c-.19.06-.4.12-.64.17-.23.05-.49.08-.76.08-.4 0-.74-.06-1.02-.18-.27-.13-.49-.3-.67-.52-.17-.21-.3-.48-.37-.78-.08-.3-.12-.64-.12-1.01zm4.36 2.17c0-.56.09-1.06.27-1.51s.41-.83.71-1.14c.29-.3.63-.54 1.01-.71.39-.17.78-.25 1.18-.25.47 0 .88.08 1.23.24.36.16.65.38.89.67s.42.63.54 1.03c.12.41.18.84.18 1.32 0 .32-.02.57-.07.76h-4.37c.08.62.29 1.1.65 1.44.36.33.82.5 1.38.5.3 0 .58-.04.84-.13.25-.09.51-.21.76-.37l.54 1.01c-.32.21-.69.39-1.09.53s-.82.21-1.26.21c-.47 0-.92-.08-1.33-.25-.41-.16-.77-.4-1.08-.7-.3-.31-.54-.69-.72-1.13-.17-.44-.26-.95-.26-1.52zm4.61-.62c0-.55-.11-.98-.34-1.28-.23-.31-.58-.47-1.06-.47-.41 0-.77.15-1.08.45-.31.29-.5.73-.57 1.3zm3.01 2.23c.31.24.61.43.92.57.3.13.63.2.98.2.38 0 .65-.08.83-.23s.27-.35.27-.6c0-.14-.05-.26-.13-.37-.08-.1-.2-.2-.34-.28-.14-.09-.29-.16-.47-.23l-.53-.22c-.23-.09-.46-.18-.69-.3-.23-.11-.44-.24-.62-.4s-.33-.35-.45-.55c-.12-.21-.18-.46-.18-.75 0-.61.23-1.1.68-1.49.44-.38 1.06-.57 1.83-.57.48 0 .91.08 1.29.25s.71.36.99.57l-.74.98c-.24-.17-.49-.32-.73-.42-.25-.11-.51-.16-.78-.16-.35 0-.6.07-.76.21-.17.15-.25.33-.25.54 0 .14.04.26.12.36s.18.18.31.26c.14.07.29.14.46.21l.54.19c.23.09.47.18.7.29s.44.24.64.4c.19.16.34.35.46.58.11.23.17.5.17.82 0 .3-.06.58-.17.83-.12.26-.29.48-.51.68-.23.19-.51.34-.84.45-.34.11-.72.17-1.15.17-.48 0-.95-.09-1.41-.27-.46-.19-.86-.41-1.2-.68z" fill="#535353"/></g></svg>\" width=\"57\"/><h3>Cite this article</h3><p>Ozonoff, D. As the world turns: scientific publishing in the digital era. <i>Environ Health</i> <b>23</b>, 24 (2024). 
https://doi.org/10.1186/s12940-024-01063-5</p><p>Download citation<svg aria-hidden=\"true\" focusable=\"false\" height=\"16\" role=\"img\" width=\"16\"><use xlink:href=\"#icon-eds-i-download-medium\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"></use></svg></p><ul data-test=\"publication-history\"><li><p>Published<span>: </span><span><time datetime=\"2024-02-26\">26 February 2024</time></span></p></li><li><p>DOI</abbr><span>: </span><span>https://doi.org/10.1186/s12940-024-01063-5</span></p></li></ul><h3>Share this article</h3><p>Anyone you share the following link with will be able to read this content:</p><button data-track=\"click\" data-track-action=\"get shareable link\" data-track-external=\"\" data-track-label=\"button\" type=\"button\">Get shareable link</button><p>Sorry, a shareable link is not currently available for this article.</p><p data-track=\"click\" data-track-action=\"select share url\" data-track-label=\"button\"></p><button data-track=\"click\" data-track-action=\"copy share url\" data-track-external=\"\" data-track-label=\"button\" type=\"button\">Copy to clipboard</button><p> Provided by the Springer Nature SharedIt content-sharing initiative </p>","PeriodicalId":11686,"journal":{"name":"Environmental Health","volume":"2 1","pages":""},"PeriodicalIF":5.3000,"publicationDate":"2024-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Environmental Health","FirstCategoryId":"93","ListUrlMain":"https://doi.org/10.1186/s12940-024-01063-5","RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENVIRONMENTAL SCIENCES","Score":null,"Total":0}

Abstract

A quarter of the way into the 21st century the technology of encoding and transmitting information in digital form is in full flower. Almost without noticing it, we are living through a historical discontinuity comparable to the one produced by Gutenberg’s invention of printing with moveable type in 1450, a technology that made possible the production of identical written texts on a scale previously unimaginable. That technology was quickly adopted, but its basic form didn’t change for hundreds of years. Today the speed of advance in digital technology is breathtaking. Digital devices like the smartphone have moved from expensive prototypes to ubiquitous and essential appliances in a little over a decade. Digital technology has also substantially affected scientific publishing.

In 1879 John Shaw Billings, a surgeon in the Office of the US Surgeon General of the Army, began to compile an author-topic catalog of that office’s library. In 1966 its print descendant, Index Medicus (now PubMed), went online [1], but as long as the journals themselves were still in print-only format, its full impact came only when most journal-published research was also available in electronic digital format. That time has come, and it has had a profound effect on how scientists seek out and find research relevant to their work. Gone are the days when many of us routinely perused the latest issues of journals in our institutional libraries or went to library stacks to retrieve past issues and lug them to the copy machines at 10 cents a page. The stacks and copy machines now sit on our desks as internet-connected computers and personal printers. Some of us haven’t been in a physical library for years. Journals still appearing in paper format have been forced to make a digital format available as well. At the same time, advances in research methods, such as genomics or new imaging techniques unimaginable in the pre-digital era, have vastly expanded the scope and depth of biomedical research. Even well-defined research fields are now extensively sub-specialized and the volume of publication is potentially overwhelming. Yet online digital search makes it possible to find the needle in the haystack, and this is an essential difference compared to even a short time ago.

This is a seismic shift in scientific publishing and it has happened in a relatively short time without most of us being conscious of it. Just as music streaming services uncoupled song tracks from the record album or CD upon which they originally appeared, no-cost search engines like Google or the biomedical research database PubMed have uncoupled individual research articles from the journals where they originally appeared. Journal brand names remain significant, but less so than previously and they are no longer the first place we look. Now we can look everywhere at once.

By the year 2000 we had the routine ability to transmit our writing electronically in digital form and access to a worldwide network to distribute it almost instantaneously. Before that, printing and distributing scientific texts were the work of commercial publishers. In the Age of the Internet, it seemed plausible that print publishers, like the buggy-whip makers in the age of the automobile, were headed for technological obsolescence. As of now, while far from obsolete, the major science publishers have still been forced to adapt to the new digital environment.

The Open Access (OA) movement in scientific publishing [2] produced new electronic journals with access open to anyone with an internet connection. The fall of subscription firewalls and a shift in intellectual property arrangements are still underway but well advanced. Copyright shifted from the publisher to the author(s). While there was still a publisher involved, OA journals were something new. In 2002 my co-Editor-in-Chief, Professor Philippe Grandjean, was invited by a new publisher, BioMed Central (BMC), to join a stable of electronically published OA science journals. BMC was founded in 2000 by entrepreneur and visionary Vitek Tracz (for more on the history of BMC, see [3]; for Tracz, see [4]). Professor Grandjean generously asked me to join him in starting the first OA journal devoted to the science of environmental health, with a special focus on research using epidemiological methods. The result was this journal, Environmental Health, which has now been published for more than two decades. In 2022 it had 1.9 million downloads and over 26,000 Altmetric mentions; it is in the top quartile of all journals in the field and publishes more than 100 articles a year. OA and OA journals are now well established, with an ever-increasing share of published research articles.

With the increasing recognition of the value of OA to readership and the resistance of institutional libraries to the soaring cost of subscriptions, publishers are changing their subscription-based business models to per-paper “processing charges” tied to appearance on their websites. For publishers, the number of published papers now has a financial significance independent of their contents. It remains true that a research publication record is a major criterion of professional status and reputation, used by many academic institutions in appointment and promotion decisions, but in making those decisions universities have also given weight to publication counts, not just research significance or quality. Both trends have reinforced incentives for publisher and researcher alike to publish papers with the narrowest possible scope, resulting in multiple shorter papers from a single line of research and diluting the average content of an individual paper. It is the Editors who have the responsibility to accept or reject papers, but they must work with what is submitted, and there are incentives for both publishers and researchers to divide papers into smaller and more numerous packages.

Triaging the resulting increase in submission volume is one of the biggest challenges journals face. Looked at from a researcher’s or publisher’s point of view, this is a marketing problem. How does one get a journal to “buy” the maximum number of the researcher’s results, or the researcher to buy the publisher’s services? But from the journal Editor’s point of view, it is a problem of how to recognize, and make available, research of value against a noisy background. Under the subscription model these perspectives were more or less aligned. Consistent high-quality and high-content research enhanced both the objectives of the publishers and researchers, on the one hand, and the editor’s journal, on the other. In this emerging environment that alignment has been lost. “Predatory journals” with low or no barriers to publication have arisen to take advantage of the current OA per-paper business model. At the same time, legitimate and established journals like this one have seen large increases in submissions, many of marginal or no interest to the field.

Editors, however, are still charged with evaluating the contents of submissions. The conventional (although historically recent) mechanism of peer review would seem to be the surest way to address this. But the peer review process itself has become a major challenge for almost every scientific journal, including this one. As Editors, we serve a gatekeeping function, and while we are under no obligation to open the gate for papers of little value, we don’t always have the time or expertise to recognize those papers. We depend upon our scientific colleagues as peer reviewers to help us accomplish this task, but finding people willing to offer that help is becoming more and more difficult. Publishers have tried to justify their value by providing editors with tools to identify and contact appropriate reviewers. In our experience these tools can sometimes be helpful but often provide irrelevant or useless suggestions. Once reviewers are identified (and we believe most editors rely on their own knowledge and experience of the field to do so), there is the greater problem of getting invited reviewers to accept.

Reviewers do benefit from the advance knowledge gained by seeing a manuscript ahead of possible publication, especially in a specialized and fast-moving research area. But along with everything else, the Academy has also changed. University administrations are piling ever more required but uncompensated demands on faculty members, and the resulting pressure to do more with less available time means that asking a colleague to review anodyne “research” in depth doesn’t pay when balanced against what today’s academics must or could do with their time. Obtaining conscientious unpaid peer reviews is now probably the biggest and most frustrating challenge for most journals and their editors.

The real problem is deeper. It seems commonsensical that pre-publication peer review must improve the quality of published research, but most of us who are involved with peer review know too much about how the sausage is made. As editors and researchers ourselves we know that the process often has poor inter-rater reliability and that its accuracy is largely unknown and difficult or impossible to measure [5]. The potential for bias, especially against results that don’t conform to the reviewer’s expectations, should be obvious, and relying on a tiny number of subjective judgments for an important decision, especially with unknown or problematic selection bias, also seems risky. If peer review were a research instrument, we would be very reluctant to use it. Nevertheless, our journal and almost all other mainstream scientific journals require peer review and even tout it as our most desirable, even most essential, feature. Yet the evidence that pre-publication peer review improves the quality of publications is mixed, at best [5].

Like many other things during an age of transition, peer review seems broken in important ways. A former colleague once said to me, “Real peer review happens after publication,” meaning that our colleagues evaluate the value of our publications for their own work and the field in general, citing them, using them, contradicting them or ignoring them. This is, in essence, a form of crowd-sourced peer review. In the early 1990s mathematicians and physicists were finding that formal journal-required peer review of a complex manuscript could take 1½ to 2 years and, if the paper was accepted, another year to appear in print. Their journals served small, often highly specialized research areas. Because of the lengthy time needed for peer review, these researchers were accustomed to circulating manuscripts to a few friends and colleagues for comment before they were published, both to communicate interesting results and to get constructive criticism. When the internet replaced the postal service for circulating manuscripts that had not yet undergone formal peer review (called “preprints”), this practice expanded and became systematized, appearing publicly on computer platforms called preprint servers [6]. Papers were lightly moderated for scope but not peer-reviewed. They were also searchable and appeared almost immediately. The preprint server arXiv, launched in 1991, served just mathematics and physics [7]. It took more than two decades for the biomedical community to catch up with its own bioRxiv preprint server [8], which now includes a separate medRxiv. Papers on preprint servers can simultaneously, or subsequently, be submitted to most conventional journals, including the most elite. They can be searched for, commented upon, revised, and cited by others [9]. Many are mentioned in the press because of their timeliness in addressing urgent problems like the pandemic. Media sources usually note them as “not yet published or peer reviewed,” but only in passing. Newspapers don’t seem to care.

To incorporate preprint servers into a crowd-sourced peer-review mechanism would require a way to evaluate value to readers, perhaps by allowing reader up-votes for papers or more systematic use of commenting facilities [10]. Another possibility would be establishing “overlay” journals that publish, index or provide commentaries on particular preprints or groups of preprints. These commentaries would be “meta-reviewed” (a review of reviews) by journal staff and editors. Commentaries on the literature are already a much-read feature of current journals and would seem to be a better use of reviewer time. They could also count as a publication. This journal, through its publisher BMC, now offers a preprint halfway house, called In Review. This voluntary option allows authors to share their work with others to read and comment on prior to publication, with a citable DOI.

Subscription paywalls, uncoupling research reports from journals, and problems with conventional peer review are not the only challenges in today’s unpredictable publishing environment, but at least they are before us in concrete form. Even the near future is less tangible, and try as we might to envision it, we almost always make the mistake of envisioning it to be like the present. It rarely is. As I write this (the end of 2023) it is little more than a year since the public unveiling of a new digital technology, generative Artificial Intelligence (AI), made possible by the phenomenal increase in readily available computing power. Using machines to do things we humans cannot do unassisted is not new, but the ability of machines to generate human-like conversational language is. ChatGPT, from the non-profit but corporate-supported company OpenAI, claimed 13 million unique visits by the end of its first month and a year later is said to have 100 million users each week, the fastest-growing user base in digital technology history [11]. Much of this success is due to its “chat”-based user interface, which gives the sense that its output comes from a human being, not a computer. There has been a great deal of speculation about the good and bad potential of this technology, from utopian to doomsday, but something important is already happening. AI is making visible presuppositions that the printing press introduced but that we have never noticed.

The first is the pervasive but implicit role of reader trust and confidence in scientific publishing. Peer reviewers recognize that certain practices, like fraudulent results or plagiarism, are unacceptable, although science historians have long known that great scientists did not always live up to today’s standards (the controversy over Mendel’s experimental data is a good example [12]). But there is much in otherwise proper papers that rarely gets thoroughly examined. Reviewers and readers don’t check all the references to verify that they say what a manuscript implies, and they note discrepancies only when fortuitous personal knowledge of particular papers prompts it. Scientific fraud is so shocking because we do not normally assume a researcher has made up or altered data. We know it happens (although we aren’t sure how often), but we usually take published results at face value. Will we assume the same (or even more) about papers generated by a computer? Or will this produce a subtle or not-so-subtle shift in our thinking with important effects? On the one hand, we assume computers are precise, although we may question their accuracy. But papers produced by generative AI platforms like ChatGPT can make up citations, using plausible non-existent titles inferred from what actual authors have previously written [13]. Even when citations exist, they may not say what ChatGPT implies. And computer precision may be wrongly inferred, since repeat queries can give different texts. It’s not clear exactly what our unspoken presuppositions are about computer-generated texts, but it is almost a certainty that generative AI will be used to produce abstracts or whole papers submitted as scientific research. How will that affect tacit presuppositions about trust and confidence? We have no idea. But it is plausible computers will not be given the same benefit of the doubt as humans.

Veracity aside, how might computer authorship be viewed or accepted? The very notion of “authorship,” which seems so obvious, is historically recent [14]. Prior to printing, written texts were produced anonymously by scribes to record events or promulgate religious ideas. Specific authorship was usually unknown or irrelevant. Author ascription, if noted, was used to establish authority, not credit. The printing press not only enabled a means of mass communication but also produced texts that became commodities. Once the printed text had monetary value, authorship became connected with expertise, intellectual property, and reliability of the contents.

Until the 20th century the norm was single-person authorship. In the mid-20th century multiple authors became more common, although rarely many more than a few. That has changed radically. A recent review of over 100,000 biomedical papers uploaded to PubMed between 2016 and 2021 found that the median number of authors was 6, up from 3 twenty years earlier. In 2002, 33.9% were single-author papers; by 2021 single-author papers in biomedicine had dropped to 2.1% [15]. We are now in an era when research is pursued by teams, an era of hyperauthorship. Physics holds the record with a printed paper in Physical Review Letters that listed 5,154 co-authors, the list taking up 24 of the publication’s 33 pages [16]. In the era of Big Data the biomedical sciences are not far behind. In 2015 a paper on the fruit-fly genome boasted over 1,000 authors, among them 900 undergraduates [17]. Some biologists have complained that such a practice makes the idea of scientific authorship meaningless, but the first author of that paper responded that the students “read, critiqued and approved the manuscript, but did not write or revise it. Correcting and annotating the sequence required extensive data analysis, and each student made a ‘significant intellectual contribution’ to the project and earned his or her place in the author list” [17]. Whether this is sufficient for authorship under current practice may be questioned [18], but the point is clear. When large teams are involved and each member supplies something necessary for the result, how does one credit authorship? If that description fit a professional, like a spectroscopist or biostatistician, there would likely be no question, but for copy editors, technicians, programmers or, in this case, undergraduates, it seems questionable, although it is not clear why. Some journals now ask for the role played by each co-author, of which drafting and revising are two examples. At this juncture, if the drafting were done by a computer supplied with the data, there would likely be a reluctance to assign it authorship. But radiology and laboratory reports are already partially drafted by computers, and it is plausible that the role of computers in providing and/or revising text for research papers will expand significantly, beyond current copy-edit suggestions (which, after all, are a form of revision). Can or should a computer be a co-author or even sole author? Regardless of how we would answer now, generative AI and hyperauthorship have raised the question of what authorship really means.

The printing press changed everything, although we know this only in retrospect. The world usually sleepwalks through technological revolutions of historic proportions. So much has already changed in scientific publishing that it is tempting to think we have reached a new equilibrium. In my view that is highly unlikely, although I am not wise enough or bold enough to say when today’s rapid evolution will pause and in what state it will leave the process of communicating research results. I rather doubt it will leave scientific publishing in a form recognizable to today’s researchers, or that anything like today’s research scientist will even exist as a job title. A hundred and fifty years ago there was no such job description. An unsettling thought, yes. But periods of historical transition are always unsettling.

Meanwhile, we carry on and adapt as the world changes around us.

  1. Greenberg SJ, Gallagher PE. The great contribution: Index Medicus, Index-Catalogue, and IndexCat. J Med Libr Assoc. 2009;97(2):108–13. https://doi.org/10.3163/1536-5050.97.2.007. PMID: 19404501; PMCID: PMC2670211. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2670211/pdf/mlab-97-02-108.pdf.


  2. Wikipedia contributors. ‘Open access’. Wikipedia, The Free Encyclopedia, 26 December 2023, 16:56 UTC. https://en.wikipedia.org/w/index.php?title=Open_access&oldid=1191926139 [accessed 29 December 2023].

  3. Wikipedia contributors. ‘BioMed Central’. Wikipedia, The Free Encyclopedia, 4 December 2023, 13:10 UTC. https://en.wikipedia.org/w/index.php?title=BioMed_Central&oldid=1188289741 (accessed 28 December 2023).

  4. Wikipedia contributors. ‘Vitek Tracz’. Wikipedia, The Free Encyclopedia, 3 October 2023, 22:04 UTC. https://en.wikipedia.org/w/index.php?title=Vitek_Tracz&oldid=1178474216 (accessed 28 December 2023).

  5. Neuen D. Peer-review and publication does not guarantee reliable information. Students 4 Best Evidence blog, 16 January 2018. https://s4be.cochrane.org/blog/2018/01/16/peer-review-and-publication-does-not-guarantee-reliable-information/#:~:text=Peer%2Dreview%20is%20by%20no,based%20only%20on%20that%20fact (accessed 29 December 2023).

  6. Wikipedia contributors. ‘Preprint’. Wikipedia, The Free Encyclopedia, 1 December 2023, 11:30 UTC. https://en.wikipedia.org/w/index.php?title=Preprint&oldid=1187786985 (accessed 28 December 2023).

  7. Cornell University. arXiv. https://arxiv.org (accessed 29 December 2023).

  8. Cold Spring Harbor Laboratory. bioRxiv. https://www.biorxiv.org (accessed 29 December 2023).

  9. International Science Editing. Cite a preprint. https://www.internationalscienceediting.com/cite-a-preprint/#:~:text=So%2C%20can%20you%20cite%20a,Nature%27s%20policy%20below

  10. Chugg B. The case for replacing peer-review with preprints and overlay journals. Medium, July 6, 2022. https://benchugg.medium.com/the-case-for-replacing-peer-review-with-preprints-and-overlay-journals-f44899a5b8cd (accessed December 29, 2023).

  11. The Conversation. ChatGPT turns 1: AI chatbot’s success says as much about humans as technology. Published November 29, 2023, 1:33pm EST. https://theconversation.com/chatgpt-turns-1-ai-chatbots-success-says-as-much-about-humans-as-technology-218704 (accessed December 29, 2023).

  12. Sussmilch FC, Ross JJ, Reid JB. Mendel: from genes to genome. Plant Physiol. 2022;190(4):2103–2114. https://doi.org/10.1093/plphys/kiac424. PMID: 36094356. https://academic.oup.com/plphys/article/190/4/2103/6696226?login=false (accessed December 29, 2023).

  13. University of Waterloo Library. ChatGPT and Generative Artificial Intelligence (AI): incorrect bibliographic references. https://subjectguides.uwaterloo.ca/chatgpt_generative_ai/incorrectbibreferences (accessed December 29, 2023).

  14. Neville S. Authorship, book history, and the effects of artifacts. In: Early Modern Herbals and the Book Trade: English Stationers and the Commodification of Botany. Cambridge: Cambridge University Press; 2022. pp. 55–88. https://doi.org/10.1017/9781009031615.003. https://www.cambridge.org/core/books/early-modern-herbals-and-the-book-trade/authorship-book-history-and-the-effects-of-artifacts/092889DBCC6D79FBB5CD643707F16B5D (accessed December 29, 2023).

  15. King C. Multiauthor papers: onward and upward. Clarivate corporate website. http://archive.sciencewatch.com/newsletter/2012/201207/multiauthor_papers/ (accessed December 29, 2023).

  16. Aad G, et al. (ATLAS and CMS Collaborations). Phys Rev Lett. 2015;114:191803. https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.191803 (accessed December 29, 2023).

  17. Woolston C. Fruit-fly paper has 1,000 authors. Nature. 2015;521:263. https://doi.org/10.1038/521263f (accessed December 29, 2023).


  18. International Committee of Medical Journal Editors. Defining the role of authors and contributors. https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html (accessed December 29, 2023).


Acknowledgements

The author wishes to acknowledge the helpful comments from his journal colleagues, co-Editors-in-Chief Philippe Grandjean and Ruth Etzel, and the support and critical reading of Janet Kerr.

Authors and Affiliations

  1. Department of Environmental Health, Boston University School of Public Health, Boston, MA, USA

    David Ozonoff


Corresponding author

Correspondence to David Ozonoff.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.



Cite this article

Ozonoff, D. As the world turns: scientific publishing in the digital era. Environ Health 23, 24 (2024). https://doi.org/10.1186/s12940-024-01063-5


  • Published: 26 February 2024

  • DOI: https://doi.org/10.1186/s12940-024-01063-5

