How should journals respond to the emerging challenges of artificial intelligence?
Paul Komesaroff, Elizabeth Potter, Emma R. Felman, Jeff Szer
Internal Medicine Journal, vol. 54, no. 10, pp. 1601-1602 (17 September 2024). DOI: 10.1111/imj.16519
Abstract
The advent of artificial intelligence (AI) models has already produced wide-ranging effects on all aspects of social life,[1] and these continue to evolve rapidly. What the impact on medicine and science will be remains uncertain, but it is likely to be profound. In the current fluid context, there is a need for clinicians and researchers to inform themselves of both the beneficial possibilities of AI and the ways in which it might undermine or compromise practices and values they have been taking for granted.
While the present pace of change may seem particularly intense, there is, of course, nothing remarkable about change itself. Indeed, we are familiar with a constant flux of new treatments, investigative techniques and tools of various other kinds. Occasionally, concerns have been expressed about a possible loss of skills[2] or a potential impact on relationships with patients,[3] but for the most part, innovations are welcomed and comfortably accommodated.[4]
Despite this familiarity with change, it has been argued that the impact of AI will be different from that of previous epochs of technological innovation.[5] This is supposedly because AI is not just another tool that allows everyday tasks to be completed more quickly and efficiently but, in many cases, can actually replace human inputs altogether or even, more fundamentally, challenge the nature of what it is to be human. While it is too early to tell whether this will indeed turn out to be the case, it is clear that, for the present, we need to scrutinise carefully what is claimed and delivered.
Under these circumstances of uncertainty and ferment, journals and professional societies are hurriedly preparing policies to respond to perceived challenges emerging in the field of scientific publishing.[6] Particular emphasis is being placed on issues relating to authorship and originality of manuscripts,[7] reviewing practices,[8] intellectual property[9] and accountability.[10] For the most part, the policies remain provisional and precautionary[6] and reflect a recognition of the likely need for revision as further information becomes available.
The Internal Medicine Journal (IMJ) welcomes this reflective process and invites comments and suggestions from readers about their experiences with AI and what they consider to be its potential benefits and risks. We also recognise that a reflection on the impact of AI on journal publishing requires an examination of the multiple tasks that journals themselves serve and provides an opportunity for these tasks to be clarified and refined.
Medical journals like the IMJ are not mere manuscript-publishing machines, and their functions are not purely technical. Their success is not judged solely by the numbers of articles published, citations or impact factors, or even the efficiency with which they organise, review, process and disseminate written submissions. Their purposes also include the accumulation and dissemination of experience and knowledge about clinical practice and expansion and critical scrutiny of its scientific basis. They play an important role in stimulating discussion about issues of common concern of a social or ethical kind. They contribute to the formation and maintenance of communities of practitioners and to continuing education and regulatory processes. They assist in the formation of ethical insights and behaviours, deepen knowledge and stimulate ideas.
In addition to this, clinical medicine, which medical journals seek to enhance, is itself a collection of ethical practices underpinned by scientific knowledge. Clinicians, therefore, are encumbered by an unavoidable obligation to respond, rapidly and effectively, to factual or ethical uncertainties as they arise. As a novel field of technology that draws on existing human knowledge and purports to offer new ideas and strategies for action, AI itself raises unprecedented questions, such as how to identify the boundary between what is purely ‘technical’ and what is inherently ethical, and therefore cannot be disconnected from human agency. In particular, it raises questions about whether automated thinking processes can command the same authority as considered human judgements and whether, like the latter, they are subject to personal, cultural or other influences that must be openly identified and declared.[11]
The questions do not stop there. The application of AI to the practice of science may evoke issues about the conduct, reporting and publishing of research projects. Research involves multiple steps, such as design, ethics review, recruitment, data acquisition, data analysis and interpretation, production and publication of manuscripts, and public dissemination of outcomes. Many of these utilise ‘tools’ of some kind, such as laboratory equipment, calculators, databases, computers, statistical packages and so on, to assist or facilitate the activities. Each, however, is subject to rigorous processes of social oversight that organise, control and regulate how they are carried out, according to values negotiated in socially and culturally variable contexts. This complex, multifaceted process of ethical discussion and regulation is what secures the trust of the community in the integrity and reliability of research outcomes. How AI processes might influence – and possibly either enhance or undermine – these well-established standards and what, if any, steps need to be taken to protect them remains to be determined.
At present, there are a few questions on which at least some agreement has been reached. One of these is the question of authorship. As summarised by the International Committee of Medical Journal Editors,[12] for an individual to qualify for authorship, several criteria must be satisfied: he or she must have made a substantial contribution to the conception or design of the work or to drafting the manuscript, given final approval of the version to be published and agreed to be accountable for all aspects of the work. It would seem clear that not only could an AI process not qualify to be an author under these criteria, but in many cases, even limited contributions from automated devices might raise problems. At the least, for these reasons alone, the utilisation of AI devices in research projects must be reported fully and transparently. AI is already being used to enhance expression in English by authors for whom English is not the native language; arguably, this is an acceptable use of AI provided appropriate boundaries are not breached and a statement describing its use is provided.
Many additional issues remain, such as the origin and representativeness of the data on which the AI models rely and biases that may be embedded in the algorithms themselves. Questions of confidentiality and copyright need to be scrutinised. New protocols need to be developed for identifying and managing interests, such as those of wealthy, powerful individuals or companies that control and license the AI models. Research into AI itself raises still more questions.[13]
For journals, the use of AI in reviewing manuscripts is yet another unresolved topic of discussion. It is possible that in such settings AI may provide useful assistance in validating data, but the additional reviewer tasks of verifying the integrity, originality and reliability of the work may be limited to human assessors. Here too, at the present time, where the boundaries should be set and how they would be policed remain uncertain.
Where does this leave the IMJ in relation to the use of AI by its authors? Apart from the question of authorship, the requirement for full disclosures and the other checks and balances mentioned above, the rest remains up in the air. Perhaps the most that can be said is that while it is clear that AI will occupy an important place in publishing, how that place will ultimately be defined will have to be determined not by AI itself but by its human users – in what will no doubt be continuing, vigorous, highly charged, often inconclusive, conversations.
About the journal
The Internal Medicine Journal is the official journal of the Adult Medicine Division of The Royal Australasian College of Physicians (RACP). Its purpose is to publish high-quality internationally competitive peer-reviewed original medical research, both laboratory and clinical, relating to the study and research of human disease. Papers will be considered from all areas of medical practice and science. The Journal also has a major role in continuing medical education and publishes review articles relevant to physician education.