How should journals respond to the emerging challenges of artificial intelligence?

Paul Komesaroff, Elizabeth Potter, Emma R. Felman, Jeff Szer
{"title":"How should journals respond to the emerging challenges of artificial intelligence?","authors":"Paul Komesaroff,&nbsp;Elizabeth Potter,&nbsp;Emma R. Felman,&nbsp;Jeff Szer","doi":"10.1111/imj.16519","DOIUrl":null,"url":null,"abstract":"<p>The advent of artificial intelligence (AI) models has already produced wide-ranging effects on all aspects of social life,<span><sup>1</sup></span> and these continue to evolve rapidly. What the impact on medicine and science will be remains uncertain, but it is also likely to be profound. In the current fluid context, there is a need for clinicians and researchers to inform themselves of both the beneficial possibilities of AI and the ways in which it might undermine or compromise practices and values they have been taking for granted.</p><p>While the present pace of change may seem particularly intense, there is, of course, nothing remarkable about change itself. Indeed, we are familiar with a constant flux of new treatments, investigative techniques and tools of various other kinds. Occasionally, concerns have been expressed about a possible loss of skills<span><sup>2</sup></span> or a potential impact on relationships with patients,<span><sup>3</sup></span> but for the most part, innovations are welcomed and comfortably accommodated.<span><sup>4</sup></span></p><p>Despite this familiarity with change, it has been argued that the impact of AI will be different from that of previous epochs of technological innovation.<span><sup>5</sup></span> This is supposedly because AI is not just another tool that allows everyday tasks to be completed more quickly and efficiently but, in many cases, can actually replace human inputs altogether or even, more fundamentally, actually challenge the nature of what it is to be human. While it is too early to tell whether this will indeed turn out to be the case, it is clear that, for the present, we need to scrutinise carefully what is claimed and delivered.</p><p>Under these circumstances of uncertainty and ferment, journals and professional societies are hurriedly preparing policies to respond to perceived challenges emerging in the field of scientific publishing.<span><sup>6</sup></span> Particular emphasis is being placed on issues relating to authorship and originality of manuscripts,<span><sup>7</sup></span> reviewing practices,<span><sup>8</sup></span> intellectual property<span><sup>9</sup></span> and accountability.<span><sup>10</sup></span> For the most part, the policies remain provisional and precautionary<span><sup>6</sup></span> and reflect a recognition of the likely need for revision as further information becomes available.</p><p>The <i>Internal Medicine Journal</i> (IMJ) welcomes this reflective process and invites comments and suggestions from readers about their experiences with AI and what they consider to be its potential benefits and risks. We also recognise that a reflection on the impact of AI on journal publishing requires an examination of the multiple tasks that journals themselves serve and provides an opportunity for these tasks to be clarified and refined.</p><p>Medical journals like the IMJ are not mere manuscript-publishing machines, and their functions are not purely technical. Their success is not judged solely by the numbers of articles published, citations or impact factors, or even the efficiency with which they organise, review, process and disseminate written submissions. 
Their purposes also include the accumulation and dissemination of experience and knowledge about clinical practice and expansion and critical scrutiny of its scientific basis. They play an important role in stimulating discussion about issues of common concern of a social or ethical kind. They contribute to the formation and maintenance of communities of practitioners and to continuing education and regulatory processes. They assist in the formation of ethical insights and behaviours, deepen knowledge and stimulate ideas.</p><p>In addition to this, clinical medicine, which medical journals seek to enhance, is itself a collection of ethical practices underpinned by scientific knowledge. Clinicians, therefore, are encumbered by an unavoidable obligation to respond, rapidly and effectively, to uncertainties that may arise of a factual or ethical kind. As a novel field of technology that draws on existing human knowledge and purports to offer new ideas and strategies for action, AI itself raises unprecedented questions, such as how to identify the boundary between what is purely ‘technical’ and what is inherently ethical, and therefore cannot be disconnected from human agency. In particular, it raises questions about whether automated thinking processes can command the same authority as considered human judgements and whether, like the latter, they are subject to personal, cultural or other influences that must be openly identified and declared.<span><sup>11</sup></span></p><p>The questions do not stop there. The application of AI to the practice of science may evoke issues about the conduct, reporting and publishing of research projects. Research involves multiple steps, such as design, ethics review, recruitment, data acquisition, data analysis and interpretation, production and publication of manuscripts, and public dissemination of outcomes. Many of these utilise ‘tools’ of some kind, such as laboratory equipment, calculators, databases, computers, statistical packages and so on, to assist or facilitate the activities. Each, however, is subject to rigorous processes of social oversight that organise, control and regulate how they are carried out, according to values negotiated in socially and culturally variable contexts. This complex, multifaceted process of ethical discussion and regulation is what secures the trust of the community in the integrity and reliability of research outcomes. How AI processes might influence – and possibly either enhance or undermine – these well-established standards and what, if any, steps need to be taken to protect them remains to be determined.</p><p>At present, there are a few questions on which at least some agreement has been reached. One of these is the question of authorship. As summarised by the International Committee of Medical Journal Editors,<span><sup>12</sup></span> for an individual to qualify for authorship, several criteria must be satisfied: that he or she has made a substantial contribution to the conception or design of the work or to drafting the manuscript and that final approval has been given of the version to be published, along with agreement to be accountable for all aspects of the work. It would seem clear that not only could an AI process not qualify to be an author under these criteria, but in many cases, even limited contributions from automated devices might raise problems. At the least, for these reasons alone, the utilisation of AI devices in research projects must be reported fully and transparently. 
AI is already being used to enhance expression in English by authors for whom English is not the native language; arguably, this is an acceptable use of AI provided appropriate boundaries are not breached and a statement to describe its use is provided.</p><p>Many additional issues remain, such as the origin and representativeness of the data on which the AI machines rely and biases that may be embedded in the logical algorithms themselves. Questions of confidentiality and copyright need to be scrutinised. New protocols need to be developed for identifying and managing interests, such as those of wealthy, powerful individuals or companies that control and licence the AI models. Research into AI itself raises still more questions.<span><sup>13</sup></span></p><p>For journals, the use of AI in reviewing manuscripts is yet another unresolved topic of discussion. It is possible that in such settings AI may provide useful assistance in validating data, but the additional reviewer tasks of verifying the integrity, originality and reliability of the work may be limited to human assessors. Here too, at the present time, where the boundaries should be set and how they would be policed remain uncertain.</p><p>Where does this leave the IMJ in relation to the use of AI by its authors? Apart from the question of authorship, the requirement for full disclosures and the other checks and balances mentioned above, the rest remains up in the air. Perhaps the most that can be said is that while it is clear that AI will occupy an important place in publishing, how that place will ultimately be defined will have to be determined not by AI itself but by its human users – in what will no doubt be continuing, vigorous, highly charged, often inconclusive, conversations.</p>","PeriodicalId":13625,"journal":{"name":"Internal Medicine Journal","volume":"54 10","pages":"1601-1602"},"PeriodicalIF":1.8000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/imj.16519","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Internal Medicine Journal","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/imj.16519","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MEDICINE, GENERAL & INTERNAL","Score":null,"Total":0}
引用次数: 0

Abstract

The advent of artificial intelligence (AI) models has already produced wide-ranging effects on all aspects of social life,1 and these continue to evolve rapidly. What the impact on medicine and science will be remains uncertain, but it is also likely to be profound. In the current fluid context, there is a need for clinicians and researchers to inform themselves of both the beneficial possibilities of AI and the ways in which it might undermine or compromise practices and values they have been taking for granted.

While the present pace of change may seem particularly intense, there is, of course, nothing remarkable about change itself. Indeed, we are familiar with a constant flux of new treatments, investigative techniques and tools of various other kinds. Occasionally, concerns have been expressed about a possible loss of skills2 or a potential impact on relationships with patients,3 but for the most part, innovations are welcomed and comfortably accommodated.4

Despite this familiarity with change, it has been argued that the impact of AI will be different from that of previous epochs of technological innovation.5 This is supposedly because AI is not just another tool that allows everyday tasks to be completed more quickly and efficiently but, in many cases, can replace human inputs altogether or even, more fundamentally, challenge the nature of what it is to be human. While it is too early to tell whether this will indeed turn out to be the case, it is clear that, for the present, we need to scrutinise carefully what is claimed and delivered.

Under these circumstances of uncertainty and ferment, journals and professional societies are hurriedly preparing policies to respond to perceived challenges emerging in the field of scientific publishing.6 Particular emphasis is being placed on issues relating to authorship and originality of manuscripts,7 reviewing practices,8 intellectual property9 and accountability.10 For the most part, the policies remain provisional and precautionary6 and reflect a recognition of the likely need for revision as further information becomes available.

The Internal Medicine Journal (IMJ) welcomes this reflective process and invites comments and suggestions from readers about their experiences with AI and what they consider to be its potential benefits and risks. We also recognise that a reflection on the impact of AI on journal publishing requires an examination of the multiple functions that journals themselves serve and provides an opportunity for these functions to be clarified and refined.

Medical journals like the IMJ are not mere manuscript-publishing machines, and their functions are not purely technical. Their success is not judged solely by the numbers of articles published, citations or impact factors, or even the efficiency with which they organise, review, process and disseminate written submissions. Their purposes also include the accumulation and dissemination of experience and knowledge about clinical practice and expansion and critical scrutiny of its scientific basis. They play an important role in stimulating discussion about issues of common concern of a social or ethical kind. They contribute to the formation and maintenance of communities of practitioners and to continuing education and regulatory processes. They assist in the formation of ethical insights and behaviours, deepen knowledge and stimulate ideas.

In addition to this, clinical medicine, which medical journals seek to enhance, is itself a collection of ethical practices underpinned by scientific knowledge. Clinicians, therefore, are encumbered by an unavoidable obligation to respond, rapidly and effectively, to factual or ethical uncertainties that may arise. As a novel field of technology that draws on existing human knowledge and purports to offer new ideas and strategies for action, AI itself raises unprecedented questions, such as how to identify the boundary between what is purely ‘technical’ and what is inherently ethical, and therefore cannot be disconnected from human agency. In particular, it raises questions about whether automated thinking processes can command the same authority as considered human judgements and whether, like the latter, they are subject to personal, cultural or other influences that must be openly identified and declared.11

The questions do not stop there. The application of AI to the practice of science may raise issues about the conduct, reporting and publishing of research projects. Research involves multiple steps, such as design, ethics review, recruitment, data acquisition, data analysis and interpretation, production and publication of manuscripts, and public dissemination of outcomes. Many of these utilise ‘tools’ of some kind, such as laboratory equipment, calculators, databases, computers, statistical packages and so on, to assist or facilitate the activities. Each, however, is subject to rigorous processes of social oversight that organise, control and regulate how it is carried out, according to values negotiated in socially and culturally variable contexts. This complex, multifaceted process of ethical discussion and regulation is what secures the trust of the community in the integrity and reliability of research outcomes. How AI processes might influence – and possibly either enhance or undermine – these well-established standards, and what steps, if any, need to be taken to protect them, remains to be determined.

At present, there are a few questions on which at least some agreement has been reached. One of these is the question of authorship. As summarised by the International Committee of Medical Journal Editors,12 for an individual to qualify for authorship, several criteria must be satisfied: a substantial contribution to the conception or design of the work or to drafting the manuscript; final approval of the version to be published; and agreement to be accountable for all aspects of the work. It would seem clear that not only could an AI process not qualify as an author under these criteria, but in many cases, even limited contributions from automated tools might raise problems. At the least, for these reasons alone, the use of AI tools in research projects must be reported fully and transparently. AI is already being used to enhance expression in English by authors for whom English is not their native language; arguably, this is an acceptable use of AI provided appropriate boundaries are not breached and a statement describing its use is provided.

Many additional issues remain, such as the origin and representativeness of the data on which the AI models rely and biases that may be embedded in the algorithms themselves. Questions of confidentiality and copyright need to be scrutinised. New protocols need to be developed for identifying and managing interests, such as those of the wealthy, powerful individuals or companies that control and license the AI models. Research into AI itself raises still more questions.13

For journals, the use of AI in reviewing manuscripts is yet another unresolved topic of discussion. It is possible that in such settings AI may provide useful assistance in validating data, but the further reviewer tasks of verifying the integrity, originality and reliability of the work may need to remain with human assessors. Here too, at the present time, where the boundaries should be set and how they would be policed remain uncertain.

Where does this leave the IMJ in relation to the use of AI by its authors? Apart from the question of authorship, the requirement for full disclosures and the other checks and balances mentioned above, the rest remains up in the air. Perhaps the most that can be said is that while it is clear that AI will occupy an important place in publishing, how that place will ultimately be defined will have to be determined not by AI itself but by its human users – in what will no doubt be continuing, vigorous, highly charged, often inconclusive, conversations.
