Jasmine Chiat Ling Ong, Yilin Ning, Gary S. Collins, Danielle S. Bitterman, Ashley N. Beecy, Robert T. Chang, Alastair K. Denniston, Oscar Freyer, Stephen Gilbert, Anne de Hond, Artuur M. Leeuwenberg, Liang Zhao, John C. W. Lim, Mingxuan Liu, Xiaoxuan Liu, Christopher A. Longhurst, Yian Ma, Yue Qiu, Rupa Sarkar, Bin Sheng, Kuldev Singh, Iris Siu Kwan Tan, Yih Chung Tham, Arun J. Thirunavukarasu, Daniel Shu Wei Ting, Silke Vogel, Rui Zhang, Jianfei Zhao, Wendy W. Chapman, Nigam H. Shah, Karel G. M. Moons, Tien Yin Wong, Nan Liu
{"title":"管理医学中生成式人工智能模型的国际伙伴关系","authors":"Jasmine Chiat Ling Ong, Yilin Ning, Gary S. Collins, Danielle S. Bitterman, Ashley N. Beecy, Robert T. Chang, Alastair K. Denniston, Oscar Freyer, Stephen Gilbert, Anne de Hond, Artuur M. Leeuwenberg, Liang Zhao, John C. W. Lim, Mingxuan Liu, Xiaoxuan Liu, Christopher A. Longhurst, Yian Ma, Yue Qiu, Rupa Sarkar, Bin Sheng, Kuldev Singh, Iris Siu Kwan Tan, Yih Chung Tham, Arun J. Thirunavukarasu, Daniel Shu Wei Ting, Silke Vogel, Rui Zhang, Jianfei Zhao, Wendy W. Chapman, Nigam H. Shah, Karel G. M. Moons, Tien Yin Wong, Nan Liu","doi":"10.1038/s41591-025-03787-4","DOIUrl":null,"url":null,"abstract":"<p>Generative artificial intelligence (GenAI) models, such as generative adversarial networks (GANs) and transformer-based large language models (LLMs), are developing at an accelerated pace and positioned to be integrated into clinical workflows and healthcare systems across the world. However, this rapid rise of GenAI in medicine and healthcare presents not just unprecedented opportunities, but also systemic risks in the integration of this new technology and critical vulnerabilities in terms of safety, governance and regulatory oversight. GenAI and LLMs are non-deterministic in nature, possess broad generalist functionalities, and display evolving capabilities<sup>1</sup>. These characteristics challenge conventional regulatory frameworks designed for deterministic, task-specific artificial intelligence (AI) models, such as those for Software as a Medical Device (SaMD).</p><p>Some of the fundamental risks associated with GenAI and LLMs applications in healthcare are clear but yet to be fully addressed by current regulatory framework (‘known unknowns’), whereas other risks and challenges have not yet even surfaced (‘unknown unknowns’). Known unknowns include a lack of transparency in training data (including the possible use of synthetic data for training<sup>2</sup>), susceptibility to bias, hallucination of incorrect medical content, and potential misuse in high-stakes clinical settings<sup>1</sup> (Box 1).</p>","PeriodicalId":19037,"journal":{"name":"Nature Medicine","volume":"27 1","pages":""},"PeriodicalIF":58.7000,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"International partnership for governing generative artificial intelligence models in medicine\",\"authors\":\"Jasmine Chiat Ling Ong, Yilin Ning, Gary S. Collins, Danielle S. Bitterman, Ashley N. Beecy, Robert T. Chang, Alastair K. Denniston, Oscar Freyer, Stephen Gilbert, Anne de Hond, Artuur M. Leeuwenberg, Liang Zhao, John C. W. Lim, Mingxuan Liu, Xiaoxuan Liu, Christopher A. Longhurst, Yian Ma, Yue Qiu, Rupa Sarkar, Bin Sheng, Kuldev Singh, Iris Siu Kwan Tan, Yih Chung Tham, Arun J. Thirunavukarasu, Daniel Shu Wei Ting, Silke Vogel, Rui Zhang, Jianfei Zhao, Wendy W. Chapman, Nigam H. Shah, Karel G. M. Moons, Tien Yin Wong, Nan Liu\",\"doi\":\"10.1038/s41591-025-03787-4\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Generative artificial intelligence (GenAI) models, such as generative adversarial networks (GANs) and transformer-based large language models (LLMs), are developing at an accelerated pace and positioned to be integrated into clinical workflows and healthcare systems across the world. 
However, this rapid rise of GenAI in medicine and healthcare presents not just unprecedented opportunities, but also systemic risks in the integration of this new technology and critical vulnerabilities in terms of safety, governance and regulatory oversight. GenAI and LLMs are non-deterministic in nature, possess broad generalist functionalities, and display evolving capabilities<sup>1</sup>. These characteristics challenge conventional regulatory frameworks designed for deterministic, task-specific artificial intelligence (AI) models, such as those for Software as a Medical Device (SaMD).</p><p>Some of the fundamental risks associated with GenAI and LLMs applications in healthcare are clear but yet to be fully addressed by current regulatory framework (‘known unknowns’), whereas other risks and challenges have not yet even surfaced (‘unknown unknowns’). Known unknowns include a lack of transparency in training data (including the possible use of synthetic data for training<sup>2</sup>), susceptibility to bias, hallucination of incorrect medical content, and potential misuse in high-stakes clinical settings<sup>1</sup> (Box 1).</p>\",\"PeriodicalId\":19037,\"journal\":{\"name\":\"Nature Medicine\",\"volume\":\"27 1\",\"pages\":\"\"},\"PeriodicalIF\":58.7000,\"publicationDate\":\"2025-06-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Nature Medicine\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1038/s41591-025-03787-4\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"BIOCHEMISTRY & MOLECULAR BIOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Nature Medicine","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1038/s41591-025-03787-4","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"BIOCHEMISTRY & MOLECULAR BIOLOGY","Score":null,"Total":0}
International partnership for governing generative artificial intelligence models in medicine
Generative artificial intelligence (GenAI) models, such as generative adversarial networks (GANs) and transformer-based large language models (LLMs), are developing at an accelerated pace and are positioned to be integrated into clinical workflows and healthcare systems across the world. However, the rapid rise of GenAI in medicine and healthcare presents not only unprecedented opportunities but also systemic risks in the integration of this new technology, along with critical vulnerabilities in safety, governance and regulatory oversight. GenAI models and LLMs are non-deterministic in nature, possess broad generalist functionalities and display evolving capabilities¹. These characteristics challenge conventional regulatory frameworks designed for deterministic, task-specific artificial intelligence (AI) models, such as those regulated as Software as a Medical Device (SaMD).
Some of the fundamental risks associated with GenAI and LLM applications in healthcare are clear but not yet fully addressed by current regulatory frameworks (‘known unknowns’), whereas other risks and challenges have not yet even surfaced (‘unknown unknowns’). Known unknowns include a lack of transparency in training data (including the possible use of synthetic data for training²), susceptibility to bias, hallucination of incorrect medical content, and potential misuse in high-stakes clinical settings¹ (Box 1).
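The abstract's point that GenAI models are non-deterministic, unlike the task-specific AI that SaMD-style validation assumes, can be made concrete with a small illustration. The sketch below is not from the article: the vocabulary, logits and dose strings are invented for illustration. It shows temperature-based sampling over next-token logits, the basic decoding step of an LLM; with a positive temperature the same prompt can yield different outputs across runs, whereas greedy decoding (temperature → 0) is deterministic.

```python
# Illustrative sketch (not from the article): a toy next-token sampler showing
# why generative models are non-deterministic. The vocabulary and logits below
# are hypothetical; real LLMs apply the same sampling step at far larger scale.
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample one token from a softmax distribution over logits.

    temperature > 0 gives stochastic output (different runs, different tokens);
    temperature <= 0 is treated as greedy decoding, which is deterministic.
    """
    rng = rng or random.Random()
    if temperature <= 0:
        return max(logits, key=logits.get)  # greedy: always the top-scoring token
    scaled = {tok: v / temperature for tok, v in logits.items()}
    max_v = max(scaled.values())  # subtract max for numerical stability
    weights = {tok: math.exp(v - max_v) for tok, v in scaled.items()}
    total = sum(weights.values())
    probs = {tok: w / total for tok, w in weights.items()}
    return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Hypothetical logits for the next token after "The recommended dose is ..."
logits = {"5 mg": 2.1, "10 mg": 2.0, "50 mg": 0.5}

print([sample_next_token(logits, temperature=1.0) for _ in range(5)])  # varies run to run
print([sample_next_token(logits, temperature=0.0) for _ in range(5)])  # always "5 mg"
```

Under sampling, two identical queries can produce different clinical statements, which is why validation approaches built around reproducible, deterministic outputs do not transfer directly to GenAI.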
About the journal:
Nature Medicine is a monthly journal publishing original peer-reviewed research in all areas of medicine. The publication focuses on originality, timeliness, interdisciplinary interest, and impact on improving human health. In addition to research articles, Nature Medicine also publishes commissioned content such as News, Reviews, and Perspectives. This content aims to provide context for the latest advances in translational and clinical research, reaching a wide audience of M.D. and Ph.D. readers. All editorial decisions for the journal are made by a team of full-time professional editors.
Nature Medicine considers all types of clinical research, including:
-Case reports and small case series
-Clinical trials, whether phase 1, 2, 3 or 4
-Observational studies
-Meta-analyses
-Biomarker studies
-Public and global health studies
Nature Medicine is also committed to facilitating communication between translational and clinical researchers. As such, we consider “hybrid” studies with preclinical and translational findings reported alongside data from clinical studies.