{"title":"人工智能生成的临床评分表:看杰纳斯的两面性。","authors":"Cristian Berce","doi":"10.1186/s42826-024-00206-6","DOIUrl":null,"url":null,"abstract":"<p><p>In vivo experiments are increasingly using clinical score sheets to ensure minimal distress to the animals. A score sheet is a document that includes a list of specific symptoms, behaviours and intervention guidelines, all balanced to for an objective clinical assessment of experimental animals. Artificial Intelligence (AI) technologies are increasingly being applied in the field of preclinical research, not only in analysis but also in documentation processes, reflecting a significant shift towards more technologically advanced research methodologies. The present study explores the application of Large Language Models (LLM) in generating score sheets for an animal welfare assessment in a preclinical research setting. Focusing on a mouse model of inflammatory bowel disease, the study evaluates the performance of three LLM - ChatGPT-4, ChatGPT-3.5, and Google Bard - in creating clinical score sheets based on specified criteria such as weight loss, stool consistency, and visible fecal blood. Key parameters evaluated include the consistency of structure, accuracy in representing severity levels, and appropriateness of intervention thresholds. The findings reveal a duality in LLM-generated score sheets: while some LLM consistently structure their outputs effectively, all models exhibit notable variations in assigning numerical values to symptoms and defining intervention thresholds accurately. This emphasizes the dual nature of AI performance in this field-its potential to create useful foundational drafts and the critical need for professional review to ensure precision and reliability. The results highlight the significance of balancing AI-generated tools with expert oversight in preclinical research.</p>","PeriodicalId":17993,"journal":{"name":"Laboratory Animal Research","volume":null,"pages":null},"PeriodicalIF":2.7000,"publicationDate":"2024-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11097593/pdf/","citationCount":"0","resultStr":"{\"title\":\"Artificial intelligence generated clinical score sheets: looking at the two faces of Janus.\",\"authors\":\"Cristian Berce\",\"doi\":\"10.1186/s42826-024-00206-6\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>In vivo experiments are increasingly using clinical score sheets to ensure minimal distress to the animals. A score sheet is a document that includes a list of specific symptoms, behaviours and intervention guidelines, all balanced to for an objective clinical assessment of experimental animals. Artificial Intelligence (AI) technologies are increasingly being applied in the field of preclinical research, not only in analysis but also in documentation processes, reflecting a significant shift towards more technologically advanced research methodologies. The present study explores the application of Large Language Models (LLM) in generating score sheets for an animal welfare assessment in a preclinical research setting. Focusing on a mouse model of inflammatory bowel disease, the study evaluates the performance of three LLM - ChatGPT-4, ChatGPT-3.5, and Google Bard - in creating clinical score sheets based on specified criteria such as weight loss, stool consistency, and visible fecal blood. 
Key parameters evaluated include the consistency of structure, accuracy in representing severity levels, and appropriateness of intervention thresholds. The findings reveal a duality in LLM-generated score sheets: while some LLM consistently structure their outputs effectively, all models exhibit notable variations in assigning numerical values to symptoms and defining intervention thresholds accurately. This emphasizes the dual nature of AI performance in this field-its potential to create useful foundational drafts and the critical need for professional review to ensure precision and reliability. The results highlight the significance of balancing AI-generated tools with expert oversight in preclinical research.</p>\",\"PeriodicalId\":17993,\"journal\":{\"name\":\"Laboratory Animal Research\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.7000,\"publicationDate\":\"2024-05-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11097593/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Laboratory Animal Research\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1186/s42826-024-00206-6\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"MEDICINE, RESEARCH & EXPERIMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Laboratory Animal Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1186/s42826-024-00206-6","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"MEDICINE, RESEARCH & EXPERIMENTAL","Score":null,"Total":0}
Citations: 0
Abstract
In vivo experiments increasingly rely on clinical score sheets to ensure minimal distress to the animals. A score sheet is a document listing specific symptoms, behaviours and intervention guidelines, balanced to allow an objective clinical assessment of experimental animals. Artificial Intelligence (AI) technologies are increasingly being applied in preclinical research, not only in analysis but also in documentation processes, reflecting a significant shift towards more technologically advanced research methodologies. The present study explores the application of Large Language Models (LLMs) in generating score sheets for animal welfare assessment in a preclinical research setting. Focusing on a mouse model of inflammatory bowel disease, the study evaluates the performance of three LLMs (ChatGPT-4, ChatGPT-3.5, and Google Bard) in creating clinical score sheets based on specified criteria such as weight loss, stool consistency, and visible fecal blood. Key parameters evaluated include consistency of structure, accuracy in representing severity levels, and appropriateness of intervention thresholds. The findings reveal a duality in LLM-generated score sheets: while some LLMs consistently structure their outputs effectively, all models exhibit notable variation in assigning numerical values to symptoms and in defining intervention thresholds accurately. This underscores the dual nature of AI performance in this field: its potential to create useful foundational drafts, and the critical need for professional review to ensure precision and reliability. The results highlight the importance of balancing AI-generated tools with expert oversight in preclinical research.
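As a rough illustration of what such a score sheet encodes, the sketch below shows a minimal DAI-style structure in Python: each criterion maps an observed severity level to a point value, and total-score thresholds trigger interventions. The criteria, point values and thresholds are assumptions chosen for illustration; they are not taken from the paper and are not the output of any of the evaluated models.

# Minimal, illustrative score sheet for a mouse IBD model (DAI-style).
# All criteria, point values and intervention thresholds below are assumptions
# for illustration only; they are not taken from the paper or from any of the
# evaluated LLM outputs.
from dataclasses import dataclass, field

@dataclass
class Criterion:
    # One scored parameter: observed severity level -> assigned point value.
    name: str
    levels: dict

    def score(self, observation: str) -> int:
        return self.levels[observation]

@dataclass
class ScoreSheet:
    # A set of criteria plus total-score thresholds that trigger interventions.
    criteria: list
    thresholds: dict = field(default_factory=dict)  # minimum total score -> action

    def total(self, observations: dict) -> int:
        return sum(c.score(observations[c.name]) for c in self.criteria)

    def intervention(self, total: int) -> str:
        action = "no action required"
        for limit in sorted(self.thresholds):
            if total >= limit:
                action = self.thresholds[limit]
        return action

sheet = ScoreSheet(
    criteria=[
        Criterion("weight loss", {"<5%": 0, "5-10%": 1, "10-20%": 2, ">20%": 3}),
        Criterion("stool consistency", {"normal": 0, "soft": 1, "diarrhea": 2}),
        Criterion("fecal blood", {"none": 0, "occult": 1, "gross bleeding": 2}),
    ],
    thresholds={4: "increase monitoring frequency",
                6: "consult the veterinarian / consider humane endpoint"},
)

observations = {"weight loss": "10-20%", "stool consistency": "soft", "fecal blood": "occult"}
total = sheet.total(observations)
print(f"total score: {total}, recommended action: {sheet.intervention(total)}")

Running the sketch with the example observations yields a total of 4, which crosses the first assumed threshold. In practice, every severity value and intervention threshold in such a sheet would need expert review before use, which is precisely the kind of oversight the study argues is indispensable for LLM-generated drafts.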