Artificial intelligence generated clinical score sheets: looking at the two faces of Janus.

IF 2.7 Q3 MEDICINE, RESEARCH & EXPERIMENTAL
Cristian Berce
Journal: Laboratory Animal Research
DOI: 10.1186/s42826-024-00206-6
Published: 2024-05-16 (Journal Article)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11097593/pdf/
Citations: 0

Abstract

In vivo experiments increasingly use clinical score sheets to ensure minimal distress to the animals. A score sheet is a document that lists specific symptoms, behaviours and intervention guidelines, balanced to allow an objective clinical assessment of experimental animals. Artificial Intelligence (AI) technologies are increasingly being applied in preclinical research, not only in analysis but also in documentation processes, reflecting a significant shift towards more technologically advanced research methodologies. The present study explores the application of Large Language Models (LLMs) in generating score sheets for animal welfare assessment in a preclinical research setting. Focusing on a mouse model of inflammatory bowel disease, the study evaluates the performance of three LLMs (ChatGPT-4, ChatGPT-3.5, and Google Bard) in creating clinical score sheets based on specified criteria such as weight loss, stool consistency, and visible fecal blood. Key parameters evaluated include consistency of structure, accuracy in representing severity levels, and appropriateness of intervention thresholds. The findings reveal a duality in LLM-generated score sheets: while some LLMs consistently structure their outputs effectively, all models show notable variation in assigning numerical values to symptoms and in defining intervention thresholds accurately. This underscores the dual nature of AI performance in this field: its potential to create useful foundational drafts, and the critical need for professional review to ensure precision and reliability. The results highlight the importance of balancing AI-generated tools with expert oversight in preclinical research.
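The kind of score sheet the study asked the models to produce can be pictured as a small data structure plus a scoring rule. The sketch below is a hypothetical illustration only: the three criteria (weight loss, stool consistency, visible fecal blood) come from the abstract, but every numeric band, per-symptom score, and the intervention threshold are assumed values, not figures from the paper or from any validated scoring system.

```python
# Illustrative clinical score sheet for a mouse IBD model.
# Criteria follow the abstract; all numeric values are assumptions.

WEIGHT_LOSS_BANDS = [          # (upper limit of % body-weight loss, score)
    (1.0, 0), (5.0, 1), (10.0, 2), (float("inf"), 3),
]
STOOL_SCORES = {"normal": 0, "soft": 1, "very_soft": 2, "diarrhea": 3}
BLOOD_SCORES = {"none": 0, "occult": 2, "gross": 3}

INTERVENTION_THRESHOLD = 6     # assumed total score that triggers review

def clinical_score(weight_loss_pct, stool, blood):
    """Return (total score, intervene?) for one animal observation."""
    # Pick the first weight-loss band whose upper limit covers the value.
    w = next(score for limit, score in WEIGHT_LOSS_BANDS
             if weight_loss_pct <= limit)
    total = w + STOOL_SCORES[stool] + BLOOD_SCORES[blood]
    return total, total >= INTERVENTION_THRESHOLD

print(clinical_score(7.5, "soft", "occult"))     # mild-to-moderate case
print(clinical_score(12.0, "diarrhea", "gross")) # severe case
```

The variability the study reports would show up here as different models emitting different band limits, per-symptom scores, or thresholds for the same prompt, which is why a draft like this still needs expert review before use.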

Source journal metrics: CiteScore 4.40; self-citation rate 0.00%; articles per year 32; review time 8 weeks.