Medical Text Prediction and Suggestion Using Generative Pretrained Transformer Models with Dental Medical Notes.

IF 1.8 · CAS Medicine Tier 4 · JCR Q3, COMPUTER SCIENCE, INFORMATION SYSTEMS
Joseph Sirrianni, Emre Sezgin, Daniel Claman, Simon L Linwood
{"title":"基于牙科医疗记录的生成式预训练变压器模型的医学文本预测和建议。","authors":"Joseph Sirrianni,&nbsp;Emre Sezgin,&nbsp;Daniel Claman,&nbsp;Simon L Linwood","doi":"10.1055/a-1900-7351","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Generative pretrained transformer (GPT) models are one of the latest large pretrained natural language processing models that enables model training with limited datasets and reduces dependency on large datasets, which are scarce and costly to establish and maintain. There is a rising interest to explore the use of GPT models in health care.</p><p><strong>Objective: </strong>We investigate the performance of GPT-2 and GPT-Neo models for medical text prediction using 374,787 free-text dental notes.</p><p><strong>Methods: </strong>We fine-tune pretrained GPT-2 and GPT-Neo models for next word prediction on a dataset of over 374,000 manually written sections of dental clinical notes. Each model was trained on 80% of the dataset, validated on 10%, and tested on the remaining 10%. We report model performance in terms of next word prediction accuracy and loss. Additionally, we analyze the performance of the models on different types of prediction tokens for categories. For comparison, we also fine-tuned a non-GPT pretrained neural network model, XLNet (large), for next word prediction. We annotate each token in 100 randomly sampled notes by category (e.g., names, abbreviations, clinical terms, punctuation, etc.) and compare the performance of each model by token category.</p><p><strong>Results: </strong>Models present acceptable accuracy scores (GPT-2: 76%; GPT-Neo: 53%), and the GPT-2 model also performs better in manual evaluations, especially for names, abbreviations, and punctuation. Both GPT models outperformed XLNet in terms of accuracy. The results suggest that pretrained models have the potential to assist medical charting in the future. We share the lessons learned, insights, and suggestions for future implementations.</p><p><strong>Conclusion: </strong>The results suggest that pretrained models have the potential to assist medical charting in the future. Our study presented one of the first implementations of the GPT model used with medical notes.</p>","PeriodicalId":49822,"journal":{"name":"Methods of Information in Medicine","volume":"61 5-06","pages":"195-200"},"PeriodicalIF":1.8000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Medical Text Prediction and Suggestion Using Generative Pretrained Transformer Models with Dental Medical Notes.\",\"authors\":\"Joseph Sirrianni,&nbsp;Emre Sezgin,&nbsp;Daniel Claman,&nbsp;Simon L Linwood\",\"doi\":\"10.1055/a-1900-7351\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Generative pretrained transformer (GPT) models are one of the latest large pretrained natural language processing models that enables model training with limited datasets and reduces dependency on large datasets, which are scarce and costly to establish and maintain. There is a rising interest to explore the use of GPT models in health care.</p><p><strong>Objective: </strong>We investigate the performance of GPT-2 and GPT-Neo models for medical text prediction using 374,787 free-text dental notes.</p><p><strong>Methods: </strong>We fine-tune pretrained GPT-2 and GPT-Neo models for next word prediction on a dataset of over 374,000 manually written sections of dental clinical notes. 
Each model was trained on 80% of the dataset, validated on 10%, and tested on the remaining 10%. We report model performance in terms of next word prediction accuracy and loss. Additionally, we analyze the performance of the models on different types of prediction tokens for categories. For comparison, we also fine-tuned a non-GPT pretrained neural network model, XLNet (large), for next word prediction. We annotate each token in 100 randomly sampled notes by category (e.g., names, abbreviations, clinical terms, punctuation, etc.) and compare the performance of each model by token category.</p><p><strong>Results: </strong>Models present acceptable accuracy scores (GPT-2: 76%; GPT-Neo: 53%), and the GPT-2 model also performs better in manual evaluations, especially for names, abbreviations, and punctuation. Both GPT models outperformed XLNet in terms of accuracy. The results suggest that pretrained models have the potential to assist medical charting in the future. We share the lessons learned, insights, and suggestions for future implementations.</p><p><strong>Conclusion: </strong>The results suggest that pretrained models have the potential to assist medical charting in the future. Our study presented one of the first implementations of the GPT model used with medical notes.</p>\",\"PeriodicalId\":49822,\"journal\":{\"name\":\"Methods of Information in Medicine\",\"volume\":\"61 5-06\",\"pages\":\"195-200\"},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2022-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Methods of Information in Medicine\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1055/a-1900-7351\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Methods of Information in Medicine","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1055/a-1900-7351","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 2

Abstract

Background: Generative pretrained transformer (GPT) models are among the latest large pretrained natural language processing models; they enable model training with limited datasets and reduce dependency on large datasets, which are scarce and costly to establish and maintain. There is rising interest in exploring the use of GPT models in health care.

Objective: We investigate the performance of GPT-2 and GPT-Neo models for medical text prediction using 374,787 free-text dental notes.

Methods: We fine-tuned pretrained GPT-2 and GPT-Neo models for next-word prediction on a dataset of over 374,000 manually written sections of dental clinical notes. Each model was trained on 80% of the dataset, validated on 10%, and tested on the remaining 10%. We report model performance in terms of next-word prediction accuracy and loss, and additionally analyze performance across different categories of prediction tokens. For comparison, we also fine-tuned a non-GPT pretrained neural network model, XLNet (large), for next-word prediction. We annotated each token in 100 randomly sampled notes by category (e.g., names, abbreviations, clinical terms, punctuation) and compared the performance of each model by token category.
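The paper does not publish its code, but the setup it describes maps onto the standard causal language modeling recipe. The sketch below, using Hugging Face transformers and datasets, is an illustration under stated assumptions, not the authors' pipeline: the file dental_notes.txt (one note section per line), the 512-token truncation, and all hyperparameters are invented; only the 80/10/10 split mirrors the abstract.

```python
# Minimal sketch of fine-tuning GPT-2 for next-word prediction via the
# standard Hugging Face causal-LM recipe. "dental_notes.txt" and every
# hyperparameter here are illustrative assumptions, not the authors' setup;
# only the 80/10/10 split follows the abstract.
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical plain-text corpus; split 80/10/10 into train/validation/test.
notes = load_dataset("text", data_files={"all": "dental_notes.txt"})["all"]
split = notes.train_test_split(test_size=0.2, seed=42)
held_out = split["test"].train_test_split(test_size=0.5, seed=42)
train_ds, val_ds = split["train"], held_out["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_ds = train_ds.map(tokenize, batched=True, remove_columns=["text"])
val_ds = val_ds.map(tokenize, batched=True, remove_columns=["text"])

# mlm=False selects causal (next-word) language modeling labels.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-dental",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=train_ds,
    eval_dataset=val_ds,
    data_collator=collator,
)
trainer.train()
print(trainer.evaluate())  # next-word prediction loss on the validation split
```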

Results: Models present acceptable accuracy scores (GPT-2: 76%; GPT-Neo: 53%), and the GPT-2 model also performs better in manual evaluations, especially for names, abbreviations, and punctuation. Both GPT models outperformed XLNet in terms of accuracy. The results suggest that pretrained models have the potential to assist medical charting in the future. We share the lessons learned, insights, and suggestions for future implementations.
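For context on the accuracy metric, the following sketch (our illustration, not the paper's evaluation code) computes top-1 next-word prediction accuracy with an off-the-shelf GPT-2: at each position, the model's highest-probability token is compared with the token that actually follows. The note fragment is invented, since the paper's test notes are not public.

```python
# Illustrative sketch of top-1 next-token accuracy for a causal LM.
# The note fragment below is made up; real evaluation would run over
# the held-out test notes.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

text = "Patient presents with moderate gingivitis on the lower left molars."
ids = tokenizer(text, return_tensors="pt").input_ids  # (1, seq_len)

with torch.no_grad():
    logits = model(ids).logits  # (1, seq_len, vocab_size)

# Logits at position i score candidates for the token at position i + 1,
# so predictions at positions 0..n-2 are compared with tokens 1..n-1.
predictions = logits[0, :-1].argmax(dim=-1)
targets = ids[0, 1:]
accuracy = (predictions == targets).float().mean().item()
print(f"top-1 next-word accuracy: {accuracy:.2%}")
```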

Conclusion: The results suggest that pretrained models have the potential to assist medical charting in the future. Our study presents one of the first implementations of GPT models applied to medical notes.

Source journal
Methods of Information in Medicine (Medicine · Computer Science: Information Systems)
CiteScore: 3.70
Self-citation rate: 11.80%
Articles per year: 33
Review turnaround: 6-12 weeks

Journal description: Good medicine and good healthcare demand good information. Since the journal's founding in 1962, Methods of Information in Medicine has stressed the methodology and scientific fundamentals of organizing, representing and analyzing data, information and knowledge in biomedicine and health care. Covering publications in the fields of biomedical and health informatics, medical biometry, and epidemiology, the journal publishes original papers, reviews, reports, opinion papers, editorials, and letters to the editor. From time to time, the journal publishes articles on particular focus themes as part of a journal's issue.