Summary of ChatGPT-Related Research and Perspective Towards the Future of Large Language Models

Yiheng Liu, Tianle Han, Siyuan Ma, Jiayue Zhang, Yuanyuan Yang, Jiaming Tian, Hao He, Antong Li, Mengshen He, Zhengliang Liu, Zihao Wu, Lin Zhao, Dajiang Zhu, Xiang Li, Ning Qiang, Dingang Shen, Tianming Liu, Bao Ge
Journal: Meta-Radiology, Volume 1, Issue 2, Article 100017
Published: 2023-09-01
DOI: 10.1016/j.metrad.2023.100017
Full text: https://www.sciencedirect.com/science/article/pii/S2950162823000176
Citations: 58

Abstract

This paper presents a comprehensive survey of ChatGPT-related (GPT-3.5 and GPT-4) research, state-of-the-art large language models (LLMs) from the GPT series, and their prospective applications across diverse domains. Key innovations such as large-scale pre-training that captures knowledge from across the World Wide Web, instruction fine-tuning, and Reinforcement Learning from Human Feedback (RLHF) have played significant roles in enhancing LLMs' adaptability and performance. We performed an in-depth analysis of 194 relevant papers on arXiv, encompassing trend analysis, word cloud representation, and distribution analysis across various application domains. The findings reveal a significant and increasing interest in ChatGPT-related research, predominantly centered on direct natural language processing applications, while also demonstrating considerable potential in areas ranging from education and history to mathematics, medicine, and physics. This study endeavors to furnish insights into ChatGPT's capabilities, potential implications, and ethical concerns, and to offer direction for future advancements in this field.
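The word-cloud and trend analysis the abstract describes boils down to counting keyword frequencies over a corpus of paper titles or abstracts. A minimal sketch of that step is shown below; the sample titles are invented for illustration and are not drawn from the survey's actual dataset of 194 arXiv papers.

```python
from collections import Counter
import re

# Hypothetical paper titles standing in for the survey's arXiv corpus.
titles = [
    "ChatGPT for Medical Question Answering",
    "Evaluating ChatGPT on Mathematical Reasoning",
    "ChatGPT in Education: Opportunities and Risks",
    "Instruction Fine-Tuning of Large Language Models",
]

# Common words excluded so the counts reflect topical keywords.
STOPWORDS = {"for", "on", "in", "of", "and", "the", "a", "an"}

def keyword_frequencies(corpus):
    """Return a Counter of lowercased keywords across all titles."""
    counts = Counter()
    for title in corpus:
        for word in re.findall(r"[a-z]+", title.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts

freq = keyword_frequencies(titles)
print(freq.most_common(3))  # most frequent keywords, e.g. ("chatgpt", 3)
```

A frequency table like `freq` is exactly the input a word-cloud library consumes, and bucketing the same counts by submission month yields the trend curves the survey reports.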
