ChatGPT in action: Harnessing artificial intelligence potential and addressing ethical challenges in medicine, education, and scientific research.

Madhan Jeyaraman, Swaminathan Ramasubramanian, Sangeetha Balaji, Naveen Jeyaraman, Arulkumar Nallakumarasamy, Shilpa Sharma
{"title":"ChatGPT in action: Harnessing artificial intelligence potential and addressing ethical challenges in medicine, education, and scientific research.","authors":"Madhan Jeyaraman,&nbsp;Swaminathan Ramasubramanian,&nbsp;Sangeetha Balaji,&nbsp;Naveen Jeyaraman,&nbsp;Arulkumar Nallakumarasamy,&nbsp;Shilpa Sharma","doi":"10.5662/wjm.v13.i4.170","DOIUrl":null,"url":null,"abstract":"<p><p>Artificial intelligence (AI) tools, like OpenAI's Chat Generative Pre-trained Transformer (ChatGPT), hold considerable potential in healthcare, academia, and diverse industries. Evidence demonstrates its capability at a medical student level in standardized tests, suggesting utility in medical education, radiology reporting, genetics research, data optimization, and drafting repetitive texts such as discharge summaries. Nevertheless, these tools should augment, not supplant, human expertise. Despite promising applications, ChatGPT confronts limitations, including critical thinking tasks and generating false references, necessitating stringent cross-verification. Ensuing concerns, such as potential misuse, bias, blind trust, and privacy, underscore the need for transparency, accountability, and clear policies. Evaluations of AI-generated content and preservation of academic integrity are critical. With responsible use, AI can significantly improve healthcare, academia, and industry without compromising integrity and research quality. For effective and ethical AI deployment, collaboration amongst AI developers, researchers, educators, and policymakers is vital. The development of domain-specific tools, guidelines, regulations, and the facilitation of public dialogue must underpin these endeavors to responsibly harness AI's potential.</p>","PeriodicalId":94271,"journal":{"name":"World journal of methodology","volume":"13 4","pages":"170-178"},"PeriodicalIF":0.0000,"publicationDate":"2023-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/4b/d4/WJM-13-170.PMC10523250.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"World journal of methodology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5662/wjm.v13.i4.170","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Artificial intelligence (AI) tools, like OpenAI's Chat Generative Pre-trained Transformer (ChatGPT), hold considerable potential in healthcare, academia, and diverse industries. Evidence shows that it performs at the level of a medical student on standardized tests, suggesting utility in medical education, radiology reporting, genetics research, data optimization, and drafting repetitive texts such as discharge summaries. Nevertheless, these tools should augment, not supplant, human expertise. Despite promising applications, ChatGPT faces limitations, including weaknesses in critical-thinking tasks and a tendency to generate false references, necessitating stringent cross-verification. Ensuing concerns, such as potential misuse, bias, blind trust, and privacy, underscore the need for transparency, accountability, and clear policies. Evaluation of AI-generated content and preservation of academic integrity are critical. With responsible use, AI can significantly improve healthcare, academia, and industry without compromising integrity and research quality. For effective and ethical AI deployment, collaboration among AI developers, researchers, educators, and policymakers is vital. These endeavors must be underpinned by the development of domain-specific tools, guidelines, and regulations, and by the facilitation of public dialogue, so that AI's potential is harnessed responsibly.
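
As a concrete illustration of the "stringent cross-verification" the abstract recommends, the minimal Python sketch below (not part of the original article) checks whether a citation string produced by a chatbot resolves to a real record in the public Crossref API. The verify_reference helper, the sample citation, and the relevance-score threshold are illustrative assumptions, not tooling described by the authors.

import requests

def verify_reference(citation: str, min_score: float = 60.0) -> bool:
    """Return True if Crossref finds a plausibly matching record for the citation string."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return False
    # Crossref returns a relevance score; a very low score suggests no real match,
    # which is how fabricated ("hallucinated") references often reveal themselves.
    return items[0].get("score", 0.0) >= min_score

if __name__ == "__main__":
    # Hypothetical AI-generated citation, used purely for illustration.
    candidate = "Smith J, Doe A. Large language models in radiology reporting. J Med AI. 2021."
    if verify_reference(candidate):
        print("A plausible Crossref record exists; still confirm the details manually.")
    else:
        print("No convincing match found; treat the reference as suspect.")

The score threshold is only a heuristic; whether a candidate reference passes or fails, it still requires human confirmation against the cited source.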

