How patients are using AI

The BMJ · Published 2024-11-19 · DOI: 10.1136/bmj.q2393
Chris Stokel-Walker

Abstract

Artificial intelligence (AI) tools such as ChatGPT have hundreds of millions of users—but are they medically safe and reliable? Chris Stokel-Walker asks patients and physicians about the benefits and risks in an AI world.

In August this year Hayley Brackley lost a large part of her vision, completely out of the blue. She'd gone to her local chemist with eye pain, and a prescribing pharmacist diagnosed sinusitis. She took the recommended medicine to try to resolve the pain, but it began affecting her ability to see. Her first thought was to turn to ChatGPT for advice on what to do next. The chatbot advised her to go back and have the problem examined further, which she did. Further examination by an optician found significant inflammation and a haemorrhage in her optic nerve, which is currently being treated.

It's not surprising that Brackley's first port of call was ChatGPT. She prefers ChatGPT to a search engine such as Google because it can hold a conversation and find the information she wants more quickly. She's not alone: 200 million of us use the world's most popular generative AI chatbot every day.1 Nor is it surprising that, before the meeting with the eye consultant at which her condition was diagnosed, she used ChatGPT to see what sorts of questions she might be asked. Brackley has attention deficit/hyperactivity disorder (ADHD) and autism, and she thought that being forewarned about what she might be asked could help her in the interaction.

But this raises several questions. Should patients be using AI tools? How should the healthcare system react to patients using a new, often untested, tool alongside human diagnosis? And what does patients' use of AI tell us about the gaps in the health service and how …