{"title":"生成性人工智能对健康通信的信任意味着什么。","authors":"Adam G Dunn, Ivy Shih, Julie Ayre, Heiko Spallek","doi":"10.1080/17538068.2023.2277489","DOIUrl":null,"url":null,"abstract":"<p><p><b>ABSTRACT</b>Large language models are fundamental technologies used in interfaces like ChatGPT and are poised to change the way people access and make sense of health information. The speed of uptake and investment suggests that these will be transformative technologies, but it is not yet clear what the implications might be for health communications. In this viewpoint, we draw on research about the adoption of new information technologies to focus on the ways that generative artificial intelligence (AI) tools like large language models might change how health information is produced, what health information people see, how marketing and misinformation might be mixed with evidence, and what people trust. We conclude that transparency and explainability in this space must be carefully considered to avoid unanticipated consequences.</p>","PeriodicalId":38052,"journal":{"name":"Journal of Communication in Healthcare","volume":" ","pages":"385-388"},"PeriodicalIF":0.0000,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"What generative AI means for trust in health communications.\",\"authors\":\"Adam G Dunn, Ivy Shih, Julie Ayre, Heiko Spallek\",\"doi\":\"10.1080/17538068.2023.2277489\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p><b>ABSTRACT</b>Large language models are fundamental technologies used in interfaces like ChatGPT and are poised to change the way people access and make sense of health information. The speed of uptake and investment suggests that these will be transformative technologies, but it is not yet clear what the implications might be for health communications. In this viewpoint, we draw on research about the adoption of new information technologies to focus on the ways that generative artificial intelligence (AI) tools like large language models might change how health information is produced, what health information people see, how marketing and misinformation might be mixed with evidence, and what people trust. 
We conclude that transparency and explainability in this space must be carefully considered to avoid unanticipated consequences.</p>\",\"PeriodicalId\":38052,\"journal\":{\"name\":\"Journal of Communication in Healthcare\",\"volume\":\" \",\"pages\":\"385-388\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Communication in Healthcare\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/17538068.2023.2277489\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2023/12/14 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q2\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Communication in Healthcare","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/17538068.2023.2277489","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/12/14 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"Social Sciences","Score":null,"Total":0}
What generative AI means for trust in health communications.
ABSTRACT: Large language models are fundamental technologies used in interfaces like ChatGPT and are poised to change the way people access and make sense of health information. The speed of uptake and investment suggests that these will be transformative technologies, but it is not yet clear what the implications might be for health communications. In this viewpoint, we draw on research about the adoption of new information technologies to focus on the ways that generative artificial intelligence (AI) tools like large language models might change how health information is produced, what health information people see, how marketing and misinformation might be mixed with evidence, and what people trust. We conclude that transparency and explainability in this space must be carefully considered to avoid unanticipated consequences.