Talking Abortion (Mis)information with ChatGPT on TikTok

Filipo Sharevski, J. Loop, Peter Jachim, Amy Devine, Emma Pieroni

2023 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW)
DOI: 10.1109/EuroSPW59978.2023.00071
Published: 2023-02-23
Citations: 1

Abstract
In this study, we tested users’ perception of accuracy and engagement with TikTok videos in which ChatGPT responded to prompts about “at-home” abortion remedies. The chatbot’s responses, though somewhat vague and confusing, nonetheless recommended consulting health professionals before attempting an “at-home” abortion. We used ChatGPT to create two TikTok video variants: one in which users can see ChatGPT explicitly typing back a response, and one in which the text response is presented without any attribution to the chatbot. We randomly exposed 100 participants to each variant and found that the group unaware of ChatGPT’s text synthesis was more inclined to believe the responses were misinformation. Under the same impression, TikTok itself attached misinformation warning labels (“Get the facts about abortion”) to all of the videos after we collected our initial results. We then tested the videos again with another set of 50 participants and found that the labels did affect perceptions of abortion misinformation. We also found that more than 60% of the participants expressed negative or hesitant opinions about chatbots as sources of credible health information.