Exploring the Use of Personalized AI for Identifying Misinformation on Social Media
Farnaz Jahanbakhsh, Yannis Katsis, Dakuo Wang, Lucian Popa, Michael J. Muller
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. Published April 19, 2023. DOI: https://doi.org/10.1145/3544548.3581219
Abstract
This work aims to explore how human assessments and AI predictions can be combined to identify misinformation on social media. To do so, we design a personalized AI which iteratively takes as training data a single user's assessments of content and predicts how the same user would assess other content. We conduct a user study in which participants interact with a personalized AI that learns their assessments of a feed of tweets, shows its predictions of whether the user would find other tweets (in)accurate, and evolves according to user feedback. We study how users perceive such an AI and whether its predictions influence users' judgment. We find that this influence does exist and that it grows over time, but it is reduced when users provide reasoning for their assessments. We draw on our empirical observations to identify design implications and directions for future work.
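To make the iterative training loop concrete, below is a minimal sketch of the kind of per-user model the abstract describes: each user's accuracy labels retrain a classifier, which then scores unseen tweets for that same user. This is not the authors' implementation; the model choice (TF-IDF features with logistic regression) and all class and method names are illustrative assumptions.

```python
# Minimal sketch (not the paper's actual system) of a personalized AI
# that learns one user's accuracy assessments and predicts how that
# same user would assess other tweets. Model and names are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


class PersonalizedAssessor:
    """One instance per user; trained only on that user's own labels."""

    def __init__(self):
        self.texts = []   # tweets the user has already assessed
        self.labels = []  # 1 = user rated accurate, 0 = inaccurate
        self.model = None

    def add_assessment(self, tweet_text, is_accurate):
        """Record one user judgment and retrain (the 'evolve' step)."""
        self.texts.append(tweet_text)
        self.labels.append(int(is_accurate))
        if len(set(self.labels)) == 2:  # need both classes to fit
            self.model = make_pipeline(
                TfidfVectorizer(), LogisticRegression(max_iter=1000)
            ).fit(self.texts, self.labels)

    def predict(self, tweet_text):
        """Probability this user would rate the tweet as accurate."""
        if self.model is None:
            return None  # not enough feedback yet to make predictions
        return float(self.model.predict_proba([tweet_text])[0, 1])
```

The key design choice this mirrors is that the model is per-user rather than a single global misinformation classifier: its predictions are of a particular user's judgments, which is what lets the study examine how such predictions in turn influence that user.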